9 points

That website only works in private mode in Firefox for me, and even then some pages display different things than it says they will. It feels almost like an easter egg; does anyone have more info about this group?

4 points

Privacy Browser with JS off (the default) can read the article and navigate. The only minor eyesore is the buttons at the top of the site, which sit on a transparent background and stay on top of the text as I scroll down.

7 points
*

hmm, maybe it was just a temporary issue. I was getting a 500 error while still seeing part of the site, and the about page had some pseudocode on it that I thought was intentional, but maybe it was just being a bit buggy, because now it seems fine.

The blog post itself is an interesting read, by the way; I forgot to mention that amid my curiosity about interesting web pages.

7 points

u wot? works fine in Firefox here. Try this archive or this archive.

7 points

I think our comments just crossed each other; read my follow-up. I think it was an issue with the site at that specific moment (the 500 error) rather than the site being quirky.

5 points

same here, works fine in Fx (133, aarch64)

3 points

After all, there’s almost nothing that ChatGPT is actually useful for.

It’s takes like this that just discredit the rest of the text.

You can dislike LLM AI for its environmental impact or its questionable interpretation of fair use when it comes to intellectual property. But pretending it’s actually useless just makes someone seem no different from a drama YouTuber jumping on whatever the latest on-trend thing to hate is.

29 points

It’s useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly combing its output instead of doing original work, and don’t mind putting your name on low quality derivative slop in the first place.

29 points

Let’s be real here: when people hear the word AI or LLM, they don’t think of any of the applications of ML that you might slap the label “potentially useful” on (notwithstanding the fact that many of those are also in an all-that-glitters-is-not-gold kind of situation). The first thing that comes to mind for almost everyone is shitty autoplag like ChatGPT, which is also what the author explicitly mentions.

-8 points

I’m saying ChatGPT is not useless.

I’m a senior software engineer and I make use of it several times a week, either directly or via things built on top of it. Yes, you can’t trust it to be perfect, but I can’t trust a junior engineer to be perfect either; code review is something I’ve done since long before AI and will continue to do long into the future.

I empirically work quicker with it than without and the engineers I know who are still avoiding it work noticeably slower. If it was useless this would not be the case.

44 points
*

I’m a senior software engineer

ah, a señor software engineer. excusé-moi monsoir, let me back up and try once more to respect your opinion

uh, wait:

but I can’t trust a junior engineer to be perfect either

whoops no, sorry, can’t do it.

jesus fuck I hope the poor bastards that are under you find some other place real soon, you sound like a godawful leader

and the engineers I know who are still avoiding it work noticeably slower

yep yep! as we all know, velocity is all that matters! crank that handle, produce those features! the factory must flow!!

fucking christ almighty. step away from the keyboard. go become a logger instead. your opinions (and/or the shit you’re saying) are a big part of everything that’s wrong with this industry.

-11 points

In this and other use cases I’d call it a pretty effective search engine: instead of scrolling through Stack Exchange after clicking between Google ads, you get the cleaned-up example code you needed. Not a chat with any intelligence, though.

27 points
*

I’m a senior software engineer

Nice, me too, and whenever some tech-brained C-suite bozo tries to mansplain to me why LLMs will make me more efficient, I smile, nod politely, and move on, because at this point I don’t think I can make the case that pasting AI slop into prod is objectively a worse idea than pasting Stack Overflow answers into prod.

At the end of the day, if I want to insert a snippet (which I don’t have to double-check, mind you), auto-format my code, or organize my imports, which are all things I might use ChatGPT for if I didn’t mind all the other baggage that comes along with it, Emacs (or Vim, if you swing that way) does this just fine and has done so for over 20 years.

I empirically work quicker with it than without and the engineers I know who are still avoiding it work noticeably slower.

If LOC/min or a similar metric is used to measure efficiency at your company, I am genuinely sorry.

-12 points

Another professional here. Lemmy really isn’t a place where you’re going to find people listening to what you have to say and critically examining their existing positions. You’re right, and you’re going to get downvoted for it.

20 points

I’m a senior software engineer

Good. Thanks for telling us your opinion’s worthless.

18 points
*

Senior software engineer programmer here. I have had to tell coworkers “don’t trust anything chat-gpt tells you about text encoding” after it made something up about text encoding.

10 points

Oh my god, an actual senior software engineer??? Amidst all of us mortals??

45 points

“Almost nothing” is not the same as “actually useless”. The former is saying the applications are limited, which is true.

LLMs are fine for fictional interactions, as in things that appear to be real but aren’t. They suck at anything that involves being reliably factual, which is most things, including all the stupid places LLMs and other AI are being jammed into despite being consistently wrong, which tech bros love to call hallucinations.

They have LIMITED applications but are being deployed as if they were useful for everything.

29 points
*

To be honest, as someone who’s very interested in computer-generated text and poetry and the like, I find generic LLMs far less interesting than more traditional Markov chains, because they’re too good at reproducing clichés to the exclusion of anything surprising or whimsical. So I don’t think they’re very good for the unfactual either. A homegrown neural network would probably have better results.

18 points

GPT-2 was peak LLM because it was bad enough to be interesting, it was all downhill from there

16 points

Agreed, our chat server ran a Markov chain bot for fun.

In comparison to ChatGPT on a 2nd server I frequent it had much funnier and random responses.

ChatGPT tends to just agree with whatever it chose to respond to.

As for real-world use: ChatGPT produces the wrong answer 90% of the time. I’ve enjoyed Circuit AI, however. While it also produces incorrect responses, it shares its sources, so I can more easily get to the right answer.

All I really want from a chatbot is a gremlin that finds the hard things to Google on my behalf.

6 points
*

I’m in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It’s one of those things where AI bros will go, “Look, it’s so good at poetry!!” but they have no taste and can’t even tell that it sucks; LLMs just generate ABAB poems and getting anything else is like pulling teeth. It’s a little more garbled and broken, but the output from a MCG is a lot more interesting in my experience. Interesting content that’s a little rough around the edges always wins over smooth, featureless AI slop in my book.


slight tangent: I was interested in seeing how they’d work for open-ended text adventures a few years ago (back around GPT-2, when AI Dungeon launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (of course, the tech-optimist-goodthink way of thinking about this is “small LLMs are really good at creative writing for their size!”)

I don’t think most people can even tell the difference between a lot of these models. There was a snake oil LLM (more snake oil than usual) called Reflection 70b, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.

Orange site example:

Like other comments, I was also initially surprised. But I think the gains are both real and easy to understand where the improvements are coming from. [ . . . ]

I had a similar idea, interesting to see that it actually works. [ . . . ]

Reddit:

I think that’s cool, if you use a regular system prompt it behaves like regular llama-70b. (??!!!)

It’s the first time I’ve used a local model and did [not] just say wow this is neat, or that was impressive, but rather, wow, this is finally good enough for business settings (at least for my needs). I’m very excited to keep pushing on it. Llama 3.1 failed miserably, as did any other model I tried.

For storytelling or creative writing, I would rather have the more interesting broken English output of a Markov chain generator, or maybe a tarot deck or a D100 table. Markov chains are also genuinely great for random name generators. I’ve actually laughed at Markov chains before with friends when we throw a group chat into one and see what comes out. I can’t imagine ever getting something like that from an LLM.
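For anyone who hasn’t played with one: a word-level Markov chain generator of the kind being praised in this thread really is tiny. A minimal sketch in Python (function names are my own, just for illustration):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain, starting from a random key, for up to `length` words."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Feed it a group chat log and `generate(build_chain(log), length=30)` produces exactly the kind of slightly garbled, occasionally hilarious remix described above; a higher `order` makes it more coherent and less surprising.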

17 points

actually you know what? with all the motte and baileying, you can take a month off. bye!

9 points

Petition to replace “motte and bailey” per the Batman clause with “lying like a dipshit”.

8 points

I fucking love how goddamn structured kym is into semiotics and symbolism

it’s an amazing confluence that might not have happened if persons unknown didn’t care, but they did. and thus we have it! phenomenal

10 points

Isn’t this a case of ‘the good bits are not original and the original bits are not good’? According to Wikipedia it is from 2005.

53 points
*

‘i am a stochastic parrot and so are u’

reminds me of

“In his desperation to have produced reality through computation, he denigrates actual reality by equating it to computation”

(from this review/analysis of the Devs series). A pattern annoyingly common among the LLM AI fans.

E: Wow, I did not like the reactionary great-man-theory spin this article took there. I don’t think replacing the Altmans with Yarvins would be much of a solution. (At least that is how the NRx people would read this article.) Quite a lot of the ‘we need more well-read renaissance men’ people turned into hardcore Trump supporters (and racists, and sexists, and…). (Note this edit is after I already got 45 upvotes.)

7 points

I’m glad I’m not the only one who picked up on that turn. The implication that what we need is an actual Bismarck instead of the wannabes we keep getting makes sense (I too would prefer the levers of power to be wielded by someone halfway competent who listens to and cares about the people around them), but there are also some pretty strong reasons why we went from Bismarck and Lincoln to Merkel and Trump, and some pretty strong reasons why the road there led through Hitler and Wilson.

Along with my comments elsewhere about how the dunce believes their area of hypothetical expertise to be some kind of arcane gift revealed to the worthy, I feel I should clarify that not only does the current crop of dolts not have it, but there is no secret wisdom beyond the ken of normal men. That is a lie told by the powerful to stop you from questioning their position; it’s the “because I’m your Dad and I said so” for adults. Learning things is hard, and hard means expensive, so people with wealth and power have more opportunities to study things, but that lack of opportunity is not the same as lacking the ability to understand things and to contribute to a truly democratic process.

82 points

in response to Bender pointing out that ChatGPT and its competitors simply encode relationships between words and have no concept of referent or meaning, which is a devastating critique of what the technology actually does, the absolute best response he can muster for his work is “yeah, but humans don’t do anything more complicated than that”. I mean, speak for yourself Sam: the rest of us have some concept of semiotics, and we can do things like identify anagrams or count the number of letters in a word, which requires a level of recursivity that’s beyond what ChatGPT can muster.

Boom Shanka (emphasis added)
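For the record, the anagram and letter-counting checks mentioned in that quote are a couple of lines of exact computation, which is exactly the kind of operation a next-token predictor over subword tokens handles unreliably (the famous failure case being counting the r’s in “strawberry”). A throwaway sketch:

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    """Anagrams have identical letter multisets, ignoring case and non-letters."""
    norm = lambda s: Counter(ch for ch in s.lower() if ch.isalpha())
    return norm(a) == norm(b)

def letter_count(word: str, letter: str) -> int:
    """Exact occurrence count of a single letter, case-insensitive."""
    return word.lower().count(letter.lower())
```

`is_anagram("listen", "silent")` is True, and `letter_count("strawberry", "r")` is 3, every single time.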

21 points

MRW 38 of the 39 comments have almost nothing to do with the article


TechTakes

!techtakes@awful.systems


Big brain tech dude got yet another clueless take over at HackerNews etc? Here’s the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
