“sigh”
(Preface: I work in AI)
This isn’t news. We’ve known this for many, many years. It’s one of the reasons why many companies didn’t bother using LLMs in the first place; that, paired with the sheer amount of hallucinations you’ll get that’ll often utterly destroy a company’s reputation (lol Google).
With that said, for commercial services that use LLMs, it’s absolutely not true. The models won’t reason, but many will have separate expert agents or API endpoints that they’ll be told to use to disambiguate or better understand what is being asked, what context is needed, etc.
It’s kinda funny, because many AI bros rave about how LLMs are getting super powerful, when in reality the real improvements we’re seeing are in smaller models that teach an LLM about things like personas, where to seek expert opinion, what a user “might” mean if they misspell something or ask for something out of context, etc. The LLMs themselves are only getting slightly better; the machinery that runs before them is propping them up to make them better.
IMO, LLMs are what they are: a good way to spit information out fast. They’re an orchestration mechanism at best. When you think about them this way, every improvement we see tends to make a lot of sense. The article is kinda true, but not in the way they want it to be.
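The orchestration pattern described above can be sketched, very loosely, as a pipeline where small pre-processing steps run before the LLM ever sees the query. Everything here is hypothetical illustration: the spell-fix table stands in for a small correction model, and the keyword router stands in for an intent classifier picking an expert endpoint.

```python
def spellcheck(query: str) -> str:
    """Stand-in for a small model guessing what a user 'might' mean.
    A real system would use a learned model, not a lookup table."""
    fixes = {"wether": "weather", "recieve": "receive"}
    return " ".join(fixes.get(word, word) for word in query.split())


def route(query: str) -> str:
    """Stand-in for an intent classifier choosing an expert endpoint.
    Endpoint names are invented for illustration."""
    if any(word in query for word in ("forecast", "weather")):
        return "weather_api"
    return "general_llm"


def handle(query: str) -> tuple[str, str]:
    """Clean up the query first, then decide where to send it."""
    cleaned = spellcheck(query)
    return route(cleaned), cleaned


# A misspelled query gets corrected, then routed to the weather expert
# instead of the general-purpose LLM.
endpoint, cleaned = handle("what is the wether forecast")
```

The point of the sketch is the ordering: the "smaller models" do their work before the LLM is invoked, which is why improvements in them show up as apparent improvements in the LLM.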
(Preface: I work in AI)
Are they a serious researcher in ML with insights into some of the most interesting and complicated intersections of computer science and analytical mathematics, or a promptfondler that earns 3x the former’s salary for a nebulous AI startup that will never create anything of value to society? Read on to find out!
what a user “might” mean if they misspell something
this but with extra wasabi
*trying desperately not to say the thing* what if AI could automatically… round out… spelling
(Preface: I work in AI)
Preface: repent for your sins in sackcloth and ashes.
IMO, LLMs are what they are: a good way to spit information out fast.
Buh bye now.
When you ask an LLM a reasoning question, you’re not expecting it to think for you; you’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answer from other people, and that those are now encoded in its vectors.
That’s why you can ask it: because it encodes semantics.
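What “encodes semantics into vectors” usually means in practice is something like the toy sketch below: texts map to embedding vectors, and paraphrases land closer together than unrelated texts, measured by cosine similarity. The 3-d vectors here are made up purely for illustration; real models learn embeddings with hundreds or thousands of dimensions.

```python
import math

# Invented toy embeddings: two paraphrases of the same question,
# plus one unrelated query. Values chosen by hand for illustration.
embeddings = {
    "how do I boil an egg": [0.9, 0.1, 0.0],
    "egg boiling instructions": [0.8, 0.2, 0.1],
    "tax filing deadline 2024": [0.0, 0.1, 0.9],
}


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


query = embeddings["how do I boil an egg"]
# The paraphrase scores much higher than the unrelated query.
paraphrase_score = cosine(query, embeddings["egg boiling instructions"])
unrelated_score = cosine(query, embeddings["tax filing deadline 2024"])
```

Whether this geometric closeness deserves the word “semantics” is exactly what the replies below argue about; the sketch only shows the mechanism the claim refers to.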
guy who totally gets what these words mean: “an llm simply encodes the semantics into the vectors”
all you gotta do is, you know, ground the symbols, and as long as you’re writing enough Lisp that should be sufficient for GAI
because it encodes semantics.
Please enlighten me on how? I admit I don’t know all the internals of the transformer model, but from what I know it encodes precisely only syntactic information, i.e. which syntactic token is most likely to follow based on a syntactic context window.
How does it encode semantics? What is the semantics that it encodes? I doubt they have a denotational or operational semantics of natural language (I don’t think something like that even exists), so it has to be some smaller model. Actually, it would be enlightening if you could tell me at least what the semantic domain here is, because I don’t think there’s any naturally obvious choice for that.
What if I told you 90% of humans do that.
It’s always funny to see this because you think that you’re part of the smart 10% with original thoughts while actually you’re the insufferable 10% whose only thought is that of superiority with nothing to back it up.
My cat has more original thoughts than that and he’s currently stuck head-first in a cereal box.
it’s not shocking because we’ve seen worse, but it is remarkable how fascist the implications of this “most people don’t possess cognition” idea are
it’s also very funny how many of these presumed cognition-havers have come to this thread and our instance in general with effectively the same lazy, shitty, thoughtless take on the nature of humanity
actually speaking of fascism, I took a quick look at our guest’s post history:
- African countries and IQ
- COVID conspiracy theories
- constant right-wing conspiracies in general really
- fucking links to voat and zerohedge
- there’s more but I tapped out early
People keep saying this, but I’m not convinced our own brains are doing anything more.
Let the haters hate.
Despite the welcome growth of atheism, almost all humans at one level or another cling to the idea that our monkey brains are filled with some magic miraculous light that couldn’t possibly be replicated. The reality is that some of us only have glimmers of sapience, and many not even that. Most humans, most of the time, are mindless zombies following a script, whether due to individual capacity, or a civilization that largely doesn’t reward metacognition or pondering the questions that matter, as that doesn’t immediately feed individual productivity or make anyone materially wealthier; that maze doesn’t lead to any yummy cheese for us.
AI development isn’t finally progressing quickly and making people uncomfortable with its capability because it’s catching up to our supposedly transcendental superbrains (that en masse spent hundreds of thousands of years wandering around in the dirt before it finally occurred to any of them that we could grow food seasonally in one place). It’s making a lot of humans uncomfortable because it’s demonstrating that there isn’t a whole hell of a lot to catch up to, especially for an average human.
There’s a reason pretty much everyone immediately discarded the Turing Test and basically called it a bullshit metric, after elevating it for decades as a major benchmark in the development of AI systems, the moment a technology and design that could readily pass it became available. That’s the blind hubris of man on grand display.
see I was just gonna go for “promptfondlin” but I’m glad I hesitated cause this is my new favorite ban reason
I think I’ll start using “metacognition” in a derogatory way. What a metacognitive post.
The reality is that some of us only have glimmers of sapience, and many not even that.
Funny how all the people saying this always include themselves in the select few sapient ones.
Where does this NPC meme even come from? It’s one thing to think most people are stupid or conformist or susceptible to propaganda, but believing a large fraction of the population are “mindless zombies following a script” goes beyond simple arrogance to straight up delusion.
Yea, most people don’t think about some things I care about as deeply as I do. As if that means they don’t have their own internal life going on.
The reality is that some of us only have glimmers of sapience, and many not even that. Most humans, most of the time, are mindless zombies following a script
It’s a funny thing, that there are certain kinds of people who are assured of their own cleverness and so alienated from society that they think that echoing the same dehumanising blurb produced by so many of their forebears is somehow novel or informative, rather than just following a script.
(the irony of responding with an xkcd is not lost on me)
Much like the promptfondlers proudly claiming they are stochastic parrots, flaunting your inability to recognise intelligence in other humans isn’t a great flex.
How nice it must be to never ponder how large humanity is, and how each and every person you see outside has a full and rich interior and exterior world, of which you only ever see a tiny fraction.
Personally, one of my “oh, other people are real!” moments was when our parents took us (me and my sisters) on a surprise ferry trip to England (from France), and our grandparents, whom (at least as far as kid me remembered) we only ever saw in their home city, were waiting for us in Portsmouth, and we visited the city together (Portsmouth Historic Dockyard is quite nice, btw).
I knew they were real, but realizing that they weren’t geo-locked made me more fully internalize that they had full and independent lives, and therefore that everyone did.
How about people here? When did you realize people are real?
thinking is so easy to model when you don’t do it and assume nobody else does either
Arxiv paper link referenced in the article: https://arxiv.org/pdf/2410.05229