3 points

“sigh”

(Preface: I work in AI)

This isn’t news. We’ve known this for many, many years. It’s one of the reasons many companies didn’t bother using LLMs in the first place; that, paired with the sheer number of hallucinations you’ll get, will often utterly destroy a company’s reputation (lol Google).

With that said, for commercial services that use LLMs, it’s absolutely not true. The models won’t reason, but many will have separate expert agents or API endpoints that they’re told to use to disambiguate or better understand what is being asked, what context is needed, etc.

It’s kinda funny, because many AI bros rave about how LLMs are getting super powerful, when in reality the real improvements we’re seeing are in smaller models that teach an LLM about things like Personas, where to seek expert opinion, what a user “might” mean if they misspell something or ask for something out of context, etc. The LLMs themselves are only getting slightly better, but the things that precede them in the pipeline are propping them up to make them better.

IMO, LLMs are what they are: a good way to spit information out fast. They’re an orchestration mechanism at best. When you think about them this way, every improvement we see tends to make a lot of sense. The article is kinda true, but not in the way they want it to be.
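
To make the “orchestration mechanism” point concrete, here’s a rough sketch of that routing pattern in Python. Every name in it (query_llm, EXPERTS, the individual helpers) is made up for illustration; real commercial stacks wire this together very differently.

    # Hypothetical sketch of an LLM used as an orchestrator that routes work to
    # "expert" helpers instead of reasoning on its own. Nothing here is a real API.

    def query_llm(prompt: str) -> str:
        """Stand-in for a call to some hosted language model."""
        raise NotImplementedError

    # Made-up expert endpoints the orchestrator can hand work to.
    EXPERTS = {
        "spelling": lambda text: text,   # e.g. a small spell-correction model
        "persona": lambda text: text,    # e.g. a model that picks tone/persona
        "retrieval": lambda text: text,  # e.g. a search or lookup service
    }

    def answer(user_input: str) -> str:
        # The LLM only picks which expert to consult; it isn't asked to "reason".
        route = query_llm(
            f"Which of {sorted(EXPERTS)} should handle this request? "
            f"Reply with one name.\n\n{user_input}"
        ).strip()
        context = EXPERTS.get(route, lambda t: t)(user_input)
        # Final pass: spit the expert-assisted answer back out fast.
        return query_llm(f"Answer the user using this context:\n{context}")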

22 points

(Preface: I work in AI)

Are they a serious researcher in ML with insights into some of the most interesting and complicated intersections of computer science and analytical mathematics, or a promptfondler that earns 3x the former’s salary for a nebulous AI startup that will never create anything of value to society? Read on to find out!

9 points

Read on to find out!

do i have to

10 points

Welcome to the future! Suffering is mandatory!

9 points

what a user “might” mean if they misspell something

this but with extra wasabi

10 points

*trying desperately not to say the thing* what if AI could automatically… round out… spelling

6 points

It hurts when they’re so close!

14 points

(Preface: I work in AI)

Preface: repent for your sins in sackcloth and ashes.

IMO, LLMs are what they are: a good way to spit information out fast.

Buh bye now.

18 points

while true; do fortune; done is a good way to spit information out fast.

-11 points

When you ask an LLM a reasoning question, you’re not expecting it to think for you; you’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answers from other people, which are now encoded in its vectors.

That’s why you can ask it: because it encodes semantics.

17 points

Rooting around for that Luke Skywalker “every single word in that sentence was wrong” GIF…

24 points

because it encodes semantics.

if it really did so, performance wouldn’t swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical
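
as a sketch, this is roughly the kind of perturbation test the paper linked elsewhere in the thread (arXiv:2410.05229) runs: keep the arithmetic identical, swap the surface symbols, and watch whether accuracy moves. ask_model, the template and the names are all stand-ins here, not anyone’s real setup.

    import random

    def ask_model(prompt: str) -> str:
        raise NotImplementedError  # plug in whatever model you want to test

    TEMPLATE = "{name} has {a} apples and buys {b} more. How many does {name} have?"
    NAMES = ["Avery", "Sam", "Noor", "Kai"]

    def run_trial(seed: int) -> bool:
        rng = random.Random(seed)
        name, a, b = rng.choice(NAMES), rng.randint(2, 50), rng.randint(2, 50)
        reply = ask_model(TEMPLATE.format(name=name, a=a, b=b))
        return str(a + b) in reply  # crude answer check

    # if the semantics (a + b) were really what's encoded, accuracy would be flat
    # across perturbations; big swings mean it's matching surface form instead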

23 points

thank you for bravely rushing in and providing yet another counterexample to the “but nobody’s actually stupid enough to think they’re anything more than statistical language generators” talking point

14 points

guy who totally gets what these words mean: “an llm simply encodes the semantics into the vectors”

15 points

all you gotta do is, you know, ground the symbols, and as long as you’re writing enough Lisp that should be sufficient for GAI

10 points

also why do we need getaddrinfo? the promptfans will always readily tell you who they are

11 points

both your comments made my eye twitch

like what’d happen if bob fucked up the symbols in a pentacle

16 points

did you ask an LLM for a post to make here? that might explain this mess of a comment

14 points

because it encodes semantics.

Please enlighten me on how. I admit I don’t know all the internals of the transformer model, but from what I know it encodes precisely only syntactic information, i.e. which next token is most likely to follow based on a syntactic context window.

How does it encode semantics? What is the semantics that it encodes? I doubt they have a denotational or operational semantics of natural language; I don’t think something like that even exists, so it has to be some smaller model. Actually, it would be enlightening if you could tell me at least what the semantic domain here is, because I don’t think there’s any naturally obvious choice for that.
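
For what it’s worth, the purely distributional claim above can be caricatured with a toy bigram counter; this is obviously not a transformer, just the same shape of “most likely next token given the preceding context”:

    from collections import Counter, defaultdict

    # Tiny stand-in corpus; counts over it play the role of the learned distribution.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Estimate P(next token | previous token) by counting bigrams.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def most_likely_next(token: str) -> str:
        counts = following[token]
        return counts.most_common(1)[0][0] if counts else "."

    print(most_likely_next("the"))  # whichever word most often followed "the"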

19 points

so… a stochastic parrot?

-12 points

What if I told you 90% of humans do that?

25 points

It’s always funny to see this because you think that you’re part of the smart 10% with original thoughts while actually you’re the insufferable 10% whose only thought is that of superiority with nothing to back it up.

My cat has more original thoughts than that and he’s currently stuck head-first in a cereal box.

20 points

it’s not shocking because we’ve seen worse, but it is remarkable how fascist the implications of this “most people don’t possess cognition” idea are

it’s also very funny how many of these presumed cognition-havers have come to this thread and our instance in general with effectively the same lazy, shitty, thoughtless take on the nature of humanity

18 points

actually speaking of fascism, I took a quick look at our guest’s post history:

  • African countries and IQ
  • COVID conspiracy theories
  • constant right-wing conspiracies in general really
  • fucking links to voat and zerohedge
  • there’s more but I tapped out early
17 points

You guys always come crawling out from whatever rock you’re hiding under for these posts as if someone saying LLMs aren’t smart makes your spider senses tingle.

It would be almost impressive if it weren’t so stupid.

12 points

remember to pick up your coat when you leave

15 points

Coat? Guy came in naked, greased up and stinking of gin.

11 points

Promptfondler DLC for Disco Elysium is a big disappointment so far

15 points

What if I told you I have the power to ban you from the forum because you’re terminally boring?

-16 points

People keep saying this, but I’m not convinced our own brains are doing anything more.

-17 points

Let the haters hate.

Despite the welcome growth of atheism, almost all humans at one level or another cling to the idea that our monkey brains are filled with some magic miraculous light that couldn’t possibly be replicated. The reality is that some of us only have glimmers of sapience, and many not even that. Most humans, most of the time, are mindless zombies following a script, whether due to individual capacity or a civilization that largely doesn’t reward metacognition or pondering the questions that matter, as that doesn’t immediately feed individual productivity or make anyone materially wealthier; that maze doesn’t lead to any yummy cheese for us.

AI development isn’t finally progressing quickly and making people uncomfortable with its capability because it’s catching up to our supposedly transcendental superbrains (that en masse spent hundreds of thousands of years wandering around in the dirt before it finally occurred to any of them that we could grow food seasonally in one place). It’s making a lot of humans uncomfortable because it’s demonstrating that there isn’t a whole hell of a lot to catch up to, especially for an average human.

There’s a reason pretty much everyone immediately discarded the Turing Test and basically called it a bullshit metric, after elevating it for decades as a major benchmark in the development of AI systems: the moment a technology and design that could readily pass it became available. That’s the blind hubris of man on grand display.

20 points

of course somebody prompted up a LessWrong-specific chatbot

17 points

Banned for using the word metacognition seriously.

14 points

see I was just gonna go for “promptfondlin” but I’m glad I hesitated cause this is my new favorite ban reason

13 points

I think I’ll start using “metacognition” in a derogatory way. What a metacognitive post.

The reality is that some of us only have glimmers of sapience, and many not even that.

Funny how all the people saying this always include themselves in the select few sapient ones.

Where does this NPC meme even come from? It’s one thing to think most people are stupid or conformist or susceptible to propaganda, but believing a large fraction of the population are “mindless zombies following a script” goes beyond simple arrogance to straight up delusion.

Yea, most people don’t think about some things I care about as deeply as I do. As if that means they don’t have their own internal life going on.

8 points

metacognition, n.: thinking as formulated by zuck

12 points

The reality is that some of us only have glimmers of sapience, and many not even that.

Choose your sneer answer:

  1. Wow, that’s not at all how a human brain should work, sounds like a serious medical condition, I would see a neurologist.
  2. Weird flex, but okay.

15 points

The reality is that some of us only have glimmers of sapience, and many not even that. Most humans, most of the time, are mindless zombies following a script

It’s a funny thing, that there are certain kinds of people who are assured of their own cleverness and so alienated from society that they think that echoing the same dehumanising blurb produced by so many of their forebears is somehow novel or informative, rather than just following a script.

(the irony of responding with an xkcd is not lost on me)

Much like the promptfondlers proudly claiming they are stochastic parrots, flaunting your inability to recognise intelligence in other humans isn’t a great flex.

13 points

How nice it must be to never ponder how large humanity is, and how each and every person you see outside has a full and rich interior and exterior world, even though you only ever see a tiny fraction of the people out there.

Personally, one of my “oh, other people are real!” moments was when our parents took my sisters and me on a surprise ferry trip from France to England, and our grandparents, whom (at least as far as kid me remembered) we only ever saw in their home city, were waiting for us in Portsmouth, and we visited the city together (Portsmouth Historic Dockyard is quite nice, btw).

I knew they were real, but realizing that they weren’t geo-locked made me more fully internalize that they had full and independent lives, and therefore that everyone did.


How about people here? When did you realize people are real?

19 points

Creationists: We don’t understand the brain so it must be the work of god.

AI Worshipers: We don’t understand the brain so it must work exactly like LLMs.

16 points

tell me more about the AI haters who work in AI research at Apple

-9 points

I was referring to the downvotes of the comment I replied to.

33 points

thinking is so easy to model when you don’t do it and assume nobody else does either

23 points

love these guys who clearly didn’t make it all the way to the end of the 250-word article

11 points

arXiv paper link referenced in the article: https://arxiv.org/pdf/2410.05229

