Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

1 point

I’m going to say x = 7, y = 10. The sum x + y isn’t 10, because choosing the next word accurately in a complex passage is hard. The x = 7 is just my gut guess about how smart they are; by different empirical measures it could be 2 or 40.
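For what it’s worth, here’s a minimal matplotlib sketch of that rating as a single point on two independent axes. The axis meanings (x = intelligence rating, y = next-word-prediction quality, both 0–10) are my reading of this comment, not anything defined by the OP’s graph:

```python
# Hypothetical sketch: plot one opinion as a point on two independent axes.
# Axis meanings are assumed from the comment above, not from the OP's graph.
import matplotlib.pyplot as plt

x, y = 7, 10  # x = gut rating of "intelligence", y = next-word prediction quality

fig, ax = plt.subplots()
ax.scatter([x], [y])
ax.annotate("modern LLMs (gut guess)", (x, y),
            textcoords="offset points", xytext=(8, -4))
ax.set_xlim(0, 10.5)
ax.set_ylim(0, 10.5)
ax.set_xlabel("human-like intelligence (0-10)")
ax.set_ylabel("next-word prediction quality (0-10)")
ax.set_title("Two independent axes, not a tradeoff summing to 10")
plt.show()
```

Plotting the axes independently makes the point explicit: nothing forces x + y = 10.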

12 points

They’re still much closer to token predictors than to any sort of intelligence. Even the latest models “with reasoning” still can’t answer basic questions most of the time, and just end up spitting the answer back out straight from some SEO blogspam. If a model has never seen the answer anywhere in its training dataset, it’s completely incapable of coming up with the correct answer.

Such a massive waste of electricity for barely any tangible benefits, but it sure looks cool and VCs will shower you with cash for it, as they do with all fads.
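To make “token predictor” concrete, here’s a toy bigram next-word predictor in Python. This is a deliberately crude illustration of the prediction loop, not how GPT-4 or Llama 3 actually work (they use neural networks over long contexts), but the objective has the same shape: estimate P(next token | previous tokens) and sample from it:

```python
# Toy bigram next-word predictor: an illustrative sketch only.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to observed bigram counts."""
    options = follows[word]
    if not options:  # the word never appears mid-sequence
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation one predicted word at a time.
word, out = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the mat and the cat"
```

A model like this can only ever emit word pairs it has seen, which is exactly the “can’t answer what isn’t in the training data” criticism above; whether scaling the same idea up to trillions of tokens produces something qualitatively different is the whole debate.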


You’re trying to graph something that you can’t quantify.

You’re also assuming that next-word prediction and intelligence trade off against each other. They could just as well be the same thing.

1 point

I took this as a way of measuring human opinions, like when they ask you how much it hurts on a scale of 1 to 10.

2 points

I agree; people who think LLMs are intelligent are about as smart as phone keyboard autocomplete.

4 points

Are you interested in this from a philosophical perspective or from a practical perspective?

From a philosophical perspective:

It depends on what you mean by “intelligent”. People have been thinking about this for millennia and have come up with different answers. Pick your preference.

From a practical perspective:

This is where it gets interesting. I don’t think we’ll have a moment where we say “OK, now the machine is intelligent.” Instead, it will just gradually take over more and more jobs by being good at more and more tasks. And so, in the end, it will take over a lot of human jobs. I think people don’t like to hear that because of the fear of unemployment and such, but I think it’s a realistic outcome.

3 points

Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors.

They are good at sounding intelligent, but LLMs are not intelligent and are not going to save the world. In fact, training them does a measurable amount of damage in terms of GHG emissions and potable water consumption.
