Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

41 points

They’re still word predictors. That is literally how the technology works.
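To make that concrete: at every step the model outputs a probability distribution over the next token, and generation is just sampling from that distribution over and over. Here's a minimal sketch using the Hugging Face transformers library, with GPT-2 as a small local stand-in (the models named in the post can't be run locally, so GPT-2 is purely an assumption for illustration):

```python
# Minimal sketch of "next-word prediction": the model's entire output for a
# prompt is a probability distribution over the next token.
# Assumption: GPT-2 via the transformers library stands in for larger LLMs.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Distribution over the vocabulary for the token that comes after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely next tokens and their probabilities.
top5 = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode(token_id):>10}  {prob:.3f}")
```

Text generation is just this step repeated: pick a token from the distribution, append it to the prompt, and predict again.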

6 points

Yeah, the only question is whether human brains are also just that.

0 points

No, they are not. Try showing an AI a huge number of pictures of cars from the front, then show it one car from the side and ask it what it is.

Show a human one picture of a car from the front, then one from the side, and ask them what it is.

2 points

What if the human had never seen or heard of anything similar to cars?

I bet they’d be just as confused as the LLM.

1 point

Lol, you got me, I definitely hadn’t thought of that.
