Wondering whether modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.
Because we have reasoning and understanding. Take something as simple as the XY problem: humans understand that there are nuances to prompts and questions. I like the XY problem as an example because a human knows to step back and ask, “What are you really trying to do?” AI doesn’t have that capability; it doesn’t have the reasoning to say “maybe your approach is wrong.”
So I’m not the one to define what it is or where it falls on that scale. But I can say that it’s not human intelligence.