Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.
Human intelligence created language. We taught it to ourselves. That’s a higher order of intelligence than a next-word predictor.
I can’t seem to find the paper now, but there was a research paper floating around about two GPT models designing a language to use between each other for token efficiency while still relaying all the information, which is pretty wild.
Not sure if it was peer reviewed though.
That’s like treating the “which came first, the chicken or the egg” question as a serious question.