but it’s been demonstrated that they do have a level of understanding.
Citation needed
Here you go
A better mathematical system for storing words does not mean the LLM understands any of them. It just has a model of the relations between words, which it uses to produce output.
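To make "a model of the relations between words" concrete, here is a minimal sketch of how word embeddings encode relatedness as geometry. The vectors below are made up for illustration, not taken from any real model.

```python
import math

# Hypothetical 3-dimensional embeddings (toy values, not from a real model).
# Real LLMs learn vectors with hundreds or thousands of dimensions.
embeddings = {
    "king":  [0.9, 0.7, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "apple": [0.1, 0.9, 0.5],
}

def cosine_similarity(a, b):
    """Score how 'related' two word vectors are via the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "king" scores as more related to "queen" than to "apple" --
# relatedness is encoded as vector geometry, with no semantics involved.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.84
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.65
```

The point of the sketch: the numbers capture relations between words, but nothing in the arithmetic "knows" what a king or an apple is.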
If I put 10 minus 8 into my calculator I get 2. The calculator doesn't actually understand what 2 means, or what subtraction represents; it just runs the commands that give the appropriate output.
That's a bad analogy, because the calculator wasn't trained using an artificial neural network, an architecture literally designed by studying biological brains (i.e., biological neural networks).
And “understand” doesn’t equate to consciousness or sapience. For example, it is entirely and factually correct to state that an LLM is capable of reasoning. That’s not even up for debate. The accuracy of an LLM’s reasoning capability is one of the fundamental benchmarks used for evaluating its quality.
But that doesn't mean it's "thinking" in the way most people mean by that word.
Edit: anyone upvoting this CileTheSane clown is in the same boat, not comprehending how LLMs work.