Model degeneration is by now a well-known phenomenon. The article explains what's going on well, so I won't go into details, but note that this happens because the model does not understand what it is outputting: it looks for patterns, not for the meaning those patterns convey.
Frankly at this rate might as well go with a neuro-symbolic approach.
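For intuition, here's a toy sketch of the degeneration loop (my own simplification, not the article's experiment): fit a Gaussian to data, then repeatedly "train" the next model only on samples from the previous one. Tail information is lost at every generation, so the variance steadily collapses.

```python
import random
import statistics

# Toy model-collapse demo: each generation fits a Gaussian to the previous
# generation's samples and then resamples from that fit. With a small sample
# size, estimation error compounds and the distribution collapses.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(20)]  # "real" data, N(0, 1)

variances = []
for generation in range(300):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    variances.append(sigma ** 2)
    # Next "model" sees only the previous model's output, never the real data.
    data = [random.gauss(mu, sigma) for _ in range(20)]

print(f"variance at gen 0: {variances[0]:.4f}, at gen 299: {variances[-1]:.2e}")
```

It's a crude analogy (an LLM is not a Gaussian), but the mechanism is the same: a model trained on its own output inherits its predecessor's estimation errors and adds its own.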
The issue with your assertion is that people don't actually work in a similar way. Have you ever met someone who was clearly taught "garbage"?
I’m talking about LLMs, not about people.
I know you are, but the argument that an LLM doesn't understand context is incorrect. It's not human-level understanding, but it has been demonstrated that they do have some level of understanding.
And to be clear, I’m not talking about consciousness or sapience.