It’s not. It’s reflecting its training material. LLMs and other generative AI approaches lack a model of the world, which is obvious from the mistakes they make.
Tabula rasa, soaking up whatever spills onto it. It’s all training data and fallibility. Put it together and what have you got (bibbidi-bobbidi-boo)? You know what I’m saying?
You could say our brain does the same. It just trains in real time and has much better hardware.
What are we doing but applying things we’ve already learnt, encoded in our neurons? They aren’t called neural networks for nothing.
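To make that analogy concrete, here’s a minimal sketch (my own illustration, with made-up numbers, not anything from this thread) of a single artificial neuron: learned weights applied to new inputs. That’s the loose sense in which “applying things we’ve already learnt” maps onto these models.

def neuron(inputs, weights, bias):
    # weighted sum of inputs plus a bias, squashed through a step activation
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if s > 0 else 0.0

# the weights are the "already learnt" part; inference just applies them
print(neuron([0.5, 1.0], [0.8, -0.3], 0.1))  # 0.4 - 0.3 + 0.1 = 0.2 > 0 -> 1.0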