It’s all just weights and matrix multiplication and tokenization
See, none of these is statistics, as such.
Weights are maybe the closest, but they are supposed to represent the strength of a neural connection, an idea originally inspired by neurobiology.
Matrix multiplication is linear algebra and is encountered in lots of contexts.
Tokenization is a thing from NLP. It’s not what one would call a statistical method.
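To pin down where each of those three terms lives, here's a minimal numpy sketch with a made-up three-word vocabulary and random weights (nothing from any real model):

```python
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2}          # tokenization: text -> integer ids
token_id = vocab["cat"]

rng = np.random.default_rng(0)
E = rng.normal(size=(3, 4))                     # embedding table (weights)
W = rng.normal(size=(4, 3))                     # output projection (more weights)

h = E[token_id]                                 # look up the token's vector
logits = h @ W                                  # the matrix multiplication
probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> next-token distribution
```

The softmax line at the end is the one place a probability distribution appears; everything before it is plain linear algebra.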
So you can see where my advice comes from.
Certainly there is nothing here that implies any kind of averaging going on.
Why would averaging lead to repetition of stereotypes?
Anyway, it’s hard to say why LLMs output what they do. GPTisms may have to do with the system prompt, or they may result from the fine-tuning. Either way, they don’t seem very “internet average” to me.
The TLDR is that pathways between nodes corresponding to frequently seen patterns (stereotypical sentences) get strengthened more than others, so it becomes more likely that those pathways get activated over others when the model is given a prompt. These strengths correspond to probabilities.
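To make that concrete in the simplest way I can, here's a bigram count model (a deliberately crude stand-in with a made-up corpus; real LLMs strengthen weights via gradient descent, not by counting): each time a pattern is seen its pathway gets reinforced, and normalizing the strengths gives the probabilities.

```python
from collections import Counter, defaultdict

corpus = "the cat sat . the cat ran . the dog sat .".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                      # seeing a pattern strengthens it

def next_token_probs(prev):
    c = counts[prev]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}  # strengths -> probabilities

print(next_token_probs("cat"))   # {'sat': 0.5, 'ran': 0.5}
print(next_token_probs("the"))   # {'cat': ~0.67, 'dog': ~0.33}
```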
Have you seen how often they’ll sign a requested text with a name placeholder? Have you seen the typical grammar they use? The way they write is a hybridization of the most common types of text they have seen in training samples, weighted by occurrence (which is a statistical property).
It’s like how mixing dog breeds often results in something that doesn’t look exactly like any one breed but has features from every breed. GPT/LLM models mix in stuff like academic writing, redditisms and stackoverflowisms, quoraisms, linkedin-postings, etc. You get this specific dryish text full of hedging language and mixed formalisms, a certain answer structure, etc.
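As a toy picture of "weighted by occurrence" (the source proportions below are invented for illustration, not real training-mix figures), sampling stylistic habits in proportion to how often each source appears skews the result toward the most common sources:

```python
import random

source_share = {"academic": 0.3, "reddit": 0.3, "stackoverflow": 0.2,
                "quora": 0.1, "linkedin": 0.1}   # hypothetical corpus mix

habits = random.choices(list(source_share), weights=list(source_share.values()), k=10)
print(habits)   # mostly academic/reddit habits, with a sprinkle of the rest
```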