As junk web pages written by AI proliferate, the models that rely on that data will suffer.
Good.
AI making itself sick and worthless after flooding the internet with trash just gives me a warm glow.
Model degeneration is already a well-known phenomenon. The article explains what's going on well, so I won't go into detail, but note that this happens because the model does not understand what it is outputting: it's looking for patterns, not for the meaning conveyed by those patterns.
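A toy sketch of the mechanism (my own illustration, not from the article): model a "generation" as fitting a Gaussian to the previous generation's output and then sampling from the fit while discarding low-probability tails, a crude stand-in for a model that favours high-likelihood patterns. The truncation threshold (1.5 sigma) and sample sizes are arbitrary choices for the demo.

```python
# Toy model collapse: each generation fits a Gaussian to the previous
# generation's samples, then re-samples while dropping the tails.
import random
import statistics

random.seed(0)
N = 2000
data = [random.gauss(0.0, 1.0) for _ in range(N)]  # "organic" data

stdevs = []
for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    stdevs.append(sigma)
    # Next generation trains on samples from the fitted model, but only
    # the high-probability ones (within 1.5 sigma): tail loss.
    data = []
    while len(data) < N:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 1.5 * sigma:
            data.append(x)

print(f"stdev: gen 0 = {stdevs[0]:.3f}, gen 9 = {stdevs[-1]:.3f}")
```

The spread shrinks every generation, because each round throws away the tails of the distribution it is trying to reproduce. Real LLM collapse is more complicated, but the loss-of-tails picture is the same one the literature describes.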
Frankly, at this rate we might as well go with a neuro-symbolic approach.
The issue with your assertion is that people don't actually work in a similar way. Have you ever met someone who was clearly taught "garbage"?
The issue with your assertion is that people don't actually work in a similar way.
I’m talking about LLMs, not about people.
I know you are, but the argument that an LLM doesn't understand context is incorrect. It's not human-level understanding, but it's been demonstrated that they do have some level of understanding.
And to be clear, I’m not talking about consciousness or sapience.
I’d be very wary of extrapolating too much from this paper.
Past research along these lines found that a mix of synthetic and organic data was better than organic data alone. A caveat for all the research to date is that it uses shitty cheap models, where synthetic data causes significant performance degradation compared to SotA models; other research has found notable improvements in smaller models trained on synthetic data from SotA models.
Basically, this is only really saying that AI models of multiple types, at the capability level of a year or two ago, recursively trained with no additional organic data, will collapse.
It’s not representative of real world or emerging conditions.
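To make the organic-mixing point concrete, here's a toy sketch (my own, not from the paper): each generation fits a Gaussian to its training set, samples from it with the tails cut off (a stand-in for a model favouring likely outputs), and optionally mixes a fraction of fresh "organic" samples back in. The 1.5-sigma cutoff and the 50/50 mix ratio are arbitrary assumptions for the demo.

```python
# Toy comparison: purely recursive synthetic training vs. a 50/50
# synthetic/organic mix, over ten "generations".
import random
import statistics

random.seed(0)
N = 2000

def next_generation(data, organic_fraction):
    """Fit a Gaussian to `data`, sample from it with tail truncation,
    and mix back a share of fresh 'organic' N(0, 1) samples."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    n_organic = int(N * organic_fraction)
    fresh = [random.gauss(0.0, 1.0) for _ in range(n_organic)]
    synthetic = []
    while len(synthetic) < N - n_organic:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 1.5 * sigma:  # keep only likely outputs
            synthetic.append(x)
    return fresh + synthetic

pure = [random.gauss(0.0, 1.0) for _ in range(N)]
mixed = list(pure)
for _ in range(10):
    pure = next_generation(pure, organic_fraction=0.0)
    mixed = next_generation(mixed, organic_fraction=0.5)

print(f"purely synthetic stdev: {statistics.stdev(pure):.3f}")
print(f"50/50 mix stdev:        {statistics.stdev(mixed):.3f}")
```

In this toy setup the purely recursive chain collapses toward zero spread while the mixed chain stabilizes well away from it, which is the qualitative gap between the paper's setup and the conditions the comment above describes.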
interdasting