10 points
Thank you for testing that out.
My experience with AI is that it’s at a point where it can comprehend something like this very easily, and won’t be tricked.
I suspect that this can, however, pollute a model if it’s included as training data, especially if done regularly, as OP is suggesting.
4 points
If it were done with enough regularity to be a problem, one could just put an LLM like this in between to preprocess the data.
4 points
That doesn’t work: you can’t train models on another model’s output without degrading quality. At least not currently.
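The degradation described here is often called model collapse, and the core mechanism can be sketched with a toy simulation (my own illustration, not something from this thread): fit a Gaussian to a finite sample, draw a new sample from the fit, refit, and repeat. Each "generation" trains only on the previous generation's output, and the fitted spread steadily shrinks toward zero.

```python
import random
import statistics

def fit(samples):
    # Maximum-likelihood fit of a Gaussian: mean and population std dev
    return statistics.fmean(samples), statistics.pstdev(samples)

random.seed(0)
mu, sigma = 0.0, 1.0  # the "real" data distribution

for generation in range(100):
    # each model generation sees only 20 samples of the previous model's output
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = fit(samples)

# the finite-sample bias compounds: the fitted spread has collapsed
print(f"std dev after 100 generations: {sigma:.4f}")
```

The shrinkage comes from the MLE variance estimate being biased low on small samples; that small bias compounds multiplicatively across generations, which is a rough analogue of the quality loss the comment describes.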