Opinion article by Alexander Hanff, a computer scientist and privacy technologist who helped develop Europe’s GDPR (General Data Protection Regulation) and ePrivacy rules.
We cannot allow Big Tech to continue to ignore our fundamental human rights. Had such an approach been taken 25 years ago in relation to privacy and data protection, arguably we would not have the situation we have today, where some platforms routinely ignore their legal obligations to the detriment of society.
Legislators did not understand the impact of weak laws or weak enforcement 25 years ago, but we have enough hindsight now to ensure we don’t make the same mistakes moving forward. The time to regulate unlawful AI training is now, and we must learn from past mistakes to ensure we provide effective deterrents against, and consequences for, such ubiquitous law-breaking in the future.
They would actually still benefit from putting LLMs in the public domain, because they themselves would also get to use the data produced by others. Everyone takes losses but also gains under this idea, which is much better than the current model.
That’s like saying victims of deepfake porn benefit because they get to watch themselves having sex. Nope, not buying it.
Well, the better analogy would be that these victims would also be able to make deepfake porn of their enemies, or any other generated video that could compromise them, instead of the status quo where the victim can't generate anything while the criminal can mass-produce deepfake porn. Not really a happy world, but a better model, which was the point.