A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.
According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as “a convicted criminal who murdered two of his children and attempted to murder his third son,” a Noyb press release said.
Then again, it also mixed “clearly identifiable personal data”—such as the actual number and gender of Holmen’s children and the name of his hometown—with the “fake information.”
The made-up bullshit aside, this should be a pretty clear indicator of an actual GDPR breach.
Maybe he has an insta profile with the names of his kids in his bio.
How would that be a GDPR breach?
Maybe he has an insta profile with the names of his kids in his bio.
Irrelevant. The data being public does not make it up for grabs.
Per GDPR Art. 4(1): ‘Personal data’ means any information relating to an identified or identifiable natural person (‘data subject’);
They store his personal data without his permission.
also
Information that is inaccurately attributed to a specific individual, be it factually incorrect or information that in reality is related to another individual, is still considered personal data as it relates to that specific individual. If data are inaccurate to the point that no individual can be identified, then the information is not personal data.
Storing it badly does not make them exempt.
If you run a chatbot with integrated web search, it grabs that info the way a web crawler does; that does not mean the data is actually part of the AI model’s own “knowledge/statistics.”
Nobody stores the information in that case; it is only used temporarily to generate that specific output.
(You cannot use ChatGPT without web search on the ChatGPT domain; only if you self-host, or use a service like DDG.)
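To illustrate the point about transient use: a minimal sketch of how a search-augmented chatbot handles retrieved text. All function names here are hypothetical stand-ins, not any real OpenAI or search API; the point is only that retrieved snippets live in the request, not in the model's weights.

```python
def web_search(query):
    # Hypothetical stand-in for a live search API call.
    # A real system would fetch pages or snippets at request time.
    return [f"snippet about {query}"]

def generate_answer(prompt):
    # Stand-in for the LLM call; a real system would send the prompt
    # to the model here. The model's weights are not modified.
    return f"Answer based on: {prompt}"

def answer_with_search(user_query):
    snippets = web_search(user_query)  # fetched fresh for this request
    prompt = f"Context: {' '.join(snippets)}\nQuestion: {user_query}"
    reply = generate_answer(prompt)
    # snippets and prompt go out of scope when this function returns:
    # the retrieved data was only used transiently to build one output,
    # unless the operator separately logs or retains it.
    return reply
```

Whether such transient processing still counts as "processing personal data" under the GDPR is exactly the legal question being argued above; the sketch only shows the technical distinction between retrieval-at-request-time and data baked into the model.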