"ChatGPT hallucinating a "fake murderer and imprisonment" while including "real elements" of the Norwegian man's "personal life" allegedly violated "data accuracy" requirements of the General Data Protection Regulation (GDPR)...

As Holmen saw it, the longer the false information remained available, the more his reputation was at risk, and despite "tiny" disclaimers reminding ChatGPT users to verify outputs, there was no way to know how many people might have been exposed to the fake story and believed it was accurate.

"Adding a disclaimer that you do not comply with the law does not make the law go away," [data protection lawyer, Kleanthi Sardeli] said. "AI companies can also not just 'hide' false information from users while they internally still process false information. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage."

https://arstechnica.com/tech-policy/2025/03/chatgpt-falsely-claimed-a-dad-murdered-his-own-kids-complaint-says/