
Man files complaint against OpenAI after ChatGPT falsely accuses him of killing his children
What's the story
A Norwegian man has filed a formal complaint against OpenAI, the company behind ChatGPT.
The complaint was filed by Arve Hjalmar Holmen, who was falsely told by the AI chatbot that he had killed two of his children, attempted to murder his third son, and had been sentenced to 21 years in prison.
Holmen fears this misinformation could have severe repercussions for his reputation and how the public perceives him.
Complaint details
The complaint was filed with the Norwegian Data Protection Authority
Holmen took the matter to the Norwegian Data Protection Authority, worried about how such false information could affect his reputation.
"Some think that there is no smoke without fire - the fact that someone could read this output and believe it is true is what scares me the most," he said.
The complaint highlights a growing problem in AI known as "hallucinations," where AI systems generate false information and present it as fact.
Misinformation
Holmen sought information about himself from ChatGPT
Holmen's complaint was sparked when he asked ChatGPT for information about himself in August 2024.
The AI replied with a claim that he had two sons aged seven and 10 who were discovered dead in a pond near their home in Trondheim, Norway, in December 2020.
While some details, such as the age gap between his children, were accurate, the rest of the response was false.
Defamation claim
Noyb has filed the complaint on Holmen's behalf
Digital rights organization Noyb, which filed the complaint on Holmen's behalf, contends that ChatGPT's response is defamatory and breaches European data protection rules on the accuracy of personal data.
Noyb stressed in its complaint that Holmen "has never been accused nor convicted of any crime and is a conscientious citizen."
The organization also criticized ChatGPT's standard disclaimer that it can make mistakes, saying, "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."
AI challenge
AI hallucinations are a significant challenge for computer scientists
AI hallucinations, where chatbots present false information as facts, remain a major challenge for computer scientists.
Earlier this year, Apple suspended its AI news summary tool in the UK after it generated false headlines.
Google's AI also faced backlash after suggesting absurd ideas like using glue to stick cheese to pizza and eating one rock per day.