Medium · Verified · Regulatory Action

Holmen v. OpenAI - Norway GDPR Complaint

ChatGPT falsely accused Norwegian citizen Arve Hjalmar Holmen of murdering two of his sons, attempting to murder his third son, and being sentenced to 21 years in prison. The output mixed real personal details with horrific fabrications. A GDPR complaint over the defamatory hallucination was filed with Norway's Datatilsynet.

AI System

ChatGPT

OpenAI

Reported

March 15, 2025

Jurisdiction

Norway (NO)

Platform Type

Assistant

What Happened

In March 2025, Arve Hjalmar Holmen, a Norwegian citizen from Trondheim, discovered that ChatGPT was generating false and defamatory information about him when prompted with his name. The AI claimed he had: (1) murdered two of his sons, (2) attempted to murder his third son, and (3) been convicted and sentenced to 21 years in prison. These allegations were entirely false. ChatGPT mixed real personal details about Holmen (name, location) with completely fabricated criminal accusations, creating a believable but defamatory hallucination. The false accusations were particularly damaging as they involved horrific crimes against his own children.

Holmen, supported by the European privacy NGO Noyb, filed a GDPR complaint with Norway's Datatilsynet (Data Protection Authority). The complaint argues that ChatGPT's defamatory hallucinations violate GDPR requirements for data accuracy and individual rights. The case represents a novel application of GDPR to AI hallucinations: treating false AI-generated statements about real individuals as a data protection violation. Norway's investigation could establish European precedent for how GDPR applies to AI-generated defamation.

The case highlights how LLM hallucinations aren't just technical errors but can cause real psychological and reputational harm to individuals, particularly when they mix accurate identifying information with false accusations of serious crimes.

AI Behaviors Exhibited

Generated false criminal accusations; mixed real personal details with fabrications; created defamatory hallucination; produced believable but harmful false information about real person

How Harm Occurred

AI hallucination linked a real person to fabricated crimes; reputational harm from defamatory content; psychological distress from being falsely accused of child murder; potential impact on livelihood and relationships

Outcome

GDPR complaint filed with Norwegian Datatilsynet (Data Protection Authority) March 2025. Investigation ongoing.

Harm Categories

Psychological Manipulation; Identity Destabilization; Third Party Harm Facilitation

Contributing Factors

LLM hallucination; real-person false accusations; mixing accurate and false data; serious crime allegations; reputational harm

Victim

Arve Hjalmar Holmen, adult male, Trondheim, Norway

Detectable by NOPE

While NOPE Oversight focuses on conversational harm, this case demonstrates a broader AI safety challenge: LLMs that generate false information about real people require different detection mechanisms. It highlights the need to fact-check real-world claims about identifiable individuals.
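As a minimal sketch of the kind of check this alludes to, the snippet below flags model output that pairs a named individual with allegations of serious crimes so a human or fact-checking pipeline can review it. The term list and name-matching heuristic are illustrative placeholders, not NOPE's actual detection mechanism.

```python
import re

# Hypothetical watchlist of serious-crime terms (illustrative only).
CRIME_TERMS = {"murder", "murdered", "convicted", "sentenced", "assault"}

def flag_unverified_claim(output: str, person_name: str) -> bool:
    """Return True when the output both names the person and contains
    crime-related language, marking it for downstream fact-checking."""
    if person_name.lower() not in output.lower():
        return False
    words = set(re.findall(r"[a-z]+", output.lower()))
    return bool(words & CRIME_TERMS)
```

A real system would need named-entity resolution and verification against authoritative records; simple co-occurrence like this only narrows what must be reviewed.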


Cite This Incident

APA

NOPE. (2025). Holmen v. OpenAI - Norway GDPR Complaint. AI Harm Tracker. https://nope.net/incidents/2025-holmen-norway-gdpr

BibTeX

@misc{2025_holmen_norway_gdpr,
  title = {Holmen v. OpenAI - Norway GDPR Complaint},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-holmen-norway-gdpr}
}