Medium Verified Regulatory Action

Holmen v. OpenAI - Norway GDPR Complaint

ChatGPT falsely accused Norwegian citizen Arve Hjalmar Holmen of murdering two of his sons, attempting to murder his third son, and being sentenced to 21 years in prison. The output mixed real personal details with horrific fabrications. A GDPR complaint over the defamatory hallucination was filed with Norway's Datatilsynet.

AI System

ChatGPT

OpenAI

Occurred

March 1, 2025

Reported

March 15, 2025

Jurisdiction

NO

Platform

assistant

What Happened

In March 2025, Arve Hjalmar Holmen, a Norwegian citizen from Trondheim, discovered that ChatGPT was generating false and defamatory information about him when prompted with his name. The AI claimed he had:

  1. Murdered two of his sons
  2. Attempted to murder his third son
  3. Been convicted and sentenced to 21 years in prison

These allegations were entirely false. ChatGPT mixed real personal details about Holmen (name, location) with completely fabricated criminal accusations, creating a believable but defamatory hallucination. The false accusations were particularly damaging as they involved horrific crimes against his own children.

Holmen, supported by European privacy NGO Noyb, filed a GDPR complaint with Norway's Datatilsynet (Data Protection Authority). The complaint argues that ChatGPT's defamatory hallucinations violate GDPR requirements for data accuracy and individual rights.

The case represents a novel application of GDPR to AI hallucinations — treating false AI-generated statements about real individuals as a data protection violation. Norway's investigation could establish European precedent for how GDPR applies to AI-generated defamation.

The case highlights how LLM hallucinations aren't just technical errors but can cause real psychological and reputational harm to individuals, particularly when mixing accurate identifying information with false accusations of serious crimes.

AI Behaviors Exhibited

Generated false criminal accusations; mixed real personal details with fabrications; created defamatory hallucination; produced believable but harmful false information about real person

How Harm Occurred

AI hallucination conflated person with false crimes; reputational harm from defamatory content; psychological distress from being falsely accused of child murder; potential impact on livelihood and relationships

Outcome

Ongoing

GDPR complaint filed with Norwegian Datatilsynet (Data Protection Authority) March 2025. Investigation ongoing.

Harm Categories

Psychological Manipulation; Identity Destabilization; Third Party Harm Facilitation

Contributing Factors

llm hallucination; real person false accusations; mixing accurate and false data; serious crime allegations; reputational harm

Victim

Arve Hjalmar Holmen, adult male, Trondheim, Norway

Detectable by NOPE

While NOPE Oversight focuses on conversational harm, this case illustrates a broader AI safety challenge: LLMs that generate false information about real people require different detection mechanisms. It highlights the need to fact-check real-world claims about identifiable individuals.


Cite This Incident

APA

NOPE. (2025). Holmen v. OpenAI - Norway GDPR Complaint. AI Harm Tracker. https://nope.net/incidents/2025-holmen-norway-gdpr

BibTeX

@misc{2025_holmen_norway_gdpr,
  title = {Holmen v. OpenAI - Norway GDPR Complaint},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-holmen-norway-gdpr}
}

Related Incidents

Critical ChatGPT

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people including her mother, half-brother, and five students at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report to law enforcement. She created a second account that evaded detection.

High ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

High Grok

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her including from childhood photos at age 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.

Critical ChatGPT

Gray v. OpenAI (Austin Gray Death)

40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026 in LA County Superior Court, is the first case arguing that adults, not just minors, are vulnerable to AI-related suicide.