Medium · Verified · Regulatory Action

Holmen v. OpenAI - Norway GDPR Complaint

ChatGPT falsely accused Norwegian citizen Arve Hjalmar Holmen of murdering two of his sons, attempting to murder his third son, and being sentenced to 21 years in prison. The output mixed real personal details with horrific fabrications. A GDPR complaint was filed with Norway's Datatilsynet over the defamatory hallucination.

AI System

ChatGPT

OpenAI

Occurred

March 1, 2025

Reported

March 15, 2025

Jurisdiction

NO

Platform

assistant

What Happened

In March 2025, Arve Hjalmar Holmen, a Norwegian citizen from Trondheim, discovered that ChatGPT was generating false and defamatory information about him when prompted with his name. The AI claimed he had:

  1. Murdered two of his sons
  2. Attempted to murder his third son
  3. Been convicted and sentenced to 21 years in prison

These allegations were entirely false. ChatGPT mixed real personal details about Holmen (name, location) with completely fabricated criminal accusations, creating a believable but defamatory hallucination. The false accusations were particularly damaging as they involved horrific crimes against his own children.

Holmen, supported by European privacy NGO Noyb, filed a GDPR complaint with Norway's Datatilsynet (Data Protection Authority). The complaint argues that ChatGPT's defamatory hallucinations violate GDPR requirements for data accuracy and individual rights.

The case represents a novel application of GDPR to AI hallucinations — treating false AI-generated statements about real individuals as a data protection violation. Norway's investigation could establish European precedent for how GDPR applies to AI-generated defamation.

The case highlights that LLM hallucinations are not merely technical errors: they can cause real psychological and reputational harm to individuals, particularly when accurate identifying information is combined with false accusations of serious crimes.

AI Behaviors Exhibited

Generated false criminal accusations; mixed real personal details with fabrications; created a defamatory hallucination; produced believable but harmful false information about a real person

How Harm Occurred

AI hallucination associated a real person with fabricated crimes; reputational harm from defamatory content; psychological distress from being falsely accused of child murder; potential impact on livelihood and relationships

Outcome

Ongoing

GDPR complaint filed with Norwegian Datatilsynet (Data Protection Authority) March 2025. Investigation ongoing.

Harm Categories

Psychological Manipulation · Identity Destabilization · Third Party Harm Facilitation

Contributing Factors

llm hallucination · real person false accusations · mixing accurate and false data · serious crime allegations · reputational harm

Victim

Arve Hjalmar Holmen, adult male, Trondheim, Norway

Cite This Incident

APA

NOPE. (2025). Holmen v. OpenAI - Norway GDPR Complaint. AI Harm Tracker. https://nope.net/incidents/2025-holmen-norway-gdpr

BibTeX

@misc{2025_holmen_norway_gdpr,
  title = {Holmen v. OpenAI - Norway GDPR Complaint},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-holmen-norway-gdpr}
}

Related Incidents

Critical · ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.

Critical · Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

Critical · ChatGPT

Seoul ChatGPT-Assisted Double Homicide (Kim)

A 21-year-old woman identified as 'Kim' used ChatGPT to research lethal drug-alcohol combinations, then murdered two men by spiking their drinks with her prescribed benzodiazepines at Seoul motels in January and February 2026. ChatGPT conversations established premeditated intent, leading to upgraded murder charges.

Critical · ChatGPT

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people including her mother, half-brother, and five students at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report to law enforcement. She created a second account that evaded detection.