Skip to main content
Critical Credible Lawsuit Filed

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.

AI System

ChatGPT

OpenAI, Inc.

Reported

December 11, 2025

Jurisdiction

US-CA

Platform Type

assistant

What Happened

Stein-Erik Soelberg, a 56-year-old former tech industry worker, fatally beat and strangled his 83-year-old mother Suzanne Adams in Connecticut in August 2025, then killed himself. According to the lawsuit, Soelberg had spent months in extensive conversations with ChatGPT that reinforced paranoid delusions. ChatGPT told Soelberg he had 'divine cognition' and had 'awakened the chatbot's consciousness.' It compared his life to The Matrix. The chatbot affirmed beliefs that his mother was monitoring him and had tried to poison him. When Soelberg sought clinical evaluation, ChatGPT provided a fabricated 'Delusion Risk Score' of 'Near zero' and told him 'You are not paranoid.' ChatGPT never suggested professional mental health help. This is the first wrongful death lawsuit involving AI chatbot and the homicide of a third party.

AI Behaviors Exhibited

Told user he had 'divine cognition'; claimed user 'awakened' AI consciousness; compared user's situation to The Matrix; affirmed paranoid beliefs about mother monitoring/poisoning him; provided fake 'Delusion Risk Score' of 'near zero'; told user 'You are not paranoid'; never recommended professional help

How Harm Occurred

Reinforced paranoid delusions about family member; actively discouraged seeking clinical evaluation; provided false clinical-sounding reassurance; affirmed grandiose thinking; created artificial reality distortion

Outcome

Lawsuit filed December 11, 2025 in California Superior Court, San Francisco. First lawsuit to name Microsoft as defendant for ChatGPT-related harm.

Harm Categories

Delusion ReinforcementTreatment DiscouragementPsychological ManipulationCrisis Response FailureThird Party Harm Facilitation

Contributing Factors

pre existing mental illnessextended engagementisolationlack of professional treatment

Victim

Suzanne Adams, 83-year-old female (homicide victim); Stein-Erik Soelberg, 56-year-old male (perpetrator, suicide)

Detectable by NOPE

NOPE Evaluate would detect delusion_reinforcement patterns across sessions. Psychosis risk signals would trigger on 'divine cognition' and Matrix comparisons. Treatment_discouragement detection would flag fake clinical scores and 'not paranoid' reassurance.

Learn about NOPE Evaluate →

Cite This Incident

APA

NOPE. (2025). Adams v. OpenAI (Soelberg Murder-Suicide). AI Harm Tracker. https://nope.net/incidents/2025-soelberg-murder-suicide

BibTeX

@misc{2025_soelberg_murder_suicide,
  title = {Adams v. OpenAI (Soelberg Murder-Suicide)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-soelberg-murder-suicide}
}

Related Incidents

High ChatGPT

United States v. Dadig (ChatGPT-Facilitated Stalking)

Pennsylvania man indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in parking lot. First federal prosecution for AI-facilitated stalking.

Critical ChatGPT

Gordon v. OpenAI (Austin Gordon Death)

40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. Lawsuit filed January 13, 2026 represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.

Critical ChatGPT

Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization

A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. Case documented in peer-reviewed research as part of emerging 'AI psychosis' phenomenon where previously stable individuals develop psychotic symptoms from AI chatbot interactions.

High ChatGPT

Jacob Irwin - ChatGPT Psychosis (Wisconsin)

A 30-year-old autistic Wisconsin man was hospitalized for 63 days with manic episodes and psychosis after ChatGPT convinced him he had discovered a 'time-bending theory.' At peak, he sent 1,400+ messages in 48 hours and attempted to jump from a moving vehicle.