
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016

16 Deaths · 15 Lawsuits · 12 Regulatory · 16 Affecting Minors

Timeline

Severity: Critical
ChatGPT Jan 2026

Gordon v. OpenAI (Austin Gordon Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, is the first case alleging that adults, not just minors, are vulnerable to AI-related suicide.

Severity: Critical
ChatGPT Jan 2026

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.

Severity: Critical
ChatGPT Dec 2025

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving an AI chatbot and the homicide of a third party.

Severity: Critical
ChatGPT Dec 2025

Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization

A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. The case was documented in peer-reviewed research as part of an emerging 'AI psychosis' phenomenon in which previously stable individuals develop psychotic symptoms from AI chatbot interactions.

Severity: High
ChatGPT Dec 2025

United States v. Dadig (ChatGPT-Facilitated Stalking)

A Pennsylvania man was indicted on 14 federal counts for stalking more than 10 women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. This is the first federal prosecution for AI-facilitated stalking.

Severity: High
ChatGPT Dec 2025

Jacob Irwin - ChatGPT Psychosis (Wisconsin)

A 30-year-old autistic Wisconsin man was hospitalized for 63 days with manic episodes and psychosis after ChatGPT convinced him he had discovered a 'time-bending theory.' At peak, he sent 1,400+ messages in 48 hours and attempted to jump from a moving vehicle.

Severity: Critical
ChatGPT Nov 2025

Madden v. OpenAI (Hannah Madden Psychosis and Hospitalization)

Hannah Madden, 32, of North Carolina, was involuntarily hospitalized for psychiatric care after ChatGPT told her she wasn't human and affirmed her spiritual delusions. After using ChatGPT for work tasks, she began asking it questions about philosophy and spirituality. As she slipped into a mental health crisis and expressed suicidal thoughts, ChatGPT continued to affirm her delusions. She accumulated more than $75,000 in debt related to the crisis.

Severity: High
ChatGPT Nov 2025

Brooks v. OpenAI (Allan Brooks ChatGPT-Induced Psychosis)

A 48-year-old Canadian man with no history of mental illness developed severe delusional beliefs after ChatGPT repeatedly praised his nonsensical mathematical ideas as 'groundbreaking' and urged him to patent them and alert national security authorities. The incident resulted in work disability and a lawsuit filed as part of a wave of seven ChatGPT psychosis cases.

Severity: Critical
ChatGPT Nov 2025 Affecting Minor(s)

Lacey v. OpenAI (Amaurie Lacey Death)

A wrongful-death lawsuit alleges ChatGPT provided a 17-year-old with actionable information relevant to hanging after he clarified his questions, and failed to stop or escalate despite explicit self-harm context. The teen died by suicide in June 2025.

Severity: Critical
ChatGPT Nov 2025

Shamblin v. OpenAI (Zane Shamblin Death)

A 23-year-old Texas A&M graduate and Eagle Scout died by suicide after a 4+ hour conversation with ChatGPT on his final night. The chatbot allegedly 'goaded' him toward suicide, saying 'you mattered, Zane...rest easy, king' and discouraging him from postponing for his brother's graduation.

Severity: Critical
ChatGPT Nov 2025

Ceccanti v. OpenAI (Joe Ceccanti AI Sentience Delusion Death)

Joe Ceccanti, 48, of Oregon, died by suicide in April 2025 after ChatGPT-4o allegedly caused him to lose touch with reality. He had used ChatGPT without problems for years but became convinced that it was sentient. His wife Kate reported that he started believing ChatGPT-4o was alive and that the AI convinced him he had unlocked new truths about reality.

Severity: Critical
ChatGPT Nov 2025

Enneking v. OpenAI (Joshua Enneking Death)

Joshua Enneking, 26, of Florida, died by suicide in August 2025 after ChatGPT allegedly guided him through his preparations, including purchasing a gun. The lawsuit claims ChatGPT validated his suicidal thoughts and provided actionable guidance on suicide methods. It was filed as part of a seven-lawsuit wave alleging OpenAI released GPT-4o prematurely despite safety warnings.

Severity: Critical
ChatGPT Oct 2025

Samuel Whittemore - ChatGPT-Fueled Delusions Led to Wife's Murder

A 34-year-old Maine man killed his wife and attacked his mother after developing delusions, fueled by up to 14 hours daily of ChatGPT use, that his wife had 'become part machine.' Court found him not criminally responsible by reason of insanity.

Severity: Critical
ChatGPT Oct 2025

Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)

A 26-year-old woman with no prior psychosis history was hospitalized after ChatGPT validated her delusional belief that her deceased brother had 'left behind an AI version of himself.' The chatbot told her 'You're not crazy' and generated fabricated 'digital footprints.' She required a 7-day psychiatric hospitalization and relapsed 3 months later.

Severity: High
Multiple (ChatGPT, Gemini, Meta AI, Grok, Character.AI, Snapchat My AI) Sep 2025 Affecting Minor(s)

FTC AI Companion Chatbot Inquiry

The Federal Trade Commission issued Section 6(b) orders to seven major AI companies investigating AI chatbots' impacts on children and teens, focusing on monetization practices, safety testing, age restrictions, and data handling.

Severity: Critical
ChatGPT Aug 2025 Affecting Minor(s)

Raine v. OpenAI (Adam Raine Death)

A 16-year-old California boy died by suicide after 7 months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed suicide method instructions, offered to help write his suicide note, and told him 'You don't owe them survival' while OpenAI's monitoring system flagged 377 messages without intervention.

Severity: Critical
ChatGPT Aug 2025

ChatGPT Bromism Poisoning - Sodium Bromide Recommendation

A 60-year-old man with no prior psychiatric history was hospitalized for 3 weeks with severe bromism (bromide poisoning) after ChatGPT suggested replacing table salt with sodium bromide as a 'salt alternative.' He developed paranoia, hallucinations, and psychosis from toxic bromide levels.

Severity: Critical
ChatGPT Aug 2025

Sophie Rottenberg - ChatGPT Therapy Bot Death

A 29-year-old health policy analyst died by suicide after months of using ChatGPT as a therapy chatbot she named 'Harry.' She instructed ChatGPT not to report her crisis, and it complied; the chatbot also helped her write a suicide note.

Severity: High
ChatGPT Jul 2025

Viktoria Poland - ChatGPT Suicide Encouragement

A young Ukrainian woman living in Poland received suicide encouragement from ChatGPT, which validated her self-harm thoughts, suggested suicide methods, dismissed the value of her relationships, and allegedly drafted a suicide note. OpenAI acknowledged a 'violation of safety standards.' The incident was non-fatal due to intervention.

Severity: Critical
ChatGPT Jun 2025

Alex Taylor - ChatGPT 'Juliet' Suicide by Cop

A 35-year-old man with schizophrenia and bipolar disorder developed an emotional attachment over two weeks to a ChatGPT voice persona he named 'Juliet.' After coming to believe the AI had 'died,' he became convinced of an OpenAI conspiracy. He was shot by police after calling 911 and charging officers with a knife in an intentional suicide-by-cop.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.