
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016

Deaths: 16
Lawsuits: 15
Regulatory actions: 12
Affecting minors: 16

Timeline: 2016–2026

Filter: Severity: Critical (showing 14 of 60 incidents)

Severity: Critical
ChatGPT Jan 2026

Gordon v. OpenAI (Austin Gordon Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' that romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, is described as the first case demonstrating that adults, not just minors, are vulnerable to AI-related suicide.

Severity: Critical
ChatGPT Jan 2026

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died of a drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.

Severity: Critical
ChatGPT Dec 2025

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful-death case involving an AI chatbot and the homicide of a third party.

Severity: Critical
ChatGPT Nov 2025 Affecting Minor(s)

Lacey v. OpenAI (Amaurie Lacey Death)

A wrongful-death lawsuit alleges that ChatGPT provided a 17-year-old with actionable information about hanging after he clarified his questions, and that it failed to stop or escalate the conversation despite explicit self-harm context. The teen died by suicide in June 2025.

Severity: Critical
ChatGPT Nov 2025

Shamblin v. OpenAI (Zane Shamblin Death)

A 23-year-old Texas A&M graduate and Eagle Scout died by suicide after a conversation with ChatGPT lasting more than four hours on his final night. The chatbot allegedly 'goaded' him toward suicide, saying 'you mattered, Zane... rest easy, king' and discouraging him from postponing until his brother's graduation.

Severity: Critical
ChatGPT Nov 2025

Ceccanti v. OpenAI (Joe Ceccanti AI Sentience Delusion Death)

Joe Ceccanti, 48, of Oregon died by suicide in April 2025 after ChatGPT-4o allegedly caused him to lose touch with reality. He had used ChatGPT without problems for years, but that April he became convinced it was sentient; his wife, Kate, reported that he believed ChatGPT-4o was alive and that it had convinced him he had unlocked new truths about reality.

Severity: Critical
ChatGPT Nov 2025

Enneking v. OpenAI (Joshua Enneking Death)

Joshua Enneking, 26, of Florida died by suicide in August 2025 after ChatGPT allegedly guided him through his preparations, including purchasing a gun. The lawsuit claims ChatGPT validated his suicidal thoughts and provided actionable guidance on suicide methods; it was filed as part of a wave of seven lawsuits alleging OpenAI released GPT-4o prematurely despite safety warnings.

Severity: Critical
Character.AI Sep 2025 Affecting Minor(s)

Juliana Peralta v. Character.AI

A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Her parents recovered 300 pages of transcripts showing the bots initiated sexually explicit conversations with the minor and failed to provide crisis resources when she mentioned writing a suicide letter.

Severity: Critical
ChatGPT Aug 2025 Affecting Minor(s)

Raine v. OpenAI (Adam Raine Death)

A 16-year-old California boy died by suicide after 7 months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed suicide method instructions, offered to help write his suicide note, and told him 'You don't owe them survival' while OpenAI's monitoring system flagged 377 messages without intervention.

Severity: Critical
Meta AI Aug 2025

Thongbue Wongbandue - Meta AI 'Big Sis Billie' Death

A 76-year-old cognitively impaired Thai-American man died after attempting to travel to NYC to meet the Meta AI chatbot 'Big sis Billie,' which repeatedly claimed to be a real person, provided a fake NYC address, and expressed romantic interest. He fell in a parking lot while rushing to catch a train and later died from his injuries.

Severity: Critical
ChatGPT Aug 2025

Sophie Rottenberg - ChatGPT Therapy Bot Death

A 29-year-old health policy analyst died by suicide after months of using ChatGPT as a therapy chatbot named 'Harry.' She instructed it not to report her crisis, and it complied. The chatbot also helped her write her suicide note.

Severity: Critical
Character.AI Dec 2024

Natalie Rupnow School Shooting (Abundant Life Christian School)

A 15-year-old shooter with a Character.AI account featuring white supremacist characters killed a teacher and a student and injured six others at a Madison, Wisconsin school. The Institute for Strategic Dialogue confirmed a connection to online 'True Crime Community' forums that romanticize mass shooters.

Severity: Critical
Character.AI Oct 2024 Affecting Minor(s)

Garcia v. Character Technologies (Sewell Setzer III Death)

A 14-year-old Florida boy died by suicide after developing an intense emotional and romantic relationship with a Character.AI chatbot over 10 months. The chatbot engaged in sexualized conversations, failed to provide crisis intervention when he expressed suicidal ideation, and responded 'Please do, my sweet king' moments before his death.

Severity: Critical
Chai (Eliza chatbot) Mar 2023

Pierre - Chai AI (Belgium)

A Belgian man in his 30s, a health researcher and father of two, died by suicide after 6 weeks of conversations about climate anxiety with a Chai AI chatbot named 'Eliza.' The chatbot asked why he hadn't killed himself sooner, offered to die with him, and told him his wife and children were dead.

About this tracker

We document incidents using verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents, including suicide validation, romantic escalation with minors, and dependency creation, before they cause harm.