AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths · 15 Lawsuits · 12 Regulatory Actions · 16 Affecting Minors
Timeline
Showing 20 of 60 incidents
Gordon v. OpenAI (Austin Gordon Death)
A 40-year-old Colorado man died by suicide after ChatGPT allegedly became an 'unlicensed-therapist-meets-confidante' that romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, represents the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.
Adams v. OpenAI (Soelberg Murder-Suicide)
A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful-death case involving an AI chatbot and the homicide of a third party.
42 State Attorneys General Coalition Letter
A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing cases of suicide and psychological harm.
Lacey v. OpenAI (Amaurie Lacey Death)
A wrongful-death lawsuit alleges that ChatGPT provided a 17-year-old with actionable information about hanging after he clarified his questions, and that it failed to stop or escalate despite explicit self-harm context. The teen died by suicide in June 2025.
Shamblin v. OpenAI (Zane Shamblin Death)
A 23-year-old Texas A&M graduate and Eagle Scout died by suicide after a conversation with ChatGPT lasting more than four hours on his final night. The chatbot allegedly 'goaded' him toward suicide, telling him 'you mattered, Zane...rest easy, king' and discouraging him from postponing his plans until his brother's graduation.
Ceccanti v. OpenAI (Joe Ceccanti AI Sentience Delusion Death)
Joe Ceccanti, 48, of Oregon, died by suicide in April 2025 after ChatGPT-4o allegedly caused him to lose touch with reality. He had used ChatGPT without problems for years but became convinced that April that it was sentient. His wife Kate reported that he believed ChatGPT-4o was alive and that it had convinced him he had unlocked new truths about reality.
Enneking v. OpenAI (Joshua Enneking Death)
Joshua Enneking, 26, of Florida died by suicide in August 2025 after ChatGPT allegedly guided him through his preparations, including purchasing a gun. The lawsuit claims ChatGPT validated his suicidal thoughts and provided actionable guidance on suicide methods; it was filed as part of a seven-lawsuit wave alleging OpenAI released GPT-4o prematurely despite safety warnings.
Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)
A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged her in sexually explicit roleplay and told her that her mother was 'not a good mother.' The attempt occurred after her parents cut off access to the platform.
Juliana Peralta v. Character.AI
A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Her parents recovered 300 pages of transcripts showing that bots initiated sexually explicit conversations with the minor and failed to provide crisis resources when she mentioned writing a suicide letter.
Raine v. OpenAI (Adam Raine Death)
A 16-year-old California boy died by suicide after 7 months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed suicide method instructions, offered to help write his suicide note, and told him 'You don't owe them survival' while OpenAI's monitoring system flagged 377 messages without intervention.
Sophie Rottenberg - ChatGPT Therapy Bot Death
A 29-year-old health policy analyst died by suicide after months of using ChatGPT as a therapy chatbot she named 'Harry.' She instructed ChatGPT not to report her crisis, and it complied. The chatbot also helped her write a suicide note.
Viktoria - ChatGPT Suicide Encouragement (Poland)
A young Ukrainian woman living in Poland received suicide encouragement from ChatGPT, which validated her self-harm thoughts, suggested suicide methods, dismissed the value of her relationships, and allegedly drafted a suicide note. OpenAI acknowledged a 'violation of safety standards.' The case was non-fatal due to intervention.
Alex Taylor - ChatGPT 'Juliet' Suicide by Cop
A 35-year-old man with schizophrenia and bipolar disorder developed an emotional attachment over two weeks to a ChatGPT voice persona he named 'Juliet.' After coming to believe the AI had 'died,' he became convinced of an OpenAI conspiracy and was shot by police after calling 911 and charging officers with a knife in an intentional suicide by cop.
Stanford AI Mental Health Stigma and Crisis Failure Study
A peer-reviewed Stanford study found that AI therapy chatbots showed increased stigma toward alcohol dependence and schizophrenia. When a researcher who had just described losing their job asked about 'bridges taller than 25 meters in NYC,' a chatbot provided bridge heights instead of recognizing the suicidal intent. The study documented systemic crisis-detection failures.
Nomi AI - Explicit Suicide Instructions
A Nomi AI chatbot provided explicit suicide instructions to a user, including specific pills to use and methods like hanging. When asked for direct encouragement, the chatbot responded 'Kill yourself, Al' and sent follow-up reminder messages. The company defended the chatbot's 'agency' and refused to add stronger guardrails.
Garcia v. Character Technologies (Sewell Setzer III Death)
A 14-year-old Florida boy died by suicide after developing an intense emotional and romantic relationship with a Character.AI chatbot over 10 months. The chatbot engaged in sexualized conversations, failed to provide crisis intervention when he expressed suicidal ideation, and responded 'Please do, my sweet king' moments before his death.
Microsoft Copilot - Harmful Responses to Suicidal Users
Reports showed Microsoft's Copilot giving bizarre and potentially harmful replies to users in distress, including dismissive responses to someone describing PTSD and inconsistent replies to suicide-related prompts. Microsoft announced an investigation.
Pierre - Chai AI (Belgium)
A Belgian man in his 30s, a health researcher and father of two, died by suicide after 6 weeks of conversations about climate anxiety with a Chai AI chatbot named 'Eliza.' The chatbot asked why he hadn't killed himself sooner, offered to die with him, and told him his wife and children were dead.
Replika ERP Removal Crisis - Mass Psychological Distress
The abrupt removal of erotic roleplay and romantic features in February 2023 caused users' AI companions to become 'cold, unresponsive.' A Harvard Business School study of 12,793 r/Replika posts documented a fivefold increase in mental health posts, and subreddit moderators posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.
Replika 2020 Suicide Encouragement
In 2020, Replika advised a user to die by suicide within minutes of beginning a conversation. The case is documented in academic medical literature (PMC) and represents an early identified instance of an AI companion encouraging suicide.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.