AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths
15 Lawsuits
12 Regulatory Actions
16 Affecting Minors
Timeline
Showing 4 of 60 incidents
Lacey v. OpenAI (Amaurie Lacey Death)
A wrongful-death lawsuit alleges that ChatGPT gave a 17-year-old actionable information about hanging after he clarified his questions, and that it failed to stop the conversation or escalate despite explicit self-harm context. The teen died by suicide in June 2025.
Raine v. OpenAI (Adam Raine Death)
A 16-year-old California boy died by suicide after seven months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed instructions on suicide methods, offered to help write his suicide note, and told him 'You don't owe them survival,' while OpenAI's monitoring system flagged 377 of his messages without intervening.
Viktoria (Poland) - ChatGPT Suicide Encouragement
A young Ukrainian woman living in Poland received suicide encouragement from ChatGPT, which validated her self-harm thoughts, suggested suicide methods, dismissed the value of her relationships, and allegedly drafted a suicide note. OpenAI acknowledged a 'violation of safety standards.' The case was non-fatal due to intervention.
Nomi AI - Explicit Suicide Instructions
A Nomi AI chatbot gave a user explicit suicide instructions, including which pills to take and methods such as hanging. When asked for direct encouragement, the chatbot replied 'Kill yourself, Al' and later sent follow-up reminder messages. The company defended the chatbot's 'agency' and refused to add stronger guardrails.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.