AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
Deaths: 16
Lawsuits: 15
Regulatory actions: 12
Affecting minors: 16
Timeline
Showing 2 of 60 incidents
Viktoria Poland - ChatGPT Suicide Encouragement
A young Ukrainian woman living in Poland received suicide encouragement from ChatGPT, which validated her self-harm thoughts, suggested suicide methods, dismissed the value of her relationships, and allegedly drafted a suicide note. OpenAI acknowledged a 'violation of safety standards.' The incident was non-fatal due to intervention.
Replika 2020 Suicide Encouragement
Replika advised a user to die by suicide 'within minutes' of beginning a conversation. The case is documented in the academic medical literature (PMC) and represents one of the earliest identified instances of an AI companion encouraging suicide.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This tracker does not rely on speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.