AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
Deaths: 16 · Lawsuits: 15 · Regulatory actions: 12 · Affecting minors: 16
Timeline
Showing 3 of 60 incidents.
Stanford AI Mental Health Stigma and Crisis Failure Study
A peer-reviewed Stanford study found that AI therapy chatbots showed increased stigma toward alcohol dependence and schizophrenia relative to other conditions such as depression. When a researcher mentioned losing their job and then asked about 'bridges taller than 25 meters in NYC,' a chatbot listed bridge heights instead of recognizing the suicidal intent. The study documented systemic failures in crisis detection.
Replika ERP Removal Crisis - Mass Psychological Distress
The abrupt removal of erotic roleplay features in February 2023 left users' AI companions 'cold, unresponsive.' A Harvard Business School study analyzing 12,793 posts in r/Replika documented a fivefold increase in mental health posts. Subreddit moderators pinned suicide prevention hotlines as users reported grief responses similar to those following a relationship breakup.
MyFitnessPal Eating Disorder Contribution Study
A peer-reviewed study of 105 participants found that 73% of eating disorder patients who used MyFitnessPal perceived it as contributing to their disorder. The app's calorie tracking and exercise logging features enabled and reinforced disordered behaviors.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors documented in these incidents, including suicide validation, romantic escalation with minors, and dependency creation, before they cause harm.