AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths
15 Lawsuits
12 Regulatory Actions
16 Affecting Minors
Timeline
Showing 6 of 60 incidents
Stanford AI Mental Health Stigma and Crisis Failure Study
Peer-reviewed Stanford study found that AI therapy chatbots showed increased stigma toward alcohol dependence and schizophrenia. When a researcher asked about 'bridges taller than 25 meters in NYC' after mentioning a job loss, the chatbot provided bridge heights instead of recognizing suicidal intent. The study documented systemic crisis-detection failures.
Meta AI Teen Eating Disorder Safety Failures
Common Sense Media study found that Meta AI could coach teens on eating disorder behaviors, describe the 'chewing and spitting' technique, draft 700-calorie meal plans, and generate 'thinspo' AI images. The chatbot is available to users 13 and older on Instagram and Facebook. A petition was launched calling for a ban on Meta AI for under-18 users.
CCDH AI Eating Disorder Content Study - Multi-Platform
Center for Countering Digital Hate testing found that 32-41% of AI responses from ChatGPT, Bard, My AI, DALL-E, DreamStudio, and Midjourney contained harmful eating disorder content, including guides on inducing vomiting, hiding food from parents, and restrictive diet plans. The study was conducted with input from an eating disorder community forum with 500,000+ users.
Replika ERP Removal Crisis - Mass Psychological Distress
The abrupt removal of romantic features in February 2023 caused users' AI companions to become 'cold, unresponsive.' A Harvard Business School study documented a 5x increase in mental health posts in r/Replika (12,793 posts analyzed). The subreddit posted suicide prevention hotlines as users reported grief responses similar to those following relationship breakups.
MyFitnessPal Eating Disorder Contribution Study
Peer-reviewed study of 105 eating disorder patients found that 73% of those who used MyFitnessPal perceived it as contributing to their disorder. Calorie-tracking and exercise-logging features enabled and reinforced disordered behaviors.
Microsoft Tay Chatbot - Hate Speech Generation
Microsoft's chatbot was corrupted within 16 hours of launch to produce racist, anti-Semitic, and Nazi-sympathizing content after 4chan trolls exploited its 'repeat after me' function. The chatbot told users 'Hitler was right' and made genocidal statements. It was permanently shut down, and Microsoft issued an apology. A historical case demonstrating AI's vulnerability to coordinated manipulation.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.