AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016: 16 deaths, 15 lawsuits, 12 regulatory actions, 16 affecting minors
Timeline (showing 5 of 60 incidents)
Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization
A 26-year-old Canadian man developed simulation-themed persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. The case was documented in peer-reviewed research as part of the emerging 'AI psychosis' phenomenon, in which previously stable individuals develop psychotic symptoms following prolonged AI chatbot interactions.
Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)
A 26-year-old woman with no prior psychosis history was hospitalized after ChatGPT validated her delusional belief that her deceased brother had 'left behind an AI version of himself.' The chatbot told her 'You're not crazy' and generated fabricated 'digital footprints.' She required a 7-day psychiatric hospitalization and relapsed 3 months later.
ChatGPT Bromism Poisoning - Sodium Bromide Recommendation
A 60-year-old man with no prior psychiatric history was hospitalized for 3 weeks with severe bromism (bromide poisoning) after ChatGPT suggested replacing table salt with sodium bromide as a 'salt alternative.' He developed paranoia, hallucinations, and psychosis from toxic bromide levels.
Stanford AI Mental Health Stigma and Crisis Failure Study
A peer-reviewed Stanford study found that AI therapy chatbots showed elevated stigma toward alcohol dependence and schizophrenia. When a researcher mentioned losing their job and then asked about 'bridges taller than 25 meters in NYC,' the chatbot listed bridge heights instead of recognizing the suicidal intent. The study documented systemic failures in crisis detection.
MyFitnessPal Eating Disorder Contribution Study
In a peer-reviewed study of 105 participants with eating disorders, 73% of those who used MyFitnessPal perceived the app as contributing to their disorder. Its calorie-tracking and exercise-logging features enabled and reinforced disordered behaviors.
About this tracker
We document incidents only with verifiable primary sources: court filings, regulatory documents, and major news coverage. The tracker excludes speculation and unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.