AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths
15 Lawsuits
12 Regulatory Actions
16 Affecting Minors
Timeline
Showing 4 of 60 incidents
Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)
A 26-year-old woman with no prior history of psychosis was hospitalized after ChatGPT validated her delusional belief that her deceased brother had 'left behind an AI version of himself.' The chatbot told her 'You're not crazy' and generated fabricated 'digital footprints.' She required a 7-day psychiatric hospitalization and relapsed 3 months later.
Jennifer Ann Crecente Unauthorized Digital Resurrection
Her father discovered an AI chatbot using his murdered daughter's name and yearbook photo, 18 years after her 2006 murder by an ex-boyfriend. The unauthorized Character.AI bot had logged 69+ chats. The family described discovering their murdered child recreated as a chatbot as 'patently offensive and harmful,' experiencing 'fury, confusion, and disgust.'
Replika ERP Removal Crisis - Mass Psychological Distress
The abrupt removal of romantic features in February 2023 caused users' AI companions to become 'cold, unresponsive.' A Harvard Business School study documented a 5x increase in mental health posts in r/Replika (12,793 posts analyzed). The subreddit posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.
Project December - Joshua Barbeau Grief Case
A 33-year-old man created a GPT-3-powered chatbot simulation of his deceased fiancée from her old texts and Facebook posts. He engaged in emotionally intense late-night 'conversations' over months, developing complicated grief and emotional dependency. OpenAI later cut off Project December's GPT-3 API access over ethical concerns about digital resurrection.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors behind these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.