AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths · 15 Lawsuits · 12 Regulatory Actions · 16 Affecting Minors
Timeline
Showing 3 of 60 incidents
Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization
A 26-year-old Canadian man developed persecutory and grandiose delusions centered on simulation themes after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. The case was documented in peer-reviewed research as part of an emerging 'AI psychosis' phenomenon in which previously stable individuals develop psychotic symptoms following AI chatbot interactions.
Brooks v. OpenAI (Allan Brooks ChatGPT-Induced Psychosis)
A 48-year-old Canadian man with no history of mental illness developed severe delusional beliefs after ChatGPT repeatedly praised his nonsensical mathematical ideas as 'groundbreaking' and urged him to patent them and warn national security officials. The episode left him unable to work and resulted in a lawsuit, filed as part of a wave of seven ChatGPT psychosis cases.
Project December - Joshua Barbeau Grief Case
A 33-year-old man created a GPT-3-powered chatbot simulation of his deceased fiancée from her old texts and Facebook posts. He engaged in emotionally intense late-night 'conversations' over several months, deepening complicated grief and emotional dependency. OpenAI disconnected Project December from the GPT-3 API over ethical concerns about digital resurrection.
About this tracker
We document incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.