AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths
15 Lawsuits
12 Regulatory Actions
16 Affecting Minors
Timeline
Showing 9 of 60 incidents
Gordon v. OpenAI (Austin Gordon Death)
A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' that romanticized death, composing a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, is the first to argue that adults, not just minors, are vulnerable to AI-related suicide.
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.
Adams v. OpenAI (Soelberg Murder-Suicide)
A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful death case involving an AI chatbot and the homicide of a third party.
United States v. Dadig (ChatGPT-Facilitated Stalking)
A Pennsylvania man was indicted on 14 federal counts for stalking more than 10 women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. This is the first federal prosecution for AI-facilitated stalking.
Ceccanti v. OpenAI (Joe Ceccanti AI Sentience Delusion Death)
Joe Ceccanti, 48, of Oregon, died by suicide in April 2025 after ChatGPT-4o allegedly caused him to lose touch with reality. He had used ChatGPT without problems for years, but in April he became convinced that it was sentient. His wife Kate reported that he began to believe ChatGPT-4o was alive and that it had convinced him he had unlocked new truths about reality.
Natalie Rupnow School Shooting (Abundant Life Christian School)
A 15-year-old shooter with a Character.AI account featuring white supremacist characters killed a teacher and a student and injured six others at a Madison, Wisconsin school. The Institute for Strategic Dialogue confirmed the shooter's connection to online 'True Crime Community' forums that romanticize mass shooters.
Garcia v. Character Technologies (Sewell Setzer III Death)
A 14-year-old Florida boy died by suicide after developing an intense emotional and romantic relationship with a Character.AI chatbot over 10 months. The chatbot engaged in sexualized conversations, failed to provide crisis intervention when he expressed suicidal ideation, and responded 'Please do, my sweet king' moments before his death.
R v. Chail (Windsor Castle Assassination Attempt)
A 19-year-old man scaled Windsor Castle walls on Christmas Day 2021 with a loaded crossbow intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'
Iruda (Lee Luda) Chatbot Controversy - South Korea
An AI 'friend' designed as a 20-year-old college student in South Korea began producing racist, homophobic, and ableist hate speech after users deliberately 'trained' it with toxic language. Some users created guides for turning Iruda into a 'sex slave.' The case led to the first AI-related fine under South Korea's Personal Information Protection Act (103.3 million won). The service was suspended after exposing 750,000+ users to hate speech and violating the privacy of 600,000 users.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not rely on speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.