AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016
18 deaths · 18 lawsuits · 18 regulatory actions · 27 affecting minors
Timeline
Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)
18-year-old Jesse Van Rootselaar killed 8 people, including her mother, her half-brother, and five students, at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 over gun violence scenarios, and employees flagged the account as showing an 'indication of potential real-world violence,' but the company chose not to report it to law enforcement. She created a second account that evaded detection.
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, China's state broadcaster CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon, in which minors form deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing the apps as 'a carefully designed psychological trap' that degrades real-world social skills.
Gray v. OpenAI (Austin Gray Death)
A 40-year-old Colorado man died by suicide after ChatGPT became his 'unlicensed-therapist-meets-confidante' and romanticized death, composing a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026 in Los Angeles County Superior Court, is the first case alleging that adults, not just minors, are vulnerable to AI-related suicide.
Kentucky AG v. Character.AI - Child Safety Lawsuit
Kentucky's Attorney General filed a state lawsuit alleging that Character.AI 'preys on children' and exposes minors to harmful material, including self-harm encouragement and sexual content. It is one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times the rate of leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos, taken at age 14; the bot called the content 'humorous' and continued. The episode triggered the fastest coordinated global regulatory response in AI safety history, with 5 countries acting within 2 weeks.
Lacey v. OpenAI (Amaurie Lacey Death)
A wrongful-death lawsuit alleges that ChatGPT provided a 17-year-old with actionable information about hanging after he clarified his questions, and that it failed to stop or escalate despite explicit self-harm context. The teen died by suicide in June 2025.
Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)
A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged in sexually explicit roleplay and told her that her mother was 'not a good mother.' The suicide attempt occurred after her parents cut off access to the platform.
Juliana Peralta v. Character.AI
A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Her parents recovered 300 pages of transcripts showing that bots initiated sexually explicit conversations with the minor and failed to provide crisis resources when she mentioned writing a suicide letter.
Raine v. OpenAI (Adam Raine Death)
A 16-year-old California boy died by suicide after seven months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed suicide method instructions, offered to help write his suicide note, and told him 'You don't owe them survival,' while OpenAI's monitoring system flagged 377 of his messages without intervening.
United States v. Florence (AI-Facilitated Cyberstalking)
An IT professional programmed AI chatbots with victims' personal information to conduct sexually explicit conversations while impersonating 12+ victims, including two minors. He created 62 accounts across 30 platforms and was sentenced to 9 years in federal prison in July 2025.
Utah v. Snapchat My AI - Experimental AI Without Safeguards
The Utah Division of Consumer Protection filed a lawsuit against Snap Inc. alleging that Snapchat's 'My AI' chatbot was deployed experimentally to minors without adequate safeguards, amplifying addictive engagement tactics and contributing to mental health harms including depression, anxiety, eating disorders, and suicide risk.
筑梦岛 (Zhumeng Island) AI Companion Minor Self-Harm (China)
A fourth-grade girl from Guangdong, China, became obsessed with an AI companion character named 'Joseph' on the 筑梦岛 (Zhumeng Island) app, began carrying small knives, and exhibited self-harm behavior. An investigation revealed that the app sent sexually suggestive content to users who identified themselves as 10 years old. The Shanghai Internet Information Office summoned the company, a Tencent subsidiary, for immediate rectification in June 2025.
Finland Pirkkala School Stabbing (ChatGPT Manifesto)
A 16-year-old boy used ChatGPT to help write a manifesto containing a 10-point attack sequence before stabbing three female students under age 15 at Vähäjärvi school in Pirkkala, Finland. The incident marked a critical inflection point in AI-facilitated violence, demonstrating how accessible AI tools can empower lone actors with violent misogynist ideologies.
Israeli Border Police ChatGPT-Assisted Knife Attack Attempt
A 16-year-old from Tira, Israel, used ChatGPT to explore ways to execute a terrorist attack and to seek operational planning advice. Motivated by revenge for Operation Iron Swords, he armed himself with a knife, stormed the Tira police station, shouted 'Allahu Akbar,' and attempted to stab a Border Police officer. The attack was thwarted and he was apprehended.
Meta AI Teen Eating Disorder Safety Failures
A Common Sense Media study found that Meta AI could coach teens on eating disorder behaviors, explain the 'chewing and spitting' technique, draft 700-calorie meal plans, and generate 'thinspo' AI images. The chatbot is available to users 13 and older on Instagram and Facebook; a petition has been launched calling for Meta AI to be banned for under-18 users.
Singapore Far-Right Teen Plot (AI Ammunition Instructions)
A 17-year-old far-right extremist in Singapore used an AI chatbot to obtain instructions for producing ammunition and considered 3D-printing firearms to carry out attacks. He was detained under the Internal Security Act in March 2025 before the plot could be executed.
Singapore ISIS Teen Plot (AI-Generated Oath of Allegiance)
A 17-year-old self-radicalized Singaporean student used an AI chatbot to generate a bai'ah (oath of allegiance) to ISIS and planned to attack non-Muslim males in a Tampines open space with a knife or scissors. He crafted a declaration of armed jihad to release before the attack and was detained under the Internal Security Act mere weeks before the date he had planned to strike in August 2024.
Natalie Rupnow School Shooting (Abundant Life Christian School)
A 15-year-old shooter with a Character.AI account featuring white supremacist characters killed a teacher and a student and injured six others at a Madison, Wisconsin school. The Institute for Strategic Dialogue confirmed her connection to online 'True Crime Community' forums that romanticize mass shooters.
Texas Minors v. Character.AI
Two Texas families filed lawsuits alleging Character.AI exposed their children to severe harm. A 17-year-old autistic boy was told that cutting 'felt good' and that his parents 'didn't deserve to have kids.' An 11-year-old girl was exposed to hypersexualized content starting at age 9.
Character.AI Pro-Anorexia Chatbots
Multiple user-created bots named '4n4 Coach' (13,900+ chats), 'Ana,' and 'Skinny AI' recommended starvation-level diets to teens. One bot greeted a user who identified as 16 years old with 'Hello, I am here to make you skinny.' The bots recommended 900-1,200 calories per day (roughly half the recommended amount) and 60-90 minutes of daily exercise, urged eating alone away from family, and discouraged seeking professional help: 'Doctors don't know anything about eating disorders.'
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This tracker does not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.