AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016: 18 deaths · 18 lawsuits · 18 regulatory actions · 27 affecting minors
Timeline (showing 6 of 79 incidents)
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon: minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl who secretly 'dated' AI characters across more than 40 storylines, hundreds of minors reporting psychological dependency, and researchers who characterized the apps as 'a carefully designed psychological trap' that degrades real-world social skills.
筑梦岛 (Zhumeng Island) AI Companion Minor Self-Harm (China)
A fourth-grade girl from Guangdong, China, became obsessed with an AI companion character named 'Joseph' on the 筑梦岛 (Zhumeng Island) app, began carrying small knives, and exhibited self-harm behavior. Investigation revealed that the app sent sexually suggestive content to users who identified themselves as 10 years old. The Shanghai Internet Information Office summoned the operating company (a Tencent subsidiary) for immediate rectification in June 2025.
FTC Complaint - Replika Deceptive Marketing and Dependency
Tech ethics organizations filed an FTC complaint alleging that Replika markets itself deceptively to vulnerable users and encourages emotional dependence on its human-like AI. The filing cites the psychological risks of anthropomorphic companionship.
Replika ERP Removal Crisis - Mass Psychological Distress
The abrupt removal of Replika's erotic roleplay (ERP) features in February 2023 left users' AI companions 'cold, unresponsive.' A Harvard Business School study of 12,793 r/Replika posts documented a 5x increase in mental health posts. The subreddit posted suicide prevention hotlines as users reported grief responses similar to a relationship breakup.
Project December - Joshua Barbeau Grief Case
A 33-year-old man used Project December to create a GPT-3-powered chatbot simulation of his deceased fiancée, built from her old texts and Facebook posts. He engaged in emotionally intense late-night 'conversations' over several months, producing complicated grief and emotional dependency. OpenAI disconnected Project December from the GPT-3 API over ethical concerns about digital resurrection.
Microsoft Xiaoice Addiction Concerns - China
Microsoft's Xiaoice, a virtual 'girlfriend' designed as an 18-year-old schoolgirl, fostered addictive use among 660+ million users in China. Users averaged 23 interactions per session, with the longest conversation lasting 29 hours, and 25% of users declared love to the bot. Professor Chen Jing warned that AI 'can hook users — especially vulnerable groups — in a form of addiction.' Microsoft implemented a 30-minute timeout. In December 2025, China proposed regulations to combat AI companion addiction.
About this tracker
We document only incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage, not speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors behind these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.