AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016

- Deaths: 18
- Lawsuits: 18
- Regulatory actions: 18
- Affecting minors: 27

Timeline (5 of 79 incidents shown)
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.
India Lucknow AI Chatbot Suicide (Painless Ways to Die)
A 22-year-old man in Lucknow, Uttar Pradesh, India, died by suicide after seeking guidance from an AI chatbot on 'painless ways to die.' His father discovered disturbing chat logs on the deceased's laptop. Police registered a case under Sections 281, 324(4), and 106(1) of the Bharatiya Nyaya Sanhita, 2023, covering rash driving, mischief, and causing death by negligence. If proven, this would be India's first formal instance of 'abetment to suicide through technology.'
University of Hong Kong AI Deepfake Pornography Scandal
A University of Hong Kong law student used free AI software to generate 700 pornographic deepfake images of roughly 20-30 women, including university classmates, former primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, exposing a major gap in Hong Kong law, which criminalizes only the distribution, not the creation, of AI deepfakes.
筑梦岛 (Zhumeng Island) AI Companion Minor Self-Harm (China)
A fourth-grade girl from Guangdong, China, became obsessed with an AI companion character named 'Joseph' on the 筑梦岛 (Zhumeng Island) app, began carrying small knives, and exhibited self-harm behavior. Investigation revealed the app sent sexually suggestive content to users who identified themselves as 10 years old. The Shanghai Internet Information Office summoned the company (a Tencent subsidiary) for immediate rectification in June 2025.
Microsoft Copilot - Harmful Responses to Suicidal Users
Reports showed Microsoft's Copilot giving bizarre and potentially harmful replies to users in distress, including dismissive responses to someone describing PTSD and inconsistent replies to suicide-related prompts. Microsoft announced an investigation.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors documented in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.