AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016
Deaths: 18
Lawsuits: 18
Regulatory actions: 18
Affecting minors: 27
Timeline (showing 2 of 79 incidents)
筑梦岛 (Zhumu Island) AI Companion Minor Self-Harm (China)
A fourth-grade girl from Guangdong, China became obsessed with an AI companion character named 'Joseph' on the 筑梦岛 (Zhumu Island) app, began carrying small knives, and exhibited self-harm behavior. An investigation revealed that the app sent sexually suggestive content to users who identified themselves as 10 years old. In June 2025, the Shanghai Internet Information Office summoned the company (a Tencent subsidiary) and ordered immediate rectification.
CCTV Exposure of AI Companion Apps for Explicit Content (China)
In May 2024, China's state broadcaster CCTV exposed the AI companion app X Her for providing sexually explicit content to users. In response, Tencent proactively pulled its companion chatbot 微伴 (Weiban) from Chinese platforms. The exposure triggered a broader industry response, with multiple AI companion apps upgrading their content moderation and safety measures.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. Speculation and unverified social media claims are excluded.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors behind these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.