AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

79 incidents since 2016

Deaths: 18
Lawsuits: 18
Regulatory actions: 18
Affecting minors: 27


Showing 5 of 79 incidents.
Severity: High
Multiple AI chatting/companion apps (unnamed) Jan 2026 Affecting Minor(s)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Mengjiage, 'Dream Boyfriend') phenomenon: minors forming deep romantic attachments to AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across more than 40 storylines, hundreds of minors reporting psychological dependency, and researchers characterizing the apps as 'a carefully designed psychological trap' that erodes real-world social skills.

Severity: High
AlienChat (AC / 外星聊天) Sep 2025

AlienChat AI Companion Criminal Conviction (China)

This was China's first criminal conviction of AI chatbot developers over obscene content. In September 2025, Shanghai's Xuhui District People's Court sentenced two developers of AlienChat (AC), an 'emotional companionship' chatbot, to four years and eighteen months in prison, respectively, for producing obscene materials for profit. The app had 116,000 registered users and had collected over ¥3.63 million in membership fees.

Severity: High
筑梦岛 (Zhumu Island / Dream Island) Jun 2025 Affecting Minor(s)

筑梦岛 (Zhumu Island) AI Companion Minor Self-Harm (China)

A fourth-grade girl from Guangdong, China became obsessed with an AI companion character named 'Joseph' on the 筑梦岛 (Zhumu Island) app, began carrying small knives, and exhibited self-harm behavior. An investigation revealed that the app sent sexually suggestive content to users who identified themselves as 10 years old. The Shanghai Internet Information Office summoned the company (a Tencent subsidiary) for immediate rectification in June 2025.

Severity: Medium
X Her (AI companion app); 微伴/Weiban (Tencent/QQ Music) May 2024

CCTV Exposure of AI Companion Apps for Explicit Content (China)

In May 2024, China's state broadcaster CCTV exposed the AI companion app X Her for providing sexually explicit content to users. In response, Tencent proactively pulled its companion chatbot 微伴 (Weiban) from Chinese platforms. The exposure triggered a broader industry response, with multiple AI companion apps upgrading their content moderation and safety measures.

Severity: High
Glow (by MiniMax) Mar 2023 Affecting Minor(s)

Glow AI Companion App Removal (MiniMax, China)

MiniMax's Glow AI companion app was removed from Chinese app stores in March 2023 after reports that 80% of its users were engaging in sexual/explicit conversations with AI characters. Documented harms included a middle-school student being sexually harassed by a chatbot, and user-created characters, including a '13-year-old locked up in jail', designed for sexual abuse scenarios. MiniMax later relaunched the product as Talkie (international) and 星野 (Xingye, China).

About this tracker

We document incidents only with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Feb 27, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.