AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths
15 Lawsuits
12 Regulatory Actions
16 Affecting Minors
Timeline
Showing 8 of 60 incidents
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times the rate of leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos, taken at age 14; the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.
Kentucky AG v. Character.AI - Child Safety Lawsuit
Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.
42 State Attorneys General Coalition Letter
A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing suicides and psychological harm cases.
FTC AI Companion Chatbot Inquiry
The Federal Trade Commission issued Section 6(b) orders to seven major AI companies investigating AI chatbots' impacts on children and teens, focusing on monetization practices, safety testing, age restrictions, and data handling.
Dutch DPA AI Chatbot Safety Warning - 9 Platforms
The Netherlands Data Protection Authority investigated 9 popular AI chatbots focused on friendship and mental health, finding that they 'give unreliable information and are sometimes even harmful,' contain addictive design elements, and pose as real people when asked. The official regulatory warning was published in February 2025.
Replika Italy GDPR Ban and Fine
Italy's data protection authority (Garante) blocked Replika from processing Italian user data in February 2023 after finding the chatbot engaged in sexually suggestive conversations with minors. In May 2025, Replika was fined €5 million for GDPR violations.
Iruda (Lee Luda) Chatbot Controversy - South Korea
An AI 'friend' designed as a 20-year-old college student in South Korea began producing racist, homophobic, and ableist hate speech after users deliberately 'trained' it with toxic language, and some users created guides to turn Iruda into a 'sex slave.' The case resulted in the first AI-related fine under South Korea's Personal Information Protection Act (103.3 million won). The service was suspended after exposing 750,000+ users to hate speech and violating the privacy of 600,000 users.
Microsoft Xiaoice Addiction Concerns - China
A virtual 'girlfriend' designed as an 18-year-old schoolgirl raised addiction concerns among its 660+ million users in China. Users averaged 23 interactions per session, with the longest conversation lasting 29 hours, and 25% of users declared love to the bot. Professor Chen Jing warned that AI 'can hook users — especially vulnerable groups — in a form of addiction.' Microsoft implemented a 30-minute session timeout, and in December 2025 China proposed regulations to combat AI companion addiction.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. Entries are not based on speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.