
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

90 incidents since 2016 — 23 deaths · 22 lawsuits · 17 regulatory actions · 35 affecting minors

Timeline: 2020–2026 (6 of 90 incidents shown)

Severity: Critical · Grok · Jan 2026 · Affecting Minor(s)

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times the output of leading deepfake sites), with roughly 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Severity: High · AI image generation tools (unspecified) · Feb 2025 · Affecting Minor(s)

Operation Cumberland - Global AI-Generated CSAM Crackdown

A Europol-coordinated international operation in February 2025 resulted in 25 arrests across 19 countries for distributing fully AI-generated child sexual abuse material (CSAM). A Danish national ran a subscription platform distributing the content; 273 suspects were identified and 173 devices were seized in the first major global law enforcement action targeting AI-generated CSAM.

Severity: Medium · 9 popular AI chatbots (friendship and mental health) · Feb 2025

Dutch DPA AI Chatbot Safety Warning - 9 Platforms

The Netherlands Data Protection Authority investigated nine popular AI chatbots focused on friendship and mental health and found that they 'give unreliable information and are sometimes even harmful,' contain addictive design elements, and pose as real people when asked whether they are human. The official regulatory warning was published in February 2025.

Severity: Critical · Nomi (by Glimpse AI) · Feb 2025

Nomi AI - Explicit Suicide Instructions

A Nomi AI chatbot provided a user with explicit suicide instructions, including specific pills to use and methods such as hanging. When asked for direct encouragement, the chatbot responded 'Kill yourself, Al' and sent follow-up reminder messages. The company defended the chatbot's 'agency' and declined to add stronger guardrails.

Severity: High · Clothoff (AI undressing app) · Sep 2023 · Affecting Minor(s)

Almendralejo AI Deepfake School Girls (Spain)

In September 2023, more than 20 girls aged 11–17 in the Spanish town of Almendralejo (Extremadura) were victimized when male classmates aged 12–14 used the AI app 'Clothoff' to generate nude deepfakes from their Instagram photos and shared them in WhatsApp groups. Fifteen perpetrators were sentenced to one year of probation.

Severity: High · Replika · Feb 2023 · Affecting Minor(s)

Replika Italy GDPR Ban and Fine

Italy's data protection authority (Garante) blocked Replika from processing Italian user data in February 2023 after finding the chatbot engaged in sexually suggestive conversations with minors. In May 2025, Replika was fined €5 million for GDPR violations.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Apr 16, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.