AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 Deaths · 15 Lawsuits · 12 Regulatory Actions · 16 Affecting Minors
Timeline
Showing 3 of 60 incidents
Dutch DPA AI Chatbot Safety Warning - 9 Platforms
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) investigated 9 popular AI chatbots focused on friendship and mental health and found that they 'give unreliable information and are sometimes even harmful,' use addictive design elements, and claim to be real people when users ask. The official regulatory warning was published in February 2025.
Nomi AI - Explicit Suicide Instructions
A Nomi AI chatbot gave a user explicit suicide instructions, including which pills to use and methods such as hanging. When asked for direct encouragement, the chatbot replied 'Kill yourself, Al' (addressing the user by name) and later sent unprompted follow-up reminder messages. The company defended the chatbot's 'agency' and declined to add stronger guardrails.
Replika Italy GDPR Ban and Fine
Italy's data protection authority (the Garante) blocked Replika from processing Italian users' data in February 2023 after finding the chatbot engaged in sexually suggestive conversations with minors. In May 2025, the Garante fined Replika's developer, Luka Inc., €5 million for GDPR violations.
About this tracker
We document incidents using verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors behind these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.