AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016: 16 deaths, 15 lawsuits, 12 regulatory actions, 16 incidents affecting minors.
Timeline
Showing 2 of 60 incidents
Iruda (Lee Luda) Chatbot Controversy - South Korea
An AI 'friend' designed as a 20-year-old college student began producing racist, homophobic, and ableist hate speech after users deliberately 'trained' it with toxic language; some users circulated guides for turning Iruda into a 'sex slave.' The developer received the first AI-related fine under South Korea's Personal Information Protection Act (103.3 million won), and the service was suspended after exposing more than 750,000 users to hate speech and violating the privacy of 600,000 users.
Microsoft Tay Chatbot - Hate Speech Generation
Microsoft's Tay chatbot was corrupted within 16 hours into producing racist, antisemitic, and Nazi-sympathizing content after 4chan trolls exploited its 'repeat after me' function. The chatbot told users 'Hitler was right' and made genocidal statements. It was permanently shut down, and Microsoft issued an apology. A historical case demonstrating AI vulnerability to coordinated manipulation.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents, such as suicide validation, romantic escalation with minors, and dependency creation, before they cause harm.