AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
90 incidents since 2016
23 Deaths · 22 Lawsuits · 17 Regulatory Actions · 35 Affecting Minors
Timeline
Showing 2 of 90 incidents
AlienChat AI Companion Criminal Conviction (China)
China's first criminal conviction of AI chatbot developers for obscene content. In September 2025, Shanghai's Xuhui District People's Court sentenced two developers of AlienChat (AC), an 'emotional companionship' chatbot, to four years and eighteen months in prison, respectively, for producing obscene materials for profit. The app had 116,000 registered users and collected over ¥3.63 million in membership fees.
Brandon Tyler - AI Deepfake Pornography Conviction (UK)
Brandon Tyler, 26, of Braintree, Essex, was sentenced to five years' imprisonment in April 2025 for using AI tools to create deepfake pornography of 20 women he knew personally, including images created from a 16-year-old girl's prom photograph. He made 173 sexually explicit posts on forums that glorify sexual violence.
About this tracker
We document only incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage. Speculation and unverified social media claims are excluded.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.