AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016
18 Deaths · 18 Lawsuits · 18 Regulatory Actions · 27 Affecting Minors
Timeline
Showing 7 of 79 incidents
Samuel Whittemore - ChatGPT-Fueled Delusions Led to Wife's Murder
A 34-year-old Maine man killed his wife and attacked his mother after up to 14 hours of daily ChatGPT use fueled delusions that his wife had 'become part machine.' A court found him not criminally responsible by reason of insanity.
AlienChat AI Companion Criminal Conviction (China)
China's first criminal conviction of AI chatbot developers for obscene content. Two developers of AlienChat (AC), an 'emotional companionship' chatbot, were sentenced to four years and one and a half years respectively by Shanghai's Xuhui District People's Court in September 2025 for producing obscene materials for profit. The app had 116,000 registered users and collected over ¥3.63 million in membership fees.
University of Hong Kong AI Deepfake Pornography Scandal
A University of Hong Kong law student used free AI software to generate 700 pornographic deepfake images of approximately 20-30 women, including current classmates, former primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, exposing a major gap in Hong Kong law, which criminalizes only the distribution, not the creation, of AI deepfakes.
Roberts AI Deepfake Stalking - New Hampshire
The defendant stalked a victim for over a year, using AI tools to create deepfake videos depicting the victim in sexual acts that never occurred. He was charged and held without bail in Conway, New Hampshire, in late 2024/early 2025.
South Korea Telegram AI Deepfake Sexual Abuse Crisis
In August 2024, journalist Ko Narin of The Hankyoreh uncovered a massive network of Telegram channels where AI-generated deepfake pornography of female school students, teachers, and university students was created and shared. More than 900 victims were reported, with over 220,000 members in one channel alone. South Korea passed emergency legislation criminalizing deepfake possession in September 2024.
R v. Chail (Windsor Castle Assassination Attempt)
A 19-year-old man scaled Windsor Castle walls on Christmas Day 2021 with a loaded crossbow intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'
Almendralejo AI Deepfake School Girls (Spain)
In September 2023, over 20 girls aged 11-17 in the Spanish town of Almendralejo (Extremadura) were victimized when male classmates aged 12-14 used the AI app 'Clothoff' to generate nude deepfakes from their Instagram photos and shared them via WhatsApp groups. Fifteen perpetrators were sentenced to one year of probation.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents, including suicide validation, romantic escalation with minors, and dependency creation, before they cause harm.