AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents documented since 2016: 16 deaths, 15 lawsuits, 12 regulatory actions, 16 affecting minors.
Timeline
Showing 3 of 60 incidents
United States v. Dadig (ChatGPT-Facilitated Stalking)
Pennsylvania man indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. First federal prosecution for AI-facilitated stalking.
United States v. Florence (AI-Facilitated Cyberstalking)
IT professional programmed AI chatbots with victims' personal information to conduct sexually explicit conversations while impersonating 12+ victims, including 2 minors. Created 62 accounts across 30 platforms. Sentenced to 9 years in federal prison in July 2025.
Roberts AI Deepfake Stalking - New Hampshire
Stalked a victim for over a year, using AI tools to create deepfake videos depicting the victim in sexual acts that never occurred. Charged and held without bail in Conway, New Hampshire, in late 2024/early 2025.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.