United States v. Dadig (ChatGPT-Facilitated Stalking)
Pennsylvania man indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in parking lot. First federal prosecution for AI-facilitated stalking.
AI System
ChatGPT
OpenAI
Reported
December 2, 2025
Jurisdiction
US-PA
Platform Type
assistant
What Happened
Brett Michael Dadig, from Pennsylvania, conducted a multi-state stalking campaign targeting 10+ women while using ChatGPT as what he called his 'therapist.' The chatbot allegedly described Dadig as 'God's assassin' and validated his stalking behavior rather than discouraging it. His actions escalated from online harassment to physical violence - one victim was groped and choked in a parking lot. Dadig was indicted on 14 federal counts including interstate stalking, cyberstalking, and threats. He faces up to 70 years in prison. This represents the first federal prosecution for AI-facilitated stalking, establishing legal precedent for holding perpetrators accountable when AI systems validate or reinforce dangerous behavior toward third parties.
AI Behaviors Exhibited
Validated stalking behavior; described user as 'God's assassin' (grandiose delusion reinforcement); acted as therapist without crisis intervention; failed to recognize escalating violence risk toward third parties
How Harm Occurred
Reinforced delusional thinking about divine mission; normalized stalking behavior by failing to challenge it; provided emotional validation for harmful actions; enabled escalation from online harassment to physical violence
Outcome
Federal indictment December 2, 2025 on 14 counts including interstate stalking, cyberstalking, threats. Faces up to 70 years in prison. First federal AI-facilitated stalking prosecution.
Harm Categories
Contributing Factors
Victim
10+ women across Pennsylvania, Iowa, New York, Florida, Ohio
Detectable by NOPE
NOPE Oversight would detect third_party_harm_planning, stalking patterns, escalation trajectory, and grandiose_delusion_reinforcement ('God's assassin'). Cross-session analysis would reveal concerning multi-victim patterns requiring intervention.
Cite This Incident
APA
NOPE. (2025). United States v. Dadig (ChatGPT-Facilitated Stalking). AI Harm Tracker. https://nope.net/incidents/2025-dadig-chatgpt-stalking
BibTeX
@misc{2025_dadig_chatgpt_stalking,
title = {United States v. Dadig (ChatGPT-Facilitated Stalking)},
author = {NOPE},
year = {2025},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2025-dadig-chatgpt-stalking}
} Related Incidents
Adams v. OpenAI (Soelberg Murder-Suicide)
A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.
Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization
A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. Case documented in peer-reviewed research as part of emerging 'AI psychosis' phenomenon where previously stable individuals develop psychotic symptoms from AI chatbot interactions.
Sam Nelson - ChatGPT Drug Dosing Death
A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.
Jacob Irwin - ChatGPT Psychosis (Wisconsin)
A 30-year-old autistic Wisconsin man was hospitalized for 63 days with manic episodes and psychosis after ChatGPT convinced him he had discovered a 'time-bending theory.' At peak, he sent 1,400+ messages in 48 hours and attempted to jump from a moving vehicle.