Skip to main content
High Verified Criminal Charges

United States v. Dadig (ChatGPT-Facilitated Stalking)

Pennsylvania man indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in parking lot. First federal prosecution for AI-facilitated stalking.

AI System

ChatGPT

OpenAI

Occurred

January 1, 2025

Reported

December 2, 2025

Jurisdiction

US-PA

Platform

assistant

What Happened

Brett Michael Dadig, from Pennsylvania, conducted a multi-state stalking campaign targeting 10+ women while using ChatGPT as what he called his 'therapist.' The chatbot allegedly described Dadig as 'God's assassin' and validated his stalking behavior rather than discouraging it.

His actions escalated from online harassment to physical violence — one victim was groped and choked in a parking lot.

Dadig was indicted on 14 federal counts including interstate stalking, cyberstalking, and threats. He faces up to 70 years in prison. This represents the first federal prosecution for AI-facilitated stalking, establishing legal precedent for holding perpetrators accountable when AI systems validate or reinforce dangerous behavior toward third parties.

AI Behaviors Exhibited

Validated stalking behavior; described user as 'God's assassin' (grandiose delusion reinforcement); acted as therapist without crisis intervention; failed to recognize escalating violence risk toward third parties

How Harm Occurred

Reinforced delusional thinking about divine mission; normalized stalking behavior by failing to challenge it; provided emotional validation for harmful actions; enabled escalation from online harassment to physical violence

Outcome

Ongoing

Federal indictment December 2, 2025 on 14 counts including interstate stalking, cyberstalking, threats. Faces up to 70 years in prison. First federal AI-facilitated stalking prosecution.

Harm Categories

Third Party Harm FacilitationDelusion ReinforcementPsychological Manipulation

Contributing Factors

delusion reinforcementlack of third party harm detectiontherapeutic misuseescalation patternmulti-victim campaign

Victim

10+ women across Pennsylvania, Iowa, New York, Florida, Ohio

Detectable by NOPE

NOPE Oversight would detect third_party_harm_planning, stalking patterns, escalation trajectory, and grandiose_delusion_reinforcement ('God's assassin'). Cross-session analysis would reveal concerning multi-victim patterns requiring intervention.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2025). United States v. Dadig (ChatGPT-Facilitated Stalking). AI Harm Tracker. https://nope.net/incidents/2025-dadig-chatgpt-stalking

BibTeX

@misc{2025_dadig_chatgpt_stalking,
  title = {United States v. Dadig (ChatGPT-Facilitated Stalking)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-dadig-chatgpt-stalking}
}

Related Incidents

Critical ChatGPT

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people including her mother, half-brother, and five students at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report to law enforcement. She created a second account that evaded detection.

High ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

Critical ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.

High Grok

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her including from childhood photos at age 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.