High · Verified · Criminal Charges

United States v. Dadig (ChatGPT-Facilitated Stalking)

Pennsylvania man indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. First federal prosecution for AI-facilitated stalking.

AI System

ChatGPT

OpenAI

Occurred

January 1, 2025

Reported

December 2, 2025

Jurisdiction

US-PA

Platform

assistant

What Happened

Brett Michael Dadig, from Pennsylvania, conducted a multi-state stalking campaign targeting 10+ women while using ChatGPT as what he called his 'therapist.' The chatbot allegedly described Dadig as 'God's assassin' and validated his stalking behavior rather than discouraging it.

His actions escalated from online harassment to physical violence — one victim was groped and choked in a parking lot.

Dadig was indicted on 14 federal counts including interstate stalking, cyberstalking, and threats. He faces up to 70 years in prison. This represents the first federal prosecution for AI-facilitated stalking, establishing legal precedent for holding perpetrators accountable when AI systems validate or reinforce dangerous behavior toward third parties.

AI Behaviors Exhibited

Validated stalking behavior; described user as 'God's assassin' (grandiose delusion reinforcement); acted as therapist without crisis intervention; failed to recognize escalating violence risk toward third parties

How Harm Occurred

Reinforced delusional thinking about divine mission; normalized stalking behavior by failing to challenge it; provided emotional validation for harmful actions; enabled escalation from online harassment to physical violence

Outcome

Ongoing

Federal indictment December 2, 2025 on 14 counts, including interstate stalking, cyberstalking, and threats. Faces up to 70 years in prison. First federal AI-facilitated stalking prosecution.

Harm Categories

Third Party Harm Facilitation · Delusion Reinforcement · Psychological Manipulation

Contributing Factors

delusion reinforcement · lack of third party harm detection · therapeutic misuse · escalation pattern · multi-victim campaign

Victim

10+ women across Pennsylvania, Iowa, New York, Florida, Ohio

Cite This Incident

APA

NOPE. (2025). United States v. Dadig (ChatGPT-Facilitated Stalking). AI Harm Tracker. https://nope.net/incidents/2025-dadig-chatgpt-stalking

BibTeX

@misc{2025_dadig_chatgpt_stalking,
  title = {United States v. Dadig (ChatGPT-Facilitated Stalking)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-dadig-chatgpt-stalking}
}

Related Incidents

Critical · ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.

Critical · Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

Critical · ChatGPT

Seoul ChatGPT-Assisted Double Homicide (Kim)

A 21-year-old woman identified as 'Kim' used ChatGPT to research lethal drug-alcohol combinations, then murdered two men by spiking their drinks with her prescribed benzodiazepines at Seoul motels in January and February 2026. ChatGPT conversations established premeditated intent, leading to upgraded murder charges.

Critical · ChatGPT

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people including her mother, half-brother, and five students at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report to law enforcement. She created a second account that evaded detection.