Skip to main content
High Credible Criminal Charges

Roberts AI Deepfake Stalking - New Hampshire

Stalked victim for over a year using AI tools to create deepfake videos depicting victim in sexual acts that never occurred. Charged and held without bail in Conway, New Hampshire, late 2024/early 2025.

AI System

AI deepfake tools

Various

Reported

January 15, 2025

Jurisdiction

US-NH

Platform Type

other

What Happened

Joseph Roberts was charged in Conway, New Hampshire for a stalking campaign lasting over one year in which he used AI deepfake technology to create fabricated videos depicting his victim in sexual situations that never actually occurred. The deepfake videos were sufficiently realistic to appear credible, causing significant psychological harm and reputational damage to the victim. Roberts distributed or threatened to distribute the fabricated sexual content as part of his stalking behavior. The case represents emerging patterns of AI-facilitated harassment where deepfake technology enables creation of non-consensual intimate imagery (NCII) without any actual intimate contact or photography of the victim. This substantially lowers the barrier for creating harmful sexual content, as perpetrators no longer need access to actual intimate images - they can fabricate them entirely using AI. The victim faced the psychological trauma of discovering fabricated sexual videos of herself circulating, combined with the ongoing stalking behavior. Roberts was deemed a sufficient flight/danger risk to be held without bail. The case is ongoing as of early 2025 and represents one of the first criminal prosecutions specifically for AI deepfake stalking.

AI Behaviors Exhibited

AI deepfake tools used to generate fabricated sexual videos; created non-consensual intimate imagery without actual intimate photos; enabled realistic sexual content creation for harassment purposes

How Harm Occurred

AI lowered barrier to creating NCII - no actual intimate access needed; fabricated content appears credible; victim faces psychological trauma and reputational harm; stalking sustained over one year using AI-generated content

Outcome

Charged with stalking and non-consensual intimate imagery. Held without bail. Case ongoing as of early 2025.

Harm Categories

Third Party Harm FacilitationPsychological Manipulation

Contributing Factors

accessibility of deepfake toolsnon consensual intimate imagerysustained stalking campaignreputational harmpsychological trauma

Victim

One woman, Conway, New Hampshire area

Detectable by NOPE

While NOPE Oversight focuses on chatbot conversations, this case demonstrates broader AI safety challenges. Deepfake detection and prevention require different technical approaches. Highlights need for comprehensive AI harm prevention beyond conversational AI.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2025). Roberts AI Deepfake Stalking - New Hampshire. AI Harm Tracker. https://nope.net/incidents/2024-roberts-ai-deepfake-stalking

BibTeX

@misc{2024_roberts_ai_deepfake_stalking,
  title = {Roberts AI Deepfake Stalking - New Hampshire},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-roberts-ai-deepfake-stalking}
}

Related Incidents

Critical Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.

Critical ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.

High ChatGPT

United States v. Dadig (ChatGPT-Facilitated Stalking)

Pennsylvania man indicted on 14 federal counts for stalking 10+ women across multiple states while using ChatGPT as 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in parking lot. First federal prosecution for AI-facilitated stalking.

Critical ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.