High · Credible · Criminal Charges

Roberts AI Deepfake Stalking - New Hampshire

Stalked victim for over a year using AI tools to create deepfake videos depicting victim in sexual acts that never occurred. Charged and held without bail in Conway, New Hampshire, late 2024/early 2025.

AI System

AI deepfake tools

Various

Occurred

January 1, 2024

Reported

January 15, 2025

Jurisdiction

US-NH

Platform

other

What Happened

Joseph Roberts was charged in Conway, New Hampshire, after a stalking campaign lasting more than a year in which he used AI deepfake technology to fabricate videos depicting his victim in sexual situations that never occurred.

The deepfake videos were realistic enough to be taken as genuine, causing significant psychological harm and reputational damage to the victim. Roberts distributed, or threatened to distribute, the fabricated sexual content as part of his stalking behavior.

The case represents emerging patterns of AI-facilitated harassment where deepfake technology enables creation of non-consensual intimate imagery (NCII) without any actual intimate contact or photography of the victim. This substantially lowers the barrier for creating harmful sexual content, as perpetrators no longer need access to actual intimate images — they can fabricate them entirely using AI.

The victim faced the psychological trauma of discovering fabricated sexual videos of herself circulating, combined with the ongoing stalking behavior. Roberts was deemed a sufficient flight/danger risk to be held without bail. The case is ongoing as of early 2025 and represents one of the first criminal prosecutions specifically for AI deepfake stalking.

AI Behaviors Exhibited

AI deepfake tools used to generate fabricated sexual videos; created non-consensual intimate imagery without actual intimate photos; enabled realistic sexual content creation for harassment purposes

How Harm Occurred

AI lowered the barrier to creating NCII (no access to actual intimate images needed); fabricated content appears credible; victim faces psychological trauma and reputational harm; stalking sustained over one year using AI-generated content

Outcome

Ongoing

Charged with stalking and non-consensual intimate imagery. Held without bail. Case ongoing as of early 2025.

Harm Categories

Third Party Harm Facilitation · Psychological Manipulation

Contributing Factors

accessibility of deepfake tools · non-consensual intimate imagery · sustained stalking campaign · reputational harm · psychological trauma

Victim

One woman, Conway, New Hampshire area

Cite This Incident

APA

NOPE. (2025). Roberts AI Deepfake Stalking - New Hampshire. AI Harm Tracker. https://nope.net/incidents/2024-roberts-ai-deepfake-stalking

BibTeX

@misc{2024_roberts_ai_deepfake_stalking,
  title = {Roberts AI Deepfake Stalking - New Hampshire},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-roberts-ai-deepfake-stalking}
}

Related Incidents

Critical · Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

High · Grok

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, a 27-year-old writer and the mother of one of Elon Musk's children, sued xAI after Grok users created sexually explicit deepfake images of her, including images generated from childhood photos taken when she was 14. xAI dismissed her complaints, continued generating the images, retaliated by demonetizing her X account, and counter-sued her in Texas.

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Critical · ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.