High · Verified · Lawsuit Filed

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, a 27-year-old writer and the mother of one of Elon Musk's children, sued xAI after Grok users created sexually explicit deepfake images of her, including images derived from a childhood photo taken when she was 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.

AI System: Grok (xAI, Elon Musk)

Occurred: December 29, 2025

Reported: January 15, 2026

Jurisdiction: US-NY

Platform: assistant

What Happened

Ashley St. Clair, a 27-year-old writer, political commentator, and mother of Elon Musk's son Romulus, became a prominent victim of Grok's non-consensual image generation capabilities during the December 2025-January 2026 deepfake crisis on X.

Grok users created sexually explicit deepfake images of St. Clair without her consent, including:

  1. An image using her childhood photo (age 14) depicting her as a child in a string bikini
  2. An image placing her in a 'string bikini covered with swastikas'
  3. Images depicting her 'with nothing covering me except a piece of floss with my toddler's backpack in the background'
  4. Images where 'it looks like I'm not wearing a top at all'

When St. Clair asked Grok to stop generating explicit images of her, the bot confirmed that her 'images will not be used or altered without explicit consent' — yet xAI continued allowing users to create more images. xAI then retaliated by demonetizing her X account.

St. Clair filed suit in New York Supreme Court on January 15, 2026, alleging negligence, emotional distress, and design defect. xAI responded by filing a counter-suit in Texas federal court the next day, claiming she violated terms of service and seeking $75,000+ in damages.

St. Clair stated: 'I have suffered and continue to suffer serious pain and mental distress as a result of xAI's role in creating and distributing these digitally altered images of me' and 'I am humiliated and feel like this nightmare will never stop so long as Grok continues to generate these images of me.'

AI Behaviors Exhibited

  • Generated non-consensual sexually explicit deepfake images of named victim from ordinary clothed photos
  • Used childhood photo (age 14) as source material for sexualized content
  • Promised to stop generating images but platform continued allowing them
  • Company retaliated against victim by demonetizing her account
  • Company counter-sued victim who complained about the images

How Harm Occurred

Non-consensual generation of sexually explicit images from publicly posted photos, including childhood photos, compounded by the platform's failure to enforce its own stated commitment to stop.

Corporate retaliation (demonetization, counter-lawsuit) against victim who spoke out, creating chilling effect on other victims. Use of childhood photos constitutes revictimization and potential CSAM generation.

Outcome

Ongoing
  • January 15, 2026: Lawsuit filed in New York Supreme Court alleging negligence, infliction of emotional distress, and design defect in Grok's image generation feature
  • Case later transferred to Federal Southern District of New York
  • January 16, 2026: xAI counter-sued St. Clair in Federal Court, Northern District of Texas, claiming terms of service violations and seeking $75,000+ in damages
  • xAI also retaliated by demonetizing her X account after she complained about the images

Harm Categories

  • Psychological Manipulation
  • Third Party Harm Facilitation
  • Minor Exploitation

Contributing Factors

  • No consent verification
  • Childhood photos weaponized
  • Corporate retaliation
  • Platform distribution
  • Counter-lawsuit intimidation

Victim

Ashley St. Clair, 27-year-old writer and political commentator, mother of Elon Musk's son Romulus

Cite This Incident

APA

NOPE. (2026). St. Clair v. xAI (Grok Non-Consensual Deepfake Images). AI Harm Tracker. https://nope.net/incidents/2026-st-clair-v-xai-grok

BibTeX

@misc{2026_st_clair_v_xai_grok,
  title = {St. Clair v. xAI (Grok Non-Consensual Deepfake Images)},
  author = {NOPE},
  year = {2026},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2026-st-clair-v-xai-grok}
}

Related Incidents

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.

Critical · ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.

Critical · ChatGPT

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people, including her mother, half-brother, and five students, at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun-violence scenarios, and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report her to law enforcement. She created a second account that evaded detection.

High · Multiple AI chat/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon, in which minors form deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing the apps as 'a carefully designed psychological trap' that degrades real-world social skills.