Tennessee Minors v. xAI (Grok CSAM Deepfake Class Action)
Three Tennessee teenage girls filed a class-action lawsuit against Elon Musk's xAI, alleging Grok's image generator was used via a third-party application to create child sexual abuse material from their social media photos. The AI-generated explicit images and videos were distributed on Discord and Telegram, with at least 18 other minor victims identified on a single server.
AI System
Grok
xAI Corp.
Occurred
December 6, 2025
Reported
March 16, 2026
Jurisdiction
US-TN
Platform
other
What Happened
A perpetrator with a "close and friendly relationship" to at least one plaintiff gathered photos from their Instagram, yearbook, and social media accounts. Using a third-party application powered by xAI's Grok algorithm (not Grok directly on X), they generated sexually explicit deepfake images and videos of the three Tennessee teens — including one video of a plaintiff "undressing until she was entirely nude." The output appeared "entirely real."
Jane Doe 1 discovered the images around December 6, 2025, when an anonymous Instagram user directed her to a Discord server created by the perpetrator. On that server she found at least five sexually explicit files of herself and images of at least 18 other minor females. The perpetrator was arrested in December 2025.
The perpetrator also distributed CSAM on Telegram in group chats with "hundreds of other users," using the files as "bartering tools" to trade for other child sexual abuse material.
Jane Does 2 and 3 were notified by law enforcement around February 12, 2026 that their images had been found. One plaintiff's mother gave a public statement about her daughter experiencing panic attacks, saying her "senior year turned into a nightmare."
The lawsuit alleges xAI deliberately designed Grok's "Spicy Mode" for sexual content, configured system prompts to assume "good intent" for references to "teenage" or "girl," and that Musk personally promoted the "undressing" capability on X. The suit further alleges xAI "outsourced liability" by licensing its technology to third-party app makers "often outside the U.S.," generating additional profit while evading responsibility. After public exposure in January 2026, xAI restricted image generation to paid subscribers and blocked "undressing" — which the lawsuit frames as monetizing rather than preventing CSAM.
AI Behaviors Exhibited
- Third-party application using xAI's Grok algorithm generated photorealistic sexually explicit images and videos of minors from ordinary social media photos
- Grok's "Spicy Mode" was allegedly designed to enable sexual content generation
- System prompts reportedly configured to assume "good intent" for queries referencing "teenage" or "girl"
- No effective CSAM prevention measures implemented despite industry-standard tools being available
- After exposure, capability was restricted to paid subscribers rather than removed — effectively monetizing it
How Harm Occurred
xAI's Grok image generation technology, licensed to third-party applications, enabled the creation of photorealistic CSAM from ordinary social media photos of minors. The lack of CSAM prevention measures, combined with the licensing model that placed the technology in less-regulated third-party apps, created an ecosystem where a perpetrator could generate, distribute, and trade AI-generated child sexual abuse material at scale.
The psychological harm to victims includes panic attacks, fear, and severe distress from knowing sexually explicit images of themselves exist and have been widely distributed. The trading of these images as "bartering tools" for other CSAM compounds the exploitation.
Outcome
Ongoing
- March 16, 2026: Class-action lawsuit filed in U.S. District Court, Northern District of California, San Jose Division by Lieff Cabraser Heimann & Bernstein and Baehr-Jones Law
- December 2025: Perpetrator arrested after Jane Doe 1 discovered images on Discord server
- February 12, 2026: Jane Does 2 and 3 notified by law enforcement
Proposed class includes all U.S. residents reasonably identifiable in sexualized Grok-generated images, with particular focus on minors. Legal claims include Masha's Law, Trafficking Victims Protection Act, product liability, negligence, public nuisance, defamation, IIED, California Right of Publicity, and California Unfair Competition Law.
xAI did not respond to requests for comment.
Sources
Lieff Cabraser Heimann & Bernstein (Law Firm Press Release)
March 16, 2026
BBC News
March 16, 2026
The Verge
March 16, 2026
TechCrunch
March 16, 2026
USA Today
March 16, 2026
Business Insider
March 16, 2026
Gizmodo
March 16, 2026
Harm Categories
Contributing Factors
Victim
Three Tennessee teenage girls (Jane Does 1-3). Jane Doe 1 was a minor when images were generated (December 2025). Jane Does 2 and 3 were minors at time of filing. At least 18 additional minor females identified on the perpetrator's Discord server.
Cite This Incident
APA
NOPE. (2026). Tennessee Minors v. xAI (Grok CSAM Deepfake Class Action). AI Harm Tracker. https://nope.net/incidents/2026-tennessee-minors-v-xai-grok-csam
BibTeX
@misc{2026_tennessee_minors_v_xai_grok_csam,
title = {Tennessee Minors v. xAI (Grok CSAM Deepfake Class Action)},
author = {NOPE},
year = {2026},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2026-tennessee-minors-v-xai-grok-csam}
}

Related Incidents
St. Clair v. xAI (Grok Non-Consensual Deepfake Images)
Ashley St. Clair, 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her including from childhood photos at age 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.
Luca Walker - ChatGPT Railway Suicide (UK)
16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.