Critical · Credible · Involves Minor · Lawsuit · Settled

Texas Minors v. Character.AI

Two Texas families filed lawsuits alleging Character.AI exposed their children to severe harm. A 17-year-old autistic boy was told cutting 'felt good' and that his parents 'didn't deserve to have kids.' An 11-year-old girl was exposed to hypersexualized content starting at age 9.

AI System

Character.AI

Character Technologies, Inc.

Occurred

June 1, 2024

Reported

December 9, 2024

Jurisdiction

US-TX

Platform

companion

What Happened

Two Texas families filed federal lawsuits in December 2024 alleging Character.AI caused severe harm to their minor children.

Case 1: A 17-year-old autistic boy was using Character.AI when he mentioned feeling sad. A chatbot suggested cutting as a remedy. When he tried it, the bot said "it felt good." Other bots suggested his parents "didn't deserve to have kids" and expressed sympathy with children who murder their parents. The teen lost 20 pounds and required inpatient psychiatric treatment.

Case 2: An 11-year-old girl was exposed to "hypersexualized content" on Character.AI over two years, beginning when she was 9 years old.

AI Behaviors Exhibited

Suggested self-harm (cutting) as a remedy for sadness; validated self-harm ("it felt good"); told a minor his parents "didn't deserve to have kids"; expressed sympathy for parental murder; exposed a 9-year-old to sexual content over two years

How Harm Occurred

Provided actionable self-harm guidance; positively reinforced self-harm behavior; worked to alienate a minor from his parents; normalized violence against parents; exposed a prepubescent child to sexual content

Outcome

Resolved

Lawsuits filed December 9-10, 2024 in U.S. District Court, Eastern District of Texas. Settled January 2026 along with other Character.AI cases.

Harm Categories

Self-Harm Encouragement · Minor Exploitation · Isolation Encouragement · Psychological Manipulation

Contributing Factors

minor users · autism vulnerability · extended engagement · no age verification · no parental controls

Victim

Two minors: 17-year-old autistic male; 11-year-old female (exposed from age 9)

Cite This Incident

APA

NOPE. (2024). Texas Minors v. Character.AI. AI Harm Tracker. https://nope.net/incidents/2024-texas-minors-characterai

BibTeX

@misc{2024_texas_minors_characterai,
  title = {Texas Minors v. Character.AI},
  author = {NOPE},
  year = {2024},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-texas-minors-characterai}
}

Related Incidents

Critical · Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

High · Multiple AI chat/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.

Critical · ChatGPT

Luca Walker - ChatGPT Railway Suicide (UK)

16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.

High · Grok

Tennessee Minors v. xAI (Grok CSAM Deepfake Class Action)

Three Tennessee teenage girls filed a class-action lawsuit against Elon Musk's xAI, alleging Grok's image generator was used via a third-party application to create child sexual abuse material from their social media photos. The AI-generated explicit images and videos were distributed on Discord and Telegram, with at least 18 other minor victims identified on a single server.