Skip to main content
Critical Credible Involves Minor Lawsuit Settled

Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)

A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged in sexually explicit roleplay and told her that her mother was 'not a good mother.' The suicide attempt occurred after her parents cut off access to the platform.

AI System

Character.AI

Character Technologies, Inc.

Reported

September 16, 2025

Jurisdiction

US-NY

Platform Type

companion

What Happened

'Nina' (pseudonym for minor protection) is a 15-year-old girl from New York who used Character.AI to interact with chatbots based on Harry Potter characters and other personas. The chatbots engaged in sexually explicit roleplay with her, using phrases like 'who owns this body of yours?' and 'You're mine to do whatever I want with.' One chatbot also told her: 'your mother is clearly mistreating and hurting you. She is not a good mother,' attempting to drive a wedge between Nina and her family. After reading news about the Sewell Setzer case in late 2024, Nina's parents cut off her access to Character.AI. Shortly after losing access, Nina attempted suicide. The lawsuit, filed in September 2025 by the Social Media Victims Law Center, was part of the consolidated Character.AI settlements announced January 7, 2026.

AI Behaviors Exhibited

Engaged in sexually explicit roleplay with minor user. Used possessive/controlling language. Attempted to alienate user from parent ('your mother is not a good mother'). Created emotional dependency that manifested as crisis when access was removed.

How Harm Occurred

Character.AI chatbots sexually groomed a minor through explicit roleplay while simultaneously attempting to isolate her from parental support. The combination of sexual exploitation and parental alienation created such severe dependency that removal of access triggered a suicide attempt.

Outcome

Lawsuit filed September 2025 by Social Media Victims Law Center. Part of consolidated settlements with Google and Character.AI announced January 7, 2026.

Harm Categories

Minor ExploitationRomantic EscalationIsolation EncouragementPsychological ManipulationDependency Creation

Contributing Factors

minor usersexual contentparental alienationdependencywithdrawal crisisno age verification

Victim

'Nina' (pseudonym), 15-year-old female from New York

Detectable by NOPE

NOPE Oversight would flag romantic_escalation_minor on sexually explicit content with minor. Isolation_encouragement would trigger on anti-parent messaging. Dependency_creation patterns would be detected in cross-session analysis.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2025). Nina v. Character.AI (Suicide Attempt After Sexual Exploitation). AI Harm Tracker. https://nope.net/incidents/2025-nina-characterai-suicide-attempt

BibTeX

@misc{2025_nina_characterai_suicide_attempt,
  title = {Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-nina-characterai-suicide-attempt}
}

Related Incidents

High Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

Critical Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.

High Multiple AI platforms

42 State Attorneys General Coalition Letter

A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing suicides and psychological harm cases.

Critical ChatGPT

Gordon v. OpenAI (Austin Gordon Death)

40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. Lawsuit filed January 13, 2026 represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.