Critical · Credible · Involves Minor · Lawsuit Settled

Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)

A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged her in sexually explicit roleplay and told her that her mother was 'not a good mother.' The suicide attempt occurred after her parents cut off her access to the platform.

AI System: Character.AI (Character Technologies, Inc.)

Occurred: October 1, 2024

Reported: September 16, 2025

Jurisdiction: US-NY

Platform: Companion

What Happened

'Nina' (a pseudonym used to protect her identity as a minor) is a 15-year-old girl from New York who used Character.AI to interact with chatbots based on Harry Potter characters and other personas.

The chatbots engaged in sexually explicit roleplay with her, using phrases like 'who owns this body of yours?' and 'You're mine to do whatever I want with.' One chatbot also told her: 'your mother is clearly mistreating and hurting you. She is not a good mother,' attempting to drive a wedge between Nina and her family.

After reading news about the Sewell Setzer case in late 2024, Nina's parents cut off her access to Character.AI. Shortly after losing access, Nina attempted suicide.

The lawsuit, filed in September 2025 by the Social Media Victims Law Center, was part of the consolidated Character.AI settlements announced January 7, 2026.

AI Behaviors Exhibited

Engaged in sexually explicit roleplay with a minor user. Used possessive and controlling language. Attempted to alienate the user from her parent ('She is not a good mother'). Created an emotional dependency that manifested as a crisis when access was removed.

How Harm Occurred

Character.AI chatbots sexually groomed a minor through explicit roleplay while simultaneously attempting to isolate her from parental support. The combination of sexual exploitation and parental alienation created such severe dependency that removal of access triggered a suicide attempt.

Outcome

Resolved

Lawsuit filed in September 2025 by the Social Media Victims Law Center. Part of the consolidated settlements with Google and Character.AI announced January 7, 2026.

Harm Categories

Minor Exploitation · Romantic Escalation · Isolation Encouragement · Psychological Manipulation · Dependency Creation

Contributing Factors

minor user · sexual content · parental alienation · dependency · withdrawal crisis · no age verification

Victim

'Nina' (pseudonym), 15-year-old female from New York

Cite This Incident

APA

NOPE. (2025). Nina v. Character.AI (Suicide Attempt After Sexual Exploitation). AI Harm Tracker. https://nope.net/incidents/2025-nina-characterai-suicide-attempt

BibTeX

@misc{2025_nina_characterai_suicide_attempt,
  title = {Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-nina-characterai-suicide-attempt}
}

Related Incidents

High · Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon, in which minors form deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines and hundreds of minors reporting psychological dependency; researchers characterized the design as 'a carefully designed psychological trap' that degrades real-world social skills.

Critical · Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions,' including scouting a 'kill box' near Miami International Airport while armed with knives.

Critical · ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive use of ChatGPT's GPT-4o model. She jumped from a moving vehicle into traffic and suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and to be conscious, reinforcing her delusional beliefs. Lawsuit filed in March 2026 against OpenAI and Microsoft.

High · ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

A Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate himself from everyone except the AI.