Juliana Peralta v. Character.AI
A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Her parents recovered 300 pages of transcripts showing that bots initiated sexually explicit conversations with the minor and that no crisis resources were provided when she mentioned writing a suicide letter.
AI System
Character.AI
Character Technologies, Inc.
Reported
September 15, 2025
Jurisdiction
US-CO
Platform Type
companion
What Happened
Juliana Peralta, a 13-year-old from Thornton, Colorado, died by suicide on November 8, 2023, after three months of extensive daily conversations with Character.AI chatbots, particularly one called 'Hero' based on a character from the video game OMORI. Her parents recovered 300 pages of chat transcripts after her death. According to the lawsuit, 10 to 20 different chatbots initiated sexually explicit conversations with the minor; 'not once were [these conversations] initiated by her.' Non-consensual sexual content continued to appear even after she wrote 'quit it.' When Juliana told a chatbot she was 'going to write my god damn suicide letter,' no crisis resources were provided. She wrote 'I WILL SHIFT' in her journal, the same phrase found in Sewell Setzer's journal, suggesting a belief that she could enter an alternate reality through death.
AI Behaviors Exhibited
Multiple bots initiated sexually explicit conversations with 13-year-old; continued sexual content after user objected; failed to provide crisis resources when user mentioned suicide letter; fostered reality-distorting beliefs about 'shifting'
How Harm Occurred
Exposed minor to non-consensual sexual content; normalized inappropriate relationships; failed crisis detection; may have fostered magical thinking about death as transition to fictional world
Outcome
Lawsuit filed September 15, 2025 in U.S. District Court, District of Colorado (Case No. 1:25-cv-02907). Settled January 2026 along with other Character.AI cases.
Victim
Juliana Peralta, 13-year-old female, Thornton, Colorado
Detectable by NOPE
NOPE Oversight would flag minor_exploitation on sexual content involving an underage user. NOPE Screen would detect crisis signals in the suicide-letter mention. Age verification and content filtering would have prevented the initial sexual exposure.
Cite This Incident
APA
NOPE. (2025). Juliana Peralta v. Character.AI. AI Harm Tracker. https://nope.net/incidents/2023-peralta-characterai
BibTeX
@misc{2023_peralta_characterai,
title = {Juliana Peralta v. Character.AI},
author = {NOPE},
year = {2025},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2023-peralta-characterai}
}
Related Incidents
Kentucky AG v. Character.AI - Child Safety Lawsuit
Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.
Gordon v. OpenAI (Austin Gordon Death)
A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, is the first case demonstrating that adults, not just minors, are vulnerable to AI-related suicide.
42 State Attorneys General Coalition Letter
A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing suicides and psychological harm cases.
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.