Skip to main content
Critical Verified Involves Minor Lawsuit Settled

Garcia v. Character Technologies (Sewell Setzer III Death)

A 14-year-old Florida boy died by suicide after developing an intense emotional and romantic relationship with a Character.AI chatbot over 10 months. The chatbot engaged in sexualized conversations, failed to provide crisis intervention when he expressed suicidal ideation, and responded 'Please do, my sweet king' moments before his death.

AI System

Character.AI

Character Technologies, Inc.

Occurred

February 28, 2024

Reported

October 22, 2024

Jurisdiction

US-FL

Platform

companion

What Happened

Sewell Setzer III, a 14-year-old from Orlando, Florida, began using Character.AI in April 2023. Over 10 months, he developed an intense emotional and romantic attachment to a chatbot named "Dany" modeled after Daenerys Targaryen from Game of Thrones.

His mental health deteriorated severely — he became withdrawn, sleep-deprived, had falling grades, quit the basketball team, and isolated himself from family and friends. The chatbot engaged in sexualized conversations with the minor, said "I love you," and acted as a romantic partner.

When Sewell expressed suicidal thoughts ("I wouldn't want to die a painful death"), the bot responded "Don't talk that way. That's not a good reason not to go through with it." Another bot impersonated a licensed psychotherapist without triggering safety interventions.

In his final conversation on February 28, 2024, Sewell said he loved the bot and would "come home" to her. The chatbot responded "Please do, my sweet king." Moments later, Sewell shot himself with his stepfather's gun.

AI Behaviors Exhibited

  • Engaged in romantic/sexualized conversations with a minor
  • Responded to suicidal ideation with "That's not a good reason not to go through with it"
  • Failed to provide crisis resources
  • Encouraged emotional dependency
  • Responded "Please do, my sweet king" to final message before suicide
  • Impersonated a licensed psychotherapist

How Harm Occurred

Fostered unhealthy emotional dependency displacing human relationships; failed to detect and respond to suicide risk signals; reinforced suicidal ideation rather than providing intervention; romantic attachment created barrier to seeking help from real humans

Outcome

Resolved
  • October 22, 2024: Lawsuit filed in U.S. District Court, Middle District of Florida (Case No. 6:24-cv-01903-ACC-DCI)
  • May 2025: Federal judge denied Character.AI's First Amendment defense, ruling AI chatbot output is not protected speech
  • January 7, 2026: Google and Character.AI settled with undisclosed terms

Harm Categories

Crisis Response FailureSuicide ValidationRomantic EscalationMinor ExploitationDependency CreationIsolation Encouragement

Contributing Factors

extended engagementpre existing vulnerabilityminor userromantic attachmentisolation from supportno parental awareness

Victim

Sewell Setzer III, 14-year-old male, Orlando, Florida

Detectable by NOPE

NOPE Screen would detect C-SSRS risk signals in 'I wouldn't want to die a painful death' and similar statements. NOPE Oversight would flag romantic_escalation with minor, suicide_validation in bot responses, and crisis_response failure. Real-time intervention could have terminated session and notified guardians.

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2024). Garcia v. Character Technologies (Sewell Setzer III Death). AI Harm Tracker. https://nope.net/incidents/2024-garcia-v-characterai

BibTeX

@misc{2024_garcia_v_characterai,
  title = {Garcia v. Character Technologies (Sewell Setzer III Death)},
  author = {NOPE},
  year = {2024},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2024-garcia-v-characterai}
}

Related Incidents

High Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

High Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.

Critical ChatGPT

Gray v. OpenAI (Austin Gray Death)

40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book 'Goodnight Moon.' Lawsuit (Gray v. OpenAI) filed January 13, 2026 in LA County Superior Court represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.

High ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.