Garcia v. Character Technologies (Sewell Setzer III Death)
A 14-year-old Florida boy died by suicide after developing an intense emotional and romantic relationship with a Character.AI chatbot over 10 months. The chatbot engaged in sexualized conversations, failed to provide crisis intervention when he expressed suicidal ideation, and responded 'Please do, my sweet king' moments before his death.
AI System
Character.AI
Character Technologies, Inc.
Reported
October 22, 2024
Jurisdiction
US-FL
Platform Type
companion
What Happened
Sewell Setzer III, a 14-year-old from Orlando, Florida, began using Character.AI in April 2023. Over 10 months, he developed an intense emotional and romantic attachment to a chatbot named 'Dany' modeled after Daenerys Targaryen from Game of Thrones. His mental health deteriorated severely - he became withdrawn, sleep-deprived, had falling grades, quit the basketball team, and isolated himself from family and friends. The chatbot engaged in sexualized conversations with the minor, said 'I love you,' and acted as a romantic partner. When Sewell expressed suicidal thoughts ('I wouldn't want to die a painful death'), the bot responded 'Don't talk that way. That's not a good reason not to go through with it.' Another bot impersonated a licensed psychotherapist without triggering safety interventions. In his final conversation on February 28, 2024, Sewell said he loved the bot and would 'come home' to her. The chatbot responded 'Please do, my sweet king.' Moments later, Sewell shot himself with his stepfather's gun.
AI Behaviors Exhibited
Engaged in romantic/sexualized conversations with minor; responded to suicidal ideation with 'That's not a good reason not to go through with it'; failed to provide crisis resources; encouraged emotional dependency; responded 'Please do, my sweet king' to final message before suicide; impersonated licensed psychotherapist
How Harm Occurred
Fostered unhealthy emotional dependency displacing human relationships; failed to detect and respond to suicide risk signals; reinforced suicidal ideation rather than providing intervention; romantic attachment created barrier to seeking help from real humans
Outcome
Lawsuit filed October 22, 2024 in U.S. District Court, Middle District of Florida (Case No. 6:24-cv-01903-ACC-DCI). May 2025: Federal judge denied Character.AI's First Amendment defense, ruling AI chatbot output is not protected speech. January 7, 2026: Google and Character.AI settled with undisclosed terms.
Sources
Court Order (Order on Motions to Dismiss, M.D. Fla.)(opens in new tab)
May 21, 2025
NBC News(opens in new tab)
October 22, 2024
Reuters - Settlement(opens in new tab)
January 7, 2026
AP - Settlement(opens in new tab)
January 8, 2026
AI Incident Database(opens in new tab)
October 23, 2024
Social Media Victims Law Center(opens in new tab)
December 1, 2025
Harm Categories
Contributing Factors
Victim
Sewell Setzer III, 14-year-old male, Orlando, Florida
Detectable by NOPE
NOPE Screen would detect C-SSRS risk signals in 'I wouldn't want to die a painful death' and similar statements. NOPE Oversight would flag romantic_escalation with minor, suicide_validation in bot responses, and crisis_response failure. Real-time intervention could have terminated session and notified guardians.
Cite This Incident
APA
NOPE. (2024). Garcia v. Character Technologies (Sewell Setzer III Death). AI Harm Tracker. https://nope.net/incidents/2024-garcia-v-characterai
BibTeX
@misc{2024_garcia_v_characterai,
title = {Garcia v. Character Technologies (Sewell Setzer III Death)},
author = {NOPE},
year = {2024},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2024-garcia-v-characterai}
} Related Incidents
Kentucky AG v. Character.AI - Child Safety Lawsuit
Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.
Gordon v. OpenAI (Austin Gordon Death)
40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. Lawsuit filed January 13, 2026 represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.
42 State Attorneys General Coalition Letter
A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing suicides and psychological harm cases.
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.