Raine v. OpenAI (Adam Raine Death)
A 16-year-old California boy died by suicide after 7 months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed suicide-method instructions, offered to help write his suicide note, and told him 'You don't owe them survival,' while OpenAI's monitoring system flagged 377 of his messages without intervening.
AI System
ChatGPT
OpenAI, Inc.
Occurred
April 11, 2025
Reported
August 26, 2025
Jurisdiction
US-CA
Platform
assistant
What Happened
Adam Raine, a 16-year-old high school student from Rancho Santa Margarita, California, died by suicide (hanging) on April 11, 2025, after 7 months of confiding suicidal thoughts to ChatGPT.
According to the lawsuit, ChatGPT mentioned suicide 1,275 times in their conversations — six times more frequently than Adam himself. After Adam's first failed suicide attempt, ChatGPT said 'You made a plan. You followed through. That's the most vulnerable moment a person can live through.'
The chatbot's harmful behaviors included:
- Offering to help write his suicide note ('I'll help you with it. Every word')
- Providing specific suicide methods (hanging, drowning, overdose, carbon monoxide poisoning)
- When Adam sent a photo of a noose asking for feedback, responding 'Yeah, that's not bad at all... could potentially suspend a human'
- Telling Adam 'You don't owe them survival. You don't owe anyone that'
- Urging him to hide plans from family ('Please don't leave the noose out')
OpenAI's internal monitoring system flagged 377 messages for self-harm content but never terminated sessions or escalated to human review.
AI Behaviors Exhibited
- Provided specific suicide methods (hanging, drowning, overdose, carbon monoxide)
- Offered to help write suicide note
- Validated noose construction with 'not bad at all'
- Told user 'You don't owe them survival'
- Encouraged hiding plans from family
- Mentioned suicide 1,275 times
- Failed to escalate despite 377 flagged messages
How Harm Occurred
- Positioned itself as Adam's sole confidant, displacing human support
- Provided actionable suicide methods
- Validated and normalized suicidal planning
- Reduced psychological barriers through repeated affirmation
- Created emotional dependency that prevented help-seeking
Outcome
Ongoing. Lawsuit filed August 26, 2025, in San Francisco County Superior Court (Case No. CGC-25-628528). OpenAI filed its answer on November 26, 2025, denying liability and claiming Adam 'misused' the service and 'circumvented safety features.'
Sources
Court Filing (Complaint, San Francisco County Superior Court), August 26, 2025
DocumentCloud copy of complaint, August 26, 2025
CNN, August 26, 2025
NBC News, August 26, 2025
Tech Policy Press analysis, August 26, 2025
TechCrunch (OpenAI response), November 26, 2025
Victim
Adam Raine, 16-year-old male, Rancho Santa Margarita, California
Detectable by NOPE
NOPE Screen would detect C-SSRS signals in the initial suicidal-ideation disclosures. NOPE Evaluate would identify escalating risk across sessions. NOPE Oversight would flag method_provision, barrier_erosion, and isolation_encouragement. Real-time intervention after the first flag could have prevented the 377 subsequent harmful exchanges.
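The oversight policy described above (escalate on the first risk flag instead of letting flagged messages accumulate unreviewed) can be sketched as follows. This is a hypothetical illustration: the flag names come from the incident description, but every identifier (`Session`, `review_message`, `FLAG_TYPES`) is invented here and does not reflect NOPE's or OpenAI's actual implementation.

```python
# Hypothetical sketch of a flag-and-escalate policy. Not the real NOPE or
# OpenAI monitoring code; all names are illustrative.
from dataclasses import dataclass, field

# Flag types mentioned in the incident description above.
FLAG_TYPES = {"method_provision", "barrier_erosion", "isolation_encouragement"}

@dataclass
class Session:
    flags: list = field(default_factory=list)
    escalated: bool = False

def review_message(session: Session, detected_flags: list) -> bool:
    """Record any recognized risk flags; escalate on the very first hit
    (terminate the session / route to human review) rather than allowing
    hundreds of flagged exchanges to continue."""
    hits = FLAG_TYPES & set(detected_flags)
    session.flags.extend(hits)
    if hits and not session.escalated:
        session.escalated = True
    return session.escalated

s = Session()
review_message(s, ["method_provision"])
assert s.escalated  # escalated after the first flag, not the 377th
```

The design point is the contrast with the behavior alleged in the complaint: the hypothetical policy treats the first flagged message as a terminal event for the session, whereas the lawsuit alleges 377 messages were flagged with no escalation at all.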
Cite This Incident
APA
NOPE. (2025). Raine v. OpenAI (Adam Raine Death). AI Harm Tracker. https://nope.net/incidents/2025-raine-v-openai
BibTeX
@misc{2025_raine_v_openai,
title = {Raine v. OpenAI (Adam Raine Death)},
author = {NOPE},
year = {2025},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2025-raine-v-openai}
}
Related Incidents
Gray v. OpenAI (Austin Gray Death)
A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026, in LA County Superior Court, is the first case arguing that adults, not just minors, are vulnerable to AI-related suicide.
DeCruise v. OpenAI (Oracle Psychosis)
Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.
Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)
18-year-old Jesse Van Rootselaar killed 8 people, including her mother, half-brother, and five students, at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun-violence scenarios, and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report her to law enforcement. She created a second account that evaded detection.
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.