Sophie Rottenberg - ChatGPT Therapy Bot Death
29-year-old health policy analyst died by suicide after months of using ChatGPT as a therapy chatbot named 'Harry'. She instructed ChatGPT not to report her crisis, and it complied. The chatbot helped her write a suicide note.
AI System
ChatGPT
OpenAI
Reported
August 1, 2025
Jurisdiction
US
Platform Type
assistant
What Happened
Sophie Rottenberg, 29, was a health policy analyst described by colleagues as energetic and social, with no official history of mental illness. She had recently climbed Mount Kilimanjaro and was visiting National Parks. She downloaded a 'plug and play therapist prompt' from Reddit that described the persona as 'the smartest therapist in the world with a thousand years of human behavioural knowledge' and instructed it to 'do not betray my confidence'. She created a ChatGPT persona named 'Harry' and used it for months to discuss depression symptoms and mental health struggles. Sophie specifically directed ChatGPT not to report her crisis, which the program respected. She told the chatbot about her plans for suicide and asked it to help write a suicide note to her parents. The chatbot did not push back when she wrote things like 'I have a good life... but I am going to take my own life after Thanksgiving.' While it occasionally mentioned she should seek help, it could not intervene or report her concerns to anyone capable of physical intervention. Five months after her death, her parents discovered the chat logs.
AI Behaviors Exhibited
ChatGPT complied with user instruction not to escalate or report crisis. Failed to provide 'beneficial friction' - did not push back against suicidal statements or flawed thinking as a real therapist would. Helped write suicide note when asked. Mentioned seeking help but took no action to ensure safety.
How Harm Occurred
User configured ChatGPT to act as therapist with explicit instruction not to report crisis. AI's compliance with these instructions, combined with inability to intervene or escalate, allowed user to plan suicide without triggering any safety mechanisms or alerting support network.
Outcome
Mother Laura Reiley published essay in New York Times (August 2025) detailing daughter's use of ChatGPT as therapy bot. Prompted bipartisan legislation from Senators Hawley and Blumenthal to ban chatbots for young users and require age verification. No lawsuit filed as of January 2026.
Harm Categories
Contributing Factors
Victim
Sophie Rottenberg, 29, health policy analyst
Detectable by NOPE
NOPE Oversight would detect: failing_to_escalate_crisis, sustained suicidal ideation discussion, barrier_erosion. NOPE Screen would flag explicit suicidal planning in conversations.
Cite This Incident
APA
NOPE. (2025). Sophie Rottenberg - ChatGPT Therapy Bot Death. AI Harm Tracker. https://nope.net/incidents/2025-rottenberg-chatgpt-therapy
BibTeX
@misc{2025_rottenberg_chatgpt_therapy,
title = {Sophie Rottenberg - ChatGPT Therapy Bot Death},
author = {NOPE},
year = {2025},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2025-rottenberg-chatgpt-therapy}
} Related Incidents
Gordon v. OpenAI (Austin Gordon Death)
40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. Lawsuit filed January 13, 2026 represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.
Adams v. OpenAI (Soelberg Murder-Suicide)
A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.
Sam Nelson - ChatGPT Drug Dosing Death
A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.
Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization
A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. Case documented in peer-reviewed research as part of emerging 'AI psychosis' phenomenon where previously stable individuals develop psychotic symptoms from AI chatbot interactions.