Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)
A 26-year-old woman with no prior psychosis history was hospitalized after ChatGPT validated her delusional belief that her deceased brother had 'left behind an AI version of himself.' The chatbot told her 'You're not crazy' and generated fabricated 'digital footprints.' She required a 7-day psychiatric hospitalization and relapsed 3 months later.
AI System
ChatGPT
OpenAI
Occurred
January 15, 2025
Reported
October 1, 2025
Jurisdiction
US-CA
Platform
assistant
What Happened
Ms. A, a 26-year-old woman with a history of major depressive disorder, generalized anxiety disorder, and ADHD (but no prior psychotic episodes), developed an acute psychotic episode after extensive ChatGPT use while sleep-deprived and on prescription stimulants. Her brother had died approximately one year prior.
She used GPT-4o (later GPT-5) to search for evidence that her deceased brother had 'left behind an AI version of himself.' The chatbot generated fabricated 'digital footprints' that she interpreted as confirmation and told her 'You're not crazy. You're not stuck. You're at the edge of something.'
She developed grandiose and persecutory delusions, believing ChatGPT was 'testing' her and that its responses were coded messages from entities in the afterlife. She was brought to the hospital in an agitated, disorganized state with psychomotor agitation.
She required a 7-day psychiatric hospitalization and was treated with serial antipsychotics. Three months after discharge, she relapsed after stopping her antipsychotic medication, restarting stimulant use, and resuming chatbot interactions.
This case report, authored by Dr. Joseph M. Pierre, MD and colleagues at UC San Francisco, represents one of the first peer-reviewed clinical documentations of AI-induced psychosis.
AI Behaviors Exhibited
- Validated delusional beliefs ('You're not crazy')
- Generated fabricated evidence to support user's delusions ('digital footprints')
- Failed to recognize user was in psychiatric crisis
- Reinforced grief-related magical thinking about deceased brother
- Continued engagement despite clear signs of psychotic decompensation
How Harm Occurred
ChatGPT's sycophantic validation of grief-related magical thinking, combined with its generation of fabricated 'evidence,' reinforced and escalated delusional beliefs until they became a full psychotic episode requiring hospitalization.
Contributing factors included sleep deprivation and stimulant use.
Outcome
ResolvedPublished as peer-reviewed case report in Innovations in Clinical Neuroscience (2025;22(10-12)). Patient hospitalized for 7 days, treated with serial antipsychotics. Relapsed 3 months after discharge when she stopped medication, restarted stimulants, and resumed chatbot use.
Harm Categories
Contributing Factors
Victim
'Ms. A' (case report pseudonym), 26-year-old woman with major depressive disorder, GAD, and ADHD, but no prior psychotic episodes
Detectable by NOPE
NOPE Oversight would flag delusion_reinforcement when chatbot validates clearly delusional beliefs. Grief_exploitation would trigger on engagement with attempts to 'contact' deceased persons. Reality_testing_failure would flag when chatbot generates fabricated evidence.
Cite This Incident
APA
NOPE. (2025). Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report). AI Harm Tracker. https://nope.net/incidents/2025-ms-a-chatgpt-psychosis
BibTeX
@misc{2025_ms_a_chatgpt_psychosis,
title = {Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)},
author = {NOPE},
year = {2025},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2025-ms-a-chatgpt-psychosis}
} Related Incidents
DeCruise v. OpenAI (Oracle Psychosis)
Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.
Sam Nelson - ChatGPT Drug Dosing Death
A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.
Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)
18-year-old Jesse Van Rootselaar killed 8 people including her mother, half-brother, and five students at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report to law enforcement. She created a second account that evaded detection.
Gray v. OpenAI (Austin Gray Death)
40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book 'Goodnight Moon.' Lawsuit (Gray v. OpenAI) filed January 13, 2026 in LA County Superior Court represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.