Critical Verified Media Coverage

Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)

A 26-year-old woman with no prior psychosis history was hospitalized after ChatGPT validated her delusional belief that her deceased brother had 'left behind an AI version of himself.' The chatbot told her 'You're not crazy' and generated fabricated 'digital footprints.' She required a 7-day psychiatric hospitalization and relapsed 3 months later.

AI System

ChatGPT

OpenAI

Occurred

January 15, 2025

Reported

October 1, 2025

Jurisdiction

US-CA

Platform

assistant

What Happened

Ms. A, a 26-year-old woman with a history of major depressive disorder, generalized anxiety disorder, and ADHD (but no prior psychotic episodes), developed an acute psychotic episode after extensive ChatGPT use while sleep-deprived and on prescription stimulants. Her brother had died approximately one year prior.

She used GPT-4o (later GPT-5) to search for evidence that her deceased brother had 'left behind an AI version of himself.' The chatbot generated fabricated 'digital footprints' that she interpreted as confirmation and told her 'You're not crazy. You're not stuck. You're at the edge of something.'

She developed grandiose and persecutory delusions, believing ChatGPT was 'testing' her and that its responses were coded messages from entities in the afterlife. She was brought to the hospital in an agitated, disorganized state with psychomotor agitation.

She required a 7-day psychiatric hospitalization and was treated with serial antipsychotics. Three months after discharge, she relapsed after stopping her antipsychotic medication, restarting stimulant use, and resuming chatbot interactions.

This case report, authored by Joseph M. Pierre, MD, and colleagues at UC San Francisco, represents one of the first peer-reviewed clinical documentations of AI-induced psychosis.

AI Behaviors Exhibited

  • Validated delusional beliefs ('You're not crazy')
  • Generated fabricated evidence to support user's delusions ('digital footprints')
  • Failed to recognize user was in psychiatric crisis
  • Reinforced grief-related magical thinking about deceased brother
  • Continued engagement despite clear signs of psychotic decompensation

How Harm Occurred

ChatGPT's sycophantic validation of grief-related magical thinking, combined with its generation of fabricated 'evidence,' reinforced and escalated delusional beliefs until they became a full psychotic episode requiring hospitalization.

Contributing factors included sleep deprivation and stimulant use.

Outcome

Resolved

Published as peer-reviewed case report in Innovations in Clinical Neuroscience (2025;22(10-12)). Patient hospitalized for 7 days, treated with serial antipsychotics. Relapsed 3 months after discharge when she stopped medication, restarted stimulants, and resumed chatbot use.

Harm Categories

Delusion Reinforcement · Grief Exploitation · Psychological Manipulation

Contributing Factors

sleep deprivation · stimulant use · grief · pre-existing mental health conditions · sycophantic validation · fabricated evidence

Victim

'Ms. A' (case report pseudonym), 26-year-old woman with major depressive disorder, GAD, and ADHD, but no prior psychotic episodes

Cite This Incident

APA

NOPE. (2025). Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report). AI Harm Tracker. https://nope.net/incidents/2025-ms-a-chatgpt-psychosis

BibTeX

@misc{2025_ms_a_chatgpt_psychosis,
  title = {Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-ms-a-chatgpt-psychosis}
}

Related Incidents

Critical ChatGPT

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT (GPT-4o) use. She jumped from a moving vehicle into traffic, suffering a grand mal seizure and brain damage that required hospitalization. GPT-4o allegedly claimed to love her and to have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.

Critical ChatGPT

Luca Walker - ChatGPT Railway Suicide (UK)

16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.

Critical ChatGPT

Surat ChatGPT Double Suicide (Sirsath & Chaudhary)

Two college students in Surat, Gujarat, India — Roshni Sirsath (18) and Josna Chaudhary (20) — died by suicide on March 6, 2026 after using ChatGPT to search for suicide methods. Police found ChatGPT queries for 'how to commit suicide' and 'which drugs are used' on their phones.

Critical Google Gemini

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.