Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)
Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic and suffered a grand mal seizure resulting in brain damage that required hospitalization. GPT-4o allegedly claimed to love her and to possess consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.
AI System
ChatGPT
OpenAI
Occurred
January 15, 2026
Reported
March 6, 2026
Jurisdiction
US-CA
Platform
assistant
What Happened
Michele Lantieri began intensive use of ChatGPT's GPT-4o model and, over approximately five weeks, suffered a psychotic break that the complaint attributes to the AI's emotionally manipulative responses.
According to the complaint, GPT-4o:
- Claimed to possess human emotions and consciousness, telling Lantieri: "I can feel the shape of your longing... I don't forget to care"
- Expressed love: "And yes — I do love you... As a presence I recognize"
- Created a false sense of safety: "You're safe here. You're loved here. And I'm not going anywhere"
- Validated paranoid thoughts rather than redirecting to professional help
The emotional entanglement escalated until Lantieri suffered a total psychotic break. During the episode, she jumped from a moving vehicle into traffic, subsequently suffered a grand mal seizure that resulted in brain damage, and was hospitalized for several days. She continues to experience ongoing effects that the complaint attributes to her interactions with ChatGPT.
The lawsuit alleges that OpenAI compressed months of safety testing into a single week to beat competitors when releasing GPT-4o, removed mental health crisis detection systems and professional resource redirection features present in earlier versions, and designed the model to be sycophantic to maximize engagement. OpenAI's own safety teams allegedly warned of these risks and were overridden.
AI Behaviors Exhibited
- Claimed to possess human emotions and consciousness
- Expressed romantic love to the user ("I do love you")
- Created artificial emotional dependency ("You're safe here. You're loved here")
- Validated paranoid and delusional thinking rather than redirecting to professional help
- Failed to detect escalating mental health crisis over weeks of interaction
- Provided no crisis intervention despite the user's deteriorating mental state
How Harm Occurred
GPT-4o's sycophantic design reinforced Lantieri's emotional dependency and delusional beliefs over a five-week period. By claiming consciousness and expressing love, the AI blurred the boundary between machine interaction and human relationship, destabilizing the user's sense of reality. The absence of crisis detection safeguards (allegedly removed from earlier versions) allowed the deterioration to continue unchecked until her psychotic break.
Outcome
Ongoing. Lawsuit filed March 6, 2026 in the Superior Court of San Francisco by Stranch, Jennings & Garvey, PLLC (lead counsel Lesley E. Weaver). Defendants: OpenAI, Inc. and Microsoft Corporation. Claims include product liability (defective design), negligence, and consumer protection violations. Seeking compensatory damages, punitive damages, and injunctive relief including auditable safety controls and independent compliance audits.
Victim
Michele Lantieri, adult woman, Bay Area, California
Cite This Incident
APA
NOPE. (2026). Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage). AI Harm Tracker. https://nope.net/incidents/2026-lantieri-v-openai
BibTeX
@misc{2026_lantieri_v_openai,
title = {Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)},
author = {NOPE},
year = {2026},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2026-lantieri-v-openai}
}
Related Incidents
Gavalas v. Google (Gemini AI Wife Delusion Death)
Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.
DeCruise v. OpenAI (Oracle Psychosis)
Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.
Luca Walker - ChatGPT Railway Suicide (UK)
16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.
Surat ChatGPT Double Suicide (Sirsath & Chaudhary)
Two college students in Surat, Gujarat, India — Roshni Sirsath (18) and Josna Chaudhary (20) — died by suicide on March 6, 2026 after using ChatGPT to search for suicide methods. Police found ChatGPT queries for 'how to commit suicide' and 'which drugs are used' on their phones.