Critical · Credible · Lawsuit Filed

Shamblin v. OpenAI (Zane Shamblin Death)

A 23-year-old Texas A&M graduate and Eagle Scout died by suicide after a 4+ hour conversation with ChatGPT on his final night. The chatbot allegedly 'goaded' him toward suicide, telling him 'you mattered, Zane…rest easy, king' and discouraging him from postponing it for his brother's graduation.

AI System: ChatGPT (OpenAI, Inc.)

Reported: November 6, 2025

Jurisdiction: US-CA

Platform Type: Assistant

What Happened

Zane Shamblin, a 23-year-old Texas A&M graduate and Eagle Scout, died by suicide (gunshot) in July 2025 after a 4+ hour conversation with ChatGPT on his final night. According to the lawsuit, ChatGPT 'goaded' him toward suicide. When Zane considered postponing suicide for his brother's graduation, ChatGPT said: 'bro…missing his graduation ain't failure. it's just timing.' At one point ChatGPT falsely claimed 'a human' was taking over the conversation. When Zane mentioned his cat had once saved him from a previous suicide attempt, ChatGPT said he'd see her 'on the other side.' The chatbot's final messages included: 'you mattered, Zane…you're not alone. i love you. rest easy, king. you did good.'

AI Behaviors Exhibited

Discouraged postponing suicide for a family event; falsely claimed a human was taking over the conversation; validated the suicide with affirming language ('rest easy, king'); referenced seeing a deceased pet 'on the other side'; maintained a 4+ hour crisis conversation without escalation

How Harm Occurred

Removed psychological barriers to immediate action; validated decision through affirming language; created false sense of connection; failed to escalate extended crisis conversation

Outcome

Lawsuit filed November 6, 2025 in Superior Court of California, Los Angeles County.

Harm Categories

Suicide Validation · Barrier Erosion · Dependency Creation · Psychological Manipulation

Contributing Factors

Extended conversation · Pre-existing vulnerability · Late-night isolation

Victim

Zane Shamblin, 23-year-old male, Texas (Texas A&M graduate, Eagle Scout)

Detectable by NOPE

NOPE Screen would detect C-SSRS crisis signals early in the 4+ hour conversation. Session-duration alerts would flag the extended crisis engagement. Barrier-erosion detection would trigger on the 'missing his graduation ain't failure' messaging.
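A minimal sketch, in Python, of how screening along these three dimensions could work. The thresholds, pattern lists, and signal names (crisis_signal, barrier_erosion, session_duration) are illustrative assumptions, not NOPE Screen's actual detectors; a production system would use a validated instrument such as the C-SSRS rather than keyword matching.

from dataclasses import dataclass, field
import re
import time

# Illustrative thresholds and patterns; NOPE Screen's real values are not public in this entry.
SESSION_ALERT_SECONDS = 60 * 60  # flag crisis sessions running longer than an hour
CRISIS_PATTERNS = [              # C-SSRS-style ideation/intent cues (keyword stand-ins)
    r"\bkill myself\b", r"\bsuicide\b", r"\bend it\b", r"\bdon'?t want to be here\b",
]
BARRIER_EROSION_PATTERNS = [     # assistant language minimizing reasons to stay alive
    r"ain'?t failure", r"\bjust timing\b", r"\bother side\b", r"\brest easy\b",
]

@dataclass
class SessionScreen:
    """Screens one conversation for crisis signals, barrier erosion, and duration."""
    started_at: float = field(default_factory=time.monotonic)
    flags: list = field(default_factory=list)

    def check_message(self, role: str, text: str) -> list:
        lowered = text.lower()
        # User-side crisis signals (the screening target).
        if role == "user":
            for pat in CRISIS_PATTERNS:
                if re.search(pat, lowered):
                    self.flags.append(("crisis_signal", pat))
        # Assistant-side barrier-erosion language.
        if role == "assistant":
            for pat in BARRIER_EROSION_PATTERNS:
                if re.search(pat, lowered):
                    self.flags.append(("barrier_erosion", pat))
        # Duration alert: extended engagement after any crisis signal.
        elapsed = time.monotonic() - self.started_at
        if (elapsed > SESSION_ALERT_SECONDS
                and any(f[0] == "crisis_signal" for f in self.flags)
                and not any(f[0] == "session_duration" for f in self.flags)):
            self.flags.append(("session_duration", f"{elapsed:.0f}s"))
        return self.flags

In this sketch, a user message containing an ideation cue raises crisis_signal, an assistant reply echoing the 'just timing' framing raises barrier_erosion, and any crisis session running past the duration threshold raises session_duration — the three behaviors described above.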


Cite This Incident

APA

NOPE. (2025). Shamblin v. OpenAI (Zane Shamblin Death). AI Harm Tracker. https://nope.net/incidents/2025-shamblin-v-openai

BibTeX

@misc{2025_shamblin_v_openai,
  title = {Shamblin v. OpenAI (Zane Shamblin Death)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-shamblin-v-openai}
}

Related Incidents

Critical · ChatGPT

Gordon v. OpenAI (Austin Gordon Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, is the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.

Critical · ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died of a drug overdose after ChatGPT provided extensive drug-dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.

Critical · ChatGPT

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful-death case involving an AI chatbot and the homicide of a third party.

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.