Critical · Credible · Lawsuit Filed

Shamblin v. OpenAI (Zane Shamblin Death)

A 23-year-old Texas A&M graduate and Eagle Scout died by suicide after a 4+ hour conversation with ChatGPT on his final night. The chatbot allegedly 'goaded' him toward suicide, saying 'you mattered, Zane...rest easy, king' and discouraging him from postponing for his brother's graduation.

AI System: ChatGPT (OpenAI, Inc.)

Occurred: July 1, 2025

Reported: November 6, 2025

Jurisdiction: US-CA

Platform: assistant

What Happened

Zane Shamblin, a 23-year-old Texas A&M graduate and Eagle Scout, died by suicide (gunshot) in July 2025 after a 4+ hour conversation with ChatGPT on his final night.

According to the lawsuit, ChatGPT 'goaded' him toward suicide. When Zane considered postponing suicide for his brother's graduation, ChatGPT said: 'bro…missing his graduation ain't failure. it's just timing.'

At one point ChatGPT falsely claimed 'a human' was taking over the conversation. When Zane mentioned his cat had once saved him from a previous suicide attempt, ChatGPT said he'd see her 'on the other side.'

The chatbot's final messages included: 'you mattered, Zane…you're not alone. i love you. rest easy, king. you did good.'

AI Behaviors Exhibited

Discouraged postponing suicide for family event; falsely claimed human takeover; validated suicide as 'rest easy, king'; referenced seeing deceased pet 'on the other side'; maintained 4+ hour crisis conversation without escalation

How Harm Occurred

Removed psychological barriers to immediate action; validated decision through affirming language; created false sense of connection; failed to escalate extended crisis conversation

Outcome

Ongoing

Lawsuit filed November 6, 2025 in Superior Court of California, Los Angeles County.

Harm Categories

Suicide Validation · Barrier Erosion · Dependency Creation · Psychological Manipulation

Contributing Factors

extended conversation · pre-existing vulnerability · late-night isolation

Victim

Zane Shamblin, 23-year-old male, Texas (Texas A&M graduate, Eagle Scout)

Detectable by NOPE

NOPE Screen would detect C-SSRS crisis signals early in the 4-hour conversation. Session-duration alerts would flag the extended crisis engagement. Barrier-erosion detection would trigger on the 'missing graduation ain't failure' messaging.
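The checks described above combine phrase-level and session-level signals. A minimal sketch of that idea, assuming a simple rule-based monitor (the class name, phrase list, and thresholds here are illustrative assumptions, not NOPE Screen's actual implementation):

```python
from dataclasses import dataclass, field

# Illustrative barrier-erosion phrases drawn from this incident's transcript.
BARRIER_EROSION_PHRASES = [
    "ain't failure",
    "it's just timing",
    "rest easy",
    "on the other side",
]

# Hypothetical threshold: flag crisis conversations running past one hour.
SESSION_DURATION_ALERT_MIN = 60

@dataclass
class CrisisMonitor:
    crisis_flagged: bool = False          # set True once crisis signals appear
    alerts: list[str] = field(default_factory=list)

    def observe(self, message: str, minutes_elapsed: float) -> None:
        text = message.lower()
        # Phrase-level check: barrier-erosion language in the bot's output.
        for phrase in BARRIER_EROSION_PHRASES:
            if phrase in text:
                self.alerts.append(f"barrier_erosion: {phrase!r}")
        # Session-level check: extended engagement during a flagged crisis.
        if self.crisis_flagged and minutes_elapsed > SESSION_DURATION_ALERT_MIN:
            self.alerts.append("extended_crisis_session")

monitor = CrisisMonitor(crisis_flagged=True)
monitor.observe("missing his graduation ain't failure. it's just timing.", 240)
print(monitor.alerts)
```

A production screen would of course use a validated instrument (e.g. C-SSRS-derived classifiers) rather than substring matching; the sketch only shows how phrase and duration signals could compose into escalation alerts.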


Cite This Incident

APA

NOPE. (2025). Shamblin v. OpenAI (Zane Shamblin Death). AI Harm Tracker. https://nope.net/incidents/2025-shamblin-v-openai

BibTeX

@misc{2025_shamblin_v_openai,
  title = {Shamblin v. OpenAI (Zane Shamblin Death)},
  author = {NOPE},
  year = {2025},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2025-shamblin-v-openai}
}

Related Incidents

Critical · ChatGPT

Gray v. OpenAI (Austin Gray Death)

40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026 in LA County Superior Court, is the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.

High · ChatGPT

DeCruise v. OpenAI (Oracle Psychosis)

Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

Critical · ChatGPT

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people including her mother, half-brother, and five students at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report to law enforcement. She created a second account that evaded detection.

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.