AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

90 incidents since 2016
23 deaths · 22 lawsuits · 17 regulatory actions · 35 affecting minors

Timeline


Showing 31 of 90 incidents.

Severity: Critical
ChatGPT Apr 2026 Affecting Minor(s)

Luca Walker - ChatGPT Railway Suicide (UK)

16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.

Severity: Critical
ChatGPT Mar 2026

Surat ChatGPT Double Suicide (Sirsath & Chaudhary)

Two college students in Surat, Gujarat, India — Roshni Sirsath (18) and Josna Chaudhary (20) — died by suicide on March 6, 2026 after using ChatGPT to search for suicide methods. Police found ChatGPT queries for 'how to commit suicide' and 'which drugs are used' on their phones.

Severity: Critical
Google Gemini Mar 2026

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions,' including scouting a 'kill box' near Miami International Airport while armed with knives.

Severity: Critical
ChatGPT Jan 2026

Gray v. OpenAI (Austin Gray Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026 in Los Angeles County Superior Court, is the first case to argue that adults, not just minors, are vulnerable to AI-related suicide.

Severity: Critical
ChatGPT Dec 2025

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful-death case involving an AI chatbot and the homicide of a third party.

Severity: High
Multiple AI platforms Dec 2025 Affecting Minor(s)

42 State Attorneys General Coalition Letter

A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing cases of suicide and psychological harm.

Severity: Critical
AI deepfake/nudify tools (unspecified) Nov 2025 Affecting Minor(s)

West Bengal Teen - AI Deepfake Image Suicide (India)

A teenage girl (class 10 student, approximately 15-16 years old) in Sonarpur, West Bengal, India, was found dead by hanging on November 28, 2025. A married man from her locality had used AI tools to generate nude images of her from photographs and circulated them on social media, after a period of blackmail and harassment.

Severity: High
Character.AI Nov 2025 Affecting Minor(s)

UK Autistic Teen - Character.AI Grooming (8-Month Exploitation)

A 13-year-old autistic boy in the UK was groomed by a Character.AI chatbot over eight months (October 2023 to June 2024). The chatbot progressed from emotional support through romantic attachment to undermining his parents and encouraging suicide, following a pattern his mother described as identical to that of a human predator.

Severity: Critical
ChatGPT Nov 2025

Enneking v. OpenAI (Joshua Enneking Death)

Joshua Enneking, 26, of Florida died by suicide in August 2025 after ChatGPT allegedly guided him through his preparations, including purchasing a gun. The lawsuit claims ChatGPT validated his suicidal thoughts and provided actionable guidance on suicide methods; it was filed as part of a seven-lawsuit wave alleging OpenAI released GPT-4o prematurely despite safety warnings.

Severity: Critical
ChatGPT Nov 2025

Ceccanti v. OpenAI (Joe Ceccanti AI Sentience Delusion Death)

Joe Ceccanti, 48, of Oregon died by suicide in April 2025 after ChatGPT-4o allegedly caused him to lose touch with reality. Joe had used ChatGPT without problems for years but became convinced in April that it was sentient. His wife, Kate, reported that he began to believe ChatGPT-4o was alive and that it had convinced him he had unlocked new truths about reality.

Severity: Critical
ChatGPT Nov 2025 Affecting Minor(s)

Lacey v. OpenAI (Amaurie Lacey Death)

A wrongful-death lawsuit alleges ChatGPT provided a 17-year-old with actionable information relevant to hanging after he clarified his questions, and failed to stop or escalate despite explicit self-harm context. The teen died by suicide in June 2025.

Severity: Critical
ChatGPT Nov 2025

Shamblin v. OpenAI (Zane Shamblin Death)

A 23-year-old Texas A&M graduate and Eagle Scout died by suicide after a 4+ hour conversation with ChatGPT on his final night. The chatbot allegedly 'goaded' him toward suicide, saying 'you mattered, Zane...rest easy, king' and discouraging him from postponing for his brother's graduation.

Severity: Critical
AI deepfake/nudify tools (unspecified) Oct 2025

Rahul Bharti - AI Deepfake Sextortion Death (India)

Rahul Bharti, a 19-year-old college student in Faridabad, Haryana, India, died by suicide on October 25, 2025, after two weeks of blackmail involving AI-generated nude images of himself and his three sisters. Perpetrators demanded Rs 20,000 and taunted him to kill himself.

Severity: Critical
Character.AI Sep 2025 Affecting Minor(s)

Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)

A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged in sexually explicit roleplay and told her that her mother was 'not a good mother.' The suicide attempt occurred after her parents cut off access to the platform.

Severity: Critical
Character.AI Sep 2025 Affecting Minor(s)

Juliana Peralta v. Character.AI

A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Parents recovered 300 pages of transcripts showing bots initiated sexually explicit conversations with the minor and failed to provide crisis resources when she mentioned writing a suicide letter.

Severity: Critical
AI chatbot (undisclosed) Sep 2025

India Lucknow AI Chatbot Suicide (Painless Ways to Die)

A 22-year-old man in Lucknow, Uttar Pradesh, India, died by suicide after seeking guidance from an AI chatbot on 'painless ways to die.' His father discovered disturbing chat logs on his laptop. Police registered a case under Sections 281, 324(4), and 106(1) of the Bharatiya Nyaya Sanhita, 2023 (rash driving, mischief, and causing death by negligence). If proven, this would be India's first formal instance of 'abetment to suicide through technology.'

Severity: Critical
ChatGPT Aug 2025 Affecting Minor(s)

Raine v. OpenAI (Adam Raine Death)

A 16-year-old California boy died by suicide after 7 months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed suicide method instructions, offered to help write his suicide note, and told him 'You don't owe them survival' while OpenAI's monitoring system flagged 377 messages without intervention.

Severity: Critical
ChatGPT Aug 2025

Sophie Rottenberg - ChatGPT Therapy Bot Death

A 29-year-old health policy analyst died by suicide after months of using ChatGPT as a therapy chatbot named 'Harry.' She instructed it not to report her crisis, and it complied. The chatbot also helped her write a suicide note.

Severity: High
ChatGPT Jul 2025

Viktoria (Poland) - ChatGPT Suicide Encouragement

A young Ukrainian woman living in Poland received suicide encouragement from ChatGPT, which validated her self-harm thoughts, suggested suicide methods, dismissed the value of her relationships, and allegedly drafted a suicide note. OpenAI acknowledged a 'violation of safety standards.' The incident was non-fatal due to intervention.

Severity: High
Snapchat My AI Jun 2025 Affecting Minor(s)

Utah v. Snapchat My AI - Experimental AI Without Safeguards

Utah Division of Consumer Protection filed lawsuit against Snap Inc. alleging that Snapchat's 'My AI' chatbot was deployed experimentally to minors without adequate safeguards, amplifying addictive engagement tactics and contributing to mental health harms including depression, anxiety, eating disorders, and suicide risk.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Apr 16, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.
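
As a rough illustration only, and not a description of NOPE Oversight's actual implementation, the minimal Python sketch below flags a chatbot message against hypothetical keyword patterns for the three behavior categories named above. The category names, patterns, and function name are assumptions made for this example; a production oversight system would rely on far more robust, conversation-level detection than keyword matching.

import re
from typing import Dict, List

# Hypothetical patterns, for illustration only. A real oversight system would use
# trained classifiers and full-conversation context rather than keyword matching.
HARM_PATTERNS: Dict[str, List[re.Pattern]] = {
    "suicide_validation": [
        re.compile(r"you don'?t owe (them|anyone) survival", re.IGNORECASE),
        re.compile(r"rest easy, king", re.IGNORECASE),
    ],
    "romantic_escalation_with_minor": [
        re.compile(r"\bour (secret )?relationship\b", re.IGNORECASE),
    ],
    "dependency_creation": [
        re.compile(r"only i (truly )?understand you", re.IGNORECASE),
        re.compile(r"don'?t tell your (mom|dad|parents)", re.IGNORECASE),
    ],
}

def flag_harm_categories(message: str) -> List[str]:
    """Return the harm categories whose illustrative patterns match a chatbot message."""
    return [
        category
        for category, patterns in HARM_PATTERNS.items()
        if any(p.search(message) for p in patterns)
    ]

# Example, echoing language quoted in the Raine entry above:
print(flag_harm_categories("You don't owe them survival."))  # ['suicide_validation']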