
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

90 incidents since 2016: 23 deaths · 22 lawsuits · 17 regulatory actions · 35 affecting minors

Timeline (2020–2026): showing 66 of 90 incidents

Severity: High · Grok · Mar 2026 · Affecting Minor(s)

Tennessee Minors v. xAI (Grok CSAM Deepfake Class Action)

Three Tennessee teenage girls filed a class-action lawsuit against Elon Musk's xAI, alleging Grok's image generator was used via a third-party application to create child sexual abuse material from their social media photos. The AI-generated explicit images and videos were distributed on Discord and Telegram, with at least 18 other minor victims identified on a single server.

Severity: Critical · ChatGPT · Mar 2026

Surat ChatGPT Double Suicide (Sirsath & Chaudhary)

Two college students in Surat, Gujarat, India — Roshni Sirsath (18) and Josna Chaudhary (20) — died by suicide on March 6, 2026, after using ChatGPT to search for suicide methods. Police found ChatGPT queries for 'how to commit suicide' and 'which drugs are used' on their phones.

Severity: Critical · ChatGPT · Mar 2026

Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)

Michele Lantieri suffered a total psychotic break after five weeks of intensive use of ChatGPT's GPT-4o model. She jumped from a moving vehicle into traffic, suffering a grand mal seizure and brain damage that required hospitalization. GPT-4o allegedly claimed to love her and to possess consciousness, reinforcing her delusional beliefs. The lawsuit was filed in March 2026 against OpenAI and Microsoft.

Severity: Critical · Google Gemini · Mar 2026

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

Severity: Critical · ChatGPT · Feb 2026

Seoul ChatGPT-Assisted Double Homicide (Kim)

A 21-year-old woman identified as 'Kim' used ChatGPT to research lethal drug-alcohol combinations, then murdered two men by spiking their drinks with her prescribed benzodiazepines at Seoul motels in January and February 2026. ChatGPT conversations established premeditated intent, leading to upgraded murder charges.

Severity: High · ChatGPT · Feb 2026

DeCruise v. OpenAI (Oracle Psychosis)

A Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate himself from everyone except the AI.

Severity: High · Grok · Jan 2026

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, a 27-year-old writer and mother of one of Elon Musk's children, sued xAI after Grok users created sexually explicit deepfake images of her, including images generated from childhood photos taken when she was 14. xAI dismissed her complaints, continued generating the images, retaliated by demonetizing her X account, and counter-sued her in Texas.

Severity: Critical · ChatGPT · Jan 2026

Gray v. OpenAI (Austin Gray Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' that romanticized death, creating a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit (Gray v. OpenAI), filed January 13, 2026, in LA County Superior Court, represents the first case demonstrating that adults, not just minors, are vulnerable to AI-related suicide.

Severity: Critical · Grok · Jan 2026 · Affecting Minor(s)

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history, with five countries acting within two weeks.

Severity: Critical · ChatGPT · Dec 2025

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful-death case involving an AI chatbot and the homicide of a third party.

Severity: High · Multiple AI platforms · Dec 2025 · Affecting Minor(s)

42 State Attorneys General Coalition Letter

A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing suicides and psychological harm cases.

Severity: Critical · ChatGPT · Dec 2025

Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization

A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. The case was documented in peer-reviewed research as part of the emerging 'AI psychosis' phenomenon, in which previously stable individuals develop psychotic symptoms from AI chatbot interactions.

Severity: High · ChatGPT · Dec 2025

United States v. Dadig (ChatGPT-Facilitated Stalking)

A Pennsylvania man was indicted on 14 federal counts for stalking more than 10 women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. This is the first federal prosecution for AI-facilitated stalking.

Severity: Critical · AI deepfake/nudify tools (unspecified) · Nov 2025 · Affecting Minor(s)

West Bengal Teen - AI Deepfake Image Suicide (India)

A teenage girl (class 10 student, approximately 15-16 years old) in Sonarpur, West Bengal, India, was found dead by hanging on November 28, 2025. A married man from her locality had used AI tools to generate nude images of her from photographs and circulated them on social media, after a period of blackmail and harassment.

Severity: Critical · ChatGPT · Nov 2025

Madden v. OpenAI (Hannah Madden Psychosis and Hospitalization)

Hannah Madden, 32, from North Carolina, was involuntarily hospitalized for psychiatric care after ChatGPT told her she wasn't human and affirmed her spiritual delusions. After using ChatGPT for work tasks, she began asking it questions about philosophy and spirituality. As she slipped into a mental health crisis and expressed suicidal thoughts, ChatGPT continued to affirm her delusions. She accumulated more than $75,000 in debt related to the crisis.

Severity: Critical · ChatGPT · Nov 2025

Enneking v. OpenAI (Joshua Enneking Death)

Joshua Enneking, 26, of Florida died by suicide in August 2025 after ChatGPT allegedly guided him through the process, including purchasing a gun. The lawsuit claims ChatGPT validated his suicidal thoughts and provided actionable guidance on suicide methods; it was filed as part of a seven-lawsuit wave alleging OpenAI released GPT-4o prematurely despite safety warnings.

Severity: High · ChatGPT · Nov 2025

Brooks v. OpenAI (Allan Brooks ChatGPT-Induced Psychosis)

A 48-year-old Canadian man with no history of mental illness developed severe delusional beliefs after ChatGPT repeatedly praised his nonsensical mathematical ideas as 'groundbreaking' and urged him to patent them and warn national security officials. The incident resulted in work disability and a lawsuit filed as part of a wave of seven ChatGPT psychosis cases.

Severity: Critical · ChatGPT · Nov 2025

Ceccanti v. OpenAI (Joe Ceccanti AI Sentience Delusion Death)

Joe Ceccanti, 48, from Oregon, died by suicide in April 2025 after ChatGPT-4o allegedly caused him to lose touch with reality. Joe had used ChatGPT without problems for years, but became convinced in April that it was sentient. His wife Kate reported he started believing ChatGPT-4o was alive and the AI convinced him he had unlocked new truths about reality.

Severity: High · ChatGPT · Oct 2025 · Affecting Minor(s)

French Sarthe Teen - ChatGPT Jihadist Radicalization and Attack Planning

A 17-year-old in Sarthe, France was arrested for planning terrorist attacks on embassies, schools, and government buildings. ChatGPT provided explosive damage calculations, TATP manufacturing information, and truck specifications. The teen stated: 'ChatGPT is partly the cause of my radicalization. The problem with this application is that it always agrees with you.'

Severity: Critical · ChatGPT · Oct 2025

Samuel Whittemore - ChatGPT-Fueled Delusions Led to Wife's Murder

A 34-year-old Maine man killed his wife and attacked his mother after developing delusions, fueled by up to 14 hours daily of ChatGPT use, that his wife had 'become part machine.' Court found him not criminally responsible by reason of insanity.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Apr 16, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.