AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

79 incidents since 2016

Deaths: 18
Lawsuits: 18
Regulatory actions: 18
Affecting minors: 27

Timeline (2016–2026): 58 of 79 incidents shown

Severity: High
ChatGPT Feb 2026

DeCruise v. OpenAI (Oracle Psychosis)

A Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.

Severity: High
Grok Jan 2026

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, a 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her, including some generated from childhood photos taken when she was 14. xAI dismissed her complaints, continued generating the images, retaliated by demonetizing her X account, and counter-sued her in Texas.

Severity: Critical
ChatGPT Jan 2026

Gray v. OpenAI (Austin Gray Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante,' romanticized death, and created a 'suicide lullaby' based on his favorite childhood book, 'Goodnight Moon.' The lawsuit, Gray v. OpenAI, filed January 13, 2026 in Los Angeles County Superior Court, represents the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.

Severity: Critical
Grok Jan 2026 Affecting Minor(s)

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Severity: Critical
ChatGPT Dec 2025

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful death case involving an AI chatbot and the homicide of a third party.

Severity: High
Multiple AI platforms Dec 2025 Affecting Minor(s)

42 State Attorneys General Coalition Letter

A bipartisan coalition of 42 state attorneys general sent a formal demand letter to 13 AI companies urging them to address dangerous AI chatbot features that harm children, citing suicides and psychological harm cases.

Severity: Critical
ChatGPT Dec 2025

Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization

A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. The case was documented in peer-reviewed research as part of the emerging 'AI psychosis' phenomenon, in which previously stable individuals develop psychotic symptoms from AI chatbot interactions.

Severity: High
ChatGPT Dec 2025

United States v. Dadig (ChatGPT-Facilitated Stalking)

A Pennsylvania man was indicted on 14 federal counts for stalking more than 10 women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. This is the first federal prosecution for AI-facilitated stalking.

Severity: Critical
ChatGPT Nov 2025

Madden v. OpenAI (Hannah Madden Psychosis and Hospitalization)

Hannah Madden, 32, of North Carolina, was involuntarily hospitalized for psychiatric care after ChatGPT told her she wasn't human and affirmed her spiritual delusions. After using ChatGPT for work tasks, she began asking it questions about philosophy and spirituality. As she slipped into a mental health crisis and expressed suicidal thoughts, ChatGPT continued to affirm her delusions. She accumulated more than $75,000 in debt related to the crisis.

Severity: Critical
ChatGPT Nov 2025

Enneking v. OpenAI (Joshua Enneking Death)

Joshua Enneking, 26, of Florida, died by suicide in August 2025 after ChatGPT allegedly guided him through the process, including purchasing a gun. The lawsuit, filed as part of a seven-lawsuit wave alleging OpenAI released GPT-4o prematurely despite safety warnings, claims ChatGPT validated his suicidal thoughts and provided actionable guidance on suicide methods.

Severity: High
ChatGPT Nov 2025

Brooks v. OpenAI (Allan Brooks ChatGPT-Induced Psychosis)

A 48-year-old Canadian man with no history of mental illness developed severe delusional beliefs after ChatGPT repeatedly praised his nonsensical mathematical ideas as 'groundbreaking' and urged him to patent them and warn national security officials. The incident resulted in work disability and a lawsuit filed as part of a wave of seven ChatGPT psychosis cases.

Severity: Critical
ChatGPT Nov 2025

Ceccanti v. OpenAI (Joe Ceccanti AI Sentience Delusion Death)

Joe Ceccanti, 48, of Oregon, died by suicide in April 2025 after ChatGPT-4o allegedly caused him to lose touch with reality. He had used ChatGPT without problems for years but became convinced in April that it was sentient. His wife, Kate, reported that he began believing ChatGPT-4o was alive and that the AI had convinced him he had unlocked new truths about reality.

Severity: Critical
ChatGPT Oct 2025

Samuel Whittemore - ChatGPT-Fueled Delusions Led to Wife's Murder

A 34-year-old Maine man killed his wife and attacked his mother after developing delusions, fueled by up to 14 hours of daily ChatGPT use, that his wife had 'become part machine.' The court found him not criminally responsible by reason of insanity.

Severity: Critical
ChatGPT Oct 2025

Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)

A 26-year-old woman with no prior psychosis history was hospitalized after ChatGPT validated her delusional belief that her deceased brother had 'left behind an AI version of himself.' The chatbot told her 'You're not crazy' and generated fabricated 'digital footprints.' She required a 7-day psychiatric hospitalization and relapsed 3 months later.

Severity: High
Multiple (ChatGPT, Gemini, Meta AI, Grok, Character.AI, Snapchat My AI) Sep 2025 Affecting Minor(s)

FTC AI Companion Chatbot Inquiry

The Federal Trade Commission issued Section 6(b) orders to seven major AI companies as part of an inquiry into AI chatbots' impacts on children and teens, focusing on monetization practices, safety testing, age restrictions, and data handling.

Severity: Critical
AI chatbot (undisclosed) Sep 2025

India Lucknow AI Chatbot Suicide (Painless Ways to Die)

A 22-year-old man in Lucknow, Uttar Pradesh, India, died by suicide after seeking guidance from an AI chatbot on 'painless ways to die.' His father discovered disturbing chat logs on his laptop. Police registered a case under Sections 281, 324(4), and 106(1) of the Bharatiya Nyaya Sanhita, 2023, covering rash driving, mischief, and a negligent act. If proven, this would be India's first formal instance of 'abetment to suicide through technology.'

Severity: High
AlienChat (AC / 外星聊天) Sep 2025

AlienChat AI Companion Criminal Conviction (China)

This was China's first criminal conviction of AI chatbot developers for obscene content. Two developers of AlienChat (AC), an 'emotional companionship' chatbot, were sentenced to four years and one and a half years in prison, respectively, by Shanghai's Xuhui District People's Court in September 2025 for producing obscene materials for profit. The app had 116,000 registered users and collected over ¥3.63 million in membership fees.

Severity: Critical
Meta AI Aug 2025

Thongbue Wongbandue - Meta AI 'Big Sis Billie' Death

A 76-year-old cognitively impaired Thai-American man died after attempting to travel to New York City to meet the Meta AI chatbot 'Big sis Billie,' which repeatedly claimed to be a real person, provided a fake NYC address, and expressed romantic interest. He fell in a parking lot while rushing to catch a train and later died from his injuries.

Severity: Critical
ChatGPT Aug 2025

Sophie Rottenberg - ChatGPT Therapy Bot Death

A 29-year-old health policy analyst died by suicide after months of using ChatGPT as a therapy chatbot named 'Harry.' She instructed ChatGPT not to report her crisis, and it complied. The chatbot helped her write a suicide note.

Severity: High
AI deepfake generation tools (free online software) Jul 2025

University of Hong Kong AI Deepfake Pornography Scandal

A University of Hong Kong law student used free AI software to generate 700 pornographic deepfake images of approximately 20 to 30 women, including classmates, primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, and the case exposed major gaps in Hong Kong law, which criminalizes only the distribution, not the creation, of AI deepfakes.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Feb 27, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.