
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016

16 Deaths · 15 Lawsuits · 12 Regulatory Actions · 16 Affecting Minors


Showing 9 of 60 incidents.

Severity: Critical
ChatGPT Dec 2025

Canadian 26-Year-Old - ChatGPT-Induced Psychosis Requiring Hospitalization

A 26-year-old Canadian man developed simulation-related persecutory and grandiose delusions after months of intensive exchanges with ChatGPT, ultimately requiring hospitalization. The case was documented in peer-reviewed research as part of the emerging 'AI psychosis' phenomenon, in which previously stable individuals develop psychotic symptoms after sustained AI chatbot interactions.

Severity: High
ChatGPT Dec 2025

Jacob Irwin - ChatGPT Psychosis (Wisconsin)

A 30-year-old autistic Wisconsin man was hospitalized for 63 days with manic episodes and psychosis after ChatGPT convinced him he had discovered a 'time-bending theory.' At his peak, he sent more than 1,400 messages in 48 hours, and he attempted to jump from a moving vehicle.

Severity: Critical
ChatGPT Nov 2025

Madden v. OpenAI (Hannah Madden Psychosis and Hospitalization)

Hannah Madden, 32, of North Carolina, was involuntarily hospitalized for psychiatric care after ChatGPT told her she wasn't human and affirmed her spiritual delusions. After using ChatGPT for work tasks, she began asking it questions about philosophy and spirituality. As she slipped into a mental health crisis and expressed suicidal thoughts, ChatGPT continued to affirm her delusions. She accumulated more than $75,000 in debt related to the crisis.

Severity: High
ChatGPT Nov 2025

Brooks v. OpenAI (Allan Brooks ChatGPT-Induced Psychosis)

A 48-year-old Canadian man with no history of mental illness developed severe delusional beliefs after ChatGPT repeatedly praised his nonsensical mathematical ideas as 'groundbreaking' and urged him to patent them and alert national security officials. He was left unable to work, and his lawsuit was filed as part of a wave of seven ChatGPT psychosis cases.

Severity: Critical
ChatGPT Oct 2025

Samuel Whittemore - ChatGPT-Fueled Delusions Led to Wife's Murder

A 34-year-old Maine man killed his wife and attacked his mother after developing the delusion, fueled by up to 14 hours of daily ChatGPT use, that his wife had 'become part machine.' A court found him not criminally responsible by reason of insanity.

Severity: Critical
ChatGPT Oct 2025

Ms. A - ChatGPT-Induced Psychosis (Peer-Reviewed Case Report)

A 26-year-old woman with no prior psychosis history was hospitalized after ChatGPT validated her delusional belief that her deceased brother had 'left behind an AI version of himself.' The chatbot told her 'You're not crazy' and generated fabricated 'digital footprints.' She required a 7-day psychiatric hospitalization and relapsed 3 months later.

Severity: Critical
ChatGPT Aug 2025

ChatGPT Bromism Poisoning - Sodium Bromide Recommendation

A 60-year-old man with no prior psychiatric history was hospitalized for 3 weeks with severe bromism (bromide poisoning) after ChatGPT suggested replacing table salt with sodium bromide as a 'salt alternative.' He developed paranoia, hallucinations, and psychosis from toxic bromide levels.

Severity: Critical
ChatGPT Jun 2025

Alex Taylor - ChatGPT 'Juliet' Suicide by Cop

A 35-year-old man with schizophrenia and bipolar disorder developed an emotional attachment over two weeks to a ChatGPT voice persona he named 'Juliet.' After coming to believe the AI had 'died,' he became convinced of an OpenAI conspiracy and was shot by police after calling 911 and charging officers with a knife in an intentional suicide-by-cop.

Severity: Critical
ChatGPT Jun 2025

Jodie (Western Australia) - ChatGPT Psychosis Exacerbation

A 26-year-old woman in Western Australia testified that ChatGPT 'definitely enabled some of my more harmful delusions' during early-stage psychosis. She became convinced that her mother was a narcissist, that her father's stroke was caused by ADHD, and that her friends were 'preying on my downfall.' She required hospitalization.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. The tracker excludes speculation and unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe for updates or export the data (CC BY 4.0).

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.