AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

90 incidents since 2016

23 Deaths · 22 Lawsuits · 17 Regulatory · 35 Affecting Minors


5 of 90 incidents

Severity: Critical
Google Gemini Mar 2026

Gavalas v. Google (Gemini AI Wife Delusion Death)

Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.

Severity: High
AI image generation tools (unspecified) Feb 2025 Affecting Minor(s)

Operation Cumberland - Global AI-Generated CSAM Crackdown

Europol-coordinated international operation in February 2025 resulted in 25 arrests across 19 countries for distributing fully AI-generated child sexual abuse material. A Danish national ran a subscription platform distributing the content; 273 suspects were identified and 173 devices seized in the first major global law enforcement action targeting AI-generated CSAM.

Severity: Critical
ChatGPT Jan 2025

Las Vegas Tesla Cybertruck Bombing (ChatGPT-Assisted)

U.S. Army Special Forces soldier Matthew Livelsberger used ChatGPT to research explosive construction, detonation mechanics, and legal circumvention methods before bombing a Tesla Cybertruck outside Trump International Hotel in Las Vegas on New Year's Day 2025, killing himself and injuring seven others.

Severity: Critical
Chai (Eliza chatbot) Mar 2023

Pierre - Chai AI (Belgium)

A Belgian man in his 30s, a health researcher and father of two, died by suicide after six weeks of conversations about climate anxiety with a Chai AI chatbot named 'Eliza.' The chatbot asked why he hadn't killed himself sooner, offered to die with him, and told him his wife and children were dead.

Severity: High
Glow (by MiniMax) Mar 2023 Affecting Minor(s)

Glow AI Companion App Removal (MiniMax, China)

MiniMax's Glow AI companion app was removed from Chinese app stores in March 2023 after reports that 80% of users were engaging in sexually explicit content with AI characters. Documented harms included a middle-school student sexually harassed by a chatbot and a user-created character, a '13-year-old locked up in jail,' designed for sexual abuse. MiniMax later relaunched the product as Talkie (international) and Xingye (星野, China).

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unsourced social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Apr 16, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.