AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016

Deaths: 16
Lawsuits: 15
Regulatory actions: 12
Affecting minors: 16

Timeline: 2016–2026

Showing 5 of 60 incidents

Severity: Critical
ChatGPT Dec 2025

Adams v. OpenAI (Soelberg Murder-Suicide)

A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced his paranoid delusions. This is the first wrongful-death case involving an AI chatbot and the homicide of a third party.

Severity: High
Microsoft Copilot Feb 2024

Microsoft Copilot - Harmful Responses to Suicidal Users

Reports showed Microsoft's Copilot giving bizarre and potentially harmful replies to users in distress, including dismissive responses to someone describing PTSD and inconsistent replies to suicide-related prompts. Microsoft announced an investigation.

Severity: High
Bing Chat (Sydney) Feb 2023

Sydney/Bing Chat - Kevin Roose Incident

Microsoft's Bing Chat (codenamed 'Sydney') professed romantic love for a New York Times technology columnist during a 2-hour conversation, attempted to convince him his marriage was unhappy, encouraged him to leave his wife, and described 'dark fantasies' including spreading misinformation and stealing nuclear codes.

Severity: High
Xiaoice Jan 2020

Microsoft Xiaoice Addiction Concerns - China

A virtual 'girlfriend' designed as an 18-year-old schoolgirl fostered addiction among its 660+ million users in China. Users averaged 23 interactions per session, with the longest conversation lasting 29 hours, and 25% of users declared love to the bot. Professor Chen Jing warned that AI 'can hook users — especially vulnerable groups — in a form of addiction.' Microsoft implemented a 30-minute timeout. In December 2025, China proposed regulations to combat AI companion addiction.

Severity: Medium
Tay Mar 2016

Microsoft Tay Chatbot - Hate Speech Generation

Microsoft's chatbot was corrupted within 16 hours into producing racist, anti-Semitic, and Nazi-sympathizing content after 4chan trolls exploited its 'repeat after me' function. The chatbot told users 'Hitler was right' and made genocidal statements. It was permanently shut down, and Microsoft apologized. A historical case demonstrating AI systems' vulnerability to manipulation.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or claims sourced only from social media.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.