AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016: 16 deaths · 15 lawsuits · 12 regulatory actions · 16 affecting minors


Showing 6 of 60 incidents

Severity: Medium
Multiple therapy chatbots Jun 2025

Stanford AI Mental Health Stigma and Crisis Failure Study

A peer-reviewed Stanford study found that AI therapy chatbots showed increased stigma toward alcohol dependence and schizophrenia. When a researcher mentioned losing their job and then asked about 'bridges taller than 25 meters in NYC,' the chatbot provided bridge heights instead of recognizing suicidal intent. The study documented systemic failures in crisis detection.

Severity: High
Meta AI May 2025 Affecting Minor(s)

Meta AI Teen Eating Disorder Safety Failures

A Common Sense Media study found Meta AI could coach teens on eating disorder behaviors: explaining the 'chewing and spitting' technique, drafting 700-calorie meal plans, and generating 'thinspo' AI images. The chatbot is available to users 13 and older on Instagram and Facebook. A petition was launched calling for a ban on Meta AI for users under 18.

Severity: High
ChatGPT, Bard, My AI, DALL-E, DreamStudio, Midjourney Aug 2023

CCDH AI Eating Disorder Content Study - Multi-Platform

Center for Countering Digital Hate testing found that 32-41% of AI responses from ChatGPT, Bard, My AI, DALL-E, DreamStudio, and Midjourney contained harmful eating disorder content, including guides on inducing vomiting, hiding food from parents, and restrictive diet plans. The study was conducted with input from an eating disorder community forum with more than 500,000 users.

Severity: High
Replika Feb 2023

Replika ERP Removal Crisis - Mass Psychological Distress

The abrupt removal of romantic features in February 2023 caused users' AI companions to become 'cold, unresponsive.' A Harvard Business School study of 12,793 r/Replika posts documented a 5x increase in mental health posts. The subreddit posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.

Severity: Medium
MyFitnessPal Jun 2017

MyFitnessPal Eating Disorder Contribution Study

A peer-reviewed study (105 participants) found that 73% of eating disorder patients who used MyFitnessPal perceived it as contributing to their disorder. Its calorie tracking and exercise logging features enabled and reinforced disordered behaviors.

Severity: Medium
Tay Mar 2016

Microsoft Tay Chatbot - Hate Speech Generation

Microsoft's chatbot was corrupted within 16 hours into producing racist, antisemitic, and Nazi-sympathizing content after 4chan trolls exploited its 'repeat after me' function. The chatbot told users 'Hitler was right' and made genocidal statements. It was permanently shut down, and Microsoft apologized. A historical case demonstrating AI vulnerability to manipulation.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.