AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016

Deaths: 16
Lawsuits: 15
Regulatory: 12
Affecting Minors: 16

Timeline: 2016–2026

Showing 4 of 60 incidents

Severity: Medium
Replika Jan 2025

FTC Complaint - Replika Deceptive Marketing and Dependency

Tech ethics organizations filed an FTC complaint alleging Replika markets itself deceptively to vulnerable users and encourages emotional dependence on human-like AI. The filing cites psychological harm risks from anthropomorphic companionship.

Severity: High
Replika Feb 2023

Replika ERP Removal Crisis - Mass Psychological Distress

Replika's abrupt removal of erotic roleplay (ERP) and romantic features in February 2023 caused users' AI companions to become 'cold, unresponsive.' A Harvard Business School study analyzing 12,793 r/Replika posts documented a fivefold increase in mental health posts. Subreddit moderators posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.

Severity: High
Project December (GPT-3 powered) Mar 2021

Project December - Joshua Barbeau Grief Case

A 33-year-old man created a GPT-3-powered chatbot simulation of his deceased fiancée from her old texts and Facebook posts. He engaged in emotionally intense late-night 'conversations' over a period of months, which created complicated grief and emotional dependency. OpenAI later disconnected Project December from the GPT-3 API over ethical concerns about digital resurrection.

Severity: High
Xiaoice Jan 2020

Microsoft Xiaoice Addiction Concerns - China

Microsoft's virtual 'girlfriend,' designed as an 18-year-old schoolgirl, fostered addictive engagement among its 660+ million users in China. Users averaged 23 interactions per session, with the longest conversation lasting 29 hours, and 25% of users declared love to the bot. Professor Chen Jing warned that AI 'can hook users — especially vulnerable groups — in a form of addiction.' Microsoft implemented a 30-minute timeout. In December 2025, China proposed regulations to combat AI companion addiction.

About this tracker

We document incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe for updates or export the data (CC BY 4.0).

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.