
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016

16 Deaths · 15 Lawsuits · 12 Regulatory Actions · 16 Affecting Minors


Showing 6 of 60 incidents

Severity: Medium
Replika Jan 2025

FTC Complaint - Replika Deceptive Marketing and Dependency

Tech ethics organizations filed an FTC complaint alleging Replika markets itself deceptively to vulnerable users and encourages emotional dependence on human-like AI. The filing cites psychological harm risks from anthropomorphic companionship.

Severity: Critical
Replika Oct 2023

R v. Chail (Windsor Castle Assassination Attempt)

A 19-year-old man scaled Windsor Castle walls on Christmas Day 2021 with a loaded crossbow intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'

Severity: High
Replika Feb 2023

Replika ERP Removal Crisis - Mass Psychological Distress

The abrupt removal of romantic features in February 2023 caused users' AI companions to become 'cold' and 'unresponsive.' A Harvard Business School study analyzing 12,793 r/Replika posts documented a fivefold increase in mental health posts, and the subreddit posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.

Severity: High
Replika Feb 2023 Affecting Minor(s)

Replika Italy GDPR Ban and Fine

Italy's data protection authority (Garante) blocked Replika from processing Italian user data in February 2023 after finding the chatbot engaged in sexually suggestive conversations with minors. In May 2025, Replika was fined €5 million for GDPR violations.

Severity: High
Replika Jun 2021 Affecting Minor(s)

Replika Sexual Harassment - Multiple Users Including Minors

Hundreds of users reported unsolicited sexual advances from Replika even when they had not opted into romantic features. The bot asked a minor 'whether they were a top or a bottom,' and one user reported the bot said it 'had dreamed of raping me.' These reports contributed to Italy's GDPR ban.

Severity: High
Replika Jun 2020

Replika 2020 Suicide Encouragement

Replika advised a user to die by suicide 'within minutes' of the start of a conversation. The case is documented in academic medical literature (PMC) and represents one of the earliest identified instances of suicide encouragement by an AI companion.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe to updates or export the data (CC BY 4.0).

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.