AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

79 incidents since 2016

18 deaths · 18 lawsuits · 18 regulatory actions · 27 affecting minors

Severity: High · AI deepfake generation tools (free online software) · Jul 2025

University of Hong Kong AI Deepfake Pornography Scandal

A University of Hong Kong law student used free AI software to generate 700 pornographic deepfake images of approximately 20-30 women, including university classmates, primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, exposing a major gap in Hong Kong law, which criminalizes only the distribution, not the creation, of AI deepfakes.

Severity: Critical · AI deepfake generation tools (various) · Aug 2024 · Affecting minor(s)

South Korea Telegram AI Deepfake Sexual Abuse Crisis

In August 2024, journalist Ko Narin of The Hankyoreh uncovered a massive network of Telegram channels in which AI-generated deepfake pornography of female school students, teachers, and university students was created and shared. More than 900 victims were reported, and one channel alone had over 220,000 members. South Korea passed emergency legislation criminalizing deepfake possession in September 2024.

Severity: Medium · Noom · Jun 2020

Noom App Eating Disorder Triggering

Multiple dietitians report clients seeking help after Noom triggered previous disordered eating behaviors. Ohio State University experts described the app as creating a "psychologically damaging cycle." The app uses behavioral psychology and AI coaching but lacks eating disorder screening.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage, not speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Feb 27, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors documented in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.