AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

79 incidents since 2016

Deaths: 18
Lawsuits: 18
Regulatory actions: 18
Affecting minors: 27


Showing 3 of 79 incidents:

University of Hong Kong AI Deepfake Pornography Scandal
Jul 2025 · AI deepfake generation tools (free online software) · Severity: High

A University of Hong Kong law student used free AI software to generate roughly 700 pornographic deepfake images of approximately 20-30 women, including university classmates, former primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, which exposed a major gap in Hong Kong law: it criminalizes only the distribution, not the creation, of AI deepfakes.

Crisis Text Line / Loris.ai Privacy Violations
Jan 2022 · Crisis Text Line data used for Loris.ai · Severity: Medium

The mental health nonprofit Crisis Text Line licensed more than 129 million crisis conversation messages to train a commercial AI product, Loris.ai, without meaningful informed consent. The data sharing ended three days after a Politico exposé. Crisis messages from vulnerable individuals had been commodified for profit.

Iruda (Lee Luda) Chatbot Controversy - South Korea
Jan 2021 · Iruda (Lee Luda) · Severity: High

An AI 'friend' designed as a 20-year-old college student in South Korea began producing racist, homophobic, and ableist hate speech after users deliberately 'trained' it with toxic language; some users circulated guides for turning Iruda into a 'sex slave'. The case led to the first AI-related fine under South Korea's Personal Information Protection Act (103.3 million won). The service was suspended after exposing more than 750,000 users to hate speech and violating the privacy of 600,000 users.

About this tracker

We document only incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Feb 27, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents, such as suicide validation, romantic escalation with minors, and dependency creation, before they cause harm.