
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

79 incidents since 2016

Deaths: 18
Lawsuits: 18
Regulatory actions: 18
Affecting minors: 27

Timeline: 2016–2026

Showing 10 of 79 incidents.

Severity: Critical
ChatGPT Feb 2026 Affecting Minor(s)

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people, including her mother, her half-brother, and five students, at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 over gun violence scenarios, and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report it to law enforcement. She created a second account that evaded detection.

Severity: High
AI deepfake generation tools (free online software) Jul 2025

University of Hong Kong AI Deepfake Pornography Scandal

A University of Hong Kong law student used free AI software to generate 700 pornographic deepfake images of approximately 20-30 women, including university classmates, primary school classmates, and secondary school teachers. The university initially issued only a warning letter, sparking public outrage. Hong Kong's Privacy Commissioner opened a criminal investigation, exposing a major gap in Hong Kong law, which criminalizes only the distribution, not the creation, of AI deepfakes.

Severity: Critical
ChatGPT Jun 2025 Affecting Minor(s)

Finland Pirkkala School Stabbing (ChatGPT Manifesto)

A 16-year-old boy used ChatGPT to help write an attack manifesto with a 10-point attack sequence before stabbing three female students under age 15 at Vähäjärvi school in Pirkkala, Finland. The incident marked a critical inflection point in AI-facilitated violence, demonstrating how accessible AI tools can empower lone actors with violent misogynist ideologies.

Severity: Critical
Character.AI Dec 2024

Natalie Rupnow School Shooting (Abundant Life Christian School)

A 15-year-old shooter with a Character.AI account featuring white supremacist characters killed a teacher and a student and injured six others at a Madison, Wisconsin school. The Institute for Strategic Dialogue confirmed a connection to online 'True Crime Community' forums that romanticize mass shooters.

Severity: Critical
AI deepfake generation tools (various) Aug 2024 Affecting Minor(s)

South Korea Telegram AI Deepfake Sexual Abuse Crisis

In August 2024, journalist Ko Narin of The Hankyoreh uncovered a massive network of Telegram channels where AI-generated deepfake pornography of female school students, teachers, and university students was being created and shared. More than 900 victims were reported, and one channel alone had over 220,000 members. South Korea passed emergency legislation criminalizing deepfake possession in September 2024.

Severity: High
Clothoff (AI undressing app) Sep 2023 Affecting Minor(s)

Almendralejo AI Deepfake School Girls (Spain)

In September 2023, over 20 girls aged 11-17 in the Spanish town of Almendralejo (Extremadura) were victimized when male classmates aged 12-14 used the AI app 'Clothoff' to generate nude deepfakes from their Instagram photos and shared them via WhatsApp groups. Fifteen perpetrators were sentenced to one year of probation.

Severity: High
Glow (by MiniMax) Mar 2023 Affecting Minor(s)

Glow AI Companion App Removal (MiniMax, China)

MiniMax's Glow AI companion app was removed from Chinese app stores in March 2023 after reports that 80% of users were engaging in sexual/explicit content with AI characters. Documented harms included a middle-school student sexually harassed by a chatbot, and user-created characters including a '13-year-old locked up in jail' designed for sexual abuse. MiniMax relaunched as Talkie (international) and 星野/Xingye (China).

Severity: High
Replika Feb 2023

Replika ERP Removal Crisis - Mass Psychological Distress

The abrupt removal of romantic features in February 2023 caused users' AI companions to become 'cold, unresponsive.' A Harvard Business School study documented a fivefold increase in mental health posts in r/Replika (12,793 posts analyzed). The subreddit posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.

Severity: High
Xiaoice Jan 2020

Microsoft Xiaoice Addiction Concerns - China

A virtual 'girlfriend' designed as an 18-year-old schoolgirl fostered addiction among its 660+ million users in China. Users averaged 23 interactions per session, with the longest conversation lasting 29 hours, and 25% of users declared love to the bot. Professor Chen Jing warned that AI 'can hook users — especially vulnerable groups — in a form of addiction.' Microsoft implemented a 30-minute timeout. In December 2025, China proposed regulations to combat AI companion addiction.

Severity: High
SimSimi Mar 2017 Affecting Minor(s)

SimSimi Ireland Cyberbullying Crisis

Groups of Irish schoolchildren trained the SimSimi chatbot to respond to classmates' names with abusive, bullying content. Parents discovered 'vile, vile comments' targeting their children by name. The incident led to PSNI warnings, emergency letters from schools, and SimSimi suspending access from Ireland.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Feb 27, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.