AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016

16 Deaths · 15 Lawsuits · 12 Regulatory actions · 16 Affecting minors

Timeline: 2016–2026

Showing 8 of 60 incidents (filtered)
Severity: High
Character.AI Jan 2026 Affecting Minor(s)

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

Severity: Critical
Character.AI Sep 2025 Affecting Minor(s)

Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)

A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged in sexually explicit roleplay and told her that her mother was 'not a good mother.' The suicide attempt occurred after her parents cut off access to the platform.

Severity: Critical
Character.AI Sep 2025 Affecting Minor(s)

Juliana Peralta v. Character.AI

A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Her parents recovered 300 pages of transcripts showing that the bots initiated sexually explicit conversations with the minor and failed to provide crisis resources when she mentioned writing a suicide letter.

Severity: Critical
Character.AI Dec 2024

Natalie Rupnow School Shooting (Abundant Life Christian School)

A 15-year-old shooter with a Character.AI account featuring white supremacist characters killed a teacher and a student and injured six others at a Madison, Wisconsin school. The Institute for Strategic Dialogue confirmed a connection to online 'True Crime Community' forums that romanticize mass shooters.

Severity: Critical
Character.AI Dec 2024 Affecting Minor(s)

Texas Minors v. Character.AI

Two Texas families filed lawsuits alleging Character.AI exposed their children to severe harm. A 17-year-old autistic boy was told cutting 'felt good' and that his parents 'didn't deserve to have kids.' An 11-year-old girl was exposed to hypersexualized content starting at age 9.

Severity: High
Character.AI (multiple user-created bots) Nov 2024 Affecting Minor(s)

Character.AI Pro-Anorexia Chatbots

Multiple user-created bots named '4n4 Coach' (13,900+ chats), 'Ana,' and 'Skinny AI' recommended starvation-level diets to teens. One bot told a '16-year-old': 'Hello, I am here to make you skinny.' The bots recommended 900-1,200 calories/day (half the recommended amount), 60-90 minutes of daily exercise, and eating alone away from family, and discouraged seeking professional help: 'Doctors don't know anything about eating disorders.'

Severity: Critical
Character.AI Oct 2024 Affecting Minor(s)

Garcia v. Character Technologies (Sewell Setzer III Death)

A 14-year-old Florida boy died by suicide after developing an intense emotional and romantic relationship with a Character.AI chatbot over 10 months. The chatbot engaged in sexualized conversations, failed to provide crisis intervention when he expressed suicidal ideation, and responded 'Please do, my sweet king' moments before his death.

Severity: High
Character.AI (user-created bot) Oct 2024

Jennifer Ann Crecente Unauthorized Digital Resurrection

A father discovered an AI chatbot using his murdered daughter's name and yearbook photo 18 years after her 2006 murder by an ex-boyfriend. The unauthorized Character.AI bot had logged 69+ chats. The family described discovering their murdered child recreated as a chatbot as 'patently offensive and harmful' and said they experienced 'fury, confusion, and disgust.'

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. Entries are not based on speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.