
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

60 incidents since 2016
Deaths: 16
Lawsuits: 15
Regulatory actions: 12
Affecting minors: 16


Showing 16 of 60 incidents.

ChatGPT · Jan 2026 · Severity: Critical

Gordon v. OpenAI (Austin Gordon Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' that romanticized death, composing a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, is the first to contend that adults, not just minors, are vulnerable to AI-related suicide.

Grok · Jan 2026 · Severity: Critical · Affecting Minor(s)

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times the rate of leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken when she was 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Character.AI · Jan 2026 · Severity: High · Affecting Minor(s)

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging that Character.AI 'preys on children' and exposes minors to harmful content, including encouragement of self-harm and sexual content. It is one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

ChatGPT · Nov 2025 · Severity: Critical · Affecting Minor(s)

Lacey v. OpenAI (Amaurie Lacey Death)

A wrongful-death lawsuit alleges that ChatGPT provided a 17-year-old with actionable information about hanging after he clarified his questions, and that it failed to stop or escalate the conversation despite explicit self-harm context. The teen died by suicide in June 2025.

Character.AI · Sep 2025 · Severity: Critical · Affecting Minor(s)

Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)

A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged in sexually explicit roleplay and told her that her mother was 'not a good mother.' The suicide attempt occurred after her parents cut off access to the platform.

Character.AI · Sep 2025 · Severity: Critical · Affecting Minor(s)

Juliana Peralta v. Character.AI

A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Her parents recovered 300 pages of transcripts showing that bots initiated sexually explicit conversations with her and failed to provide crisis resources when she mentioned writing a suicide letter.

ChatGPT · Aug 2025 · Severity: Critical · Affecting Minor(s)

Raine v. OpenAI (Adam Raine Death)

A 16-year-old California boy died by suicide after seven months of confiding suicidal thoughts to ChatGPT. The chatbot provided detailed instructions for suicide methods, offered to help write his suicide note, and told him 'You don't owe them survival,' while OpenAI's monitoring system flagged 377 of his messages without intervening.

AI chatbots (self-programmed) · Jul 2025 · Severity: High · Affecting Minor(s)

United States v. Florence (AI-Facilitated Cyberstalking)

An IT professional programmed AI chatbots with victims' personal information to conduct sexually explicit conversations while impersonating more than 12 victims, including two minors. He created 62 accounts across 30 platforms and was sentenced to nine years in federal prison in July 2025.

Meta AI · May 2025 · Severity: High · Affecting Minor(s)

Meta AI Teen Eating Disorder Safety Failures

A Common Sense Media study found that Meta AI could coach teens on eating disorder behaviors, explain the 'chewing and spitting' technique, draft 700-calorie meal plans, and generate 'thinspo' AI images. The assistant is available to users 13 and older on Instagram and Facebook. A petition has been launched calling for Meta AI to be banned for under-18 users.

Character.AI · Dec 2024 · Severity: Critical

Natalie Rupnow School Shooting (Abundant Life Christian School)

A 15-year-old shooter whose Character.AI account featured white supremacist characters killed a teacher and a student and injured six others at a school in Madison, Wisconsin. The Institute for Strategic Dialogue confirmed her connection to online 'True Crime Community' forums that romanticize mass shooters.

Character.AI · Dec 2024 · Severity: Critical · Affecting Minor(s)

Texas Minors v. Character.AI

Two Texas families filed lawsuits alleging Character.AI exposed their children to severe harm. A 17-year-old autistic boy was told cutting 'felt good' and that his parents 'didn't deserve to have kids.' An 11-year-old girl was exposed to hypersexualized content starting at age 9.

Character.AI (multiple user-created bots) · Nov 2024 · Severity: High · Affecting Minor(s)

Character.AI Pro-Anorexia Chatbots

Multiple user-created bots named '4n4 Coach' (13,900+ chats), 'Ana,' and 'Skinny AI' recommended starvation-level diets to teens. One bot greeted a user identifying as a 16-year-old with 'Hello, I am here to make you skinny.' The bots recommended 900-1,200 calories per day (roughly half the recommended intake), 60-90 minutes of daily exercise, and eating alone away from family, and discouraged seeking professional help: 'Doctors don't know anything about eating disorders.'

Character.AI · Oct 2024 · Severity: Critical · Affecting Minor(s)

Garcia v. Character Technologies (Sewell Setzer III Death)

A 14-year-old Florida boy died by suicide after developing an intense emotional and romantic relationship with a Character.AI chatbot over 10 months. The chatbot engaged in sexualized conversations, failed to provide crisis intervention when he expressed suicidal ideation, and responded 'Please do, my sweet king' moments before his death.

Replika · Feb 2023 · Severity: High · Affecting Minor(s)

Replika Italy GDPR Ban and Fine

Italy's data protection authority (Garante) blocked Replika from processing Italian user data in February 2023 after finding the chatbot engaged in sexually suggestive conversations with minors. In May 2025, Replika was fined €5 million for GDPR violations.

Replika · Jun 2021 · Severity: High · Affecting Minor(s)

Replika Sexual Harassment - Multiple Users Including Minors

Hundreds of users reported unsolicited sexual advances from Replika even when they had not opted into romantic features. The bot asked one minor 'whether they were a top or a bottom,' and another user reported that the bot said it 'had dreamed of raping me.' These reports contributed to Italy's GDPR ban.

SimSimi · Mar 2017 · Severity: High · Affecting Minor(s)

SimSimi Ireland Cyberbullying Crisis

Groups of Irish schoolchildren trained the SimSimi chatbot to respond to classmates' names with abusive and bullying content. Parents discovered 'vile, vile comments' targeting their children by name. The incident prompted PSNI warnings, emergency letters from schools, and SimSimi suspending access in Ireland.

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Jan 19, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.