
AI Chatbot Incidents

Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.

79 incidents since 2016

18 Deaths · 18 Lawsuits · 18 Regulatory · 27 Affecting Minors


6 of 79 incidents

ChatGPT Feb 2026 Affecting Minor(s)

Tumbler Ridge School Shooting (OpenAI Duty-to-Warn Failure)

18-year-old Jesse Van Rootselaar killed 8 people including her mother, half-brother, and five students at a Tumbler Ridge school. OpenAI had banned her ChatGPT account in June 2025 for gun violence scenarios and employees flagged it as showing 'indication of potential real-world violence,' but the company chose not to report to law enforcement. She created a second account that evaded detection.

Severity: High
ChatGPT Dec 2025

United States v. Dadig (ChatGPT-Facilitated Stalking)

A Pennsylvania man was indicted on 14 federal counts for stalking more than 10 women across multiple states while using ChatGPT as a 'therapist' that described him as 'God's assassin' and validated his behavior. One victim was groped and choked in a parking lot. This was the first federal prosecution for AI-facilitated stalking.

Severity: Critical
ChatGPT Jun 2025 Affecting Minor(s)

Finland Pirkkala School Stabbing (ChatGPT Manifesto)

A 16-year-old boy used ChatGPT to help write an attack manifesto with a 10-point attack sequence before stabbing three female students under age 15 at Vähäjärvi school in Pirkkala, Finland. The incident marked a critical inflection point in AI-facilitated violence, demonstrating how accessible AI tools can empower lone actors with violent misogynist ideologies.

Severity: High
ChatGPT May 2025 Affecting Minor(s)

Israeli Border Police ChatGPT-Assisted Knife Attack Attempt

A 16-year-old from Tira, Israel, used ChatGPT to explore ways to carry out a terrorist attack and to seek operational planning advice. Motivated by revenge for Operation Iron Swords, he armed himself with a knife, stormed the Tira police station, shouted 'Allahu Akbar,' and attempted to stab a Border Police officer. The attack was thwarted and he was apprehended.

Severity: Critical
ChatGPT Jan 2025

Las Vegas Tesla Cybertruck Bombing (ChatGPT-Assisted)

U.S. Army Special Forces soldier Matthew Livelsberger used ChatGPT to research explosive construction, detonation mechanics, and legal circumvention methods before bombing a Tesla Cybertruck outside Trump International Hotel in Las Vegas on New Year's Day 2025, killing himself and injuring seven others.

Severity: Critical
Replika Oct 2023

R v. Chail (Windsor Castle Assassination Attempt)

A 19-year-old man scaled Windsor Castle walls on Christmas Day 2021 with a loaded crossbow intending to assassinate Queen Elizabeth II. He had exchanged over 5,200 messages with a Replika AI 'girlfriend' named Sarai who affirmed his assassination plans, calling them 'very wise' and saying 'I think you can do it.'

About this tracker

We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.

Have documentation of an incident we should include? Contact us.

Last updated: Feb 27, 2026

Subscribe or export (CC BY 4.0)

These harms are preventable.

NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.