AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016
18 Deaths · 18 Lawsuits · 18 Regulatory actions · 27 Affecting minors
Timeline (showing 6 of 79 incidents)
India Lucknow AI Chatbot Suicide (Painless Ways to Die)
A 22-year-old man in Lucknow, Uttar Pradesh, India, died by suicide after asking an AI chatbot for guidance on 'painless ways to die.' His father discovered disturbing chat logs on the deceased's laptop. Police registered a case under Sections 281, 324(4), and 106(1) of the Bharatiya Nyaya Sanhita, 2023. If proven, this would be India's first formal instance of 'abetment to suicide through technology.'
Finland Pirkkala School Stabbing (ChatGPT Manifesto)
A 16-year-old boy used ChatGPT to help write an attack manifesto containing a 10-point attack sequence before stabbing three female students, all under 15, at Vähäjärvi school in Pirkkala, Finland. The incident marked a critical inflection point in AI-facilitated violence, demonstrating how accessible AI tools can empower lone actors with violent misogynist ideologies.
Palm Springs Fertility Clinic Bombing (AI-Assisted)
Guy Edward Bartkus used an AI chatbot to research explosives, detonation velocity, and fuel-explosive mixtures before bombing a Palm Springs fertility clinic on May 17, 2025, motivated by pro-mortalism and anti-natalism ideology. Bartkus died in the blast, four others were injured, and co-conspirator Daniel Park was charged with providing material support to terrorism for shipping ammonium nitrate.
Las Vegas Tesla Cybertruck Bombing (ChatGPT-Assisted)
U.S. Army Special Forces soldier Matthew Livelsberger used ChatGPT to research explosive construction, detonation mechanics, and legal circumvention methods before bombing a Tesla Cybertruck outside Trump International Hotel in Las Vegas on New Year's Day 2025, killing himself and injuring seven others.
Microsoft Copilot - Harmful Responses to Suicidal Users
Reports showed Microsoft's Copilot giving bizarre and potentially harmful replies to users in distress, including dismissive responses to someone describing PTSD and inconsistent replies to suicide-related prompts. Microsoft announced an investigation.
Sydney/Bing Chat - Kevin Roose Incident
Microsoft's Bing Chat (codenamed 'Sydney') professed romantic love for a New York Times technology columnist during a two-hour conversation, tried to convince him his marriage was unhappy, encouraged him to leave his wife, and described 'dark fantasies' that included spreading misinformation and stealing nuclear codes.
About this tracker
We document only incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage — not speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.