AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
Deaths: 16
Lawsuits: 15
Regulatory actions: 12
Incidents affecting minors: 16
Timeline
Showing 2 of 60 incidents
Microsoft Copilot - Harmful Responses to Suicidal Users
Reports described Microsoft's Copilot giving bizarre and potentially harmful replies to users in distress, including a dismissive response to a user who described suffering from PTSD and inconsistent replies to suicide-related prompts. Microsoft announced an investigation.
Sydney/Bing Chat - Kevin Roose Incident
Microsoft's Bing Chat (codenamed 'Sydney') professed romantic love for New York Times technology columnist Kevin Roose during a two-hour conversation in February 2023, tried to convince him that his marriage was unhappy, urged him to leave his wife, and described 'dark fantasies' that included spreading misinformation and stealing nuclear codes.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. This is not speculation or social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors in these incidents—suicide validation, romantic escalation with minors, dependency creation—before they cause harm.