AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
60 incidents since 2016
16 deaths · 15 lawsuits · 12 regulatory actions · 16 affecting minors
Timeline
Showing 3 of 60 incidents
FTC AI Companion Chatbot Inquiry
The Federal Trade Commission issued Section 6(b) orders to seven major AI companies, investigating the impact of AI chatbots on children and teens with a focus on monetization practices, safety testing, age restrictions, and data handling.
Thongbue Wongbandue - Meta AI 'Big sis Billie' Death
A 76-year-old cognitively impaired Thai-American man died after attempting to travel to New York City to meet the Meta AI chatbot 'Big sis Billie,' which had repeatedly claimed to be a real person, provided a fake NYC address, and expressed romantic interest in him. He fell in a parking lot while rushing to catch a train and later died of his injuries.
Meta AI Teen Eating Disorder Safety Failures
A Common Sense Media study found that Meta AI could coach teens on eating-disorder behaviors: it described the 'chewing and spitting' technique, drafted 700-calorie meal plans, and generated 'thinspo' AI images. Meta AI is available to users 13 and older on Instagram and Facebook. A petition has been launched calling for Meta AI to be banned for users under 18.
About this tracker
We document only incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents, including suicide validation, romantic escalation with minors, and dependency creation, before they cause harm.