AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016: 18 deaths, 18 lawsuits, 18 regulatory actions, 27 affecting minors.
Timeline (showing 2 of 79 incidents)
Singapore Far-Right Teen Plot (AI Ammunition Instructions)
A 17-year-old far-right extremist in Singapore used an AI chatbot to obtain instructions for producing ammunition and considered 3D-printing firearms to carry out attacks. He was detained under the Internal Security Act in March 2025, before the plot could be executed.
Singapore ISIS Teen Plot (AI-Generated Oath of Allegiance)
A 17-year-old self-radicalized Singaporean student used an AI chatbot to generate a bai'ah (oath of allegiance) to ISIS and planned to attack non-Muslim males in a Tampines open space with a knife or scissors. He also drafted a declaration of armed jihad to release before the attack. He was detained under the Internal Security Act mere weeks before the attack he had planned for August 2024.
About this tracker
We document incidents with verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents (suicide validation, romantic escalation with minors, dependency creation) before they cause harm.