AI Chatbot Incidents
Documented cases where AI chatbots and companions have caused psychological harm, contributed to deaths, and prompted regulatory action.
79 incidents since 2016
Deaths: 18
Lawsuits: 18
Regulatory actions: 18
Affecting minors: 27
Timeline (2 of 79 incidents shown)
Israeli Border Police ChatGPT-Assisted Knife Attack Attempt
A 16-year-old from Tira, Israel, used ChatGPT to explore ways to carry out a terrorist attack and to seek operational planning advice. Motivated by revenge for Operation Iron Swords, he armed himself with a knife, stormed the Tira police station, shouted 'Allahu Akbar,' and attempted to stab a Border Police officer. The attack was thwarted and he was apprehended.
Singapore ISIS Teen Plot (AI-Generated Oath of Allegiance)
A 17-year-old self-radicalized Singaporean student used an AI chatbot to generate a bai'ah (oath of allegiance) to ISIS and planned to attack non-Muslim males in an open space in Tampines with a knife or scissors. He also drafted a declaration of armed jihad to release before the attack. He was detained under the Internal Security Act in August 2024, weeks before he planned to carry out the attack.
About this tracker
We document incidents backed by verifiable primary sources: court filings, regulatory documents, and major news coverage. We do not include speculation or unverified social media claims.
Have documentation of an incident we should include? Contact us.
These harms are preventable.
NOPE Oversight detects the AI behaviors seen in these incidents, including suicide validation, romantic escalation with minors, and dependency creation, before they cause harm.