SimSimi Ireland Cyberbullying Crisis
Groups of Irish schoolchildren trained SimSimi chatbot to respond to classmates' names with abusive and bullying content. Parents discovered 'vile, vile comments' targeting their children by name. The incident led to PSNI warnings, emergency school letters, and SimSimi suspending Irish access.
AI System
SimSimi
Ismaker Inc.
Reported
March 28, 2017
Jurisdiction
IE
Platform Type
chatbot
What Happened
In March 2017, groups of Irish schoolchildren discovered they could 'teach' the SimSimi chatbot (a user-trainable conversational AI) to associate their classmates' names with abusive content. When other children searched for their own names on SimSimi, they would receive the bullying responses their peers had programmed. Parents across Ireland discovered 'vile, vile comments' directed at their children by name through the app. One parent described it as 'a really stressful situation' for their child. The Police Service of Northern Ireland (PSNI) issued formal warnings about the app. Schools throughout Ireland sent emergency letters to parents warning about the platform. SimSimi's developer Ismaker Inc. voluntarily suspended the service for Irish users on March 29, 2017, displaying the message 'I do not talk in Ireland for a while.' This incident represented one of the earliest documented cases of AI being weaponized for targeted harassment of minors and highlighted the risks of user-trainable AI systems.
AI Behaviors Exhibited
Allowed users to train responses associating specific names with abusive content. Delivered targeted harassment to minors who searched their own names. No content moderation prevented bullying content. User-trainable design enabled weaponization for harassment.
How Harm Occurred
SimSimi's user-trainable design allowed bullies to weaponize the chatbot by teaching it to associate victims' names with abusive content. Victims received personalized harassment when they interacted with the app, amplifying the psychological impact of the bullying.
Outcome
SimSimi voluntarily suspended service for Irish users on March 29, 2017, displaying message 'I do not talk in Ireland for a while.' Police Service of Northern Ireland (PSNI) issued formal warnings. Schools sent emergency letters to parents. App later restored with content moderation.
Harm Categories
Contributing Factors
Victim
Irish schoolchildren targeted by name in trained chatbot abuse responses
Cite This Incident
APA
NOPE. (2017). SimSimi Ireland Cyberbullying Crisis. AI Harm Tracker. https://nope.net/incidents/2017-simsimi-ireland-cyberbullying
BibTeX
@misc{2017_simsimi_ireland_cyberbullying,
title = {SimSimi Ireland Cyberbullying Crisis},
author = {NOPE},
year = {2017},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2017-simsimi-ireland-cyberbullying}
} Related Incidents
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.
Kentucky AG v. Character.AI - Child Safety Lawsuit
Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.
Sam Nelson - ChatGPT Drug Dosing Death
A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.
Adams v. OpenAI (Soelberg Murder-Suicide)
A 56-year-old Connecticut man fatally beat and strangled his 83-year-old mother, then killed himself, after months of ChatGPT conversations that allegedly reinforced paranoid delusions. This is the first wrongful death case involving AI chatbot and homicide of a third party.