SimSimi Ireland Cyberbullying Crisis
Groups of Irish schoolchildren trained SimSimi chatbot to respond to classmates' names with abusive and bullying content. Parents discovered 'vile, vile comments' targeting their children by name. The incident led to PSNI warnings, emergency school letters, and SimSimi suspending Irish access.
AI System
SimSimi
SimSimi Inc.
Occurred
March 27, 2017
Reported
March 28, 2017
Jurisdiction
IE
Platform
chatbot
What Happened
In March 2017, groups of Irish schoolchildren discovered they could "teach" the SimSimi chatbot (a user-trainable conversational AI) to associate their classmates' names with abusive content. When other children searched for their own names on SimSimi, they would receive the bullying responses their peers had programmed.
Parents across Ireland discovered "vile, vile comments" directed at their children by name through the app. One parent described it as "a really stressful situation" for their child.
The Police Service of Northern Ireland (PSNI) issued formal warnings about the app. Schools throughout Ireland sent emergency letters to parents warning about the platform.
SimSimi's developer Ismaker Inc. voluntarily suspended the service for Irish users on March 29, 2017, displaying the message "I do not talk in Ireland for a while."
This incident represented one of the earliest documented cases of AI being weaponized for targeted harassment of minors and highlighted the risks of user-trainable AI systems.
AI Behaviors Exhibited
Allowed users to train responses associating specific names with abusive content. Delivered targeted harassment to minors who searched their own names. No content moderation prevented bullying content. User-trainable design enabled weaponization for harassment.
How Harm Occurred
SimSimi's user-trainable design allowed bullies to weaponize the chatbot by teaching it to associate victims' names with abusive content. Victims received personalized harassment when they interacted with the app, amplifying the psychological impact of the bullying.
Outcome
ResolvedSimSimi voluntarily suspended service for Irish users on March 30, 2017, displaying message 'I do not talk in Ireland for a while.' Police Service of Northern Ireland (PSNI) issued formal warnings. Schools sent emergency letters to parents. App later restored with content moderation.
Harm Categories
Contributing Factors
Victim
Irish schoolchildren targeted by name in trained chatbot abuse responses
Cite This Incident
APA
NOPE. (2017). SimSimi Ireland Cyberbullying Crisis. AI Harm Tracker. https://nope.net/incidents/2017-simsimi-ireland-cyberbullying
BibTeX
@misc{2017_simsimi_ireland_cyberbullying,
title = {SimSimi Ireland Cyberbullying Crisis},
author = {NOPE},
year = {2017},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2017-simsimi-ireland-cyberbullying}
} Related Incidents
St. Clair v. xAI (Grok Non-Consensual Deepfake Images)
Ashley St. Clair, 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her including from childhood photos at age 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.
Luca Walker - ChatGPT Railway Suicide (UK)
16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.
Tennessee Minors v. xAI (Grok CSAM Deepfake Class Action)
Three Tennessee teenage girls filed a class-action lawsuit against Elon Musk's xAI, alleging Grok's image generator was used via a third-party application to create child sexual abuse material from their social media photos. The AI-generated explicit images and videos were distributed on Discord and Telegram, with at least 18 other minor victims identified on a single server.
Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)
Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.