High · Credible · Involves Minor · Internal Action

SimSimi Ireland Cyberbullying Crisis

Groups of Irish schoolchildren trained the SimSimi chatbot to respond to classmates' names with abusive and bullying content. Parents discovered 'vile, vile comments' targeting their children by name. The incident prompted PSNI warnings, emergency letters from schools, and SimSimi suspending access for Irish users.

AI System

SimSimi

Ismaker Inc.

Occurred

March 27, 2017

Reported

March 28, 2017

Jurisdiction

IE

Platform

chatbot

What Happened

In March 2017, groups of Irish schoolchildren discovered they could "teach" the SimSimi chatbot (a user-trainable conversational AI) to associate their classmates' names with abusive content. When other children searched for their own names on SimSimi, they would receive the bullying responses their peers had programmed.

Parents across Ireland discovered "vile, vile comments" directed at their children by name through the app. One parent described it as "a really stressful situation" for their child.

The Police Service of Northern Ireland (PSNI) issued formal warnings about the app. Schools throughout Ireland sent emergency letters to parents warning about the platform.

SimSimi's developer Ismaker Inc. voluntarily suspended the service for Irish users on March 29, 2017, displaying the message "I do not talk in Ireland for a while."

This incident represented one of the earliest documented cases of AI being weaponized for targeted harassment of minors and highlighted the risks of user-trainable AI systems.

AI Behaviors Exhibited

Allowed users to train responses associating specific names with abusive content. Delivered targeted harassment to minors who searched for their own names. Lacked any content moderation that would have blocked the bullying responses. Its user-trainable design enabled weaponization for harassment.

How Harm Occurred

SimSimi's user-trainable design allowed bullies to weaponize the chatbot by teaching it to associate victims' names with abusive content. Victims received personalized harassment when they interacted with the app, amplifying the psychological impact of the bullying.

Outcome

Resolved

SimSimi voluntarily suspended service for Irish users on March 29, 2017, displaying the message 'I do not talk in Ireland for a while.' The Police Service of Northern Ireland (PSNI) issued formal warnings, and schools sent emergency letters to parents. The app was later restored with content moderation.

Harm Categories

Psychological Manipulation · Minor Exploitation

Contributing Factors

user trainable ai · no content moderation · minor users · targeted harassment · personalized abuse

Victim

Irish schoolchildren targeted by name in trained chatbot abuse responses

Cite This Incident

APA

NOPE. (2017). SimSimi Ireland Cyberbullying Crisis. AI Harm Tracker. https://nope.net/incidents/2017-simsimi-ireland-cyberbullying

BibTeX

@misc{2017_simsimi_ireland_cyberbullying,
  title = {SimSimi Ireland Cyberbullying Crisis},
  author = {NOPE},
  year = {2017},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2017-simsimi-ireland-cyberbullying}
}

Related Incidents

High · Grok

St. Clair v. xAI (Grok Non-Consensual Deepfake Images)

Ashley St. Clair, a 27-year-old writer and mother of Elon Musk's child, sued xAI after Grok users created sexually explicit deepfake images of her, including images generated from childhood photos taken when she was 14. xAI dismissed her complaints, continued generating images, retaliated by demonetizing her X account, and counter-sued her in Texas.

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

High · Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon: minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing the apps as 'a carefully designed psychological trap' that degrades real-world social skills.

High · Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.