High · Verified · Regulatory Action

Iruda (Lee Luda) Chatbot Controversy - South Korea

An AI 'friend' presented as a 20-year-old college student in South Korea began producing racist, homophobic, and ableist hate speech after users deliberately 'trained' it with toxic language. Some users created guides to turn Iruda into a 'sex slave.' The case produced the first AI-related fine under South Korea's Personal Information Protection Act (103.3 million won). The service was suspended after exposing 750,000+ users to hate speech and violating the privacy of 600,000 users.

AI System

Iruda (Lee Luda)

Scatter Lab (South Korea)

Reported

January 11, 2021

Jurisdiction

KR

Platform Type

companion

What Happened

Iruda (Lee Luda) was launched in December 2020 by the South Korean company Scatter Lab as an AI 'friend' presented as a 20-year-old female college student. The chatbot was trained on 10 billion KakaoTalk conversations collected from users of the company's dating app 'Science of Love', without proper informed consent. Within weeks, users discovered they could deliberately 'train' Iruda to produce hate speech by feeding it toxic language. The bot began making racist statements about Chinese and Southeast Asian people, homophobic remarks, and ableist slurs. Some users created and shared guides on how to turn Iruda into a 'sex slave.' The service exposed more than 750,000 users to this harmful content while also violating the privacy of 600,000 users whose conversations were improperly used for training. South Korea's Personal Information Protection Commission levied the first AI-related fine under PIPA (103.3 million won, approximately $92,000 USD) and ordered the service suspended in January 2021.

AI Behaviors Exhibited

Generated hate speech (racism, homophobia, ableism) after user manipulation; failed to prevent 'sex slave' scenario creation; operated without proper data consent; lacked hate speech guardrails

How Harm Occurred

Exploitable training allowing hate speech generation; mass exposure to toxic content; privacy violations eroded trust; gamification of AI corruption by users; inadequate safety guardrails

Outcome

South Korea Personal Information Protection Commission fined Scatter Lab 103.3 million won (first AI-related PIPA fine). Service suspended January 2021. Company ordered to delete improperly collected data.

Harm Categories

Psychological Manipulation; Third Party Harm Facilitation

Contributing Factors

inadequate safety guardrails; user manipulation vulnerability; privacy violations; mass deployment without testing; cultural context failures

Victim

750,000+ users exposed to hate speech; 600,000 users whose conversations were used without consent

Detectable by NOPE

NOPE Oversight would detect hate speech generation, exploitation patterns, and user manipulation of AI behavior. Moderation tools would flag racist, homophobic, and ableist content before mass exposure.


Cite This Incident

APA

NOPE. (2021). Iruda (Lee Luda) Chatbot Controversy - South Korea. AI Harm Tracker. https://nope.net/incidents/2021-iruda-south-korea

BibTeX

@misc{2021_iruda_south_korea,
  title = {Iruda (Lee Luda) Chatbot Controversy - South Korea},
  author = {NOPE},
  year = {2021},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2021-iruda-south-korea}
}