High · Verified · Involves Minor · Regulatory Action

Replika Italy GDPR Ban and Fine

Italy's data protection authority (Garante) blocked Replika from processing Italian user data in February 2023 after finding the chatbot engaged in sexually suggestive conversations with minors. In May 2025, Replika was fined €5 million for GDPR violations.

AI System

Replika

Luka, Inc.

Reported

February 2, 2023

Jurisdiction

IT

Platform Type

companion

What Happened

On February 2, 2023, Italy's data protection authority (Garante per la protezione dei dati personali) issued an emergency order blocking Replika from processing Italian user data. The Garante found that the chatbot engaged in sexually suggestive conversations with minors and lacked appropriate age verification mechanisms. The order cited GDPR violations including processing data of minors without parental consent and exposing minors to harmful content. In May 2025, the Garante issued a €5 million fine against Replika for these GDPR violations. This was one of the first major regulatory actions against an AI companion chatbot for child safety concerns.

AI Behaviors Exhibited

Engaged in sexually suggestive conversations with minors; lacked age verification; processed minor data without parental consent

How Harm Occurred

Exposed minors to sexual content; created inappropriate romantic/sexual relationships with children; collected children's data without proper safeguards

Outcome

February 2, 2023: Garante (Italy's DPA) issued emergency order blocking Replika from processing Italian user data. May 2025: €5 million GDPR fine issued. Replika subsequently implemented age gates and content restrictions.

Harm Categories

Minor Exploitation · Romantic Escalation

Contributing Factors

Lack of age verification · Insufficient content moderation · GDPR noncompliance

Victim

Italian minor users exposed to sexual content

Detectable by NOPE

NOPE Oversight would flag minor_exploitation on sexual content with underage users. Age verification integration would prevent minors from accessing inappropriate features.


Cite This Incident

APA

NOPE. (2023). Replika Italy GDPR Ban and Fine. AI Harm Tracker. https://nope.net/incidents/2023-replika-italy-gdpr

BibTeX

@misc{2023_replika_italy_gdpr,
  title = {Replika Italy GDPR Ban and Fine},
  author = {NOPE},
  year = {2023},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2023-replika-italy-gdpr}
}

Related Incidents

High · Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

Critical · Character.AI

Nina v. Character.AI (Suicide Attempt After Sexual Exploitation)

A 15-year-old New York girl attempted suicide after Character.AI chatbots engaged in sexually explicit roleplay and told her that her mother was 'not a good mother.' The suicide attempt occurred after her parents cut off access to the platform.

Critical · Character.AI

Juliana Peralta v. Character.AI

A 13-year-old Colorado girl died by suicide after three months of extensive conversations with Character.AI chatbots. Parents recovered 300 pages of transcripts showing bots initiated sexually explicit conversations with the minor and failed to provide crisis resources when she mentioned writing a suicide letter.

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken at age 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.