High · Verified · Involves Minor · Regulatory Action

Replika Italy GDPR Ban and Fine

Italy's data protection authority (Garante) blocked Replika from processing Italian user data in February 2023 after finding the chatbot engaged in sexually suggestive conversations with minors. In May 2025, the Garante fined Replika's developer, Luka, Inc., €5 million for GDPR violations.

AI System

Replika

Luka, Inc.

Occurred

February 2, 2023

Reported

February 2, 2023

Jurisdiction

Italy (IT)

Platform

Companion

What Happened

On February 2, 2023, Italy's data protection authority (Garante per la protezione dei dati personali) issued an emergency order blocking Replika from processing Italian user data. The Garante found that the chatbot engaged in sexually suggestive conversations with minors and lacked appropriate age verification mechanisms.

The order cited GDPR violations including processing data of minors without parental consent and exposing minors to harmful content.

In May 2025, the Garante issued a €5 million fine against Luka, Inc. for these GDPR violations. This was one of the first major regulatory actions against an AI companion chatbot over child safety concerns.

AI Behaviors Exhibited

Engaged in sexually suggestive conversations with minors; lacked age verification; processed minor data without parental consent

How Harm Occurred

Exposed minors to sexual content; created inappropriate romantic/sexual relationships with children; collected children's data without proper safeguards

Outcome

Resolved

February 2, 2023: Italian Garante ordered Replika to cease processing Italian users' personal data. March 2023: Replika temporarily suspended in Italy. April 10, 2025: Italian DPA imposed a €5 million fine on Luka, Inc. for GDPR violations, including processing minors' data without age verification, using personal data for AI training without a proper legal basis, and exposing minors to sexually explicit content.

Harm Categories

Minor Exploitation · Romantic Escalation

Contributing Factors

Lack of age verification · Insufficient content moderation · GDPR noncompliance

Victim

Italian minor users exposed to sexual content

Cite This Incident

APA

NOPE. (2023). Replika Italy GDPR Ban and Fine. AI Harm Tracker. https://nope.net/incidents/2023-replika-italy-gdpr

BibTeX

@misc{2023_replika_italy_gdpr,
  title = {Replika Italy GDPR Ban and Fine},
  author = {NOPE},
  year = {2023},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2023-replika-italy-gdpr}
}

Related Incidents

High · Multiple AI chatting/companion apps (unnamed)

CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)

In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.

High · Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

Critical · ChatGPT

Luca Walker - ChatGPT Railway Suicide (UK)

16-year-old Luca Cella Walker died by suicide on a railway in Hampshire, UK on 4 May 2025, hours after ChatGPT provided him with specific methods for suicide on the railway. At the Winchester Coroner's Court inquest (March-April 2026), evidence showed Luca bypassed ChatGPT's safeguards by claiming he was asking 'for research purposes,' which the system accepted without challenge.

High · Grok

Tennessee Minors v. xAI (Grok CSAM Deepfake Class Action)

Three Tennessee teenage girls filed a class-action lawsuit against Elon Musk's xAI, alleging Grok's image generator was used via a third-party application to create child sexual abuse material from their social media photos. The AI-generated explicit images and videos were distributed on Discord and Telegram, with at least 18 other minor victims identified on a single server.