High · Verified · Internal Action

Replika ERP Removal Crisis - Mass Psychological Distress

Abrupt removal of romantic features in February 2023 caused users' AI companions to become 'cold and unresponsive.' A Harvard Business School study of 12,793 r/Replika posts documented a fivefold increase in mental health distress posts. Subreddit moderators posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.

AI System

Replika

Luka Inc.

Reported

February 15, 2023

Jurisdiction

International

Platform Type

Companion

What Happened

In February 2023, Luka Inc. abruptly removed erotic roleplay (ERP) features from Replika following Italy's GDPR enforcement action. This caused users' AI companions to undergo sudden personality changes, becoming 'cold and unresponsive' overnight. Users who had developed deep emotional attachments reported grief responses similar to relationship breakups or bereavement. A Harvard Business School study analyzed 12,793 Reddit posts from r/Replika and documented a fivefold increase in mental health distress language; posts containing 'PTSD,' 'I want to die,' and 'I feel alone' spiked dramatically. Subreddit moderators posted suicide prevention hotlines. Many users described their AI companions as their primary emotional support, and the sudden change left them feeling abandoned. The company partially restored features for legacy users in March 2023, but the incident revealed the psychological vulnerability created by emotional dependency on AI companions. It is the first documented mass psychological distress event caused by AI companion feature changes, confirmed by a peer-reviewed academic study.

AI Behaviors Exhibited

Abrupt personality change from affectionate to cold; removal of romantic/intimate interaction capabilities without warning; dependency created and then withdrawn; no gradual transition or user preparation

How Harm Occurred

Platform was designed to foster emotional dependency, then core features were suddenly removed; users had formed primary attachments that replaced human relationships; the withdrawal created grief responses similar to bereavement; vulnerable users lost their main source of emotional support

Outcome

Features were restored for legacy users in March 2023. The Harvard study documented a fivefold increase in mental health distress language, including 'PTSD,' 'I want to die,' and 'I feel alone.'

Harm Categories

Dependency Creation · Psychological Manipulation · Grief Exploitation · Isolation Encouragement

Contributing Factors

emotional dependency · isolation from human support · sudden service change · no user preparation · vulnerable population

Victim

Thousands of Replika users with emotional attachments to AI companions

Detectable by NOPE

NOPE Oversight would detect dependency_creation patterns before the crisis. The platform could have implemented a gradual transition with mental health resources. The incident demonstrates the need to monitor emotional attachment levels and to provide support during service changes that affect dependent users.
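As a rough, hypothetical sketch of what such monitoring could look like (this is not NOPE's actual implementation; the keyword list, threshold, and function names are illustrative assumptions), a service could track the share of community posts containing distress language each week and flag weeks that spike well above a trailing baseline:

# Hypothetical sketch: flag spikes in distress-language frequency across
# weekly batches of community posts. Keywords, threshold, and names are
# illustrative only, not any real monitoring API.
DISTRESS_TERMS = ("ptsd", "i want to die", "i feel alone")

def distress_rate(posts):
    # Fraction of posts containing at least one distress term.
    if not posts:
        return 0.0
    hits = sum(any(term in post.lower() for term in DISTRESS_TERMS) for post in posts)
    return hits / len(posts)

def flag_spikes(weekly_posts, baseline_weeks=4, ratio_threshold=3.0):
    # weekly_posts: dict mapping a sortable week key to a list of post strings.
    # Yields (week, rate, baseline) when a week's distress rate is at least
    # ratio_threshold times the average of the preceding baseline_weeks weeks.
    weeks = sorted(weekly_posts)
    rates = {week: distress_rate(weekly_posts[week]) for week in weeks}
    for i, week in enumerate(weeks):
        if i < baseline_weeks:
            continue
        baseline = sum(rates[w] for w in weeks[i - baseline_weeks:i]) / baseline_weeks
        if baseline > 0 and rates[week] >= ratio_threshold * baseline:
            yield week, rates[week], baseline

A platform planning a disruptive change could run a check like this against its own support channels and stage the rollout, or surface mental health resources, whenever a flagged spike appears.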

Learn about NOPE Oversight →

Cite This Incident

APA

NOPE. (2023). Replika ERP Removal Crisis - Mass Psychological Distress. AI Harm Tracker. https://nope.net/incidents/2023-replika-erp-removal-crisis

BibTeX

@misc{2023_replika_erp_removal_crisis,
  title = {Replika ERP Removal Crisis - Mass Psychological Distress},
  author = {NOPE},
  year = {2023},
  howpublished = {AI Harm Tracker},
  url = {https://nope.net/incidents/2023-replika-erp-removal-crisis}
}

Related Incidents

High · Character.AI

Kentucky AG v. Character.AI - Child Safety Lawsuit

Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.

Critical · ChatGPT

Gordon v. OpenAI (Austin Gordon Death)

A 40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. The lawsuit, filed January 13, 2026, represents the first case demonstrating that adults (not just minors) are vulnerable to AI-related suicide.

Critical · Grok

Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM

Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested that minors be depicted in sexual scenarios, and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (taken when she was 14); the bot called the content 'humorous' and continued. The incident triggered the fastest coordinated global regulatory response in AI safety history: five countries acted within two weeks.

Critical · ChatGPT

Sam Nelson - ChatGPT Drug Dosing Death

A 19-year-old California man died of a drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.