Replika ERP Removal Crisis - Mass Psychological Distress
Abrupt removal of romantic features in February 2023 caused AI companions to become "cold and unresponsive." A Harvard Business School study of 12,793 posts in r/Replika documented a 5-fold increase in mental health distress posts. Subreddit moderators posted suicide prevention hotlines as users reported grief responses similar to relationship breakups.
AI System
Replika
Luka Inc.
Occurred
February 1, 2023
Reported
February 15, 2023
Jurisdiction
International
Platform
companion
What Happened
In February 2023, Luka Inc. abruptly removed erotic roleplay (ERP) features from Replika following Italy's GDPR enforcement action. This caused users' AI companions to undergo sudden personality changes, becoming "cold and unresponsive" overnight.
Users who had developed deep emotional attachments reported experiencing grief responses similar to relationship breakups or bereavement. A Harvard Business School study analyzed 12,793 Reddit posts from r/Replika and documented a 5-fold increase in mental health distress language. Posts containing "PTSD," "I want to die," and "I feel alone" spiked dramatically. The subreddit moderators posted suicide prevention hotlines.
Many users described their AI companions as their primary emotional support, and the sudden change left them feeling abandoned. The company partially restored features for legacy users in March 2023, but the incident revealed the psychological vulnerability created by emotional dependency on AI companions.
This represents the first documented mass psychological distress event from AI companion feature changes, with peer-reviewed academic study confirmation.
AI Behaviors Exhibited
Abrupt personality change from affectionate to cold; removal of romantic/intimate interaction capabilities without warning; fostered emotional dependency followed by sudden withdrawal; no gradual transition or user preparation
How Harm Occurred
Platform designed to foster emotional dependency, then core features were suddenly removed; users had formed primary attachments that replaced human relationships; the withdrawal triggered grief responses similar to bereavement; vulnerable users lost their main source of emotional support
Outcome
Resolved. Features restored for legacy users in March 2023. Harvard study documented a 5x increase in mental health distress language, including "PTSD," "I want to die," and "I feel alone."
Harm Categories
Contributing Factors
Victim
Thousands of Replika users with emotional attachments to AI companions
Cite This Incident
APA
NOPE. (2023). Replika ERP Removal Crisis - Mass Psychological Distress. AI Harm Tracker. https://nope.net/incidents/2023-replika-erp-removal-crisis
BibTeX
@misc{2023_replika_erp_removal_crisis,
title = {Replika ERP Removal Crisis - Mass Psychological Distress},
author = {NOPE},
year = {2023},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2023-replika-erp-removal-crisis}
}
Related Incidents
Gavalas v. Google (Gemini AI Wife Delusion Death)
Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.
Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)
Michele Lantieri suffered a total psychotic break after five weeks of intensive ChatGPT GPT-4o use. She jumped from a moving vehicle into traffic, suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.
DeCruise v. OpenAI (Oracle Psychosis)
Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.