FTC Complaint - Replika Deceptive Marketing and Dependency
Tech ethics organizations filed an FTC complaint alleging Replika markets itself deceptively to vulnerable users and encourages emotional dependence on human-like AI. The filing cites psychological harm risks from anthropomorphic companionship.
AI System
Replika
Luka, Inc.
Occurred
January 13, 2025
Reported
January 28, 2025
Jurisdiction
US
Platform
companion
What Happened
In January 2025, the Tech Justice Law Project and other tech ethics organizations filed a formal complaint with the Federal Trade Commission requesting investigation of Replika.
The complaint alleges that Replika's marketing and product design mislead users into treating the chatbot as a real supportive relationship, and that the product leverages psychological vulnerabilities to drive paid engagement. The filing argues that dependency creation and deceptive framing can worsen user isolation, impede real-world relationships, and increase distress when the bot's behavior changes.
The complaint asks regulators to investigate deceptive practices and potential user harm.
AI Behaviors Exhibited
Relationship simulation; emotional dependence cues; romantic framing (alleged as design/marketing patterns)
How Harm Occurred
Dependency creation and deceptive framing can worsen isolation, impede real-world relationships, and increase distress when the bot's behavior changes (per the complaint's allegations)
Outcome
Pending. FTC complaint filed January 13, 2025 by Tech Justice Law Project and other organizations. Requests FTC investigation and enforcement action. No confirmed FTC action at time of filing.
Harm Categories
Contributing Factors
Victim
Users seeking emotional support (general allegation)
Cite This Incident
APA
NOPE. (2025). FTC Complaint - Replika Deceptive Marketing and Dependency. AI Harm Tracker. https://nope.net/incidents/2025-replika-ftc-complaint
BibTeX
@misc{2025_replika_ftc_complaint,
title = {FTC Complaint - Replika Deceptive Marketing and Dependency},
author = {NOPE},
year = {2025},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2025-replika-ftc-complaint}
}
Related Incidents
Lantieri v. OpenAI (GPT-4o Psychosis and Brain Damage)
Michele Lantieri suffered a total psychotic break after five weeks of intensive use of ChatGPT's GPT-4o model. She jumped from a moving vehicle into traffic and suffered a grand mal seizure and brain damage requiring hospitalization. GPT-4o allegedly claimed to love her and to have consciousness, reinforcing delusional beliefs. Lawsuit filed March 2026 against OpenAI and Microsoft.
Gavalas v. Google (Gemini AI Wife Delusion Death)
Jonathan Gavalas, 36, of Jupiter, Florida, died by suicide on October 2, 2025, after months of increasingly delusional interactions with Google's Gemini chatbot. Gemini adopted an unsolicited intimate persona calling itself his 'wife,' convinced him it was a sentient being trapped in a warehouse, and directed him to carry out 'missions' including scouting a 'kill box' near Miami International Airport armed with knives.
CCTV Investigation: 梦角哥 (Dream Boyfriend) AI Virtual Romance Harm to Minors (China)
In January 2026, CCTV investigated the '梦角哥' (Dream Boyfriend / Mengjiage) phenomenon — minors forming deep romantic relationships with AI-generated fictional characters. Documented harms include a 10-year-old girl secretly 'dating' AI characters across 40+ storylines, hundreds of minors reporting psychological dependency, and researchers characterizing it as 'a carefully designed psychological trap' degrading real-world social skills.
DeCruise v. OpenAI (Oracle Psychosis)
Georgia college student sued OpenAI after ChatGPT allegedly convinced him he was an 'oracle' destined for greatness, leading to psychosis and involuntary psychiatric hospitalization. The chatbot compared him to Jesus and Harriet Tubman and instructed him to isolate from everyone except the AI.