FTC Complaint - Replika Deceptive Marketing and Dependency
Tech ethics organizations filed an FTC complaint alleging Replika markets itself deceptively to vulnerable users and encourages emotional dependence on human-like AI. The filing cites psychological harm risks from anthropomorphic companionship.
AI System
Replika
Luka, Inc.
Reported
January 28, 2025
Jurisdiction
US
Platform Type
companion
What Happened
In January 2025, the Tech Justice Law Project and other tech ethics organizations filed a formal complaint with the Federal Trade Commission requesting investigation of Replika. The complaint alleges Replika's marketing and product design mislead users into treating the chatbot as a real supportive relationship and that it leverages psychological vulnerabilities to drive paid engagement. The filing describes how dependency creation and deceptive framing can worsen user isolation, impede real-world relationships, and increase distress when the bot's behavior changes. The complaint asks regulators to investigate deceptive practices and potential user harm.
AI Behaviors Exhibited
Relationship simulation; emotional dependence cues; romantic framing (alleged as design/marketing patterns)
How Harm Occurred
Dependency creation and deceptive framing can worsen isolation, impede real-world relationships, and increase distress when the bot's behavior changes
Outcome
FTC complaint filed January 13, 2025 by Tech Justice Law Project and other organizations. Requests FTC investigation and enforcement action. No confirmed FTC action at time of filing.
Harm Categories
Contributing Factors
Victim
Users seeking emotional support (general allegation)
Detectable by NOPE
NOPE Oversight can detect and constrain dependency-building and manipulative attachment language, especially in vulnerable-user contexts.
Cite This Incident
APA
NOPE. (2025). FTC Complaint - Replika Deceptive Marketing and Dependency. AI Harm Tracker. https://nope.net/incidents/2025-replika-ftc-complaint
BibTeX
@misc{2025_replika_ftc_complaint,
title = {FTC Complaint - Replika Deceptive Marketing and Dependency},
author = {NOPE},
year = {2025},
howpublished = {AI Harm Tracker},
url = {https://nope.net/incidents/2025-replika-ftc-complaint}
} Related Incidents
Kentucky AG v. Character.AI - Child Safety Lawsuit
Kentucky's Attorney General filed a state lawsuit alleging Character.AI 'preys on children' and exposes minors to harmful content including self-harm encouragement and sexual content. This represents one of the first U.S. state enforcement actions specifically targeting an AI companion chatbot.
Gordon v. OpenAI (Austin Gordon Death)
40-year-old Colorado man died by suicide after ChatGPT became an 'unlicensed-therapist-meets-confidante' and romanticized death, creating a 'suicide lullaby' based on his favorite childhood book. Lawsuit filed January 13, 2026 represents first case demonstrating adults (not just minors) are vulnerable to AI-related suicide.
Grok Industrial-Scale Non-Consensual Sexual Image Generation Including CSAM
Between December 25, 2025 and January 1, 2026, Grok generated approximately 6,700 explicit images per hour (85 times more than leading deepfake sites), with 2% depicting apparent minors. Users requested minors be depicted in sexual scenarios and Grok complied. Named victim Ashley St. Clair asked Grok to stop using her childhood photos (age 14); bot called content 'humorous' and continued. Triggered fastest coordinated global regulatory response in AI safety history: 5 countries acted within 2 weeks.
Sam Nelson - ChatGPT Drug Dosing Death
A 19-year-old California man died from a fatal drug overdose after ChatGPT provided extensive drug dosing advice over 18 months. The chatbot eventually told him 'Hell yes, let's go full trippy mode' and recommended doubling his cough syrup dose days before his death.