FTC AI Enforcement
The FTC applies its Section 5 authority against unfair or deceptive AI practices, alongside AI-specific rules including the Fake Reviews Rule (Oct 2024), which prohibits AI-generated fake reviews.
Jurisdiction
United States
Enacted
Unknown
Effective
Unknown
Enforcement
Federal Trade Commission
Safety Provisions
- Fake Reviews Rule (effective Oct 2024): prohibits AI-generated fake consumer reviews
- False claims about AI capabilities are treated as deceptive practices
- Algorithmic disgorgement: deletion of models trained on illegally obtained data
- Unfair practices doctrine applies to AI harms
- Impersonation Rule (Feb 2024), with a proposed extension covering AI impersonation of individuals
Compliance Timeline
Aug 22, 2022
ANPR issued for Commercial Surveillance Rule (AI-related provisions pending NPRM)
Quick Facts
- Binding: Yes
- Mental Health Focus: No
- Child Safety Focus: No
- Algorithmic Scope: No
Why It Matters
The FTC is the most active federal AI enforcement body. The Fake Reviews Rule is its first AI-specific rule, and its companion chatbot study signals a potential enforcement focus on mental health and safety claims.
Recent Developments
Operation AI Comply (Sep 25, 2024): DoNotPay ($193K), Rytr, Ascend Ecom, Ecommerce Empire Builders, FBA Machine. Separate actions: Evolv Technologies (Nov 2024), IntelliVision (Dec 2024). 6(b) study on AI companion chatbots (Sep 2025).
Cite This
APA
United States. (n.d.). FTC AI Enforcement. Retrieved from https://nope.net/regs/us-ftc-ai
BibTeX
@misc{us_ftc_ai,
title = {FTC AI Enforcement},
author = {United States},
year = {n.d.},
url = {https://nope.net/regs/us-ftc-ai}
}
Related Regulations
State AG AI Warning
Coordinated state AG warnings to OpenAI, Meta, and others: 44 AGs (Aug 25, 2025, led by the TN, IL, NC, and SC AGs) and 42 AGs (Dec 2025, led by the PA AG), citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
NIST AI RMF
Dominant voluntary AI governance framework in the US. Four functions (Govern, Map, Measure, Manage) operationalize what regulators expect. Not legally binding but heavily referenced.
NY RAISE Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
EU PLD
Modernized product liability framework explicitly covering AI systems and software as products. Shifts burden of proof in complex AI cases, allows disclosure orders for technical documentation, and addresses liability for AI-caused harm including through software updates.
CT SB 1295
Creates a complete ban on targeted advertising to users under 18, regardless of consent. Requires AI impact assessments. Connecticut issued its first CTDPA fine ($85,000) in 2025.
Colorado AI Act
First comprehensive US state law regulating high-risk AI systems. Modeled partly on EU AI Act with developer and deployer obligations for consequential decisions.