FTC AI Enforcement

The FTC applies its Section 5 authority against unfair or deceptive AI practices, plus AI-specific rules, including the Fake Reviews Rule (Oct 2024), which prohibits AI-generated fake reviews.

Jurisdiction

United States

Enacted

Pending

Effective

Jan 1, 2023

Enforcement

Federal Trade Commission

Ongoing enforcement actions against deceptive AI practices. Not a single law, but a collection of FTC enforcement cases and guidance.

FTC

Why It Matters

The FTC is the most active federal AI enforcement body. The Fake Reviews Rule is the first AI-specific FTC rule. The 6(b) study on companion chatbots signals a potential enforcement focus on mental health and safety claims.

Recent Developments

Operation AI Comply (Sep 25, 2024): actions against DoNotPay ($193K settlement), Rytr, Ascend Ecom, Ecommerce Empire Builders, and FBA Machine. Separate actions: Evolv Technologies (Nov 2024) and IntelliVision (Dec 2024). 6(b) study on AI companion chatbots launched Sep 2025.

At a Glance

Applies to

General Chatbot, AI Companion, Recommender System, Automated Decision System

Safety Provisions

  • Fake Reviews Rule (effective Oct 2024): prohibits AI-generated fake consumer reviews
  • False AI capability claims are deceptive practices
  • Algorithmic disgorgement: deletion of models trained on illegally obtained data
  • Unfair practices doctrine applies to AI harms
  • Impersonation Rule (Feb 2024), with a proposed extension to cover AI impersonation of individuals

Compliance & Enforcement

Key Dates

Aug 22, 2022

ANPR issued for Commercial Surveillance Rule (AI-related provisions pending NPRM)

Penalties

Varies by case. The FTC uses its Section 5 (unfair/deceptive practices) authority. Recent AI-related settlements range from $5M to $650M. Penalties include monetary fines, algorithmic deletion orders, and compliance monitoring.

Focus Areas

General regulation

Cite This

APA

United States. (2023). FTC AI Enforcement.

Related Regulations

In Effect US

NIST AI RMF

Dominant voluntary AI governance framework in the US. Four functions (Govern, Map, Measure, Manage) operationalize what regulators expect. Not legally binding but heavily referenced.

Proposed US

AI LEAD Act

Classifies AI systems as 'products' under federal law and establishes a federal cause of action for product liability claims against AI developers and deployers, including claims for design defects, failure to warn, and strict liability.

Proposed US-MO

MO AI Mental Health Prohibition

Prohibits any individual or entity that develops or deploys AI from advertising or representing that the AI is or is able to act as a mental health professional or is capable of providing therapy services. Violations treated as unlawful practice under the Missouri Merchandising Practices Act.

Enacted US-NY

NY RAISE Act

Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.

Enacted EU

EU PLD

Modernized product liability framework explicitly covering AI systems and software as products. Shifts burden of proof in complex AI cases, allows disclosure orders for technical documentation, and addresses liability for AI-caused harm including through software updates.

Enacted US-CT

CT SB 1295

Creates a complete ban on targeted advertising to users under 18, regardless of consent. Requires AI impact assessments. Connecticut issued the first CTDPA fine ($85,000) in 2025.

Last updated January 24, 2026. Verify against primary sources before relying on this information.