FTC Section 6(b) Study on AI Companion Chatbots
In September 2025, the FTC issued compulsory orders to seven AI companion companies demanding information on impacts on children's mental health. A likely precursor to enforcement.
Jurisdiction
United States
Enacted
Unknown
Effective
Sep 11, 2025
Enforcement
Federal Trade Commission
Safety Provisions
- Compulsory information demands under Section 6(b)
- Focus: Children's mental health impacts
- Focus: Data practices with minors
- Focus: Safety features and their effectiveness
- Companies: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, xAI
Compliance Timeline
Sep 11, 2025
6(b) orders issued to 7 AI companion companies
Sep 25, 2025
Staff coordination meeting deadline
Oct 26, 2025
45-day response deadline for ordered companies
Quick Facts
- Binding: Yes
- Mental Health Focus: Yes
- Child Safety Focus: Yes
- Algorithmic Scope: No
Why It Matters
The FTC is signaling enforcement interest at the intersection of AI companions and children's safety. Companies that did not receive orders should prepare for similar scrutiny.
Recent Developments
Orders were issued September 11, 2025, with responses due October 26, 2025 (the 45-day deadline). The study's findings will shape the federal approach to AI companion regulation.
Cite This
APA
United States. (2025). FTC Section 6(b) Study on AI Companion Chatbots. Retrieved from https://nope.net/regs/us-ftc-6b-companion
BibTeX
@misc{us_ftc_6b_companion,
  title  = {FTC Section 6(b) Study on AI Companion Chatbots},
  author = {United States},
  year   = {2025},
  url    = {https://nope.net/regs/us-ftc-6b-companion}
}
Related Regulations
CHAT Act
Explicitly defines "companion AI chatbot" and "suicidal ideation" in statutory context. Sets covered-entity obligations including age verification.
State AG AI Warning
Coordinated state AG warnings: 44 AGs (Aug 25, 2025, led by TN, IL, NC, and SC AGs) and 42 AGs (Dec 2025, led by PA AG) to OpenAI, Meta, and others citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
CA SB243
First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
AU Online Safety Act
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.