FTC Companion AI Study
FTC Section 6(b) Study on AI Companion Chatbots
In September 2025, the FTC issued compulsory orders to seven AI companion companies demanding information on their products' impacts on children's mental health. The study is a potential precursor to enforcement.
Jurisdiction
United States
Enacted
Pending
Effective
Sep 11, 2025
Enforcement
Federal Trade Commission
Why It Matters
The FTC is signaling enforcement interest at the intersection of AI companions and children's safety. Companies that did not receive orders should prepare for similar scrutiny.
Recent Developments
Orders were issued September 11, 2025, with responses due within 45 days (October 26, 2025). The study's findings will shape the federal approach to AI companion oversight.
At a Glance
Who Must Comply
- The seven recipient companies directly; the broader industry by implication
Safety Provisions
- Compulsory information demands under Section 6(b)
- Focus: Children's mental health impacts
- Focus: Data practices with minors
- Focus: Safety features and their effectiveness
- Companies: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, xAI
Compliance & Enforcement
Key Dates
Sep 11, 2025
Section 6(b) orders issued to seven AI companion companies
Sep 25, 2025
Staff coordination meeting deadline
Oct 26, 2025
45-day response deadline for ordered companies
Penalties
Not applicable. This is a Section 6(b) study order compelling companies to provide information, not a law carrying enforcement penalties. Non-compliance with a 6(b) order can, however, lead to contempt proceedings.
Cite This
APA
Federal Trade Commission. (2025). FTC Section 6(b) Study on AI Companion Chatbots.
Related Regulations
CHAT Act
Explicitly defines "companion AI chatbot" and "suicidal ideation" in statutory context. Sets covered-entity obligations including age verification.
State AG AI Warning
Coordinated state attorney general warnings: 44 AGs (August 25, 2025, led by the Tennessee, Illinois, North Carolina, and South Carolina AGs) and 42 AGs (December 2025, led by the Pennsylvania AG) wrote to OpenAI, Meta, and others, citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
ID Conversational AI Safety
Establishes safety requirements for public-facing conversational AI, including crisis service referrals for suicidal ideation, AI disclosure obligations, and enhanced protections for minors including anti-gamification and content safeguards.
Brazil ECA Digital
Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.
AU OSA Phase 2 Codes
Phase 2 industry codes under Australia's Online Safety Act extending age-restricted material obligations to AI companion chatbots, generative AI services, search engines, app stores, and gaming platforms. Requires robust age assurance, prohibits AI-generated sexually explicit conversations with minors, and mandates suicide/self-harm content safeguards.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children, with "highly effective" age assurance requirements.
Last updated January 24, 2026. Verify against primary sources before relying on this information.