
FTC Companion AI Study

FTC Section 6(b) Study on AI Companion Chatbots

In September 2025, the FTC issued compulsory 6(b) orders to seven AI companion companies demanding information on impacts to children's mental health, a likely precursor to enforcement.

Jurisdiction

United States

Enacted

Pending

Effective

Sep 11, 2025

Enforcement

Federal Trade Commission

FTC Press Release

Why It Matters

The FTC is signaling enforcement interest in the intersection of AI companions and children's safety. Companies that did not receive orders should prepare for similar scrutiny.

Recent Developments

Orders issued September 11, 2025, with responses due 45 days later, on October 26, 2025. Findings will shape the federal approach.

At a Glance

Applies to

AI Companion
Character Chatbot

Harms addressed

Requires

Who Must Comply

Safety Provisions

  • Compulsory information demands under Section 6(b)
  • Focus: Children's mental health impacts
  • Focus: Data practices with minors
  • Focus: Safety features and their effectiveness
  • Companies: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, xAI

Compliance & Enforcement

Key Dates

Sep 11, 2025

6(b) orders issued to 7 AI companion companies

Sep 25, 2025

Staff coordination meeting deadline

Oct 26, 2025

45-day response deadline for ordered companies
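As a sanity check on the dates above, the 45-day response window counted from the September 11, 2025 issuance date does land on October 26, 2025. A quick sketch using Python's standard datetime module:

```python
from datetime import date, timedelta

# 6(b) orders issued September 11, 2025; responses due 45 days later.
issued = date(2025, 9, 11)
deadline = issued + timedelta(days=45)
print(deadline)  # 2025-10-26
```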

Penalties

Not applicable. This is a 6(b) study order requiring companies to provide information, not a law with enforcement penalties. Non-compliance with 6(b) orders can result in contempt proceedings.


Focus Areas

Mental health & crisis
Child safety

Cite This

APA

Federal Trade Commission. (2025). FTC Section 6(b) Study on AI Companion Chatbots.

Related Regulations

Proposed US

CHAT Act

Explicitly defines "companion AI chatbot" and "suicidal ideation" in statutory context. Sets covered-entity obligations including age verification.

In Effect US

State AG AI Warning

Coordinated state AG warnings: 44 AGs (Aug 25, 2025, led by TN, IL, NC, and SC AGs) and 42 AGs (Dec 2025, led by PA AG) to OpenAI, Meta, and others citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."

Pending US-FL

FL Companion Chatbot Act

Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.

In Effect UK

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect UK

Ofcom Children's Codes

Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.

In Effect IE

Ireland OSMR

Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.

Last updated January 24, 2026. Verify against primary sources before relying on this information.