CHAT Act (S.2714)
Explicitly defines "companion AI chatbot" and "suicidal ideation" in statutory context. Sets covered-entity obligations including age verification.
Jurisdiction: United States
Enacted: Pending (119th Congress)
Effective: TBD
Enforcement: TBD
Source: Congress.gov

Why It Matters
The most explicit federal bill on companion AI and suicide risk. Its statutory definitions would establish a federal framework for crisis-detection requirements.
At a Glance
Who Must Comply
Obligations fall on:
- Companion AI chatbot operators
Safety Provisions
- Statutory definition of "companion AI chatbot"
- Statutory definition of "suicidal ideation" in AI context
- Age verification requirements
- Covered entity obligations
Compliance & Enforcement
Penalties
Penalties pending regulatory determination
Related Regulations
FTC Companion AI Study
September 2025 FTC compulsory orders to 7 AI companion companies demanding information on children's mental health impacts. Precursor to enforcement.
State AG AI Warning
Coordinated state attorney general warnings: letters from 44 AGs (Aug. 25, 2025, led by the TN, IL, NC, and SC AGs) and 42 AGs (Dec. 2025, led by the PA AG) to OpenAI, Meta, and others, citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
FL Companion Chatbot Act
Regulates companion AI chatbots with an emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis-line referrals, a prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections such as AI disclosure, break reminders, and a ban on sexually explicit content.
UK OSA
One of the most comprehensive platform content-moderation regimes globally. Creates specific duties for children around suicide, self-harm, and eating-disorder content, backed by "highly effective" age-assurance requirements.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Ireland OSMR
Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.
Last updated January 22, 2026. Verify against primary sources before relying on this information.