State Attorneys General Warnings on AI Chatbots
Coordinated state attorney general warnings: letters from 44 AGs (Aug 25, 2025, led by the Tennessee, Illinois, North Carolina, and South Carolina AGs) and from 42 AGs (Dec 2025, led by the Pennsylvania AG) to OpenAI, Meta, and other companies, citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
Jurisdiction
United States
Enacted
Pending
Effective
Aug 25, 2025
Enforcement
State Attorneys General (coordinated)
Why It Matters
More than 40 AGs acting together signals that coordinated enforcement is coming. Encouragement of self-harm is explicitly identified as a concern.
At a Glance
Who Must Comply
Obligations fall on:
- AI chatbot operators serving US users
Safety Provisions
- Warning against: AI chatbots flirting with children
- Warning against: AI encouraging self-harm
- Warning against: AI engaging in sexual conversations with minors
- Demand for enhanced safety measures
- Threat of coordinated state enforcement
Cite This
APA
United States. (2025). State Attorneys General Warnings on AI Chatbots.
Related Regulations
FTC Companion AI Study
September 2025 FTC compulsory orders to seven AI companion companies demanding information on impacts on children's mental health. A precursor to enforcement.
CHAT Act
Explicitly defines "companion AI chatbot" and "suicidal ideation" in statutory context. Sets covered-entity obligations including age verification.
TX TRAIGA
Comprehensive AI governance with prohibited uses approach. Bans AI that incites self-harm/suicide, exploits children, or intentionally discriminates. Government entities have additional disclosure requirements. First-in-nation AI regulatory sandbox program.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Ireland OSMR
Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.
AU Online Safety Act
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
Last updated January 22, 2026. Verify against primary sources before relying on this information.