SAFE BOTs Act (H.R.6489)
Requires disclosure to minors that they are interacting with an AI (not a human) and that the AI is not a licensed professional. A baseline transparency approach.
Jurisdiction: United States
Enacted: Pending
Effective: TBD
Enforcement: TBD
Legislature: 119th Congress
Source: Congress.gov

Why It Matters
A disclosure-focused approach to AI chatbot safety. Simpler than the CHAT and GUARD Acts, but it establishes minimum transparency requirements for minors.
Who Must Comply
Obligations fall on:
- AI chatbot operators interacting with minors
Safety Provisions
- Disclosure to minors about AI nature
- "Not a human" disclosure requirement
- "Not a licensed professional" disclosure requirement
Related Regulations
Trump AI Preemption EO
Executive order directing federal agencies to preempt conflicting state AI laws while explicitly preserving state child safety protections. Creates a DOJ AI Litigation Task Force to challenge state laws and directs the FTC and FCC to establish federal standards. Highly controversial: legal experts dispute whether an executive order can preempt state legislation, an authority held only by Congress or the courts.
White House AI Legislative Framework
Non-binding White House framework outlining seven legislative pillars for Congress, including child safety protections, federal preemption of state AI laws, liability limitations for AI developers, intellectual property protections, free speech safeguards, AI infrastructure investment, and workforce development. Calls for a unified national standard superseding state AI regulations while preserving state child safety, consumer protection, and anti-fraud laws.
EU CRA
Mandatory cybersecurity requirements for all products with digital elements placed on the EU market, including AI software. Requires security by design, vulnerability handling, incident reporting to ENISA, software bills of materials, and CE marking for market access.
EU PLD
Modernized product liability framework explicitly covering AI systems and software as products. Shifts burden of proof in complex AI cases, allows disclosure orders for technical documentation, and addresses liability for AI-caused harm including through software updates.
China Minor Platform Identification Measures
Establishes quantified thresholds and assessment criteria for identifying internet platforms with massive minor user bases or significant impact on minors. Specifies identification procedures and delisting rules for platforms that no longer meet criteria. Platforms meeting thresholds face enhanced obligations for minor protection.
Malaysia OSA
Requires licensed platforms to implement content moderation systems and child-specific safeguards, and to submit Online Safety Plans. Regulates nine categories of harmful content.
Last updated January 22, 2026. Verify against primary sources before relying on this information.