White House AI Legislative Framework
National Policy Framework for Artificial Intelligence: Legislative Recommendations
Non-binding White House framework outlining seven legislative pillars for Congress: child safety protections, federal preemption of state AI laws, liability limitations for AI developers, intellectual property protections, free speech safeguards, AI infrastructure investment, and workforce development. Calls for a unified national standard superseding state AI regulations while preserving state child safety, consumer protection, and anti-fraud laws.
Jurisdiction
United States
Enacted
Mar 20, 2026
Effective
Mar 20, 2026
Enforcement
N/A (non-binding framework; contemplates FTC and FCC enforcement if enacted)
Released March 20, 2026 as non-binding legislative recommendations to Congress.
The White House
Why It Matters
Sets the Trump administration's legislative agenda for AI. Proposes broad federal preemption of state AI laws while preserving child safety and consumer protection carve-outs. Signals political direction even if Congress does not act on all recommendations.
Recent Developments
Released March 20, 2026. Accompanied by Sen. Marsha Blackburn's TRUMP AMERICA AI Act discussion draft (291 pages), the most comprehensive federal AI legislation proposed in the US to date.
At a Glance
Who Must Comply
- Congress (as legislative recommendations)
- AI developers and platforms (if enacted)
- State legislatures (preemption implications)
Safety Provisions
- Calls for age-assurance requirements (such as parental attestation) for AI platforms likely accessed by minors
- Calls for parental tools to manage children's privacy, screen time, content exposure, and accounts
- Calls for AI platforms to implement features reducing risks of sexual exploitation and self-harm by minors
- Calls for existing child privacy protections to apply to AI systems including data collection limits
- Calls for building on TAKE IT DOWN Act for deepfake protections
Compliance & Enforcement
Penalties
N/A (non-binding)
Cite This
APA
The White House. (2026). National Policy Framework for Artificial Intelligence: Legislative Recommendations.
Related Regulations
Biden AI EO
Comprehensive federal AI policy requiring safety testing, reporting, and standards development. Revoked in January 2025 by new administration.
Trump AI Preemption EO
Executive order directing federal agencies to preempt conflicting state AI laws while explicitly preserving state child safety protections. Creates a DOJ AI Litigation Task Force to challenge state laws and directs the FTC and FCC to establish federal standards. Highly controversial: legal experts dispute whether executive orders can preempt state legislation, an authority that rests only with Congress or the courts.
ID Conversational AI Safety
Establishes safety requirements for public-facing conversational AI, including crisis service referrals for suicidal ideation, AI disclosure obligations, and enhanced protections for minors including anti-gamification and content safeguards.
Brazil ECA Digital
Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.
AU OSA Phase 2 Codes
Phase 2 industry codes under Australia's Online Safety Act extending age-restricted material obligations to AI companion chatbots, generative AI services, search engines, app stores, and gaming platforms. Requires robust age assurance, prohibits AI-generated sexually explicit conversations with minors, and mandates suicide/self-harm content safeguards.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Last updated March 27, 2026. Verify against primary sources before relying on this information.