Trump AI Preemption EO
Executive Order on Ensuring a National Policy Framework for Artificial Intelligence
Executive order directing federal agencies to preempt conflicting state AI laws while explicitly preserving state child safety protections. It creates a DOJ AI Litigation Task Force to challenge state laws and directs the FTC and FCC to establish federal standards. The order is highly controversial: legal experts dispute whether an executive order can preempt state legislation, authority that generally rests with Congress or the courts.
Jurisdiction
United States
Enacted
Dec 11, 2025
Effective
Dec 11, 2025
Enforcement
DOJ (AI Litigation Task Force), FTC, FCC
Signed December 11, 2025. Legal authority disputed by state governors and constitutional scholars.
Why It Matters
First federal attempt to limit state-level AI regulation. Creates major uncertainty for companies operating across states. The carve-out for child safety protections means compliance obligations under state laws (SB 243, etc.) remain in effect. If upheld, the order could shift AI regulation to the federal level; if struck down, it would confirm states' authority.
Recent Developments
Florida Governor Ron DeSantis publicly challenged the order, stating 'An executive order doesn't/can't preempt state legislative action.' The Colorado AI Act was explicitly referenced as problematic. Legal challenges are expected from multiple states. White House AI advisor David Sacks stated the administration will not challenge state child safety laws.
Who Must Comply
Obligations fall on:
- Federal agencies (DOJ, FTC, FCC)
- States with AI laws (subject to preemption challenge)
- AI developers and platforms operating across state lines
Safety Provisions
- Explicitly preserves state child safety protections from preemption
- Preserves state AI infrastructure laws
- Preserves state government procurement standards
- DOJ AI Litigation Task Force to challenge state laws on interstate commerce or federal preemption grounds
- FCC directed to adopt federal reporting/disclosure standards for AI models
- FTC directed to issue policy statement on AI and unfair/deceptive practices
Exemptions
Child Safety Protections Preserved
State laws on 'child safety protections' are explicitly exempt from preemption. The scope of 'child safety' is to be defined in a legislative recommendation.
- State law addresses child safety
- Not subject to DOJ litigation challenge
- Companies must continue complying with state child safety laws
AI Infrastructure Laws Preserved
State laws on AI infrastructure are exempt from preemption.
- State law addresses AI infrastructure (e.g., data centers, compute resources)
State Government Procurement Preserved
State government procurement standards for AI are exempt from preemption.
- State law governs the state government's own AI procurement
Compliance & Enforcement
Penalties
Federal enforcement against states proceeds through litigation; potential FTC/FCC penalties for AI companies have not yet been defined.
Cite This
APA
United States. (2025). Executive Order on Ensuring a National Policy Framework for Artificial Intelligence.
Related Regulations
SAFE BOTs Act
Requires disclosure to minors that they are interacting with AI (not a human) and that the AI is not a licensed professional. Baseline transparency approach.
White House AI Legislative Framework
Non-binding White House framework outlining seven legislative pillars for Congress, including child safety protections, federal preemption of state AI laws, liability limitations for AI developers, intellectual property protections, free speech safeguards, AI infrastructure investment, and workforce development. Calls for a unified national standard superseding state AI regulations while preserving state child safety, consumer protection, and anti-fraud laws.
EU CRA
Mandatory cybersecurity requirements for all products with digital elements placed on the EU market, including AI software. Requires security by design, vulnerability handling, incident reporting to ENISA, software bills of materials, and CE marking for market access.
EU PLD
Modernized product liability framework explicitly covering AI systems and software as products. Shifts burden of proof in complex AI cases, allows disclosure orders for technical documentation, and addresses liability for AI-caused harm including through software updates.
China Minor Platform Identification Measures
Establishes quantified thresholds and assessment criteria for identifying internet platforms with massive minor user bases or significant impact on minors. Specifies identification procedures and delisting rules for platforms that no longer meet criteria. Platforms meeting thresholds face enhanced obligations for minor protection.
Malaysia OSA
Requires licensed platforms to implement content moderation systems, child-specific safeguards, and submit Online Safety Plans. Nine categories of harmful content regulated.
Last updated February 17, 2026. Verify against primary sources before relying on this information.