A Short Primer on President Trump’s Executive Order: “Ensuring a National Policy Framework for Artificial Intelligence”
On December 11, 2025, President Trump signed an Executive Order aimed at limiting state governments’ power to regulate artificial intelligence. The move signals a shift toward reduced regulation, reflecting the administration’s emphasis on speed and innovation over guardrails.
Reduced Regulation Does Not Mean Reduced Risk
The Executive Order, titled Ensuring a National Policy Framework for Artificial Intelligence, seeks to preempt state-level regulations on AI. This initiative lays the groundwork for the federal government to challenge state AI laws, pursue new regulations, and influence state actions by threatening to withhold federal funds.
The EO outlines several key arguments:
- A 50-state patchwork of different regulatory regimes stifles innovation and creates compliance challenges.
- Anti-discrimination provisions in some state laws could embed ideological bias within AI models.
- State AI laws may violate the Commerce Clause by regulating beyond state borders.
Action Items Directed by the Executive Order
The EO mandates several actions by various Executive Branch agencies:
- The Attorney General is tasked with creating an AI Litigation Task Force within 30 days to challenge inconsistent state laws.
- The Secretary of Commerce must publish an evaluation of existing conflicting state AI laws within 90 days.
- States with “onerous AI laws” will be deemed ineligible for certain federal grants, linking AI policy to the Broadband Equity, Access, and Deployment (BEAD) Program.
- The Federal Communications Commission (FCC) is required to determine whether to adopt a federal reporting standard for AI models within 90 days.
- The Federal Trade Commission (FTC) must create a policy statement within 90 days regarding state law preemption related to AI model outputs.
- Presidential Advisors are to prepare legislative recommendations for a uniform federal policy framework for AI.
Examples of State Laws at Odds with the EO
The EO specifically cites Colorado’s algorithmic discrimination law as an example of state regulation that conflicts with its directives, characterizing the law as potentially pressuring AI models to produce false results in order to avoid perceived differential treatment.
What Does This Mean for State Law?
It is important to note that Executive Orders do not preempt existing state laws governing AI. Therefore, all state and local laws remain enforceable. States may challenge the actions of executive agencies, raising constitutional arguments regarding the 10th Amendment and Spending Clause coercion.
Federal lawmakers and state governors remain divided over broad federal preemption of AI laws. For instance, in December 2025, Congress rejected a provision in the National Defense Authorization Act (NDAA) that would have prohibited states from enforcing their own AI regulations.
Navigating Uncertainty in AI Regulation
Organizations should not confuse deregulation with reduced risk. AI-related risks will not present themselves solely as “AI claims” but will emerge through established legal pathways, such as:
- Product Liability: For instance, an autonomous vehicle AI failing to detect pedestrians due to design flaws.
- Unfair and Deceptive Acts and Practices (UDAP): AI-driven pricing tools targeting vulnerable consumers unfairly.
- FTC Act, Section 5: Misrepresentation by AI vendors about data collection practices.
- Antitrust Laws: Exclusionary conduct by dominant platforms utilizing biased AI algorithms.
- Privacy & Data Protection Laws: AI systems collecting personal data without proper consent.
- Intellectual Property Infringement: Training AI on copyrighted works without authorization.
As these examples illustrate, uncertainty surrounding AI regulation does not eliminate legal risk. Existing regulatory authorities and private plaintiffs may still pursue enforcement actions and litigation, and the absence of AI-specific rules does not shield companies from liability.