Trump Administration Unveils Its AI Legislative Agenda: Calling for Preemption While Leaving Gaps
Last Friday, the Trump administration released its much-anticipated National Policy Framework for Artificial Intelligence (the “Framework”), unveiling its views on how AI should be regulated – or deregulated. The Framework, which was previewed by the White House’s executive order on AI from December 2025, calls for a federally preemptive law that would focus on kids’ online safety, balancing intellectual property rights with AI development, removing barriers to innovation, and enhancing AI literacy in the American workforce. It calls on Congress not to create “any new federal rulemaking body to regulate AI.”
Presented as a “comprehensive national legislative framework,” the Framework will need to be adopted in detailed legislation. Its provisions on federal preemption of state AI regulation are arguably its most consequential and contentious. In a press release, the Trump administration stated, “Importantly, this framework can succeed only if it is applied uniformly across the United States. A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”
The administration calls on Congress to preempt state AI laws that “impose undue burdens,” replacing them with a single national standard. The Framework softens this by carving out certain areas where states would retain authority: traditional police powers to enforce laws of general applicability, zoning laws governing data center placement, and requirements governing states’ own use of AI in procurement, law enforcement, and public education.
However, the preemption language goes quite far in asserting that states should not be permitted to regulate AI development at all and should not penalize AI developers for third parties’ unlawful conduct involving their models. It further provides that states should not unduly burden the use of AI for activity that would be lawful if performed without AI, purporting to prevent states from imposing any AI-specific requirements on deployed applications.
This language will inevitably raise interpretive questions. For example, it isn’t clear how “laws of general applicability,” which the Framework would grandfather in, differ from laws that “regulate AI development,” which the Framework would forbid. Does merely mentioning the term “AI” in a law render it an “AI-specific requirement”? These days, most technology license contracts and even corporate acquisition agreements define and reference AI. And the Framework itself calls for imposing what appear to be AI-specific requirements, for example, in the context of child safety.
The political viability of the Framework remains an open question. Federal preemption of AI regulation has been a flashpoint within the Republican caucus itself. Tech-aligned members and libertarians favor a deregulatory approach, while states’ rights advocates and members representing districts with active state-level AI initiatives have resisted ceding ground to Washington. Last year, the Administration failed to include AI preemption even in its filibuster-proof budget reconciliation bill, underscoring these divisions. Indeed, the Framework could be seen as conceding that state laws can only be preempted by Congressional action. The Executive Order, in contrast, called for non-legislative measures against states, including by creating a litigation task force charged with challenging state AI laws and tasking executive agencies with restricting funding to states that stray from the Administration’s approach.
Meanwhile, states are not waiting around. With more than 100 bills pending on AI chatbots alone and dozens more addressing AI governance, safety, and industry-specific applications, the state-level regulatory apparatus is building momentum that will be increasingly difficult to reverse. Not just “blue states” like California, Colorado, and Illinois, but also “red states” like Montana, Texas, and Utah, have already enacted significant AI-related legislation, and more states are poised to follow. The longer Congress takes to act on the Framework, the more entrenched this patchwork becomes.
Beyond what the Framework does cover, certain central AI policy questions are not even mentioned. For example, the Framework doesn’t address national security, cybersecurity, AI governance, or high-risk AI. Regardless of their regulatory or deregulatory stance, policymakers around the globe agree that these issues present critical AI risk vectors. It’s surprising, then, that the Framework omits any recognition of these key fields.
Framework Breakdown
The Framework is divided into seven parts, the last of which concerns federal preemption of AI regulation. The other six parts are focused on:
- Protecting Children and Empowering Parents: The Framework calls on Congress to strengthen protection of kids online, including measures to prevent tech addiction, empower parents to oversee kids’ use of technology, and impose age assurance requirements. Notably, these ideas have been featured in numerous legislative initiatives over the past few years and do not seem specific to AI.
- Safeguarding and Strengthening American Communities: This section supports streamlining federal rules permitting data center construction, protecting residential ratepayers from increased electricity costs, combating AI-enabled fraud targeting seniors, and providing AI resources to small businesses. The Framework also calls for national security agencies to develop sufficient technical capacity to understand frontier AI models.
- Respecting Intellectual Property Rights and Supporting Creators: The IP provisions attempt to balance supporting AI innovation and protecting property rights. The Administration takes the position that training AI models on copyrighted material does not violate copyright laws, but at the same time emphasizes that “American creators, publishers, and innovators should be protected from AI-generated outputs.”
- Preventing Censorship and Protecting Free Speech: This section reflects the Administration’s broader agenda against “woke AI.” The Framework calls on Congress to prevent the federal government from coercing AI providers to ban, compel, or alter content based on partisan or ideological agendas.
- Enabling Innovation and Ensuring American AI Dominance: This section captures the Framework’s pro-innovation core, positing that the US must lead the world in AI by removing barriers to innovation and ensuring broad access to testing environments.
- Educating Americans and Developing an AI-Ready Workforce: The Framework calls for integrating AI training into existing education and workforce programs and studying AI-driven workforce displacement at the task level.
On the whole, the Framework is best understood as a statement of principles rather than a detailed legislative blueprint. It signals the Administration’s clear preference for light-touch, pro-innovation regulation at the federal level, with the AI industry largely setting the pace through self-governance and industry standards. Whether this approach will prove adequate to address the real-world harms that AI is already causing, from disinformation and algorithmic discrimination to energy demands and labor market displacement, is a question the Framework largely sidesteps. The hard work of translating these principles into legislation that can command a majority in Congress has only just begun.