Preemption is No Panacea: Creating a Workable National Framework for American AI Dominance

The federal preemption of state law has become both a crucial tool and a potential pitfall for lawmakers and the Trump Administration as they strive to establish the United States as the world leader in AI. After Congress declined to include a broad moratorium on state authority to regulate AI in last year’s budget reconciliation package, the Trump Administration’s AI action plan adopted a more nuanced approach. Following an unsuccessful attempt to include extensive AI-related preemption in the National Defense Authorization Act (NDAA), an executive order issued on December 11 built upon the Trump AI action plan, establishing a more direct and federally preemptive strategy for AI policy.

Understanding Preemption

Preemption is grounded in the Supremacy Clause of the US Constitution: when state and federal law conflict, federal law displaces, or preempts, state law. This principle applies universally, regardless of whether the conflicting laws originate from legislatures, courts, administrative agencies, or constitutions. Historically, Congress has preempted state regulation across various sectors while allowing federal agencies to set national minimum standards without entirely eliminating state authority.

Proponents of strong preemption argue that it is essential to prevent a fragmented landscape of state regulations that could burden AI developers with inconsistent compliance obligations. Conversely, opponents assert that preserving state authority is vital for addressing local issues, protecting residents, and filling gaps left by federal legislation.

The Need for Congressional Action

Both the White House and Congress now recognize that a viable national AI governance framework necessitates congressional intervention. As the White House Office of Science and Technology Policy (OSTP) Director has indicated, the administration requires support from the Legislative Branch to solidify America’s position as a global AI standard-setter.

Unfortunately, the ongoing debate surrounding preemption risks stalling essential technology reforms. The historical struggle of Congress to enact comprehensive privacy legislation demonstrates how preemption disputes can hinder otherwise popular initiatives, despite bipartisan backing. Even with a focus on deregulation and innovation, the Trump Administration’s AI policy agenda will require legislative action to achieve its objectives.

AI as a Generational Technology

There is a growing consensus that AI represents a generational technology with the potential to reshape the global landscape. The competition for AI supremacy directly impacts American national security and economic vitality. The urgency to outpace adversaries, particularly China, pervades federal AI policymaking. A recent study indicates that China has already emerged as a formidable global power in technology innovation, nearly matching the US in AI development.

The Trump Administration’s AI agenda emphasizes promoting innovation and American dominance. Upon taking office for his second term, Trump rescinded a comprehensive executive order from the previous administration aimed at establishing a government-wide framework for safe and trustworthy AI development. This withdrawal, coupled with a lack of congressional consensus, has widened the gap between state legislative activities and the absence of federal AI legislation.

The Legislative Landscape

During the 2025 session, all 50 states, along with the District of Columbia, the US Virgin Islands, and Puerto Rico, introduced AI-related legislation, resulting in approximately 100 measures being adopted across 38 jurisdictions. A notable inclusion in the House’s budget reconciliation measure was a 10-year moratorium on state AI laws, which faced significant bipartisan opposition. A coalition of 40 state attorneys general and 260 state legislators from all 50 states publicly condemned this provision.

The Senate’s attempt to revise this language also faltered, as bipartisan coalitions opposed including any broad AI-related state law moratorium in the NDAA.

The Trump AI Executive Order

The executive order outlines a federally preemptive approach to AI, aimed at advancing US global dominance through a minimally burdensome national policy framework. Unlike the proposed moratoriums, the order instructs federal agencies to evaluate state regulatory environments when distributing AI-related funding, explicitly indicating that states with overly restrictive laws may be ineligible for such funds.

Crucially, the executive order emphasizes the necessity of collaborating with Congress to establish a national standard that preempts conflicting state AI laws. The order also carves out exemptions for certain categories of state AI laws, including those focused on child safety protections and state government use of AI.

The Complex Nature of AI Regulation

Creating a national AI framework is inherently complex, as AI is not just a singular industry or technology but an array of capabilities integrated across various sectors, including healthcare, finance, and law enforcement. This makes regulation context-dependent and intertwined with areas traditionally governed by state law. Delayed action on AI poses immediate risks related to national security and economic competitiveness.

Congress’s Role in AI Policy

Determining the division of authority between state and federal governments through preemption does not automatically result in a unified federal policy framework for AI. The Trump AI action plan outlines a national strategy built on three pillars: accelerating innovation, building infrastructure, and ensuring international leadership. However, congressional action is essential to provide the necessary legislative framework, resources, and authorities to support this vision.

Several bipartisan bills are currently in progress that would take concrete steps toward realizing the Trump Administration’s AI agenda. These include initiatives to develop national standards for AI systems, enhance testing and evaluation capabilities, and establish regulatory sandboxes for innovation.

Conclusion

The importance of establishing a coherent and effective national framework for AI governance cannot be overstated. As AI technology continues to evolve, the US must navigate the complexities of state and federal regulation to ensure it retains its competitive edge in the global arena.
