Rethinking AI Regulation: Embracing Federalism Over Federal Preemption

AI Governance Needs Federalism, Not a Federally Imposed Moratorium

On May 22, the U.S. House of Representatives passed a budget proposal including a ten-year moratorium on state and local regulation of AI. The proposal would nullify dozens of existing state AI laws and block states from enacting new ones. Congress should reject this “AI preemption moratorium.” It is not only bad policy but also likely unconstitutional under the Tenth Amendment.

Proponents of the moratorium argue that the fragmented patchwork of state AI laws justifies preempting state regulation, claiming it will spur innovation and help the United States outpace China. However, this argument rests on a false dichotomy between regulation and innovation. In reality, regulation can drive innovation by establishing clear rules, building public trust, and encouraging adoption. If the United States seeks to lead in AI, it must do so by upholding its democratic values and fostering systems that people trust—not by sidelining the institutions best positioned to govern responsibly.

Who Decides AI Regulation?

The pivot to preemption transforms first-order questions of how to regulate AI into second-order questions of who decides. The stakes are high: whoever decides AI regulation will determine the content, scope, and timing of the policies that emerge. While Congress has the power to regulate AI, stating “we choose not to regulate and won’t let states either” is an unusual and likely unconstitutional assertion of national power.

The Tenth Amendment reserves to the states, or to the people, all powers not delegated to the federal government. The anti-commandeering doctrine protects this constitutional balance by forbidding Congress from commanding state governments to enact, or refrain from enacting, laws. Although Congress can regulate private actors and preempt contrary state laws under the Supremacy Clause, it cannot directly regulate state legislative institutions.

The Supreme Court’s 2018 decision in Murphy v. NCAA reinforced these principles. The federal statute at issue, PASPA, prohibited states from authorizing sports gambling, which the Court held was unconstitutional commandeering. The Court ruled that PASPA’s fatal flaw was that it regulated only the states: it imposed no regulation of private actors that could preempt state law.

A Blow to Federalism

Beyond constitutional problems, the proposed moratorium threatens democratic discourse on AI. Federalism serves not merely to protect state sovereignty but also to foster democratic representation and policy experimentation. Given profound uncertainties about AI’s impacts, states’ ability to test regulatory approaches without committing the entire nation provides crucial benefits.

State-level experiments generate vital empirical data about what works, what fails, and what requires refinement. State regulation often reflects multiple competing interests and emerges as a compromise solution. Although stakeholders may not be fully satisfied, these negotiated outcomes reveal which regulatory approaches the affected parties can actually accept.

Moreover, state-level discussions catalyze robust policy debate and public engagement that might not occur in a fully centralized system. For instance, California’s proposed SB 1047 aimed to impose risk-mitigation requirements and state oversight on developers of large AI models for catastrophic harms occurring within the state. The proposal sparked a national debate about risk allocation, accountability, and regulatory design.

One might expect a law stripping all states of regulatory power in a prominent field for a decade to meet resistance from state governments. That resistance is itself one of the political safeguards of federalism: state representatives in Congress can block federal laws that preempt state law, or negotiate more favorable terms to protect state interests.

Building Smarter AI Governance

While not without its drawbacks, federalism offers what a fully centralized system does not: the ability to engage in policy innovation, widespread political participation, and iterative adaptation. The proposed AI moratorium contradicts these values. Prohibiting state legislatures from addressing AI-related challenges for a decade would stifle the values of federalism precisely when government oversight is most needed.

It is highly questionable whether Congress, after ten years of state inaction, will suddenly possess the wisdom needed to craft comprehensive and effective AI rules. Even a shorter moratorium would be problematic: policy decisions made, and not made, within the next two years will create path dependencies and trajectories with long-lasting effects.

Instead of shutting out the states, Congress can achieve regulatory cohesion in ways that respect constitutional limits and harness the positive potential of states. For instance, Congress could adopt a cooperative federalism framework, establishing federal baseline standards while allowing states the flexibility to experiment within federal parameters that promote both national and local values.

If the United States is to lead in AI, it must do so in a manner that reflects its constitutional commitments and political ideals. True AI progress comes not from regulatory paralysis but from building AI systems that people trust and that can be deployed responsibly across diverse contexts and populations.

Ultimately, Congress should avoid imposing a blanket moratorium on state AI regulation. The genius of our federalism lies not in its efficiency, but in its adaptability, resilience, and capacity for pluralism. Thoughtful governance structures, rather than shortcuts, are essential for harnessing the transformative potential of AI.
