State AI Regulation: A Bipartisan Debate on Federal Preemption

Moratorium on State AI Regulation: A Mixed Response from the GOP

The proposed moratorium on state artificial intelligence (AI) regulation has drawn varied reactions from Republican lawmakers, with some voicing concern and others commending the measure. As the political landscape continues to evolve, the fate of this moratorium could significantly influence AI development across the United States.

Legislative Context

Both the House and Senate versions of the One Big Beautiful Bill Act (OBBBA) include provisions aimed at preempting state regulation of AI. The provision sits alongside the bill's higher-profile tax measures, including the extension of the Tax Cuts and Jobs Act (TCJA) and its cap on state and local tax (SALT) deductions.

Opponents of the TCJA’s SALT cap argue that it disproportionately affects residents of blue states, which tend to carry higher tax burdens. The AI provision, by contrast, has drawn criticism from within the GOP itself: Congresswoman Marjorie Taylor Greene (R-Ga.) has publicly opposed the preemption, insisting it be removed in the Senate.

The Debate on Federal vs. State Authority

Greene contended that the bill undermines state authority, stating, “This needs to be stripped out in the Senate. We should be reducing federal power and preserving state power.” In contrast, some former tech officials believe that allowing states to regulate AI could lead to a fragmented landscape of conflicting regulations.

Neil Chilson, former chief technologist at the Federal Trade Commission, pushed back on Greene’s stance, questioning the wisdom of allowing states like California to dictate national AI policy and suggesting that the approach could inadvertently benefit foreign competitors such as China.

State-Level Regulatory Movements

The push for state-level regulation is exemplified by New York’s recent passage of the RAISE Act, legislation that aims to impose new rules on companies developing AI and that has raised concerns among industry stakeholders. In a letter, NetChoice, a trade association representing online businesses, warned that the act could stifle innovation and harm economic competitiveness.

In Texas, a state typically viewed as a bastion of conservatism, lawmakers have also sought to regulate AI through the introduction of the Texas Responsible AI Governance Act (TRAIGA). Although initially met with resistance, the bill was eventually reworked and passed as House Bill 149, which focuses on government use of AI rather than broader industry regulations.

Concerns Over a Patchwork of Regulations

Governors from various states have expressed apprehension about the prospect of a fragmented regulatory environment. Governor Ned Lamont of Connecticut highlighted the potential challenges posed by a state-by-state approach, emphasizing the need for cohesive federal legislation. Similarly, Governor Jared Polis of Colorado has advocated for federal preemption to avoid conflicting state regulations.

In Virginia, Governor Glenn Youngkin vetoed an AI regulation bill, arguing that a heavy-handed regulatory approach would stifle innovation. He emphasized the importance of enabling creators to thrive rather than imposing burdensome regulations.

Prospects for Federal Preemption

The call for federal preemption of state AI regulations is gaining traction among lawmakers of both parties. Advocates assert that a unified regulatory framework is essential to maintaining the United States’ competitive edge in AI development. Vance Ginn, president of Ginn Economic Consulting, pointed to the Internet Tax Freedom Act of 1998 as precedent: a federal moratorium that spurred digital innovation.

As discussions continue, it remains to be seen whether Congress will prioritize a cohesive approach to AI regulation or leave states to set their own policies, a decision that could have lasting repercussions for the future of technology in America.
