Balancing AI Governance: Federal vs. State Regulation

When Should Congress Preempt State AI Law? The Lessons of Past Technologies

Who should govern AI: U.S. state governments, the federal government, or both? This question has recently moved to the center of practical policymaking. In July, a Senate attempt to restrict states from regulating AI technology was defeated. But preemption, in which Congress overrides state laws with national standards, may soon return to the agenda: members of Congress are drafting fresh proposals, and large technology companies continue to advocate for reduced state-level regulation.

The Debate on Preemption

Supporters of preemption argue that innovation could suffer if AI companies are subject to a tangle of state rules. In contrast, opponents believe that the rapid pace of AI development makes policy experimentation more valuable than uniformity. They argue that state governments are experienced in regulating various sectors, including policing and education, where AI is being implemented.

A crucial element missing from the current debate is historical precedent. Congress has faced similar dilemmas with earlier technologies, such as nuclear power and genetically modified food, each time having to decide whether, and how, to preempt state authority.

Key Lessons from History

Past episodes suggest several recurring purposes behind Congress’s decisions to preempt state law:

  • Preventing Conflicting State Laws: Congress often seeks to prevent a fragmented national industry.
  • Restraining Outlier States: There is a desire to stop states from imposing their preferred regulations on the entire country.
  • Federal Expertise: Congress aims to leverage federal expertise, especially in areas related to national security.

Additional insights from past regulatory actions include:

  • Congress typically does not preempt state law without establishing a federal replacement.
  • Federal regulatory action is most likely when a compromise can be reached between pro-regulatory factions and industry coalitions.
  • Merely having diverging state laws is not sufficient to justify preemption; a truly national market must exist.
  • Congress allows for evolution in governance, responding quickly when states push too far.
  • It is uncommon for Congress to preempt entire policy areas; instead, it often carves out shared responsibilities between state and federal authorities.

Current State of AI Regulation

Applying these historical lessons to the current AI landscape reveals a mixed picture. The anticipated chaos from conflicting state regulations has not yet materialized. Most states are focusing their regulations on specific AI use cases that fall within traditional state authority.

So far, only Colorado has passed a comprehensive AI law imposing extensive requirements on developers. Other states have not followed suit, which suggests the case for federal preemption is weak at the moment. If Congress does act, transparency rules could be a reasonable starting point for federal intervention.

Promoting Uniformity

The argument for uniformity in AI regulation varies across different areas. Applications of AI that fall within traditional state governance may benefit from local experimentation. In contrast, areas such as foundation model development may require federal oversight due to the national nature of the industry and the economies of scale involved.

Federal intervention might be appropriate as the industry evolves and the potential for conflicting state laws increases. Congress should closely monitor state-level developments and be prepared to act if necessary.

Conclusion

Determining the balance between state and federal regulation of AI will require careful consideration and a willingness to adapt. Policymakers must embrace uncertainty and experiment with various approaches to find the most effective regulatory framework. The governance of AI, like that of past technologies, will benefit from incremental steps rather than an all-or-nothing approach.
