Protecting State Authority in AI Regulation

Congress Must Preserve State Authority in AI Governance

Artificial intelligence (AI) is rapidly transforming how governments serve their constituents, enhancing everything from emergency response to licensing and zoning processes. But as AI systems grow in power and prevalence, the question of how to govern these tools responsibly has become urgent.

This urgency intensified recently with a leaked executive order from the Trump administration that would prevent states from enforcing their own AI regulations. House leadership is also contemplating inserting similar preemption language into the National Defense Authorization Act (NDAA). Meanwhile, California’s new AI law has drawn federal pushback, with Senator Ted Cruz planning to introduce legislation to challenge it directly.

The Implications of Federal Governance

Washington’s move to strip states of their authority over AI governance would be a misstep. AI’s impact on state-level decisions is too broad, varied, and context-specific for federal deliberation alone to address effectively. Its influence is not a distant concern; it is a current reality.

Officials across federal, state, and local governments bear unique civic responsibilities: maximizing social benefits while minimizing potential harms for their constituencies. This includes providing the right balance of investments, incentives, and enforceable regulations. For example, states and local governments must address distinct requirements related to planning, zoning, and licensing services.

The Need for State Authority

AI is affecting many local services, and a one-size-fits-all approach is ineffective for tasks such as bike lane planning or pothole detection. States manage diverse services, including emergency response, housing, education, health, utilities, and public safety. Effective AI deployment must align with local laws, needs, and conditions.

A federal governance framework could serve the needs of innovative businesses developing AI systems while offering predictability and consistency for rapid deployment. However, Congress must recognize that a national framework should encompass more than just a moratorium on state laws. Innovative approaches are necessary to balance AI’s potential benefits with its risks, and states are actively working to find this balance.

Consequences of Stripping State Authority

Limiting states’ authority to regulate AI could have several unintended consequences. One major effect would be an influx of unverified systems into procurement pipelines. In a regulatory vacuum, vendors may market unvalidated AI solutions to officials lacking the technical expertise to discern quality from hype, further diminishing public trust in AI.

Additionally, state agencies would face higher costs and increased complexity without the authority to set their own regulatory guardrails. They would have to rely on ad-hoc contractual solutions instead of cohesive policies, contradicting Congress’s intent to avoid a patchwork approach.

Such limitations could also stifle the beneficial adoption of AI. If states cannot manage or mitigate AI risks, they may avoid deploying transformative tools altogether, hindering innovation where it is most needed.

Empowering States for Effective Governance

Instead of sidelining states, Congress should empower them. As Justice Louis Brandeis famously stated, states serve as “laboratories of democracy.” Many states are already experimenting with various incentives—such as transparency, accountability, and contestability—through public-private partnerships and regulatory sandboxes.

By working closely with local businesses, states can craft responsive regulations and quickly address emerging AI-related harms, thus preventing potential national crises. Supporting state-led innovation tailored to specific local conditions can enhance the United States’ leadership in AI on a global scale.

The Call for Collaboration

AI companies may seek to avoid a fragmented regulatory landscape, but the more pressing issue is the absence of clear and enforceable safeguards. Without established standards, businesses may exaggerate their product capabilities or label software as “AI” to evade oversight, potentially compromising safety in critical areas.

The future of AI governance hinges on collaboration. Congress and the Trump administration should reject proposals that block state AI laws—whether through the NDAA or executive actions—and instead work alongside states to develop a shared governance model that protects the public while fostering responsible innovation.

Empowering states is not a hindrance to progress; rather, it is essential for ensuring that AI strengthens America’s communities rather than undermining the institutions that serve them.
