Congress Must Preserve State Authority in AI Governance
Artificial intelligence (AI) is rapidly transforming how governments serve their constituents, enhancing everything from emergency response to licensing and zoning processes. However, as AI systems grow in power and prevalence, the question of how to govern these tools responsibly has become urgent.
That urgency intensified recently with a leaked executive order from the Trump administration that would prevent states from enforcing their own AI regulations. House leadership is also contemplating inserting similar preemption language into the National Defense Authorization Act (NDAA). Meanwhile, California’s new AI law has drawn federal pushback, with Senator Ted Cruz planning to introduce legislation to challenge it directly.
The Implications of Federal Governance
Washington’s push to strip states of their authority over AI governance would be a misstep. AI’s impact on state-level decisions is too broad, varied, and context-specific for federal deliberation alone to address effectively. That influence is not a distant concern; it is a current reality.
Officials at the federal, state, and local levels each bear distinct civic responsibilities: maximizing social benefits while minimizing potential harms for their constituencies. This includes striking the right balance of investments, incentives, and enforceable regulations. State and local governments, for example, must address requirements specific to planning, zoning, and licensing.
The Need for State Authority
AI is affecting many local services, and a one-size-fits-all approach is ineffective for tasks such as bike lane planning or pothole detection. States manage diverse services, including emergency response, housing, education, health, utilities, and public safety. Effective AI deployment must align with local laws, needs, and conditions.
A federal governance framework could serve the needs of innovative businesses developing AI systems while offering predictability and consistency for rapid deployment. However, Congress must recognize that a national framework should encompass more than just a moratorium on state laws. Innovative approaches are necessary to balance AI’s potential benefits with its risks, and states are actively working to find this balance.
Consequences of Stripping State Authority
Limiting states’ authority to regulate AI could have several unintended consequences. One major effect would be an influx of unverified systems into procurement pipelines. In a regulatory vacuum, vendors may market unvalidated AI solutions to officials lacking the technical expertise to discern quality from hype, further diminishing public trust in AI.
Additionally, without the authority to set their own regulatory guardrails, state agencies would face higher costs and greater complexity. They would have to rely on ad hoc contractual solutions instead of cohesive policies, producing the very patchwork Congress says it wants to avoid.
Such limitations could also stifle the beneficial adoption of AI technologies. If states cannot manage or mitigate AI risks, they may avoid deploying transformative tools, hindering innovation where it is most needed.
Empowering States for Effective Governance
Instead of sidelining states, Congress should empower them. As Justice Louis Brandeis famously observed, states serve as “laboratories of democracy.” Many are already experimenting with mechanisms for transparency, accountability, and contestability through public-private partnerships and regulatory sandboxes.
By working closely with local businesses, states can craft responsive regulations and address emerging AI-related harms quickly, before they escalate into national crises. Supporting state-led innovation tailored to local conditions can also strengthen the United States’ global leadership in AI.
The Call for Collaboration
AI companies may want to avoid a fragmented regulatory landscape, but the more pressing problem is the absence of clear, enforceable safeguards. Without established standards, businesses may exaggerate their products’ capabilities or attach the “AI” label to ordinary software, evading meaningful oversight and potentially compromising safety in critical areas.
The future of AI governance hinges on collaboration. Congress and the Trump administration should reject proposals that block state AI laws—whether through the NDAA or executive actions—and instead work alongside states to develop a shared governance model that protects the public while fostering responsible innovation.
Empowering states is not a hindrance to progress; rather, it is essential for ensuring that AI strengthens America’s communities rather than undermining the institutions that serve them.