States Lead the Charge in AI Regulation Amid Congressional Inaction

This month, the U.S. Senate voted to strike a controversial provision from President Trump's sweeping tax and spending package, officially the One Big Beautiful Bill Act, that would have blocked states and localities from regulating AI for the next ten years. The measure had ignited bipartisan backlash, with critics arguing that it would strip states of the ability to address harms from algorithmic discrimination, surveillance abuses, and AI-driven fraud.

The revival of state action comes amid growing frustration over the lack of a unified federal AI framework. While the business community has lobbied for preemptive national legislation to avoid regulatory fragmentation, many states view themselves as essential watchdogs in the absence of timely congressional oversight. Meanwhile, industry groups continue to press Washington to leave room for innovation, especially in sectors using AI for fraud detection and efficiency gains.

With the moratorium dead, more than 1,000 AI-related bills have surged back into legislative play across all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C. These measures span a wide range of issues, including biometric data protection, algorithmic transparency, and restrictions on AI tools used in hiring, criminal justice, and education. Lawmakers in California, Colorado, New York, and Texas are crafting frameworks that impose risk assessments, mandate human oversight, and prohibit AI applications deemed discriminatory or unsafe.

The Decentralized Model of AI Governance

The Trump administration has tended to favor innovation and industry self-regulation over federal policies and legislative mandates. David Sacks, a technology investor and chair of the President's Council of Advisors on Science and Technology, has put forward a federal AI plan that closely mirrors what the AI industry is lobbying for. For now, though, the collapse of the moratorium has reaffirmed a decentralized model of AI governance in the U.S.

As Congress weighs its next steps, states are forging ahead and setting the tone for how algorithmic accountability and AI safety will be handled on the ground. This decentralization has sparked both enthusiasm and concern. Proponents argue that states are better positioned to craft laws that reflect local values and needs. Critics, on the other hand, worry that the growing complexity of a fragmented regulatory landscape will impose heavy compliance burdens on businesses operating across state lines. Legal experts point to overlapping or contradictory requirements that could stifle innovation and increase legal uncertainty for developers.

Common Principles Among State Legislation

In the absence of a national framework, state legislatures continue to lead the charge. Despite regional differences, many of the legislative efforts share common principles. Most emphasize transparency, accountability, and harm prevention. Common provisions include mandatory disclosures when AI is used in hiring, lending, housing, or criminal justice; bans or restrictions on high-risk applications like facial recognition; and requirements for human oversight in automated decision-making.

States such as California, Colorado, New York, and Utah have also incorporated risk assessments and bias mitigation protocols into their laws, signaling a growing consensus on the need for ethical AI governance. California’s regulations build on the California Consumer Privacy Act, introducing rules around automated decision-making technologies. Colorado’s AI Act mandates safeguards for high-risk AI systems that affect access to essential services. New York now requires public agencies to disclose their use of AI tools and mandates regular bias assessments.

Healthcare and Public Safety Regulations

In states like Kentucky and Maryland, lawmakers are targeting healthcare-related AI and biometric data protections. Meanwhile, Texas and Montana have moved to regulate AI use in public safety and criminal sentencing contexts.

Federal Oversight and Future Developments

One of the most closely watched developments in the coming months will be the Senate Commerce Committee's role in shaping federal AI oversight. A major focus is Senate Bill 1290, the Artificial Intelligence and Critical Technology Workforce Framework Act, introduced by Senator Gary Peters (D-MI) with bipartisan support.

The bill aims to strengthen the U.S. workforce in AI and other critical technologies. It tasks the National Institute of Standards and Technology (NIST) with developing national workforce frameworks that define the roles, skills, and knowledge these jobs require, building on NIST's existing NICE framework for cybersecurity.

In introducing the bill, Peters framed the stakes this way: "As the artificial intelligence sector continues to grow and play an increasingly important role in everything from health care to finance to agriculture, it's crucial that we have a highly skilled workforce ready to drive innovation and keep the United States at the forefront of this industry."

Incremental Approaches to Regulation

S.1290 is seen as part of a broader effort to align national security, economic competitiveness, and educational pipelines with AI governance, and it has drawn support from a range of industry groups. The bill is expected to receive a dedicated hearing before the Senate Commerce Committee in the coming weeks.

Lawmakers appear to be pursuing an incremental approach. Recent federal measures, such as the bipartisan bill to ban Chinese-developed AI in federal agencies and the passage of the TAKE IT DOWN Act to combat deepfake abuse, illustrate targeted responses to specific AI threats. These narrower bills may serve as a blueprint for broader legislation down the road.

State Laboratories for AI Regulation

In the meantime, the states have become de facto laboratories for AI regulation. Their legislative frameworks are already shaping how companies design, deploy, and govern their technologies. Whether through mandatory algorithmic audits, disclosure requirements, or bans on deceptive applications, states are setting the tone for AI accountability.

Amid this rapidly evolving landscape, the business community has ramped up efforts to pressure Congress into enacting a unified national AI framework. Major tech firms, trade associations, and cross-industry coalitions argue that without a coherent federal standard, the patchwork of state laws will hinder innovation and complicate nationwide deployment.

Their message to lawmakers is clear: a national framework is essential not only for regulatory clarity but also for maintaining U.S. competitiveness in the global AI race. Industry advocates are lobbying for legislation that balances innovation-friendly guardrails with meaningful accountability, drawing comparisons to Europe’s AI Act but urging a uniquely American approach.

Tech companies, including Microsoft, Google, Meta, Amazon, Nvidia, OpenAI, and Anthropic, are calling for national guardrails that preempt state laws and reduce compliance overhead. Payment processors and financial institutions have joined the chorus, warning that state-level restrictions could interfere with fraud detection systems powered by AI.

As the landscape continues to shift, the need for a comprehensive and coherent framework for AI governance becomes increasingly urgent.
