States Lead the Charge in AI Regulation Amid Congressional Inaction

This month, the U.S. Senate voted to strike a controversial provision from President Trump's sweeping tax and spending package (officially the One Big Beautiful Bill Act) that would have blocked states and localities from regulating AI for the next ten years. The measure had ignited bipartisan backlash, with critics arguing that it would strip states of the ability to address harms posed by algorithmic discrimination, surveillance abuses, and AI-driven fraud.

The revival of state action comes amid growing frustration over the lack of a unified federal AI framework. While the business community has lobbied for preemptive national legislation to avoid regulatory fragmentation, many states view themselves as essential watchdogs in the absence of timely congressional oversight. Meanwhile, industry groups continue to press Washington to leave room for innovation, especially in sectors using AI for fraud detection and efficiency gains.

With the moratorium dead, more than 1,000 AI-related bills have surged back into legislative play across all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C. These measures span a wide range of issues, including biometric data protection, algorithmic transparency, and restrictions on AI tools used in hiring, criminal justice, and education. Lawmakers in California, Colorado, New York, and Texas are crafting frameworks that impose risk assessments, mandate human oversight, and prohibit AI applications deemed discriminatory or unsafe.

The Decentralized Model of AI Governance

The Trump administration has tended to favor innovation and industry self-regulation over binding federal policies and legislative mandates. David Sacks, a technology investor who co-chairs the President's Council of Advisors on Science and Technology, has put forth a plan for federal AI policy that closely reflects what the AI industry is lobbying for. For now, though, the collapse of the moratorium has reaffirmed a decentralized model of AI governance in the U.S.

As Congress weighs its next steps, states are forging ahead and setting the tone for how algorithmic accountability and AI safety will be handled on the ground. This decentralization has sparked both enthusiasm and concern. Proponents argue that states are better positioned to craft laws that reflect local values and needs. Critics, on the other hand, worry that the growing complexity of a fragmented regulatory landscape will impose heavy compliance burdens on businesses operating across state lines. Legal experts point to overlapping or contradictory requirements that could stifle innovation and increase legal uncertainty for developers.

Common Principles Among State Legislation

In the absence of a national framework, state legislatures continue to lead the charge. Despite regional differences, many of the legislative efforts share common principles. Most emphasize transparency, accountability, and harm prevention. Common provisions include mandatory disclosures when AI is used in hiring, lending, housing, or criminal justice; bans or restrictions on high-risk applications like facial recognition; and requirements for human oversight in automated decision-making.
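To make these provisions concrete, here is a minimal, illustrative Python sketch of what a human-oversight gate for automated hiring decisions could look like: adverse outcomes are flagged for human review, and an AI-use disclosure is attached to every decision. The threshold, field names, and disclosure text are hypothetical, invented for this example rather than drawn from any specific statute.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    applicant_id: str
    score: float                 # model output; higher means a stronger candidate
    outcome: str                 # "advance" or "reject"
    requires_human_review: bool  # adverse outcomes must be confirmed by a person
    disclosure: str              # AI-use notice shown to the applicant

def gate_decision(applicant_id: str, score: float, threshold: float = 0.5) -> Decision:
    """Apply a screening threshold, flag adverse outcomes for human review,
    and attach the kind of disclosure many state bills would require."""
    outcome = "advance" if score >= threshold else "reject"
    return Decision(
        applicant_id=applicant_id,
        score=score,
        outcome=outcome,
        requires_human_review=(outcome == "reject"),  # adverse decision -> human check
        disclosure=(
            "An automated tool was used in evaluating this application. "
            f"Notice generated {datetime.now(timezone.utc).date().isoformat()}."
        ),
    )

# A low score is flagged for human review before any rejection becomes final.
print(gate_decision("A-1001", 0.31))
```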

States such as California, Colorado, New York, and Utah have also incorporated risk assessments and bias mitigation protocols into their laws, signaling a growing consensus on the need for ethical AI governance. California’s regulations build on the California Consumer Privacy Act, introducing rules around automated decision-making technologies. Colorado’s AI Act mandates safeguards for high-risk AI systems that affect access to essential services. New York now requires public agencies to disclose their use of AI tools and mandates regular bias assessments.

Healthcare and Public Safety Regulations

In states like Kentucky and Maryland, lawmakers are targeting healthcare-related AI and biometric data protections. Meanwhile, states like Texas and Montana have moved to regulate AI use in public safety and criminal sentencing contexts.

Federal Oversight and Future Developments

One of the most significant developments expected in the coming months is the role of the Senate Committee on Commerce in shaping federal AI oversight. A major focus of attention is Senate Bill 1290, the Artificial Intelligence and Critical Technology Workforce Framework Act, introduced by Senator Gary Peters (D-MI) with bipartisan support.

The bill aims to strengthen the U.S. workforce in AI and other critical technologies. To that end, it tasks the National Institute of Standards and Technology (NIST) with developing national workforce frameworks for AI and critical technologies that define the roles, skills, and knowledge these jobs require, building on the existing NICE (National Initiative for Cybersecurity Education) framework for cybersecurity.

"As the artificial intelligence sector continues to grow and play an increasingly important role in everything from health care to finance to agriculture, it's crucial that we have a highly skilled workforce ready to drive innovation and keep the United States at the forefront of this industry," Peters said in introducing the bill.

Incremental Approaches to Regulation

S.1290 is seen as part of a broader effort to align national security, economic competitiveness, and educational pipelines with AI governance. It has the support of various industry groups. The bill is expected to receive a dedicated hearing by the Senate Commerce Committee in the coming weeks.

Lawmakers appear to be pursuing an incremental approach. Recent federal measures, such as the bipartisan bill to ban Chinese-developed AI in federal agencies and the passage of the Take It Down Act to combat deepfake abuse, illustrate targeted responses to specific AI threats. These narrower bills may serve as a blueprint for broader legislation down the road.

State Laboratories for AI Regulation

In the meantime, the states have become de facto laboratories for AI regulation. Their legislative frameworks are already shaping how companies design, deploy, and govern their technologies. Whether through mandatory algorithmic audits, disclosure requirements, or bans on deceptive applications, states are setting the tone for AI accountability.
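As a hedged illustration of what disclosure and audit requirements might translate to in practice, the Python sketch below writes one JSON record per covered automated decision, the kind of log a company could retain for a periodic bias audit. The schema, field names, and system name are invented for the example, not taken from any state law.

```python
import json
from datetime import datetime, timezone

def audit_record(system: str, decision_type: str, inputs_summary: dict, outcome: str) -> str:
    """Serialize one auditable entry for a covered automated decision."""
    return json.dumps({
        "system": system,
        "decision_type": decision_type,    # e.g. "hiring", "lending", "housing"
        "inputs_summary": inputs_summary,  # aggregate features only; no raw personal data
        "outcome": outcome,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# One log line per covered decision, ready for a periodic bias audit.
print(audit_record("resume-screener-v2", "hiring", {"years_experience": 7}, "advance"))
```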

Amid this rapidly evolving landscape, the business community has ramped up efforts to pressure Congress into enacting a unified national AI framework. Major tech firms, trade associations, and cross-industry coalitions argue that without a coherent federal standard, the patchwork of state laws will hinder innovation and complicate nationwide deployment.

Their message to lawmakers is clear: a national framework is essential not only for regulatory clarity but also for maintaining U.S. competitiveness in the global AI race. Industry advocates are lobbying for legislation that balances innovation-friendly guardrails with meaningful accountability, drawing comparisons to Europe’s AI Act but urging a uniquely American approach.

Tech companies, including Microsoft, Google, Meta, Amazon, Nvidia, OpenAI, and Anthropic, are calling for national guardrails that preempt state laws and reduce compliance overhead. Payment processors and financial institutions have joined the chorus, warning that state-level restrictions could interfere with fraud detection systems powered by AI.

As the landscape continues to shift, the need for a comprehensive and coherent framework for AI governance becomes increasingly urgent.
