States Lead the Charge in AI Regulation Amid Congressional Inaction

This month, the U.S. Senate voted to strike down a controversial provision in President Trump’s sweeping tax and spending package – officially the One Big Beautiful Bill Act – that would have blocked states and localities from regulating AI for the next ten years. The measure had ignited bipartisan backlash, with critics arguing that it would strip states of the ability to address harms posed by algorithmic discrimination, surveillance abuses, and AI-driven fraud.

The revival of state action comes amid growing frustration over the lack of a unified federal AI framework. While the business community has lobbied for preemptive national legislation to avoid regulatory fragmentation, many states view themselves as essential watchdogs in the absence of timely congressional oversight. Meanwhile, industry groups continue to press Washington to leave room for innovation, especially in sectors using AI for fraud detection and efficiency gains.

With the moratorium dead, more than 1,000 AI-related bills have surged back into legislative play across all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C. These measures span a wide range of issues, including biometric data protection, algorithmic transparency, and restrictions on AI tools used in hiring, criminal justice, and education. Lawmakers in California, Colorado, New York, and Texas are crafting frameworks that impose risk assessments, mandate human oversight, and prohibit AI applications deemed discriminatory or unsafe.

The Decentralized Model of AI Governance

The Trump administration has tended to favor innovation and industry self-regulation over federal policies and legislative mandates. David Sacks, a technology investor and chair of the President's Council of Advisors on Science and Technology, has put forward a federal AI plan that closely reflects what the AI industry is lobbying for. For now, though, the collapse of the moratorium has reaffirmed a decentralized model of AI governance in the U.S.

As Congress weighs its next steps, states are forging ahead and setting the tone for how algorithmic accountability and AI safety will be handled on the ground. This decentralization has sparked both enthusiasm and concern. Proponents argue that states are better positioned to craft laws that reflect local values and needs. Critics, on the other hand, worry that the growing complexity of a fragmented regulatory landscape will impose heavy compliance burdens on businesses operating across state lines. Legal experts point to overlapping or contradictory requirements that could stifle innovation and increase legal uncertainty for developers.

Common Principles Among State Legislation

In the absence of a national framework, state legislatures continue to lead the charge. Despite regional differences, many of the legislative efforts share common principles. Most emphasize transparency, accountability, and harm prevention. Common provisions include mandatory disclosures when AI is used in hiring, lending, housing, or criminal justice; bans or restrictions on high-risk applications like facial recognition; and requirements for human oversight in automated decision-making.

States such as California, Colorado, New York, and Utah have also incorporated risk assessments and bias mitigation protocols into their laws, signaling a growing consensus on the need for ethical AI governance. California’s regulations build on the California Consumer Privacy Act, introducing rules around automated decision-making technologies. Colorado’s AI Act mandates safeguards for high-risk AI systems that affect access to essential services. New York now requires public agencies to disclose their use of AI tools and mandates regular bias assessments.

Healthcare and Public Safety Regulations

Lawmakers in Kentucky and Maryland are targeting healthcare-related AI and biometric data protections, while Texas and Montana have moved to regulate AI use in public safety and criminal sentencing.

Federal Oversight and Future Developments

One of the most significant developments expected in the coming months is the role of the Senate Committee on Commerce in shaping federal AI oversight. A major focus of attention is Senate Bill 1290, the Artificial Intelligence and Critical Technology Workforce Framework Act, introduced by Senator Gary Peters (D-MI) with bipartisan support.

The bill aims to strengthen the U.S. workforce in AI and other critical technologies. It tasks the National Institute of Standards and Technology (NIST) with developing national workforce frameworks for AI and critical technologies that define the roles, skills, and knowledge these jobs require, building on NIST's existing NICE framework for cybersecurity.

As artificial intelligence plays an increasingly important role in everything from health care to finance to agriculture, the bill's supporters argue that a highly skilled workforce is crucial to drive innovation and keep the United States at the forefront of the industry.

Incremental Approaches to Regulation

S.1290 is seen as part of a broader effort to align national security, economic competitiveness, and educational pipelines with AI governance. It has the support of various industry groups. The bill is expected to receive a dedicated hearing by the Senate Commerce Committee in the coming weeks.

Lawmakers appear to be pursuing an incremental approach. Recent federal measures, such as the bipartisan bill to ban Chinese-developed AI in federal agencies and the passage of the Take It Down Act to combat deepfake abuse, illustrate targeted responses to specific AI threats. These narrower bills may serve as a blueprint for broader legislation down the road.

State Laboratories for AI Regulation

In the meantime, the states have become de facto laboratories for AI regulation. Their legislative frameworks are already shaping how companies design, deploy, and govern their technologies. Whether through mandatory algorithmic audits, disclosure requirements, or bans on deceptive applications, states are setting the tone for AI accountability.

Amid this rapidly evolving landscape, the business community has ramped up efforts to pressure Congress into enacting a unified national AI framework. Major tech firms, trade associations, and cross-industry coalitions argue that without a coherent federal standard, the patchwork of state laws will hinder innovation and complicate nationwide deployment.

Their message to lawmakers is clear: a national framework is essential not only for regulatory clarity but also for maintaining U.S. competitiveness in the global AI race. Industry advocates are lobbying for legislation that balances innovation-friendly guardrails with meaningful accountability, drawing comparisons to Europe’s AI Act but urging a uniquely American approach.

Tech companies, including Microsoft, Google, Meta, Amazon, Nvidia, OpenAI, and Anthropic, are calling for national guardrails that preempt state laws and reduce compliance overhead. Payment processors and financial institutions have joined the chorus, warning that state-level restrictions could interfere with fraud detection systems powered by AI.

As the landscape continues to shift, the need for a comprehensive and coherent framework for AI governance becomes increasingly urgent.
