More States Reject Fear-Based AI Regulation
Recent developments in Virginia and Texas suggest that the debate over artificial intelligence (AI) policy is shifting in a more positive, pro-innovation direction. With over 900 AI-related legislative proposals introduced in just three months, the regulatory landscape is evolving rapidly.
Vetoes Signal a Change in Direction
On March 24, Virginia's Republican Governor Glenn Youngkin vetoed a major AI regulatory measure, the “High-Risk Artificial Intelligence Developer and Deployer Act” (HB 2094). In his veto statement, Youngkin warned that the bill would hinder job creation, deter business investment, and restrict access to innovative technology in Virginia. The Chamber of Progress estimated that compliance with the legislation would cost AI developers nearly $30 million, a burden that would fall hardest on small tech startups.
In Texas, GOP Representative Giovanni Capriglione introduced an updated version of the “Texas Responsible AI Governance Act” (TRAIGA). The original bill would have imposed heavy regulations on AI innovation; the revised version sheds many of its most stringent elements in response to widespread opposition.
Rejecting the EU Approach
The Virginia bill Youngkin vetoed was part of a broader push by the Multistate AI Policymaker Working Group (MAP-WG), which has advocated similar legislation in more than 45 states. These proposals largely emulate the European Union’s new AI Act and the Biden administration’s AI policy framework, both of which have been criticized as fundamentally fear-based.
In contrast, Youngkin’s veto and the changes to Texas’s TRAIGA reflect a growing recognition among some state lawmakers of the potential costs and complexities associated with heavy-handed regulation. The legislative moves in these states could represent a turning point in AI policy, aligning more closely with a national focus on AI opportunity and investment.
Concerns with Preemptive Regulation
Many states continue to propose regulatory frameworks that echo the Biden administration’s cautionary approach, treating AI primarily as a risk rather than an opportunity. These MAP-WG bills aim to preemptively regulate potential harms from AI systems, focusing in particular on algorithmic bias and other problems attributed to so-called high-risk applications.
However, existing state and federal laws, including civil rights protections, are already equipped to address these harms if they materialize. Critics argue that the new regulatory approaches add unnecessary complexity and cost for innovators. Youngkin’s veto underscored the importance of fostering an environment conducive to innovation rather than imposing burdensome mandates.
A Lesson from Colorado
Colorado’s experience serves as a cautionary tale for other states. The state passed an AI discrimination bill in 2024, but significant problems emerged even before it took effect, with entrepreneurs warning that its vague and overly broad mandates would stifle innovation. Governor Jared Polis acknowledged these challenges, emphasizing the need for a cohesive federal approach to mitigate compliance burdens and ensure equitable access to AI technologies.
Conclusion
The recent actions taken in Virginia and Texas send a clear message to state lawmakers: imposing stringent regulations based on fear-driven models could stifle innovation and hinder the growth of the AI sector. Rather than adopting the European regulatory model, states should consider alternative approaches that encourage innovation while addressing genuine concerns about AI technology.
As the landscape of AI policy continues to evolve, it is crucial for lawmakers to prioritize frameworks that empower innovators, ensuring that the U.S. remains at the forefront of AI development and implementation.