Texas and Virginia Reject Heavy AI Regulation in Favor of Innovation

Recent developments in Virginia and Texas signal a shift in the artificial intelligence (AI) policy conversation toward a more optimistic, pro-innovation stance. As of March 2025, more than 900 AI-related legislative proposals have been introduced across the states, an average of roughly 12 per day. Most of these proposals would impose new regulations on algorithmic systems, reflecting an unprecedented level of legislative interest in emerging technologies.

Virginia’s Veto of AI Regulation

On March 24, Virginia Governor Glenn Youngkin vetoed a sweeping AI regulatory measure, the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). Critics had warned that the legislation would undermine the state's position as a leader in digital innovation. In his veto statement, Youngkin argued that the bill would:

  • Hinder the creation of new jobs
  • Discourage business investment
  • Limit access to innovative technology in Virginia

The Chamber of Progress estimated that complying with the bill would have cost AI developers nearly $30 million, a burden that would fall hardest on small tech startups.

Texas’ Revised AI Governance Act

In Texas, Rep. Giovanni Capriglione introduced a revised version of the Texas Responsible AI Governance Act (TRAIGA) shortly after Virginia’s veto. The original bill was heavily criticized for its stringent mandates, but the new iteration sheds many of the more onerous elements. The revision marks a notable shift toward a more balanced approach to AI governance, one that aims to foster innovation while still addressing the potential risks of AI technologies.

Implications for AI Policy

The actions taken by Virginia and Texas may mark a turning point in the American approach to AI policy. Many other states have been weighing regulatory measures modeled on the European Union’s (EU’s) framework, which tends to prioritize regulation over innovation. These moves suggest a growing recognition that state AI policies should align with the national focus on fostering AI opportunity and investment, particularly in light of China’s recent advances in the sector.

Rejecting Fear-Based Regulation

The Virginia bill vetoed by Governor Youngkin is part of a broader wave of legislation pushed by the Multistate AI Policymaker Working Group (MAP-WG), which brings together lawmakers from more than 45 states seeking consensus on AI regulation. Many of these bills echo the EU’s new AI Act and the Biden administration’s earlier approach to AI policy, which critics faulted as fundamentally fear-based, treating AI as presumptively harmful.

Lessons from Colorado’s AI Regulation

Colorado’s experience offers a cautionary tale for other states considering similar regulations. The state’s recent AI law drew backlash from small tech entrepreneurs, who argued that its vague and overbroad mandates stifled innovation. Governor Jared Polis acknowledged the potential downsides of the law and convened a task force to address concerns about compliance burdens on developers.

Conclusion

The recent actions by Virginia and Texas highlight how consequential AI regulation can be for innovation. As states continue to navigate this complex policy landscape, the lessons from these developments could shape legislation across the country. Rejecting overly burdensome mandates in favor of a more supportive framework may pave the way for a thriving American AI sector, one in which innovators can drive progress without being hindered by excessive compliance costs.

Ultimately, a cohesive approach to AI regulation, one that promotes innovation while safeguarding public safety and ethical standards, is essential to the continued advancement of this transformative technology.
