Texas and Virginia Reject Heavy AI Regulation in Favor of Innovation

Texas and Virginia Steer States Away from European-Style AI Regulation

Recent developments in Virginia and Texas signal a shift in the conversation around artificial intelligence (AI) policy toward a more pro-innovation stance. As of March 2025, more than 900 AI-related legislative proposals had been introduced across the states, an average of roughly 12 per day. The majority of these proposals aim to impose new regulations on algorithmic systems, reflecting an unprecedented level of legislative interest in emerging technologies.

Virginia’s Veto of AI Regulation

On March 24, Virginia Governor Glenn Youngkin vetoed a major AI regulatory measure, the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). Critics had warned that the legislation would harm the state’s position as a leader in digital innovation. Youngkin grounded his veto in the belief that the bill would:

  • Hinder the creation of new jobs
  • Discourage business investment
  • Limit access to innovative technology in Virginia

The Chamber of Progress also estimated that compliance with the bill would have cost AI developers nearly $30 million, a burden that would have fallen hardest on small tech startups.

Texas’ Reformed AI Governance Act

In Texas, Rep. Giovanni Capriglione introduced a revised version of the Texas Responsible AI Governance Act (TRAIGA) shortly after Virginia’s veto. The original bill had been heavily criticized for its stringent requirements, but the new iteration sheds many of its most onerous elements. The change marks a notable turn toward a more balanced approach to AI governance, one that aims to foster innovation while still addressing the potential risks associated with AI technologies.

Implications for AI Policy

The actions taken by Virginia and Texas may mark a turning point in the United States’ approach to AI policy. Many other states have been weighing regulatory measures modeled on the European Union (EU) framework, which tends to prioritize regulation over innovation. The moves by Virginia and Texas suggest a growing recognition that state AI policies should align with a national focus on fostering AI opportunity and investment, particularly in light of China’s recent advances in the AI sector.

Rejecting Fear-Based Regulation

The Virginia bill vetoed by Governor Youngkin is part of a broader trend of legislation pushed by the Multistate AI Policymaker Working Group (MAP-WG), which comprises lawmakers from over 45 states attempting to establish a consensus on AI regulation. Many of these bills echo the EU’s new AI Act and reflect the Biden administration’s earlier approach to AI policy, which was criticized for being fundamentally fear-based and for viewing AI as potentially harmful.

Lessons from Colorado’s AI Regulation

Colorado’s experience offers a cautionary tale for other states considering similar regulations. The state’s recent AI law drew backlash from small tech entrepreneurs, who argued that it imposed vague and overbroad mandates that stifled innovation. Governor Jared Polis acknowledged the potential downsides of such regulation and convened a task force to address concerns about compliance burdens on developers.

Conclusion

The recent actions by Virginia and Texas underscore how consequential state-level AI regulation can be. As states continue to navigate this complex policy landscape, the lessons from these developments could shape future legislation across the country. Rejecting overly burdensome regulation in favor of a more supportive framework may pave the way for a thriving U.S. AI sector, enabling innovators to drive progress without being hindered by excessive compliance costs.

Ultimately, a cohesive approach to AI regulation that promotes innovation while ensuring public safety and ethical considerations is essential for the continued advancement of this transformative technology.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...