Texas Risks Innovation with Aggressive AI Regulation

Texas’ Left Turn On AI Regulation

Texas has long been celebrated as a beacon of business-friendly policies and innovation. However, a surprising new piece of legislation — the Texas Responsible AI Governance Act (TRAIGA) — threatens to upend the state’s pro-business reputation. The proposed bill, introduced as HB 1709, is one of the most aggressive AI regulatory efforts yet seen in the United States, rivaling even California and Europe in its scope. While the legislation aims to address significant risks posed by artificial intelligence, its unintended consequences could derail innovation in Texas, particularly initiatives like the $500 billion Stargate Project.

What Texas’ AI Bill Proposes

TRAIGA takes a risk-based approach to regulation, likely inspired by the European Union’s AI Act. It introduces obligations for developers, deployers, and distributors of “high-risk” AI systems — a category defined expansively to include systems that make consequential decisions related to employment, finance, healthcare, housing, and education. The bill also bans AI systems that pose “unacceptable risk,” such as those used for biometric categorization or for manipulating human behavior without explicit consent. In addition, it mandates detailed record-keeping for generative AI models and holds companies accountable for “algorithmic discrimination.”

TRAIGA’s core provisions include:

  • Requiring developers and deployers of high-risk AI systems to conduct impact assessments or implement risk management plans.
  • Imposing a “reasonable care” negligence standard to hold developers and deployers accountable for any harm caused by their systems.
  • Creating a powerful regulatory body, the Texas Artificial Intelligence Council, tasked with issuing advisory opinions and crafting regulations.
  • Prohibiting certain AI applications, such as social scoring and subliminal manipulation.

The state attorney general is tasked with enforcing the law’s provisions, with the authority to investigate violations and bring civil actions against entities that fail to meet TRAIGA’s requirements. The attorney general can also impose substantial financial penalties for noncompliance.

On the surface, some of these objectives seem reasonable: policymakers have legitimate concerns about biased AI systems and systems that could harm consumers. TRAIGA also sensibly exempts small businesses and offers limited experimental “sandboxes” for research, training, and other pre-deployment activities. However, the core provisions of the bill — including impact assessments, compliance documentation, and the “reasonable care” negligence standard — impose significant burdens on companies developing or deploying AI.

A Strange Turn For Texas

What’s especially perplexing is how much TRAIGA borrows from regulatory approaches found in California and the European Union — places often criticized for stifling innovation through heavy-handed policies. For example, California’s SB 1047 sought to impose similar requirements on AI companies, but it was ultimately vetoed by Governor Gavin Newsom due to concerns about its feasibility and potential impact on the tech industry. TRAIGA, however, doubles down on this approach, introducing even stricter standards in some cases.

TRAIGA’s requirements for compliance documentation, such as high-risk reports and annual assessments, will create an immense administrative burden and undoubtedly slow the pace of AI development in Texas. Companies will need to update documentation whenever a “substantial modification” is made to their systems, which could occur frequently in fast-moving technology industries.

The proposed law also comes at a time when Texas is positioning itself as a leader in AI, hosting major investments from the Stargate Project — a collaboration between OpenAI, Oracle, SoftBank, and others to build a nationwide network of AI infrastructure. The project’s first data center is under construction in Abilene and promises to create thousands of jobs. TRAIGA’s regulatory hurdles could jeopardize these plans, prompting companies to reconsider investing in Texas.

Will TRAIGA Solve The Problems It Identifies?

The legislation’s primary target, algorithmic discrimination, is a real issue but one that’s already addressed by existing state and federal anti-discrimination laws. The regulatory approach taken in TRAIGA is deeply shortsighted for other reasons as well. The pace of technological progress far outstrips the ability of state governments to regulate effectively, risking a scenario where foreign companies set the standards and priorities for AI development.

For instance, the Chinese company DeepSeek recently unveiled an open-source AI model called R1, which rivals the sophistication of top-tier models from OpenAI. Even if stringent regulations are imposed on domestic tech companies, international competitors will continue to push the boundaries of innovation. With advancements unfolding at a pace measured in weeks rather than months or years, efforts to micromanage innovation from the outset are not only impractical but also risk ceding leadership to foreign competitors, allowing them to shape AI development according to their own values rather than those of the United States.

Conclusion

Texas has an opportunity to be a hub for AI innovation. However, the introduction of HB 1709 risks undermining that potential. It sends a troubling signal to businesses: Texas may no longer be the place where innovation is free to thrive.

Texas policymakers should consider whether their efforts will create barriers that push businesses to other states or even overseas. An overly aggressive approach would discourage the investment and creativity that have made Texas an attractive destination for tech companies. Getting this wrong could mean falling behind on one of the most transformative technologies of our time.
