Texas Risks Innovation with Aggressive AI Regulation

Texas’ Left Turn On AI Regulation

Texas has long been celebrated as a beacon of business-friendly policies and innovation. However, a surprising new piece of legislation, the Texas Responsible AI Governance Act (TRAIGA), threatens to upend the state’s pro-business reputation. The proposed bill, introduced as HB 1709, is one of the most aggressive AI regulatory efforts yet seen in the United States, rivaling even California and the European Union in scope. While the legislation aims to address significant risks posed by artificial intelligence, its potential unintended consequences could derail innovation in Texas, putting initiatives like the $500 billion Stargate Project at risk.

What Texas’ AI Bill Proposes

TRAIGA takes a risk-based approach to regulation, likely inspired by the European Union’s AI Act. It introduces obligations for developers, deployers, and distributors of “high-risk” AI systems, a category defined expansively to include systems that make consequential decisions related to employment, finance, healthcare, housing, and education. The bill also bans AI systems that pose “unacceptable risk,” such as those used for biometric categorization or for manipulating human behavior without explicit consent. In addition, it mandates detailed record-keeping for generative AI models and holds companies accountable for “algorithmic discrimination.”

TRAIGA’s core provisions include:

  • Requiring developers and deployers of high-risk AI systems to conduct impact assessments or implement risk management plans.
  • Imposing a “reasonable care” negligence standard to hold developers and deployers accountable for any harm caused by their systems.
  • Creating a powerful regulatory body, the Texas Artificial Intelligence Council, tasked with issuing advisory opinions and crafting regulations.
  • Prohibiting certain AI applications, such as social scoring and subliminal manipulation.

The state attorney general is tasked with enforcing the law, with authority to investigate violations and to bring civil actions against entities that fail to meet TRAIGA’s requirements. The attorney general can also impose substantial financial penalties for noncompliance.

On the surface, some of these objectives may seem reasonable; policymakers have valid concerns about biased AI systems and about systems that could harm consumers. TRAIGA also sensibly exempts small businesses and offers limited experimental “sandboxes” for research, training, and other pre-deployment activities. However, the bill’s core provisions, including impact assessments, compliance documentation, and the “reasonable care” negligence standard, impose significant burdens on companies developing or deploying AI.

A Strange Turn For Texas

What’s especially perplexing is how much TRAIGA borrows from regulatory approaches found in California and the European Union — places often criticized for stifling innovation through heavy-handed policies. For example, California’s SB 1047 sought to impose similar requirements on AI companies, but it was ultimately vetoed by Governor Gavin Newsom due to concerns about its feasibility and potential impact on the tech industry. TRAIGA, however, doubles down on this approach, introducing even stricter standards in some cases.

TRAIGA’s requirements for compliance documentation, such as high-risk reports and annual assessments, will create an immense administrative burden and undoubtedly slow the pace of AI development in Texas. Companies will need to update documentation whenever a “substantial modification” is made to their systems, which could occur frequently in fast-moving technology industries.

The proposed law also comes at a time when Texas is positioning itself as a leader in AI, hosting major investments from the Stargate Project — a collaboration between OpenAI, Oracle, SoftBank, and others to build a nationwide network of AI infrastructure. The project’s first data center is under construction in Abilene and promises to create thousands of jobs. TRAIGA’s regulatory hurdles could jeopardize these plans, prompting companies to reconsider investing in Texas.

Will TRAIGA Solve The Problems It Identifies?

The legislation’s primary target, algorithmic discrimination, is a real issue but one that’s already addressed by existing state and federal anti-discrimination laws. The regulatory approach taken in TRAIGA is deeply shortsighted for other reasons as well. The pace of technological progress far outstrips the ability of state governments to regulate effectively, risking a scenario where foreign companies set the standards and priorities for AI development.

For instance, the Chinese company DeepSeek recently unveiled an open-source AI model called R1 that rivals the sophistication of top-tier models from OpenAI. Even if stringent regulations are imposed on domestic tech companies, international competitors will continue to push the boundaries of innovation. With advancements unfolding at a pace measured in weeks rather than months or years, efforts to micromanage innovation from the outset are not only impractical but also risk ceding leadership to foreign competitors, allowing them to shape AI development according to their own values rather than those of the United States.

Conclusion

Texas has an opportunity to be a hub for AI innovation. However, the introduction of HB 1709 risks undermining that potential. It sends a troubling signal to businesses: Texas may no longer be the place where innovation is free to thrive.

Texas policymakers should consider whether their efforts will create barriers that push businesses to other states or even overseas. An overly aggressive approach will discourage the investment and creativity that have made Texas an attractive destination for tech companies. Getting this wrong could mean falling behind on one of the most transformative technologies of our time.
