Texas Risks Innovation with Aggressive AI Regulation

Texas’ Left Turn On AI Regulation

Texas has long been celebrated as a beacon of business-friendly policies and innovation. However, a surprising new piece of legislation, the Texas Responsible AI Governance Act (TRAIGA), threatens to upend the state’s pro-business reputation. The proposed bill, introduced as HB 1709, is one of the most aggressive AI regulatory efforts yet seen in the United States, rivaling even California and the European Union in scope. While the legislation aims to address significant risks posed by artificial intelligence, its unintended consequences could derail innovation in Texas, particularly initiatives like the $500 billion Stargate Project.

What Texas’ AI Bill Proposes

TRAIGA takes a risk-based approach to regulation, likely inspired by the European Union’s AI Act. It introduces obligations for developers, deployers, and distributors of “high-risk” AI systems — a category defined expansively to include systems that make consequential decisions related to employment, finance, healthcare, housing, and education. The bill also bans AI systems that pose “unacceptable risk,” such as those used for biometric categorization or manipulating human behavior without explicit consent. Most fundamentally, it mandates detailed record-keeping for generative AI models and holds companies accountable for “algorithmic discrimination.”

TRAIGA’s core provisions include:

  • Requiring developers and deployers of high-risk AI systems to conduct impact assessments or implement risk management plans.
  • Imposing a “reasonable care” negligence standard to hold developers and deployers accountable for any harm caused by their systems.
  • Creating a powerful regulatory body, the Texas Artificial Intelligence Council, tasked with issuing advisory opinions and crafting regulations.
  • Prohibiting certain AI applications, such as social scoring and subliminal manipulation.

Enforcement falls to the state attorney general, who is authorized to investigate violations, bring civil actions against entities that fail to meet TRAIGA’s requirements, and impose substantial financial penalties for noncompliance.

On the surface, some of these objectives may seem reasonable, as policymakers may have valid concerns about biased AI systems or those that could harm consumers. TRAIGA also sensibly exempts small businesses and offers limited experimental “sandboxes” for research, training, and other pre-deployment activities. However, the core provisions of the bill — including impact assessments, compliance documentation, and the “reasonable care” negligence standard — impose significant burdens on companies developing or deploying AI.

A Strange Turn For Texas

What’s especially perplexing is how much TRAIGA borrows from regulatory approaches found in California and the European Union — places often criticized for stifling innovation through heavy-handed policies. For example, California’s SB 1047 sought to impose similar requirements on AI companies, but it was ultimately vetoed by Governor Gavin Newsom due to concerns about its feasibility and potential impact on the tech industry. TRAIGA, however, doubles down on this approach, introducing even stricter standards in some cases.

TRAIGA’s requirements for compliance documentation, such as high-risk reports and annual assessments, will create an immense administrative burden and undoubtedly slow the pace of AI development in Texas. Companies will need to update documentation whenever a “substantial modification” is made to their systems, which could occur frequently in fast-moving technology industries.

The proposed law also comes at a time when Texas is positioning itself as a leader in AI, hosting major investments from the Stargate Project — a collaboration between OpenAI, Oracle, SoftBank, and others to build a nationwide network of AI infrastructure. The project’s first data center is under construction in Abilene and promises to create thousands of jobs. TRAIGA’s regulatory hurdles could jeopardize these plans, prompting companies to reconsider investing in Texas.

Will TRAIGA Solve The Problems It Identifies?

The legislation’s primary target, algorithmic discrimination, is a real issue but one that’s already addressed by existing state and federal anti-discrimination laws. The regulatory approach taken in TRAIGA is deeply shortsighted for other reasons as well. The pace of technological progress far outstrips the ability of state governments to regulate effectively, risking a scenario where foreign companies set the standards and priorities for AI development.

For instance, the Chinese company DeepSeek recently unveiled an open-source AI model called R1 that rivals the sophistication of top-tier models from OpenAI. Even if stringent regulations are imposed on domestic tech companies, international competitors will continue to push the boundaries of innovation. With advancements unfolding at a pace measured in weeks rather than months or years, efforts to micromanage innovation from the outset are not only impractical but also risk ceding leadership to foreign competitors, allowing them to shape AI development according to their own values rather than those of the United States.

Conclusion

Texas has an opportunity to be a hub for AI innovation. However, the introduction of HB 1709 risks undermining that potential. It sends a troubling signal to businesses: Texas may no longer be the place where innovation is free to thrive.

Texas policymakers should consider whether their efforts will create barriers that push businesses to other states or even overseas. An overly aggressive approach will discourage the investment and creativity that have made Texas an attractive destination for tech companies. Getting this wrong could mean falling behind on one of the most transformative technologies of our time.
