Texas Takes a Stand: The TRAIGA AI Bill and Its Implications

Why a Texas AI Bill is Shaping Up as the Next Battleground Over U.S. AI Policy

A contentious battle over state-level AI regulation is emerging in the United States following the recent veto of California’s controversial AI safety bill. The focus has shifted to Texas, where Republican state House Representative Giovanni Capriglione has proposed the Texas Responsible AI Governance Act, known as TRAIGA.

Overview of TRAIGA

Formally introduced just before Christmas, TRAIGA seeks to outlaw certain uses of AI and impose significant compliance obligations on developers and deployers of any “high-risk” AI system, defined as one that is a substantial factor in consequential decision-making.

Unlike California’s previous efforts, which centered on theoretical risks associated with AI, TRAIGA primarily aims to prevent AI-powered discrimination. It also proposes grants for local AI companies and educational institutions to train workers in effective AI usage.

Implications and Concerns

The bill would have profound implications for AI deployment in Texas, the world’s eighth-largest economy. Concerns have been raised that TRAIGA could extend its reach beyond Texas, potentially impacting AI developers across the nation. Critics argue that this law exemplifies the risks posed by fragmented state-level AI regulations, particularly in the absence of a comprehensive federal AI law.

Hodan Omaar, a senior policy manager at the Information Technology and Innovation Foundation (ITIF), emphasizes that existing federal and state anti-discrimination laws are sufficient, warning that TRAIGA risks creating a patchwork of regulations that could hinder national progress toward a unified AI strategy.

Comparison to Other State Regulations

TRAIGA is comparable to Colorado’s AI Act, which passed last year and will take effect in February 2026. Both bills draw inspiration from the EU’s AI Act, which is currently the most comprehensive AI legislation worldwide.

TRAIGA, filed as HB 1709, is expected to meet its fate quickly because of Texas’s legislative calendar: the legislature considers bills only from January to June in odd-numbered years. Capriglione has proposed that TRAIGA take effect in September.

Regulatory Measures and Penalties

TRAIGA designates the Texas attorney general as the law’s enforcer, with the authority to impose fines of up to $200,000 per violation, alongside administrative fines of $40,000 per day for ongoing non-compliance.

The law would prohibit the use of AI for subliminal manipulation, social scoring, or inferring personal characteristics from biometric data. It would also ban the deployment of any AI capable of generating sexual deepfakes, a provision that has raised free speech concerns among experts.

Developer Responsibilities

Under TRAIGA, developers and deployers of AI systems must exercise “reasonable care” to protect consumers from algorithmic discrimination. They must report metrics on their models, including accuracy and transparency, and maintain detailed records of their training data, obligations that exceed the EU’s requirements.

Developers must withdraw or disable non-compliant models immediately and notify the Texas attorney general of any risks associated with algorithmic discrimination or data misuse.

Consumer Rights and Transparency

The bill grants consumers the right to appeal AI-driven decisions that adversely affect their rights and to understand how their personal data is used within AI systems. However, like Colorado’s legislation, TRAIGA does not give consumers a private right of action to sue for violations.

Formation of a Regulatory Body

TRAIGA proposes establishing a Texas AI Council attached to the governor’s office, primarily composed of public experts in the field. This council will explore how AI can enhance government efficiency and recommend reforms to stimulate AI development within the state.

While some experts worry that TRAIGA may stifle innovation and impose excessive oversight, proponents argue it sets a necessary framework for responsible AI usage.

Final Thoughts

As the legislative process unfolds, stakeholders, including developers and consumers, will closely monitor the developments surrounding TRAIGA. The outcomes of this bill may set critical precedents for AI regulation across the United States.
