Texas Leads the Way in AI Regulation with TRAIGA

New AI Regulations Come into Play with the Texas Responsible Artificial Intelligence Governance Act

The rapid advancement of artificial intelligence (“AI”) has outpaced existing U.S. regulatory frameworks. At present, AI regulation occurs primarily on a state-by-state basis. Most states that have enacted AI laws rely on targeted regulations for particular use cases or fields. Texas has established one of the more comprehensive approaches with its Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”).

TRAIGA was signed into law on June 22, 2025, and took effect on January 1, 2026, with implications beyond the borders of Texas.

Key Prohibitions Under TRAIGA

TRAIGA addresses the development or deployment of AI systems and prohibits the following:

  1. Developing or deploying an AI system with the intent to manipulate human behavior to incite or encourage self-harm, harm to others, or criminal activity.
  2. Developing or deploying an AI system with the sole intent to infringe, restrict, or impair rights guaranteed under the Constitution.
  3. Developing or deploying an AI system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.
  4. Developing or deploying an AI system with the sole intent of producing or distributing certain sexually explicit content.

The Texas Business & Commerce Code defines AI systems as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations that can influence physical or virtual environments.”

TRAIGA applies to any person or entity that does business in Texas or with Texas residents, extending its reach well beyond the state's borders. Also notable is that TRAIGA addresses both development and deployment: the law reaches not only AI developers but any entity that deploys such systems.

Shifting Regulatory Focus

The emphasis on intent represents a significant shift from EU-style risk-based assessments, which have also been adopted in Colorado. By taking this approach, TRAIGA offers a clear and potentially more easily operationalized framework compared to traditional risk-based methodologies.

Establishment of Advisory Bodies

TRAIGA also creates a state advisory body in the form of an AI Council to provide oversight and guidance. In addition, TRAIGA establishes a regulatory sandbox program in which companies can test AI systems in a controlled environment for up to 36 months while receiving protection from certain enforcement actions.

Enforcement Mechanisms

The Texas Attorney General has exclusive authority to bring actions under TRAIGA; the act provides no private right of action. Before bringing an action, the Attorney General must provide notice and a 60-day opportunity to cure. Civil penalties range from $10,000 to $12,000 per curable violation and $80,000 to $200,000 per incurable violation, plus $2,000 to $40,000 per day for a continuing violation.

Comparative Analysis with Other States

TRAIGA’s approach can be compared to other states’ regulations. For instance, Colorado has enacted comprehensive AI legislation through the Colorado AI Act (“CAIA”). This statute implements a risk-based framework, requiring developers and deployers of high-risk AI systems to conduct impact assessments and provide clear consumer notifications, imposing a heavier compliance burden than TRAIGA.

Utah’s Artificial Intelligence Policy Act primarily addresses consumer notification and deceptive practices, giving it a narrower scope than TRAIGA. California has adopted several targeted regulations addressing specific AI applications, such as chatbot oversight, election integrity measures, and deepfake restrictions. Texas’s framework, by contrast, is more straightforward than California’s patchwork of targeted rules.

Practical Steps for Compliance

Any entity conducting business in Texas or with Texas residents should carefully assess its risk exposure and review its business policies accordingly. Organizations should consider the following steps:

  1. Map your Texas exposure: Identify AI systems developed, offered, or deployed in Texas.
  2. Update AI policies: Explicitly prohibit AI uses intended to incite self-harm, harm to others, or criminal activity; to unlawfully discriminate against a protected class; to infringe constitutional rights; or to produce or distribute prohibited sexually explicit content.
  3. Evaluate the Texas sandbox: Assess whether piloting next-generation AI features within the Texas sandbox program would mitigate regulatory risk.
