Texas Takes the Lead: New AI Governance Law Unveiled

TRAIGA: Key Provisions of Texas’ New Artificial Intelligence Governance Act

On May 31, 2025, the Texas Legislature passed House Bill 149, known as the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). This legislation aims to establish a framework for the use and governance of artificial intelligence (AI) technologies in Texas, setting forth disclosure requirements, outlining prohibited uses of AI, and establishing civil penalties for violations.

Effective Date and Legislative Background

TRAIGA was signed into law on June 22, 2025, and is set to take effect on January 1, 2026. It is part of a growing trend among states, including California, Colorado, and Utah, which have also enacted AI legislation.

Applicability of TRAIGA

TRAIGA applies to two main groups: covered persons and entities, and governmental entities.

Covered Persons and Entities

Covered persons and entities are defined as any individual or organization that:

  • Promotes, advertises, or conducts business in Texas;
  • Produces products or services utilized by Texas residents;
  • Develops or deploys AI systems within Texas.

Developers and Deployers

A developer is defined as anyone who creates an AI system offered or used in Texas, while a deployer is someone who implements an AI system for use in the state.

Government Entities

A governmental entity includes any administrative unit of Texas that exercises governmental functions, although it specifically excludes hospital districts and institutions of higher education.

Consumers

A consumer refers to a Texas resident acting only in an individual or household context; individuals acting in a commercial or employment context are not consumers under TRAIGA.

Definition of Artificial Intelligence System

TRAIGA broadly defines an artificial intelligence system as any machine-based system that, for an explicit or implicit objective, infers from the inputs it receives how to generate outputs, including content, decisions, predictions, or recommendations.

Enforcement Mechanisms

The Texas Attorney General (AG) has exclusive authority to enforce TRAIGA, with limited exceptions for certain licensing state agencies. Importantly, TRAIGA does not allow for a private right of action.

Notice and Opportunity to Cure

Before initiating enforcement action, the AG must provide a written notice of violation to the alleged violator, who then has 60 days to:

  • Cure the violation;
  • Provide documentation of the cure;
  • Revise internal policies to prevent future violations.

Civil Penalties

TRAIGA establishes civil penalties, categorized as follows:

  • Curable violations: $10,000 – $12,000 per violation;
  • Uncurable violations: $80,000 – $200,000 per violation;
  • Ongoing violations: $2,000 – $40,000 per day.

Additionally, the AG may seek injunctive relief, attorneys’ fees, and investigative costs.

Safe Harbors

TRAIGA outlines safe harbors, under which a person is not liable if:

  • A third party misuses the AI system;
  • The person discovers the violation through its own testing, such as internal reviews or adversarial (red-team) testing;
  • The person substantially complies with a recognized risk framework, such as the NIST AI Risk Management Framework.

Operational Framework of TRAIGA

TRAIGA includes provisions for consumer disclosures and outlines prohibited uses of AI. While several obligations fall on governmental entities, the prohibited-use and enforcement provisions can also reach businesses that develop or deploy AI systems in Texas.

Disclosure to Consumers

Government agencies must inform consumers when they are interacting with AI, ensuring the disclosure is clear, conspicuous, and uses plain language.

Prohibited Uses of AI

TRAIGA prohibits certain uses of AI. Some prohibitions, such as social scoring and biometric identification, apply specifically to governmental entities, while others apply more broadly. Prohibited uses include:

  • Assigning social scores;
  • Biometric identification without consent;
  • Encouraging self-harm, crime, or violence;
  • Infringing on individual rights under the U.S. Constitution;
  • Unlawfully discriminating against protected classes;
  • Producing or distributing certain explicit content or child pornography.

Additionally, TRAIGA establishes a sandbox program for companies to test AI in a controlled environment without full regulatory compliance and creates the Texas Artificial Intelligence Council to address ethical and legal issues surrounding AI.

Compliance Considerations

Organizations should assess whether their AI systems meet TRAIGA’s definitions and consider the following compliance steps:

  • Conduct applicability assessments to inventory AI systems.
  • Analyze use cases to identify potential infringements.
  • Implement consumer notice requirements.
  • Align AI programs with recognized risk frameworks.
  • Participate in the sandbox program for testing.
  • Monitor the proposed federal moratorium on state AI laws, which, if enacted, could affect TRAIGA's enforcement.

In conclusion, TRAIGA represents a significant step in AI governance, aiming to balance innovation with the protection of consumer rights and ethical considerations.
