TRAIGA: Key Provisions of Texas’ New Artificial Intelligence Governance Act
On May 31, 2025, the Texas Legislature passed House Bill 149, known as the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). This legislation aims to establish a framework for the use and governance of artificial intelligence (AI) technologies in Texas, setting forth disclosure requirements, outlining prohibited uses of AI, and establishing civil penalties for violations.
Effective Date and Legislative Background
TRAIGA was signed into law on June 22, 2025, and is set to take effect on January 1, 2026. It is part of a growing trend among states, including California, Colorado, and Utah, which have also enacted AI legislation.
Applicability of TRAIGA
TRAIGA applies to two main groups: covered persons and entities, and governmental entities.
Covered Persons and Entities
Covered persons and entities are defined as any individual or organization that:
- Promotes, advertises, or conducts business in Texas;
- Produces products or services utilized by Texas residents;
- Develops or deploys AI systems within Texas.
Developers and Deployers
A developer is defined as anyone who creates an AI system offered or used in Texas, while a deployer is someone who implements an AI system for use in the state.
Government Entities
A governmental entity includes any administrative unit of Texas that exercises governmental functions, although it specifically excludes hospital districts and institutions of higher education.
Consumers
A consumer is a Texas resident acting only in an individual or household context; individuals acting in a commercial or employment context do not qualify as consumers under the Act.
Definition of Artificial Intelligence System
TRAIGA broadly defines an artificial intelligence system as any machine-based system that, for any explicit or implicit objective, infers from the inputs it receives how to generate outputs, such as content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
Enforcement Mechanisms
The Texas Attorney General (AG) has exclusive authority to enforce TRAIGA, with limited exceptions for certain state licensing agencies. Importantly, TRAIGA does not provide a private right of action.
Notice and Opportunity to Cure
Before initiating an enforcement action, the AG must provide written notice of the alleged violation to the violator, who then has 60 days to:
- Cure the violation;
- Provide documentation of the cure;
- Revise internal policies to prevent future violations.
Civil Penalties
TRAIGA establishes civil penalties, categorized as follows:
- Curable violations: $10,000 – $12,000 per violation;
- Uncurable violations: $80,000 – $200,000 per violation;
- Ongoing violations: $2,000 – $40,000 per day.
Additionally, the AG may seek injunctive relief, attorneys’ fees, and investigative costs.
Safe Harbors
TRAIGA outlines safe harbors under which a person is not liable if, for example:
- A third party misuses the AI system;
- The violation is discovered through the person's own internal testing or audits;
- The person substantially complies with recognized standards such as the NIST AI Risk Management Framework.
Operational Framework of TRAIGA
TRAIGA includes provisions for consumer disclosures and outlines prohibited uses of AI, which may impact businesses.
Disclosure to Consumers
Government agencies must inform consumers when they are interacting with an AI system; the disclosure must be clear and conspicuous and written in plain language.
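For an AI interaction delivered through a web interface, such a disclosure might be surfaced as a simple notice shown before the user begins interacting. The TypeScript snippet below is a minimal, hypothetical sketch: the notice wording, the renderAiDisclosure function, and the banner placement are illustrative assumptions, since TRAIGA requires a clear, conspicuous, plain-language disclosure but does not prescribe any particular text or implementation.

```typescript
// Hypothetical sketch of a plain-language AI interaction disclosure.
// The notice text and placement are illustrative only; TRAIGA does not
// mandate this exact wording or mechanism.

const AI_DISCLOSURE =
  "You are interacting with an artificial intelligence system, not a human.";

// Insert a visible notice at the top of the chat container before any
// AI-generated responses are shown.
function renderAiDisclosure(container: HTMLElement): void {
  const banner = document.createElement("div");
  banner.setAttribute("role", "note"); // expose the notice to assistive technology
  banner.textContent = AI_DISCLOSURE;
  container.prepend(banner);
}
```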
Prohibited Uses of AI
TRAIGA explicitly prohibits certain uses of AI systems. Some prohibitions, such as social scoring and biometric identification without consent, apply only to governmental entities, while others extend to developers and deployers more broadly. Prohibited uses include:
- Assigning social scores;
- Biometric identification without consent;
- Encouraging self-harm, crime, or violence;
- Infringing on individual rights under the U.S. Constitution;
- Unlawfully discriminating against protected classes;
- Producing or distributing certain explicit content or child pornography.
Additionally, TRAIGA establishes a regulatory sandbox program that allows companies to test AI systems in a controlled environment without full regulatory compliance. It also creates the Texas Artificial Intelligence Council to address ethical and legal issues surrounding AI.
Compliance Considerations
Organizations should assess whether their AI systems meet TRAIGA’s definitions and consider the following compliance steps:
- Conduct applicability assessments to inventory AI systems (a minimal inventory sketch follows this list).
- Analyze use cases to identify potential infringements.
- Implement consumer notice requirements.
- Align AI programs with recognized risk frameworks.
- Participate in the sandbox program for testing.
- Be aware of the federal AI moratorium proposal that may impact state-level legislation.
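As a starting point for the applicability assessment in the first item above, an organization might maintain a structured inventory of its AI systems and flag those that appear to fall within TRAIGA's scope but lack documented alignment with a recognized risk framework. The TypeScript sketch below is purely illustrative; the AiSystemRecord type, its field names, and the needsReview helper are assumptions for this example and are not drawn from the statute.

```typescript
// Illustrative sketch of an AI system inventory record for a TRAIGA
// applicability assessment. All names and fields are assumptions, not
// statutory terms.

type TraigaRole = "developer" | "deployer" | "both";

interface AiSystemRecord {
  name: string;                    // internal system name
  role: TraigaRole;                // does the organization develop or deploy it?
  offeredOrUsedInTexas: boolean;   // potential trigger for TRAIGA applicability
  consumerFacing: boolean;         // interacts with Texas residents in an individual/household context
  useCases: string[];              // e.g., "customer support chat"
  riskFrameworkAlignment?: string; // e.g., "NIST AI RMF", if documented
}

// Flag systems that are offered or used in Texas and have no documented
// alignment with a recognized risk management framework.
function needsReview(systems: AiSystemRecord[]): AiSystemRecord[] {
  return systems.filter(
    (s) => s.offeredOrUsedInTexas && !s.riskFrameworkAlignment
  );
}

// Example usage
const inventory: AiSystemRecord[] = [
  {
    name: "support-chatbot",
    role: "deployer",
    offeredOrUsedInTexas: true,
    consumerFacing: true,
    useCases: ["customer support chat"],
  },
];

console.log(needsReview(inventory).map((s) => s.name)); // ["support-chatbot"]
```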
In conclusion, TRAIGA represents a significant step in AI governance, aiming to balance innovation with the protection of consumer rights and ethical considerations.