Texas Adopts the Responsible AI Governance Act
On June 22, 2025, Texas Governor Greg Abbott signed HB 149, the “Texas Responsible Artificial Intelligence Governance Act” (TRAIGA), making Texas the third U.S. state, after Colorado and Utah, to enact a comprehensive artificial intelligence (AI) law. The legislation aims to create a balanced regulatory framework that encourages innovation while addressing the risks AI systems can pose.
Key Provisions of the Act
The Act takes effect on January 1, 2026, giving companies roughly six months from signing to develop compliance programs. Key elements of the law include:
- Baseline duties for AI developers and deployers
- Prohibition of AI intended for social scoring or discrimination
- Creation of a regulatory sandbox for testing innovative AI applications
- Exclusive enforcement authority vested in the Attorney General (AG)
- Preemption of local AI regulations
Organizations are encouraged to align their compliance efforts with the forthcoming requirements of the EU AI Act and Colorado’s AI regulations, while also monitoring federal developments that may impact state-level enforcement.
Scope and Definitions
The Act applies to any entity that promotes, advertises, or conducts business in Texas, offers products or services to Texas residents, or develops or deploys an AI system within the state. The definition of an “artificial intelligence system” is broad: any machine-based system that infers, from the inputs it receives, how to generate outputs (such as content, decisions, predictions, or recommendations) that can influence physical or virtual environments. This sweeps in a wide range of applications, including generative models and recommendation engines.
Responsibilities are assigned based on roles:
- Developer: Any entity creating an AI system available in Texas.
- Deployer: Any entity putting an AI system into service or use in Texas.
Transparency and Consumer Protection
Key duties outlined in the Act include:
- Transparency: Governmental entities must clearly and conspicuously inform individuals, in plain language, when they are interacting with an AI system (see the sketch following this list).
- Behavioral Manipulation: Developers and deployers may not offer AI systems intended to incite or encourage self-harm or criminal activity.
- Social Scoring Ban: The Act prohibits using AI systems to evaluate or classify individuals based on social behavior or personal characteristics where the resulting score leads to detrimental or unfair treatment.
- Biometric Data Protection: AI systems may not be used to identify individuals through biometric data gathered without consent where doing so would infringe their rights.
- Constitutional Rights and Discrimination: AI systems must not be developed with the intent to infringe upon constitutional rights or discriminate against protected classes.
- Child Protection: AI systems must not be developed to produce or distribute unlawful sexually explicit content involving minors, including AI-generated child sexual abuse material.
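To make the transparency duty concrete, below is a minimal Python sketch of how a citizen-facing government chatbot might surface an AI-interaction notice before any other output. The notice wording, class names, and message structure are hypothetical illustrations, not language drawn from the Act.

```python
from dataclasses import dataclass, field

# Hypothetical disclosure text; actual wording should come from counsel.
AI_NOTICE = (
    "Notice: You are interacting with an artificial intelligence system, "
    "not a human. Reply 'agent' to request assistance from a person."
)

@dataclass
class ChatSession:
    """A citizen-facing chat session that always opens with the AI notice."""
    user_id: str
    messages: list = field(default_factory=list)

    def __post_init__(self) -> None:
        # Deliver the notice before any other output so the disclosure is
        # clear and conspicuous rather than buried mid-conversation.
        self.messages.append({"role": "notice", "text": AI_NOTICE})

    def add_bot_reply(self, text: str) -> None:
        self.messages.append({"role": "assistant", "text": text})

session = ChatSession(user_id="resident-123")
session.add_bot_reply("How can I help with your permit application?")
assert session.messages[0]["text"] == AI_NOTICE  # notice always comes first
```

The design point is simply that the disclosure is emitted structurally, at session creation, rather than left to individual prompt templates where it could be dropped.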
The Act sets a high bar for liability: violations involving constitutional rights or discrimination require proof of intent, distinguishing TRAIGA from impact-focused state AI laws such as Colorado’s.
AI Regulatory Sandbox
The Act introduces a 36-month regulatory sandbox, managed by the Department of Information Resources (DIR) in consultation with the Texas Artificial Intelligence Council. Participants can test innovative AI applications without traditional licensing, provided they submit applications detailing their systems and adhere to reporting requirements. Notably, the AG cannot pursue penalties for violations of waived laws during the testing period.
Enforcement and Safe Harbors
The Texas AG holds exclusive enforcement authority, with a structured approach to violations:
- Notice-and-Cure: The AG must notify violators and allow 60 days to remedy the situation before legal action can be initiated.
- Civil Penalties: Fines range from $10,000 to $12,000 for curable violations and $80,000 to $200,000 for uncurable ones, plus daily penalties of $2,000 to $40,000 for continuing violations.
- Statewide Preemption: The Act nullifies local ordinances regulating AI, aiming to create uniformity.
- No Private Right of Action: The legislation does not allow individuals to sue under the Act, aligning with Colorado regulations.
Safe harbors are also established: entities are shielded from civil penalties if they discover violations through their own testing (including adversarial or red-team testing) or if they substantially comply with a recognized AI risk management framework, such as the NIST AI Risk Management Framework.
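For organizations documenting framework alignment for safe-harbor purposes, one lightweight starting point is a mapping of internal controls to the NIST AI RMF’s four core functions (Govern, Map, Measure, Manage). The sketch below assumes a simple in-house control register; every control name is a hypothetical example, not a requirement of the Act or the framework.

```python
# The four core functions come from the NIST AI RMF; the controls are
# hypothetical examples of how an organization might label its own.
RMF_FUNCTIONS = {"govern", "map", "measure", "manage"}

controls = [
    {"id": "POL-01", "name": "AI acceptable-use policy", "function": "govern"},
    {"id": "INV-02", "name": "AI use-case inventory", "function": "map"},
    {"id": "TST-03", "name": "Pre-deployment adversarial testing", "function": "measure"},
    {"id": "IR-04", "name": "AI incident-response runbook", "function": "manage"},
]

# Flag any core function with no mapped control, i.e., a documentation gap.
covered = {c["function"] for c in controls}
gaps = RMF_FUNCTIONS - covered
print("Uncovered RMF functions:", ", ".join(sorted(gaps)) or "none")
```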
Comparison with Other Regulations
The table below compares TRAIGA with Colorado’s AI Act and the EU AI Act, highlighting differences in effective dates, risk frameworks, transparency obligations, and enforcement:
| Aspect | Texas | Colorado | EU AI Act |
| --- | --- | --- | --- |
| Effective date | January 1, 2026 | February 1, 2026 | February 2, 2025 (staggered implementation through 2027) |
| Risk framework | Duties/prohibitions keyed to specific practices | Risk-based duties for “high-risk” AI systems | Tiered framework: prohibited, high-risk, limited, and minimal risk |
| Transparency | Mandatory AI-interaction notice for governmental entities | Consumer notice for high-risk decisions | Mandatory disclosure for most AI interactions |
| Discrimination standard | “Intent to discriminate” required | “Algorithmic discrimination” (impact-focused) | Fundamental-rights impact assessment for high-risk AI |
| Sandbox | Yes: 36 months, broad | No | Member States must establish at least one sandbox by August 2, 2026 |
| Penalties | AG enforcement; up to $200,000 per violation | AG enforcement under the state consumer protection act | Tiered penalties; up to €35 million or 7% of global turnover |
Future Considerations
As Texas prepares to implement the Responsible AI Governance Act, it faces a potential obstacle: proposed federal legislation that would impose a ten-year moratorium on state enforcement of AI-specific laws. If enacted, such a moratorium could limit Texas’s ability to implement the Act fully.
Companies operating within Texas are encouraged to prepare for the upcoming regulatory environment by:
- Inventorying and stratifying AI use cases by risk level (a sketch follows this list)
- Documenting compliance with prohibited practices
- Establishing testing protocols
- Aligning governance with national AI risk management frameworks
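As a starting point for the inventory step, here is a minimal Python sketch of a risk-stratified AI use-case register. The tiers loosely mirror the EU AI Act’s risk levels for illustration; all names, fields, and categories are assumptions for this sketch, not classifications defined by TRAIGA.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g., social scoring, behavioral manipulation
    HIGH = "high"              # consequential decisions about individuals
    LIMITED = "limited"        # user-facing but low-stakes
    MINIMAL = "minimal"        # internal tooling with no individual impact

@dataclass
class AIUseCase:
    name: str
    owner: str
    texas_facing: bool  # promoted, sold, or deployed in Texas
    tier: RiskTier

# Hypothetical entries a compliance team might record.
inventory = [
    AIUseCase("resume screening model", "HR", True, RiskTier.HIGH),
    AIUseCase("marketing copy generator", "Marketing", True, RiskTier.MINIMAL),
    AIUseCase("customer support chatbot", "CX", True, RiskTier.LIMITED),
]

# Surface Texas-facing, higher-risk uses for compliance review first.
review_queue = [
    u for u in inventory
    if u.texas_facing and u.tier in (RiskTier.PROHIBITED, RiskTier.HIGH)
]
for u in review_queue:
    print(f"Review first: {u.name} (owner: {u.owner}, tier: {u.tier.value})")
```

Even a simple register like this gives counsel a defensible record of which systems were assessed, by whom, and why they were prioritized.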
In summary, the Texas Responsible AI Governance Act represents a significant step toward regulating AI technologies while preserving an environment conducive to innovation. Companies adapting to these changes should remain vigilant in their compliance efforts ahead of the January 1, 2026 effective date.