New AI Regulations Come into Play with the Texas Responsible Artificial Intelligence Governance Act
The rapid advancement of artificial intelligence (“AI”) has outpaced existing U.S. regulatory frameworks. At present, AI regulation occurs primarily on a state-by-state basis. Most states that have enacted AI laws rely on targeted regulations for particular use cases or fields. Texas has established one of the more comprehensive approaches with its Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”).
TRAIGA was signed into law on June 22, 2025, and took effect on January 1, 2026, with implications beyond the borders of Texas.
Key Prohibitions Under TRAIGA
TRAIGA addresses the development or deployment of AI systems and prohibits the following:
- Developing or deploying an AI system with the intent to manipulate human behavior to incite or encourage self-harm, harm to others, or criminal activity.
- Developing or deploying an AI system with the sole intent to infringe, restrict, or impair rights guaranteed under the United States Constitution.
- Developing or deploying an AI system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.
- Developing or deploying an AI system with the sole intent of producing or distributing certain sexually explicit content.
The Texas Business & Commerce Code defines AI systems as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations that can influence physical or virtual environments.”
TRAIGA applies to any person or entity that does business in Texas or with Texas residents, extending its reach well beyond the state's borders. Notably, TRAIGA governs both development and deployment, so the law affects not only AI developers but any entity that uses such AI systems.
Shifting Regulatory Focus
The emphasis on intent represents a significant shift from EU-style risk-based assessments, such as the framework Colorado has adopted. By taking this approach, TRAIGA offers a clearer and potentially easier-to-operationalize framework than traditional risk-based methodologies.
Establishment of Advisory Bodies
TRAIGA also creates a state advisory body, the Texas Artificial Intelligence Council, to provide oversight and guidance. In addition, TRAIGA establishes a regulatory sandbox program in which companies can test AI systems in a controlled environment for up to 36 months while being shielded from certain enforcement actions.
Enforcement Mechanisms
The Texas Attorney General has the exclusive right to bring actions under TRAIGA; the act provides no private right of action. Before bringing an action, the Attorney General must provide written notice and a 60-day opportunity to cure. Penalties range from $10,000 to $200,000 per violation, depending in part on whether the violation is determined to be "curable," or $2,000 to $40,000 per day for a continued violation.
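To make the penalty ranges above concrete, the arithmetic can be sketched as follows. This is an illustration only, not legal advice: the dollar ranges are taken from the text above, while the scenario inputs (violation counts and day counts) are hypothetical.

```python
# Sketch of potential exposure under the penalty ranges described above.
# The ranges come from the article; the scenario numbers are made up.

PER_VIOLATION = (10_000, 200_000)    # per violation (curable vs. not)
PER_DAY_CONTINUED = (2_000, 40_000)  # per day for a continued violation

def exposure_range(violations: int, continued_days: int = 0) -> tuple[int, int]:
    """Return (low, high) total exposure for a hypothetical scenario."""
    low = violations * PER_VIOLATION[0] + continued_days * PER_DAY_CONTINUED[0]
    high = violations * PER_VIOLATION[1] + continued_days * PER_DAY_CONTINUED[1]
    return low, high

# Example: 3 violations, one of which continues for 10 days
print(exposure_range(3, 10))  # → (50000, 1000000)
```

Even a small number of violations can therefore produce a wide exposure band, which is why the curability determination matters so much in practice.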
Comparative Analysis with Other States
TRAIGA’s approach can be compared to other states’ regulations. For instance, Colorado has enacted comprehensive AI legislation through the Colorado AI Act (“CAIA”). That statute implements a risk-based framework, requiring developers and deployers to conduct impact assessments and provide clear consumer notifications, and thus imposes heavier compliance burdens than TRAIGA.
Utah’s AI Policy Act primarily addresses consumer notification and deceptive practices, giving it a more limited scope than TRAIGA. California has adopted several targeted regulations addressing specific AI applications, such as chatbot oversight, election integrity measures, and deepfake restrictions. Texas, meanwhile, has established AI-related laws that are more straightforward than California's patchwork of targeted rules.
Practical Steps for Compliance
Any entity conducting business in Texas or with Texas residents should carefully assess its risk exposure and review its business policies accordingly. Organizations are advised to consider the following points:
- Map your Texas exposure: Identify AI systems developed, offered, or deployed in Texas.
- Update AI policies: Explicitly prohibit AI uses intended to incite self-harm, violence, or criminal activity; to discriminate unlawfully against a protected class; to infringe constitutional rights; or to produce prohibited sexually explicit content.
- Evaluate the Texas sandbox: Assess whether piloting next-generation AI features within the Texas sandbox would help mitigate regulatory risk.