Texas Enacts Groundbreaking AI Governance Law


On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, marking the culmination of a bill that garnered national attention and underwent significant amendments throughout its legislative journey.

Originally introduced in December 2024, the draft of TRAIGA proposed an extensive regulatory framework modeled after the Colorado AI Act and the EU AI Act, primarily addressing “high-risk” artificial intelligence (AI) systems and imposing substantial obligations on developers and deployers in the private sector. However, in March 2025, Texas legislators presented an amended version that considerably narrowed the bill’s scope. Many of the earlier draft’s stringent requirements—such as the obligation to protect consumers from foreseeable harm, conduct impact assessments, and disclose details of high-risk AI systems to consumers—were either entirely removed or restricted to governmental entities.

Despite these revisions, the enacted version of TRAIGA encompasses several provisions that could significantly impact companies operating in Texas. Notably, the Act imposes categorical restrictions on the development and deployment of AI systems for specific purposes, including:

  • Behavioral Manipulation
  • Unlawful Discrimination
  • Creation or Distribution of Child Pornography and Unlawful Deepfakes
  • Infringement of Constitutional Rights

TRAIGA also establishes a regulatory sandbox program, allowing participants to develop and test AI systems in a relaxed regulatory environment. Furthermore, it sets up an AI advisory council responsible for assisting the state legislature in identifying effective AI policy and law and making recommendations to state agencies regarding their use of AI systems.

TRAIGA’s Substantive Provisions

Prohibited AI Practices

TRAIGA prohibits the development or deployment of any AI system for certain purposes, specifically targeting private-sector entities that conduct business in Texas, produce a product or service used by Texas residents, or develop or deploy an AI system within the state. The prohibitions include:

  1. Manipulation of Human Behavior: AI systems cannot be developed or deployed to intentionally encourage self-harm, harm to others, or engagement in criminal activity.
  2. Constitutional Protection: AI systems cannot be developed or deployed with the intent to infringe upon, restrict, or impair a person’s federal Constitutional rights.
  3. Unlawful Discrimination: AI systems cannot be developed or deployed with the intent of unlawfully discriminating against a protected class under federal or state law. Notably, TRAIGA specifies that a “disparate impact” alone is insufficient to demonstrate intent to discriminate.
  4. Sexually Explicit Content: The development or distribution of AI systems intended for producing, assisting in, or distributing child pornography or unlawful deepfake videos is strictly prohibited.

The Act emphasizes that these prohibitions should be “broadly construed and applied” to further TRAIGA’s core objectives, which include facilitating responsible AI development and safeguarding the public from foreseeable AI-related risks.

Enforcement and Penalties

TRAIGA grants enforcement authority solely to the Texas Attorney General (AG). The AG is required to develop a reporting mechanism to facilitate consumer complaints regarding potential violations. Upon receiving a consumer complaint, the AG may issue a civil investigative demand to parties suspected of violating TRAIGA, requesting comprehensive information about the AI system in question.

After receiving a notice of violation from the AG, a party has 60 days to cure the violation and submit documentation to the AG demonstrating compliance. For violations that remain uncured, the AG may pursue enforcement actions or seek civil penalties, including:

  • $10,000 to $12,000 per curable violation
  • $80,000 to $200,000 per incurable violation
  • Up to $40,000 per day for continuing violations

The Act also allows state agencies to sanction a party found liable for TRAIGA violations, potentially leading to the suspension or revocation of licenses and monetary penalties of up to $100,000.
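To put the penalty schedule above in concrete terms, the following sketch estimates a company's rough civil-penalty exposure from counts of each violation type. The function name, the tiering, and the treatment of continuing violations (counted only at the high end, since the article states only an upper bound per day) are assumptions for illustration, not an interpretation of the statute.

```python
# Illustrative only: rough exposure estimate from the penalty ranges
# summarized above. Not legal advice or a reading of TRAIGA itself.

def estimate_exposure(curable: int, incurable: int, continuing_days: int) -> tuple[int, int]:
    """Return an assumed (low, high) civil-penalty exposure in dollars."""
    low = curable * 10_000 + incurable * 80_000
    high = curable * 12_000 + incurable * 200_000 + continuing_days * 40_000
    # Continuing violations carry "up to $40,000 per day"; no lower bound
    # is given in the article, so this sketch treats the low end as zero.
    return low, high

# Example: 2 curable violations, 1 incurable violation, 5 days continuing.
print(estimate_exposure(curable=2, incurable=1, continuing_days=5))
# → (100000, 424000)
```

Even a small number of incurable or continuing violations dominates the total, which is why the compliance steps discussed below focus on catching prohibited uses before deployment.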

Regulatory Sandbox Program

TRAIGA introduces a regulatory sandbox program managed by the Department of Information Resources (DIR), designed to support the testing and development of AI systems under less stringent regulatory conditions. Participants must apply with a detailed description of the AI system, a benefit assessment addressing consumer impacts, and proof of compliance with federal AI laws.

If accepted, participants have 36 months to test and develop their AI systems, during which time the AG may not bring an enforcement action, and state agencies may not take punitive action, with respect to violations of state laws that have been waived under the program.

Texas Artificial Intelligence Council

TRAIGA establishes the Texas Artificial Intelligence Council, consisting of seven qualified members appointed by the governor. The Council is tasked with conducting AI training programs for state agencies and local governments and issuing reports on AI-related topics, including data privacy, security, and legal risks. However, the Council is explicitly prohibited from promulgating any binding rules or regulations.

Practical Takeaways for Developers and Deployers

Developers and deployers operating in Texas have until TRAIGA's effective date of January 1, 2026, to ensure compliance. Companies should assess whether they have developed, or plan to develop, an AI system that could fall within TRAIGA's prohibited uses. The Act indicates that liability arises only if a party intentionally engages in a prohibited practice.

Moreover, the AG’s focus on AI enforcement and the potential for significant penalties underscore the importance of proactive compliance measures. Companies should establish robust internal processes to identify potential violations and utilize recognized AI risk management frameworks to mitigate risks and enhance compliance.
