Texas Enacts Groundbreaking AI Governance Law

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, capping a legislative journey in which the bill drew national attention and underwent significant amendments.

Originally introduced in December 2024, the draft of TRAIGA proposed an extensive regulatory framework modeled after the Colorado AI Act and the EU AI Act, primarily addressing “high-risk” artificial intelligence (AI) systems and imposing substantial obligations on developers and deployers in the private sector. However, in March 2025, Texas legislators presented an amended version that considerably narrowed the bill’s scope. Many of the earlier draft’s stringent requirements—such as the obligation to protect consumers from foreseeable harm, conduct impact assessments, and disclose details of high-risk AI systems to consumers—were either entirely removed or restricted to governmental entities.

Despite these revisions, the enacted version of TRAIGA encompasses several provisions that could significantly impact companies operating in Texas. Notably, the Act imposes categorical restrictions on the development and deployment of AI systems for specific purposes, including:

  • Behavioral Manipulation
  • Unlawful Discrimination
  • Creation or Distribution of Child Pornography and Unlawful Deepfakes
  • Infringement of Constitutional Rights

TRAIGA also establishes a regulatory sandbox program, allowing participants to develop and test AI systems in a relaxed regulatory environment. Furthermore, it sets up an AI advisory council responsible for assisting the state legislature in identifying effective AI policy and law and making recommendations to state agencies regarding their use of AI systems.

TRAIGA’s Substantive Provisions

Prohibited AI Practices

TRAIGA prohibits the development or deployment of any AI system for certain purposes, specifically targeting private-sector entities that conduct business in Texas, produce a product or service used by Texas residents, or develop or deploy an AI system within the state. The prohibitions include:

  1. Manipulation of Human Behavior: AI systems cannot be developed or deployed to intentionally encourage self-harm, harm to others, or engagement in criminal activity.
  2. Constitutional Protection: AI systems cannot be developed or deployed with the intent to infringe upon, restrict, or impair a person’s federal Constitutional rights.
  3. Unlawful Discrimination: AI systems cannot be developed or deployed with the intent of unlawfully discriminating against a protected class under federal or state law. Notably, TRAIGA specifies that a “disparate impact” alone is insufficient to demonstrate intent to discriminate.
  4. Sexually Explicit Content: The development or distribution of AI systems intended for producing, assisting in, or distributing child pornography or unlawful deepfake videos is strictly prohibited.

The Act emphasizes that these prohibitions should be “broadly construed and applied” to further TRAIGA’s core objectives, which include facilitating responsible AI development and safeguarding the public from foreseeable AI-related risks.

Enforcement and Penalties

TRAIGA grants enforcement authority solely to the Texas Attorney General (AG). The AG is required to develop a reporting mechanism to facilitate consumer complaints regarding potential violations. Upon receiving a consumer complaint, the AG may issue a civil investigative demand to parties suspected of violating TRAIGA, requesting comprehensive information about the AI system in question.

After receiving a notice of violation from the AG, a party has 60 days to cure the violation and submit documentation to the AG demonstrating compliance. For uncured violations, the AG may pursue enforcement actions or seek civil penalties of:

  • $10,000 to $12,000 per curable violation
  • $80,000 to $200,000 per incurable violation
  • Up to $40,000 per day for continuing violations

The Act also allows state agencies to sanction a party found liable for TRAIGA violations, potentially leading to the suspension or revocation of licenses and monetary penalties of up to $100,000.

Regulatory Sandbox Program

TRAIGA introduces a regulatory sandbox program managed by the Department of Information Resources (DIR), designed to support the testing and development of AI systems under less stringent regulatory conditions. Participants must apply with a detailed description of the AI system, a benefit assessment addressing consumer impacts, and proof of compliance with federal AI laws.

If accepted, participants have 36 months to test and develop their AI systems, during which the AG may not bring an enforcement action, and state agencies may not take punitive action, for violations of state laws waived under the program.

Texas Artificial Intelligence Council

TRAIGA establishes the Texas Artificial Intelligence Council, consisting of seven qualified members appointed by the governor. The Council is tasked with conducting AI training programs for state agencies and local governments and issuing reports on AI-related topics, including data privacy, security, and legal risks. However, the Council is explicitly prohibited from promulgating any binding rules or regulations.

Practical Takeaways for Developers and Deployers

Developers and deployers operating in Texas have until January 1, 2026, when TRAIGA takes effect, to ensure compliance. Companies should assess whether they have developed, or plan to develop, an AI system that could fall within TRAIGA's prohibited uses. Under the Act, liability generally arises only if a party intentionally engages in a prohibited practice.

Moreover, the AG’s focus on AI enforcement and the potential for significant penalties underscore the importance of proactive compliance measures. Companies should establish robust internal processes to identify potential violations and utilize recognized AI risk management frameworks to mitigate risks and enhance compliance.
