Texas Takes Charge: New AI Governance Law Enacted

Texas AI Governance Law Signed by Governor

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. This legislative action comes amidst ongoing debates in the U.S. Senate regarding a proposed moratorium on state legislation concerning artificial intelligence (AI). The signing of HB 149 serves as a declaration that states will continue to legislate on matters of consumer protection and AI usage unless preempted by a final reconciliation bill, which remains pending in the Senate.

Governor Abbott’s Statement

According to Abbott’s office, “By enacting the Texas Responsible AI Governance Act, Gov. Abbott is showing Texas-style leadership in governing artificial intelligence. During a time when others are asserting that AI is an exceptional technology that should have no guardrails, Texas shows that it is critically important to ensure both innovation and citizen safety. Gov. Abbott’s support also highlights the importance of the states as bipartisan national laboratories for nimbly developing AI policy.”

Key Objectives of TRAIGA

The bill aims to:

  • Facilitate and advance the responsible development and use of AI systems;
  • Protect individuals and groups from known and reasonably foreseeable risks associated with AI systems;
  • Provide transparency regarding risks in the development, deployment, and use of AI systems;
  • Offer reasonable notice concerning the use or contemplated use of AI systems by state agencies.

Scope and Requirements

TRAIGA applies to both developers and deployers of AI systems, including government entities. Both terms are defined broadly to cover any entity that “develops or deploys an artificial intelligence system in Texas.”

The law requires government entities to provide clear and conspicuous notice to consumers, before or at the time of interaction, that they are engaging with AI; this notice may be provided through a hyperlink. It also prohibits government entities from using AI to assign a social score (evaluating individuals based on personal characteristics or social behavior) or to uniquely identify a consumer using biometric data without that consumer's consent.

Prohibitions Under TRAIGA

TRAIGA explicitly prohibits any entity from developing or deploying an AI system that intentionally aims to incite or encourage a person to:

  • Commit physical self-harm, including suicide;
  • Harm another person;
  • Engage in criminal activity.

It also prohibits the development or deployment of an AI system with the “sole intent” to:

  • Infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution;
  • Unlawfully discriminate against a protected class;
  • Produce, assist, or aid in producing or distributing sexually explicit content or child pornography, including deepfakes.

Enforcement and Penalties

The Texas Attorney General has exclusive jurisdiction over enforcement of TRAIGA and may seek civil penalties following a court determination. Penalty amounts depend on intent and on whether the violation is cured, ranging from $10,000 to $200,000, and continuing violations carry additional penalties of not less than $2,000 and not more than $40,000 “for each day the violation continues.”

Effective Date

The law is set to go into effect on January 1, 2026. Stakeholders should take this time to determine whether the law applies to them and what measures they need to implement to ensure compliance.
