Texas Takes Charge: New AI Governance Law Enacted

Texas AI Governance Law Signed by Governor

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. This legislative action comes amidst ongoing debates in the U.S. Senate regarding a proposed moratorium on state legislation concerning artificial intelligence (AI). The signing of HB 149 serves as a declaration that states will continue to legislate on matters of consumer protection and AI usage unless preempted by a final reconciliation bill, which remains pending in the Senate.

Governor Abbott’s Statement

According to Abbott’s office, “By enacting the Texas Responsible AI Governance Act, Gov. Abbott is showing Texas-style leadership in governing artificial intelligence. During a time when others are asserting that AI is an exceptional technology that should have no guardrails, Texas shows that it is critically important to ensure both innovation and citizen safety. Gov. Abbott’s support also highlights the importance of the states as bipartisan national laboratories for nimbly developing AI policy.”

Key Objectives of TRAIGA

The bill aims to:

  • Facilitate and advance the responsible development and use of AI systems;
  • Protect individuals and groups from known and reasonably foreseeable risks associated with AI systems;
  • Provide transparency regarding risks in the development, deployment, and use of AI systems;
  • Offer reasonable notice concerning the use or contemplated use of AI systems by state agencies.

Scope and Requirements

TRAIGA applies to both developers and deployers of AI systems, including government entities. The law broadly defines a developer or deployer as any entity that “develops or deploys an artificial intelligence system in Texas.”

The law requires government entities to provide clear and conspicuous notice to consumers, before or at the time of interaction, that they are engaging with AI. This notice can be provided through a hyperlink. The law also prohibits government entities from using AI to assign a social score, which includes evaluating individuals based on personal characteristics or social behavior, or to uniquely identify a consumer using biometric data without their consent.

Prohibitions Under TRAIGA

TRAIGA explicitly prohibits any entity from developing or deploying an AI system that intentionally aims to incite or encourage a person to:

  • Commit physical self-harm, including suicide;
  • Harm another person;
  • Engage in criminal activity.

It also prohibits the development or deployment of an AI system with the “sole intent” to:

  • Infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution;
  • Unlawfully discriminate against a protected class;
  • Produce or distribute, or assist or aid in producing or distributing, sexually explicit content or child pornography, including deepfakes.

Enforcement and Penalties

The Texas Attorney General has exclusive jurisdiction over the enforcement of TRAIGA and can levy civil penalties after a court determination. Penalty amounts depend on the violator’s intent and whether the violation is cured, ranging from $10,000 to $200,000 per violation, with continuing violations subject to additional penalties of not less than $2,000 and not more than $40,000 “for each day the violation continues.”

Effective Date

The law is set to go into effect on January 1, 2026. Stakeholders should take this time to determine whether the law applies to them and what measures they need to implement to ensure compliance.
