Texas Takes Charge: New AI Governance Law Enacted

Texas AI Governance Law Signed by Governor

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. This legislative action comes amidst ongoing debates in the U.S. Senate regarding a proposed moratorium on state legislation concerning artificial intelligence (AI). The signing of HB 149 serves as a declaration that states will continue to legislate on matters of consumer protection and AI usage unless preempted by a final reconciliation bill, which remains pending in the Senate.

Governor Abbott’s Statement

According to Abbott’s office, “By enacting the Texas Responsible AI Governance Act, Gov. Abbott is showing Texas-style leadership in governing artificial intelligence. During a time when others are asserting that AI is an exceptional technology that should have no guardrails, Texas shows that it is critically important to ensure both innovation and citizen safety. Gov. Abbott’s support also highlights the importance of the states as bipartisan national laboratories for nimbly developing AI policy.”

Key Objectives of TRAIGA

The bill aims to:

  • Facilitate and advance the responsible development and use of AI systems;
  • Protect individuals and groups from known and reasonably foreseeable risks associated with AI systems;
  • Provide transparency regarding risks in the development, deployment, and use of AI systems;
  • Offer reasonable notice concerning the use or contemplated use of AI systems by state agencies.

Scope and Requirements

TRAIGA applies to both developers and deployers of AI systems, including government entities. Both terms are broadly defined to cover any entity that “develops or deploys an artificial intelligence system in Texas.”

The law requires government entities to provide clear and conspicuous notice to consumers, before or at the time of interaction, that they are engaging with AI. This notice can be accomplished through a hyperlink. The law also prohibits government entities from using AI to assign a social score, which includes evaluating individuals based on personal characteristics or social behavior, or to uniquely identify a consumer using biometric data without the consumer’s consent.

Prohibitions Under TRAIGA

TRAIGA explicitly prohibits any entity from developing or deploying an AI system that intentionally aims to incite or encourage a person to:

  • Commit physical self-harm, including suicide;
  • Harm another person;
  • Engage in criminal activity.

It also prohibits the development or deployment of an AI system with the “sole intent” to:

  • Infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution;
  • Unlawfully discriminate against a protected class;
  • Produce, assist, or aid in producing or distributing sexually explicit content or child pornography, including deepfakes.

Enforcement and Penalties

The Texas Attorney General has exclusive jurisdiction over the enforcement of TRAIGA and can levy civil penalties after a court determination. The penalties scale with the violator’s intent and whether the violation is cured, ranging from $10,000 to $200,000, with continuing violations subject to additional penalties of not less than $2,000 and not more than $40,000 “for each day the violation continues.”

Effective Date

The law is set to go into effect on January 1, 2026. Stakeholders should take this time to determine whether the law applies to them and what measures they need to implement to ensure compliance.
