Texas Takes Charge: New AI Governance Law Enacted

Texas AI Governance Law Signed by Governor

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. This legislative action comes amidst ongoing debates in the U.S. Senate regarding a proposed moratorium on state legislation concerning artificial intelligence (AI). The signing of HB 149 serves as a declaration that states will continue to legislate on matters of consumer protection and AI usage unless preempted by a final reconciliation bill, which remains pending in the Senate.

Governor Abbott’s Statement

According to Abbott’s office, “By enacting the Texas Responsible AI Governance Act, Gov. Abbott is showing Texas-style leadership in governing artificial intelligence. During a time when others are asserting that AI is an exceptional technology that should have no guardrails, Texas shows that it is critically important to ensure both innovation and citizen safety. Gov. Abbott’s support also highlights the importance of the states as bipartisan national laboratories for nimbly developing AI policy.”

Key Objectives of TRAIGA

The bill aims to:

  • Facilitate and advance the responsible development and use of AI systems;
  • Protect individuals and groups from known and reasonably foreseeable risks associated with AI systems;
  • Provide transparency regarding risks in the development, deployment, and use of AI systems;
  • Offer reasonable notice concerning the use or contemplated use of AI systems by state agencies.

Scope and Requirements

TRAIGA applies to both developers and deployers of AI systems, including government entities. Both terms are defined broadly to cover any entity that "develops or deploys an artificial intelligence system in Texas."

The law requires government entities to provide clear and conspicuous notice to consumers, before or at the time of interaction, that they are engaging with AI; this notice can be provided through a hyperlink. The law also prohibits government entities from using AI to assign a social score, which includes evaluating individuals based on personal characteristics or social behavior, or to uniquely identify a consumer using biometric data without the consumer's consent.

Prohibitions Under TRAIGA

TRAIGA explicitly prohibits any entity from developing or deploying an AI system that intentionally aims to incite or encourage a person to:

  • Commit physical self-harm, including suicide;
  • Harm another person; or
  • Engage in criminal activity.

It also prohibits the development or deployment of an AI system with the “sole intent” to:

  • Infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution;
  • Unlawfully discriminate against a protected class;
  • Produce, or assist or aid in producing or distributing, sexually explicit content or child pornography, including deepfakes.

Enforcement and Penalties

The Texas Attorney General has exclusive jurisdiction to enforce TRAIGA and can levy civil penalties after a court determination. Penalty amounts turn on the violator's intent and failure to cure the violation, ranging from $10,000 to $200,000, with continued violations subject to penalties of not less than $2,000 and not more than $40,000 "for each day the violation continues."

Effective Date

The law is set to go into effect on January 1, 2026. Stakeholders should take this time to determine whether the law applies to them and what measures they need to implement to ensure compliance.
