Texas Enacts Sweeping AI Law: Disclosure, Consent, and Compliance Requirements Take Effect in 2026

On June 22, 2025, Texas Governor Greg Abbott signed into law House Bill 149, which enacts the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). This landmark legislation establishes one of the nation’s most comprehensive state-level regulatory frameworks for artificial intelligence (AI). The law is set to take effect on January 1, 2026, imposing critical disclosure, consent, and compliance requirements on developers, deployers, and governmental entities utilizing AI systems.

Defining Artificial Intelligence

TRAIGA defines an “artificial intelligence system” as any machine-based system that infers from the inputs it receives how to generate outputs, such as content, decisions, predictions, or recommendations, that can influence physical or virtual environments. The law aims to foster the responsible development and use of AI while safeguarding individuals from foreseeable risks through structured oversight and disclosure requirements.

Key Provisions of TRAIGA

  • Consumer Protection: The law prohibits the development or deployment of AI systems intended to discriminate against protected classes, infringe on constitutional rights, or incite individuals to harm themselves or others. Furthermore, governmental entities are barred from using AI to identify individuals through biometric data without informed consent or from assigning social scores based on behaviors or personal characteristics.
  • Disclosure Guidelines: Any governmental or commercial entity deploying an AI system intended for consumer interaction must provide clear and conspicuous disclosures in plain language. These disclosures must be made prior to or at the time of interaction, avoiding deceptive designs known as “dark patterns.”
  • AI Regulatory Sandbox Program: Subject to approval by the Department of Information Resources, participants may test an AI system in a controlled environment without obtaining a license otherwise required under Texas law. While a system remains in the sandbox, the attorney general may not pursue enforcement for violations that occur during the testing period.
  • Safe Harbors: Entities that substantially comply with recognized risk management frameworks, such as the NIST AI Risk Management Framework, or detect violations through internal audits may qualify for protection against enforcement actions.
  • Enforcement and Civil Penalties: The Texas Attorney General retains exclusive enforcement authority, with civil penalties ranging from $10,000 to $200,000 per violation, including daily penalties for ongoing noncompliance.

Putting It Into Practice

With the enactment of TRAIGA, Texas becomes the second state to adopt a comprehensive AI regulatory framework, following Colorado, which enacted its own AI law in 2024. As states adopt varying approaches to AI regulation, stakeholders must closely monitor the evolving landscape of state-level requirements to assess compliance obligations, adjust risk management strategies, and anticipate operational impacts across jurisdictions.

In conclusion, the Texas Responsible Artificial Intelligence Governance Act marks a significant step toward establishing a structured framework for AI use in the state. By prioritizing consumer protection and transparency in AI deployment, TRAIGA sets a precedent for future legislation addressing the complexities of artificial intelligence in society.
