Texas Sets New Standards for AI Regulation with Comprehensive Law

Texas Enacts Sweeping AI Law: Disclosure, Consent, and Compliance Requirements Take Effect in 2026

On June 22, 2025, Texas Governor Greg Abbott signed into law House Bill 149, which enacts the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). This landmark legislation establishes one of the nation’s most comprehensive state-level regulatory frameworks for artificial intelligence (AI). The law is set to take effect on January 1, 2026, imposing critical disclosure, consent, and compliance requirements on developers, deployers, and governmental entities utilizing AI systems.

Defining Artificial Intelligence

TRAIGA defines an “artificial intelligence system” as any machine-based system that infers from the inputs it receives how to generate outputs, such as content, decisions, predictions, or recommendations, that can influence physical or virtual environments. The law aims to foster the responsible development and use of AI while safeguarding individuals from foreseeable risks through structured oversight and disclosure requirements.

Key Provisions of TRAIGA

  • Consumer Protection: The law prohibits the deployment of AI models that intentionally discriminate against protected classes, infringe on constitutional rights, or incite harm. Furthermore, governmental entities are barred from using AI to identify individuals through biometric data without informed consent or from assigning social scores based on behaviors or personal characteristics.
  • Disclosure Guidelines: Any governmental or commercial entity deploying an AI system intended for consumer interaction must provide clear and conspicuous disclosures in plain language. These disclosures must be made prior to or at the time of interaction, avoiding deceptive designs known as “dark patterns.”
  • AI Regulatory Sandbox Program: Subject to approval from the Department of Information Resources, participants may test an AI system in a controlled environment without obtaining a license otherwise required under Texas law. The attorney general may not pursue enforcement actions for violations occurring during the approved testing period.
  • Safe Harbors: Entities that substantially comply with recognized risk management frameworks, such as the NIST AI Risk Management Framework, or detect violations through internal audits may qualify for protection against enforcement actions.
  • Enforcement and Civil Penalties: The Texas Attorney General retains exclusive enforcement authority, with civil penalties ranging from $10,000 to $200,000 per violation, including daily penalties for ongoing noncompliance.

Putting It Into Practice

With the enactment of TRAIGA, Texas becomes the second state to adopt a comprehensive AI regulatory framework, following Colorado, which implemented its own AI law in 2024. As states adopt varying approaches to AI regulation, stakeholders must closely monitor the evolving landscape of state-level regulations to assess compliance obligations, adjust risk management strategies, and consider operational impacts across jurisdictions.

In conclusion, the Texas Responsible Artificial Intelligence Governance Act marks a significant step toward establishing a structured framework for AI use in the state. By prioritizing consumer protection and transparency in AI deployment, TRAIGA sets a precedent for future legislation addressing the complexities of artificial intelligence in society.
