Texas Sets New Standards for AI Regulation with Comprehensive Law

Texas Enacts Sweeping AI Law: Disclosure, Consent, and Compliance Requirements Take Effect in 2026

On June 22, 2025, Texas Governor Greg Abbott signed into law House Bill 149, which enacts the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). This landmark legislation establishes one of the nation’s most comprehensive state-level regulatory frameworks for artificial intelligence (AI). The law is set to take effect on January 1, 2026, imposing critical disclosure, consent, and compliance requirements on developers, deployers, and governmental entities utilizing AI systems.

Defining Artificial Intelligence

TRAIGA defines an “artificial intelligence system” as any machine-based system that uses inputs to generate outputs, such as content, decisions, predictions, or recommendations, that can influence physical or virtual environments. The law aims to foster the responsible development and use of AI while safeguarding individuals from foreseeable risks through structured oversight and disclosure requirements.

Key Provisions of TRAIGA

  • Consumer Protection: The law prohibits the deployment of AI models that intentionally discriminate against protected classes, infringe on constitutional rights, or incite harm. Furthermore, governmental entities are barred from using AI to identify individuals through biometric data without informed consent or from assigning social scores based on behaviors or personal characteristics.
  • Disclosure Guidelines: Any governmental or commercial entity deploying an AI system intended for consumer interaction must provide clear and conspicuous disclosures in plain language. These disclosures must be made prior to or at the time of interaction, avoiding deceptive designs known as “dark patterns.”
  • AI Regulatory Sandbox Program: Subject to approval from the Department of Information Resources, participants may test an AI system in a controlled environment without obtaining licenses otherwise required under Texas law. During the testing period, the attorney general may not file or pursue enforcement actions for violations that occur within the sandbox.
  • Safe Harbors: Entities that substantially comply with recognized risk management frameworks, such as the NIST AI Risk Management Framework, or detect violations through internal audits may qualify for protection against enforcement actions.
  • Enforcement and Civil Penalties: The Texas Attorney General retains exclusive enforcement authority, with civil penalties ranging from $10,000 to $200,000 per violation, including daily penalties for ongoing noncompliance.

Putting It Into Practice

With the enactment of TRAIGA, Texas becomes the second state to adopt a comprehensive AI regulatory framework, following Colorado, which enacted its own AI law in 2024. As states take divergent approaches to AI regulation, stakeholders should closely monitor the evolving landscape of state-level rules to assess compliance obligations, adjust risk management strategies, and weigh operational impacts across jurisdictions.

In conclusion, the Texas Responsible Artificial Intelligence Governance Act marks a significant step towards establishing a structured framework for AI use in the state. By prioritizing consumer protection and transparency in AI deployment, TRAIGA sets a precedent for future legislation addressing the complexities of artificial intelligence in society.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...