Texas Takes a Shot at AI Regulation With ‘Responsible Artificial Intelligence Governance Act’

On June 22, 2025, Texas enacted the Texas Responsible Artificial Intelligence Governance Act, becoming the second state, after Colorado, to adopt comprehensive AI legislation. The law takes effect January 1, 2026, and aims to balance corporate interest in AI innovation with consumer protection, anti-discrimination safeguards, and ethical considerations.

Quick Hits

  • The Texas Responsible Artificial Intelligence Governance Act establishes a broad framework for the acceptable development, deployment, and oversight of AI systems in Texas.
  • The act identifies certain acceptable and unacceptable uses of AI systems, creates the Texas Artificial Intelligence Council to oversee AI governance, and introduces a regulatory sandbox program for testing AI innovations.
  • Enforcement authority is vested exclusively in the Texas Office of the Attorney General, with significant civil penalties for violations and structured opportunities to cure noncompliance.

Overview

The Texas Responsible Artificial Intelligence Governance Act marks a significant move by Texas to lead in AI regulation at the state level. The act applies to any person or entity conducting business in Texas, producing products or services used by Texas residents, or developing or deploying AI systems within the state. Notably, certain governmental and healthcare entities are exempted.

The act defines an “artificial intelligence system” as any machine-based system that infers from inputs to generate outputs—such as content, decisions, predictions, or recommendations—that can influence physical or virtual environments. This definition encompasses systems involving machine learning, natural language processing, perception, speech, and content generation.

Unlike some other state AI laws that broadly address risks associated with AI, the Texas law focuses on a narrow, explicitly delineated set of harmful uses, particularly those involving biometric information.

Prohibited Practices

The legislation outlines several prohibited AI practices that businesses operating in Texas must avoid. These include:

  • Manipulating human behavior, particularly to incite self-harm, harm to others, or criminal activity.
  • Infringing upon constitutional rights or unlawfully discriminating against protected classes, such as race, color, national origin, sex, age, religion, or disability.
  • Creating illegal content, including AI-generated child sexual abuse material or deepfake content in violation of the Texas Penal Code.

Furthermore, governmental entities are prohibited from using AI tools to uniquely identify individuals through biometric data, or to capture images without consent, where doing so would infringe constitutional rights or violate other laws. Healthcare providers must also provide clear disclosures when patients interact with AI systems in their care or treatment.

Promoting Innovation: The Texas Artificial Intelligence Council and the Regulatory Sandbox Program

The act establishes the Texas Artificial Intelligence Council, a seven-member body with varied expertise appointed by state leadership. The Council’s mandate includes:

  • Identifying legislative improvements and providing guidance on the use of AI systems.
  • Evaluating laws that hinder AI system innovation and proposing reforms.
  • Assessing potential regulatory capture risks, such as undue influence by technology companies.

Additionally, the act introduces a regulatory sandbox program that allows approved participants to test AI systems for up to thirty-six months. The sandbox is designed to foster innovation while maintaining oversight: participants must submit detailed applications and file quarterly performance reports.

Enforcement and Penalties

The act does not provide a private right of action; enforcement authority rests exclusively with the Texas Office of the Attorney General. Civil penalties vary with the nature of the violation:

  • $10,000 to $12,000 per curable violation.
  • $80,000 to $200,000 per uncurable violation.
  • $2,000 to $40,000 per day for continuing violations.

A sixty-day cure period is provided before enforcement action is taken, and compliance with recognized AI risk management frameworks may establish a rebuttable presumption of reasonable care.

Looking Forward

The Texas Responsible Artificial Intelligence Governance Act positions Texas as a leader in state-level AI regulation. It represents a new approach to AI governance in the U.S., aiming to balance technological progress with consumer protections and common-sense restrictions. While its effectiveness remains to be seen, businesses operating in Texas should remain aware of the new law and consider revisions to their practices to align with its requirements.
