Texas Enacts Groundbreaking AI Governance Law


On June 22, 2025, Texas enacted the Texas Responsible Artificial Intelligence Governance Act, comprehensive AI legislation that takes effect January 1, 2026. The law makes Texas the second state, after Colorado, to adopt a comprehensive AI regulatory framework. It aims to balance businesses' interest in AI innovation with consumer protection, anti-discrimination safeguards, and ethical considerations.

Quick Hits

  • The Texas Responsible Artificial Intelligence Governance Act establishes a broad framework for the acceptable development, deployment, and oversight of AI systems in Texas.
  • The act identifies certain acceptable and unacceptable uses of AI systems, creates the Texas Artificial Intelligence Council to oversee AI governance, and introduces a regulatory sandbox program for testing AI innovations.
  • Enforcement authority is vested exclusively in the Texas Office of the Attorney General, with significant civil penalties for violations and structured opportunities to cure noncompliance.

Overview

The Texas Responsible Artificial Intelligence Governance Act marks a significant move by Texas to lead in AI regulation at the state level. The act applies to any person or entity conducting business in Texas, producing products or services used by Texas residents, or developing or deploying AI systems within the state. Notably, certain governmental and healthcare entities are exempted.

The act defines an “artificial intelligence system” as any machine-based system that infers from inputs to generate outputs—such as content, decisions, predictions, or recommendations—that can influence physical or virtual environments. This definition encompasses systems involving machine learning, natural language processing, perception, speech, and content generation.

Unlike some other state AI laws that broadly address risks associated with AI, the Texas law focuses on a narrow, explicitly delineated set of harmful uses, particularly those involving biometric information.

Prohibited Practices

The legislation outlines several prohibited AI practices that businesses operating in Texas must avoid. These include:

  • Manipulating human behavior, particularly to incite self-harm, harm to others, or criminal activity.
  • Infringing upon constitutional rights or unlawfully discriminating against protected classes, such as race, color, national origin, sex, age, religion, or disability.
  • Creating illegal content, including AI-generated child sexual abuse material or deepfake content in violation of the Texas Penal Code.

Furthermore, governmental entities are prohibited from using AI tools to uniquely identify individuals through biometric data, or from capturing images without consent, where doing so infringes constitutional rights or violates other law. Healthcare providers must also clearly disclose to patients when they are interacting with AI systems in their care or treatment.

Promoting Innovation: The Texas Artificial Intelligence Council and the Regulatory Sandbox Program

The act establishes the Texas Artificial Intelligence Council, a seven-member body with varied expertise appointed by state leadership. The Council’s mandate includes:

  • Identifying legislative improvements and providing guidance on the use of AI systems.
  • Evaluating laws that hinder AI system innovation and proposing reforms.
  • Assessing potential regulatory capture risks, such as undue influence by technology companies.

Additionally, the act introduces a regulatory sandbox allowing approved participants to test AI systems for up to thirty-six months. The sandbox is designed to foster innovation while maintaining oversight: in exchange for regulatory flexibility, participants must submit detailed applications and quarterly performance reports.

Enforcement and Penalties

The act does not create a private right of action; enforcement authority rests exclusively with the Texas Office of the Attorney General. Penalties scale with the nature of the violation:

  • $10,000 to $12,000 per curable violation.
  • $80,000 to $200,000 per uncurable violation.
  • $2,000 to $40,000 per day for continuing violations.

A sixty-day cure period is provided before enforcement action is taken, and compliance with recognized AI risk management frameworks may establish a rebuttable presumption of reasonable care.

Looking Forward

The Texas Responsible Artificial Intelligence Governance Act positions Texas as a leader in state-level AI regulation. It represents a new approach to AI governance in the U.S., aiming to balance technological progress with consumer protections and common-sense restrictions. While its effectiveness remains to be seen, businesses operating in Texas should remain aware of the new law and consider revisions to their practices to align with its requirements.
