Texas Takes the Lead in AI Governance with New Legislation

Texas Legislature Passes Comprehensive AI Governance Act

On June 2, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (TX AI Act), which is now awaiting the governor’s signature or veto. If signed into law, the bill will take effect on January 1, 2026, positioning Texas as the fourth state after Colorado, Utah, and California to enact AI-specific legislation.

This legislation arrives at a critical juncture: the U.S. House of Representatives recently approved a 10-year federal moratorium on state regulation of AI systems that, if enacted, would preempt existing and future state AI laws. Notably, 40 state attorneys general sent a bipartisan letter opposing the moratorium, underscoring the tension between federal and state governance of AI.

Scope of the Act

The TX AI Act applies to developers and deployers of any “artificial intelligence system,” defined as any machine-based system that infers from its inputs how to generate outputs that can influence physical or virtual environments. This scope is broader than the Colorado and Utah laws, which focus primarily on “high-risk” AI systems.

Key mandates include:

  • Providers of health care services must disclose to patients when AI systems are used in their care.
  • Prohibitions on developing or deploying AI systems that incite or encourage self-harm, harm to others, or criminal activity.
  • Restrictions on developing or deploying AI that infringes rights guaranteed under the U.S. Constitution or unlawfully discriminates based on protected characteristics, with exceptions for insurance and financial institutions that comply with applicable industry regulations.
  • Specific prohibitions, carrying criminal penalties, on creating sexually explicit deepfake videos or child pornography.

In addition, state and local government agencies are barred from using AI for social scoring or from capturing individuals’ biometric data, and they must disclose when consumers are interacting with an AI system.

Regulatory and Enforcement Framework

The Texas Attorney General (AG) will hold exclusive enforcement authority, including the power to issue civil investigative demands for training data and related metrics. Alleged violators receive notice and a 60-day period to cure the violation. Civil penalties range from $10,000 to $12,000 for curable violations, $80,000 to $200,000 for incurable violations, and $2,000 to $40,000 per day for continuing violations.
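
As a rough illustration of how these tiers could compound, the sketch below estimates a hypothetical exposure range using the figures quoted above. It is a minimal Python sketch under assumed inputs; the function name, parameters, and example scenario are illustrative and are not drawn from the statute.

    # Hypothetical sketch: estimating civil penalty exposure using the tiers
    # quoted above. The scenario below is an assumption for illustration only;
    # actual penalties depend on how the Texas AG characterizes each violation.

    CURABLE = (10_000, 12_000)            # per curable violation not cured within 60 days
    INCURABLE = (80_000, 200_000)         # per incurable violation
    CONTINUING_PER_DAY = (2_000, 40_000)  # per day of a continuing violation

    def exposure(curable: int, incurable: int, continuing_days: int) -> tuple[int, int]:
        """Return a (low, high) estimate of total civil penalty exposure."""
        low = (curable * CURABLE[0]
               + incurable * INCURABLE[0]
               + continuing_days * CONTINUING_PER_DAY[0])
        high = (curable * CURABLE[1]
                + incurable * INCURABLE[1]
                + continuing_days * CONTINUING_PER_DAY[1])
        return low, high

    # Example: two uncured curable violations, one incurable violation,
    # and a 30-day continuing violation.
    low, high = exposure(curable=2, incurable=1, continuing_days=30)
    print(f"Estimated exposure: ${low:,} to ${high:,}")  # $160,000 to $1,424,000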

The legislation also establishes a Texas AI Council under the Department of Information Resources, tasked with overseeing the development and deployment of AI systems in the best interests of Texas citizens. This council will evaluate laws related to AI, advise state and local governments, and coordinate with other regulators. Each member serves a four-year term.

Additionally, a Regulatory Sandbox Program will allow companies to develop and test innovative AI systems in a controlled environment with temporary relief from certain enforcement provisions during the testing period.

Implications for Businesses

Should the Texas AI Act be enacted, it will impose the most comprehensive state-level AI governance regime in the United States to date. Given Texas’s size and its business-friendly environment, the law is likely to have significant national implications for AI development and regulation.

The act will further empower Texas AG Ken Paxton’s consumer protection enforcement efforts related to AI systems. His office has recently reached settlements and formed a specialized team focused on enforcing privacy laws, signaling intensified scrutiny of AI technologies.

Takeaways

Businesses using AI across multiple jurisdictions must remain vigilant as state-level regulation evolves rapidly. Colorado, Utah, California, and Texas each impose distinct requirements, and noncompliance with any of them can carry substantial civil penalties. Texas’s comprehensive approach may serve as a model for other states considering similar legislation.

Moreover, businesses must be aware that traditional state laws can be applied to AI use. Companies must avoid misleading claims about AI capabilities, safeguard consumer personal information, and ensure their AI systems produce fair and unbiased results in compliance with state anti-discrimination statutes.

Ensuring compliance early in the AI system lifecycle is crucial for mitigating regulatory risks. Companies aiming to develop or deploy AI systems should consult experienced legal counsel to navigate this complex landscape.
