Texas Legislature Passes Comprehensive AI Governance Act
On June 2, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (TX AI Act), which is now awaiting the governor’s signature or veto. If signed into law, the bill will take effect on January 1, 2026, positioning Texas as the fourth state after Colorado, Utah, and California to enact AI-specific legislation.
This legislation emerges at a critical juncture following the U.S. House of Representatives’ approval of a 10-year federal moratorium on state regulation of AI systems, which threatens to nullify existing and future state laws. Notably, 40 state attorneys general sent a bipartisan letter opposing this moratorium, highlighting the tension between federal and state governance of AI.
Scope of the Act
The TX AI Act applies to developers and deployers of any “artificial intelligence system,” broadly defined as any machine-based system that infers from its inputs how to generate outputs that can influence physical or virtual environments. This scope is broader than that of the Colorado and Utah laws, which focus primarily on “high-risk” AI systems.
Key mandates include:
- Providers of health care services must disclose to patients when AI systems are used in their care.
- Prohibitions on developing or deploying AI systems that cause harm, encourage self-harm, or encourage criminal activity.
- Restrictions on developing AI that infringes on rights guaranteed under the U.S. Constitution or that discriminates based on protected characteristics, with exceptions for insurance and financial institutions that comply with applicable industry regulations.
- Specific prohibitions, carrying criminal penalties, against creating sexually explicit deepfake videos or child pornography.
Furthermore, state and local government entities are prohibited from using AI for social scoring or for capturing individuals’ biometric data, and they must disclose when consumers are interacting with an AI system.
Regulatory and Enforcement Framework
The Texas Attorney General (AG) will hold exclusive enforcement authority, including the power to issue civil investigative demands for training data and related metrics. Before an enforcement action, an alleged violator receives notice and a 60-day period to cure the violation. Civil penalties range from $10,000 to $12,000 per curable violation, $80,000 to $200,000 per uncurable violation, and $2,000 to $40,000 per day for continuing violations.
The legislation also establishes a Texas AI Council under the Department of Information Resources, tasked with overseeing the development and deployment of AI systems in the best interests of Texas citizens. This council will evaluate laws related to AI, advise state and local governments, and coordinate with other regulators. Each member serves a four-year term.
Additionally, a Regulatory Sandbox Program will allow companies to develop and test innovative AI systems in a controlled environment with temporary relief from certain licensing and regulatory requirements.
Implications for Businesses
Should the Texas AI Act be enacted, it will impose the most comprehensive state AI governance regime to date. Given Texas’s size and business-friendly environment, the law is likely to have significant national implications for AI development and regulation.
The act will further empower Texas Attorney General Ken Paxton’s consumer protection enforcement efforts related to AI systems. His office has already pursued settlements and formed a specialized team focused on enforcing state privacy laws, signaling intensified scrutiny of AI technologies.
Takeaways
Businesses using AI across multiple jurisdictions must remain vigilant as state-level regulations evolve rapidly. Colorado, Utah, California, and Texas each impose their own requirements, and each carries substantial civil penalties for noncompliance. Texas’s comprehensive approach may serve as a model for other states considering similar legislation.
Moreover, businesses must be aware that traditional state laws can be applied to AI use. Companies must avoid misleading claims about AI capabilities, safeguard consumer personal information, and ensure their AI systems produce fair and unbiased results in compliance with state anti-discrimination statutes.
Ensuring compliance early in the AI system lifecycle is crucial for mitigating regulatory risks. Companies aiming to develop or deploy AI systems should consult experienced legal counsel to navigate this complex landscape.