Texas Takes the Lead in AI Governance with New Legislation

Texas Legislature Passes Comprehensive AI Governance Act

On June 2, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (TX AI Act), which is now awaiting the governor’s signature or veto. If signed into law, the bill will take effect on January 1, 2026, positioning Texas as the fourth state after Colorado, Utah, and California to enact AI-specific legislation.

This legislation emerges at a critical juncture following the U.S. House of Representatives’ approval of a 10-year federal moratorium on state regulation of AI systems, which threatens to nullify existing and future state laws. Notably, 40 state attorneys general sent a bipartisan letter opposing this moratorium, highlighting the tension between federal and state governance of AI.

Scope of the Act

The TX AI Act applies to developers and deployers of any “artificial intelligence system,” defined as any machine-based system that infers from its inputs how to generate outputs capable of influencing physical or virtual environments. This scope is broader than the Colorado and Utah laws, which focus primarily on “high-risk” AI systems.

Key mandates include:

  • Providers of health care services must disclose to patients when AI systems are used in their practice.
  • Prohibitions on developing or deploying AI systems that cause harm, encourage self-harm, or facilitate criminal activity.
  • Restrictions on developing AI systems that infringe rights guaranteed under the U.S. Constitution or that discriminate on the basis of protected characteristics, with exceptions for insurers and financial institutions that comply with applicable industry regulations.
  • Specific prohibitions, carrying criminal penalties, against creating sexually explicit deepfake videos or child pornography.

Furthermore, state and local governments are barred from using AI for social scoring or for capturing individuals’ biometric data, and they must disclose when AI systems they deploy interact with consumers.

Regulatory and Enforcement Framework

The Texas Attorney General (AG) will hold exclusive enforcement authority, including the power to issue civil investigative demands to obtain training data and related metrics. Alleged violators receive notice and a 60-day period to cure the violation. Civil penalties range from $10,000 to $12,000 for curable violations, $80,000 to $200,000 for incurable violations, and $2,000 to $40,000 per day for continuing violations.

The legislation also establishes a Texas AI Council under the Department of Information Resources, tasked with overseeing the development and deployment of AI systems in the best interests of Texas citizens. This council will evaluate laws related to AI, advise state and local governments, and coordinate with other regulators. Each member serves a four-year term.

Additionally, a Regulatory Sandbox Program will allow companies to develop and test innovative AI systems in a controlled environment with temporary relief from certain regulatory requirements.

Implications for Businesses

Should the Texas AI Act be enacted, it will impose the most comprehensive state-level AI governance requirements to date. Given Texas’s size and its business-friendly environment, the law is likely to have significant national implications for AI development and regulation.

The act will strengthen the hand of the Texas AG, Ken Paxton, in consumer protection enforcement involving AI systems. His office has recently reached settlements and formed a specialized team focused on privacy laws, and the new act will likely intensify its scrutiny of AI technologies.

Takeaways

Businesses using AI across multiple jurisdictions must remain vigilant as state-level regulation evolves rapidly. The laws in Colorado, Utah, California, and Texas each impose unique requirements and carry substantial civil penalties for noncompliance. Texas’s comprehensive approach may serve as a model for other states considering similar legislation.

Moreover, businesses must be aware that traditional state laws can be applied to AI use. Companies must avoid misleading claims about AI capabilities, safeguard consumer personal information, and ensure their AI systems produce fair and unbiased results in compliance with state anti-discrimination statutes.

Ensuring compliance early in the AI system lifecycle is crucial for mitigating regulatory risks. Companies aiming to develop or deploy AI systems should consult experienced legal counsel to navigate this complex landscape.
