Texas Takes the Lead in AI Governance with New Legislation

Texas Legislature Passes Comprehensive AI Governance Act

On June 2, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (TX AI Act), which is now awaiting the governor’s signature or veto. If signed into law, the bill will take effect on January 1, 2026, positioning Texas as the fourth state after Colorado, Utah, and California to enact AI-specific legislation.

This legislation emerges at a critical juncture: the U.S. House of Representatives has approved a 10-year federal moratorium on state regulation of AI systems that, if enacted, would preempt existing and future state AI laws. Notably, 40 state attorneys general sent a bipartisan letter opposing the moratorium, highlighting the tension between federal and state governance of AI.

Scope of the Act

The TX AI Act applies to developers and deployers of any “artificial intelligence system,” broadly defined as a machine-based system that infers from its inputs how to generate outputs that can influence physical or virtual environments. This scope is wider than that of the Colorado and Utah laws, which focus primarily on “high-risk” and generative AI systems, respectively.

Key mandates include:

  • Health care providers must disclose to patients when AI systems are used in their care.
  • Prohibitions on developing or deploying AI systems intended to incite or encourage self-harm, harm to others, or criminal activity.
  • Restrictions on developing or deploying AI systems that infringe rights guaranteed under the U.S. Constitution or that unlawfully discriminate against protected classes, with exceptions for insurance and financial institutions that comply with applicable industry regulations.
  • Specific prohibitions, carrying criminal penalties, on creating sexually explicit deepfake videos or child pornography.

Furthermore, state and local government agencies are barred from using AI for social scoring or to capture individuals’ biometric data, and they must disclose when consumers are interacting with an AI system.

Regulatory and Enforcement Framework

The Texas Attorney General (AG) will hold exclusive enforcement authority, including the power to issue civil investigative demands to obtain training data and related metrics. Before an enforcement action, alleged violators receive notice and a 60-day period to cure the violation. Civil penalties range from $10,000 to $12,000 per curable violation, $80,000 to $200,000 per incurable violation, and $2,000 to $40,000 for each day a violation continues.
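For a rough sense of the arithmetic, the sketch below models these penalty tiers as simple per-unit ranges. The tier labels, the per-day aggregation, and the exposure function are illustrative assumptions for this example, not statutory mechanics.

```python
# Illustrative sketch of the TX AI Act penalty tiers described above.
# Tier labels, cure-period handling, and aggregation are assumptions,
# not statutory text.

PENALTY_RANGES = {
    "curable": (10_000, 12_000),      # per curable violation
    "incurable": (80_000, 200_000),   # per incurable violation
    "continuing": (2_000, 40_000),    # per day a violation continues
}

def exposure(kind: str, count: int = 1, days: int = 0) -> tuple[int, int]:
    """Return the (minimum, maximum) civil penalty exposure for one tier.

    kind  -- "curable", "incurable", or "continuing"
    count -- number of violations (ignored for "continuing")
    days  -- number of days, used only for continuing violations
    """
    low, high = PENALTY_RANGES[kind]
    units = days if kind == "continuing" else count
    return low * units, high * units

if __name__ == "__main__":
    # Example: one incurable violation plus a 30-day continuing violation.
    print(exposure("incurable"))            # (80000, 200000)
    print(exposure("continuing", days=30))  # (60000, 1200000)
```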

The legislation also establishes a Texas AI Council under the Department of Information Resources, tasked with overseeing the development and deployment of AI systems in the best interests of Texas citizens. This council will evaluate laws related to AI, advise state and local governments, and coordinate with other regulators. Each member serves a four-year term.

Additionally, a Regulatory Sandbox Program will allow companies to develop and test innovative AI systems in a controlled environment, with temporary relief from certain regulatory requirements and enforcement during the testing period.

Implications for Businesses

Should the Texas AI Act be enacted, it would impose the most comprehensive state-level governance requirements on AI systems to date. Given Texas’s size and its business-friendly environment, the law is likely to have significant national implications for AI development and regulation.

The act would also strengthen the hand of Texas AG Ken Paxton in consumer protection enforcement involving AI systems. His office’s recent actions include settlements and the formation of a specialized team focused on privacy laws, signaling intensified scrutiny of AI technologies.

Takeaways

Businesses using AI across multiple jurisdictions must remain vigilant as state-level regulation evolves rapidly. The Colorado, Utah, California, and Texas laws each impose unique requirements and carry substantial civil penalties for noncompliance. Texas’s comprehensive approach may serve as a model for other states considering similar legislation.

Moreover, businesses must be aware that traditional state laws can be applied to AI use. Companies must avoid misleading claims about AI capabilities, safeguard consumer personal information, and ensure their AI systems produce fair and unbiased results in compliance with state anti-discrimination statutes.

Ensuring compliance early in the AI system lifecycle is crucial for mitigating regulatory risks. Companies aiming to develop or deploy AI systems should consult experienced legal counsel to navigate this complex landscape.
