Texas Takes a Stand: New AI Regulations Set the Tone for Responsible Innovation

Texas Enacts New AI Law

On June 22, 2025, the Texas governor signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law, making Texas the second U.S. state, after Colorado, to enact comprehensive regulation of artificial intelligence (AI). The Act establishes categorical limitations on the development and deployment of AI systems and takes effect on January 1, 2026, one month before the Colorado AI Act.

Given the approaching effective date and the civil penalties the Act provides, companies that use AI should evaluate their practices for compliance before the new requirements take effect.

Key Provisions of TRAIGA

TRAIGA details a range of prohibited practices regarding AI, specifically targeting the following:

  • Manipulation of Human Behavior: The Act prohibits the development or deployment of AI systems that intentionally aim to incite harmful behaviors, such as self-harm or criminal activity.
  • Social Scoring: TRAIGA forbids governmental entities from using AI systems to evaluate or classify individuals based on personal characteristics, with the intent to assign social scores that could lead to unfair treatment.
  • Capture of Biometric Data: The use of AI systems that identify individuals using their biometric data without consent is expressly prohibited.
  • Infringing on Constitutional Rights: The Act seeks to prevent AI systems from infringing upon individual rights guaranteed by the Constitution.
  • Unlawful Discrimination: Discriminatory practices against protected classes are prohibited under TRAIGA.
  • Certain Sexually Explicit Content: The Act prohibits the development and distribution of AI systems intended to produce sexually explicit content depicting minors.

Transparency and Consumer Disclosure

TRAIGA mandates that governmental agencies and healthcare services disclose to consumers when they are interacting with AI systems. This requirement aims for clarity and transparency, ensuring that consumers are informed prior to or at the time of interaction.

Regulatory Sandbox and Innovation

A notable feature of TRAIGA is the establishment of a regulatory sandbox program, allowing approved participants to test innovative AI systems for a limited period without full regulatory compliance. This initiative is designed to foster safe experimentation under clear guidelines.

Artificial Intelligence Council

The Act also creates the Texas Artificial Intelligence Council, a group of experts tasked with advising on various aspects of AI regulation, including ethics and public safety concerns.

Amendments to Texas’s Biometric Privacy Law

TRAIGA introduces amendments to Texas’s existing biometric privacy law, clarifying consent regarding the capture and storage of biometric identifiers. It emphasizes that consent is not implied solely by the public availability of an image or media.

Enforcement Mechanisms

The Texas attorney general holds exclusive authority to enforce TRAIGA, with civil penalties of up to $12,000 per curable violation and up to $200,000 per uncurable violation. Notably, the Act provides no private right of action, leaving enforcement entirely to the state.

Conclusion

TRAIGA represents a significant step towards the regulation of AI technologies in Texas, emphasizing the need for ethical standards, consumer protection, and responsible innovation. As companies prepare for the law’s implementation, they must critically assess their AI systems to ensure compliance and mitigate potential risks associated with AI deployment.
