Texas Takes a Stand: New AI Regulations Set the Tone for Responsible Innovation

Texas Enacts New AI Law

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law, making Texas the second U.S. state, after Colorado, to enact comprehensive regulation of artificial intelligence (AI). The Act establishes categorical limitations on the development and deployment of AI systems and takes effect on January 1, 2026, one month before the Colorado AI Act.

Given the approaching effective date and the civil penalties the Act carries, companies that develop or deploy AI should evaluate their practices for compliance before the new requirements take hold.

Key Provisions of TRAIGA

TRAIGA details a range of prohibited practices regarding AI, specifically targeting the following:

  • Manipulation of Human Behavior: The Act prohibits developing or deploying AI systems with the intent to incite harmful behavior, such as self-harm or criminal activity.
  • Social Scoring: TRAIGA forbids governmental entities from using AI systems to evaluate or classify individuals based on personal characteristics, with the intent to assign social scores that could lead to unfair treatment.
  • Capture of Biometric Data: The use of AI systems that identify individuals using their biometric data without consent is expressly prohibited.
  • Infringing on Constitutional Rights: The Act seeks to prevent AI systems from infringing upon individual rights guaranteed by the Constitution.
  • Unlawful Discrimination: Discriminatory practices against protected classes are prohibited under TRAIGA.
  • Certain Sexually Explicit Content: The Act restricts the development and distribution of AI systems related to explicit content involving minors.

Transparency and Consumer Disclosure

TRAIGA requires governmental agencies and healthcare providers to disclose to consumers when they are interacting with an AI system. The disclosure must be made before or at the time of the interaction, ensuring that consumers are informed and that the interaction is transparent.

Regulatory Sandbox and Innovation

A notable feature of TRAIGA is the establishment of a regulatory sandbox program, allowing businesses to test innovative AI systems without immediate regulatory compliance. This initiative is designed to foster safe experimentation while providing clear guidelines.

Artificial Intelligence Council

The Act also creates the Texas Artificial Intelligence Council, a group of experts tasked with advising on various aspects of AI regulation, including ethics and public safety concerns.

Amendments to Texas’s Biometric Privacy Law

TRAIGA introduces amendments to Texas’s existing biometric privacy law, clarifying consent regarding the capture and storage of biometric identifiers. It emphasizes that consent is not implied solely by the public availability of an image or media.

Enforcement Mechanisms

The Texas attorney general holds exclusive authority to initiate actions against violations of TRAIGA, with provisions allowing for civil penalties of up to $12,000 for curable violations and $200,000 for uncurable violations. Notably, there is no provision for a private right of action, emphasizing the regulatory focus on state enforcement.

Conclusion

TRAIGA represents a significant step toward the regulation of AI technologies in Texas, emphasizing ethical standards, consumer protection, and responsible innovation. As the law's effective date approaches, companies should critically assess their AI systems to ensure compliance and mitigate the risks associated with AI deployment.
