Texas Takes a Stand: New AI Regulations Set the Tone for Responsible Innovation

Texas Enacts New AI Law

On June 22, 2025, the Texas governor signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) into law, making Texas the second U.S. state, after Colorado, to enact comprehensive regulation of artificial intelligence (AI). The Act establishes categorical limitations on the development and deployment of AI systems and takes effect on January 1, 2026, one month before the Colorado AI Act.

Given the approaching effective date and the civil penalties the Act stipulates, companies that develop or deploy AI should evaluate their practices for compliance before the new requirements take hold.

Key Provisions of TRAIGA

TRAIGA details a range of prohibited practices regarding AI, specifically targeting the following:

  • Manipulation of Human Behavior: The Act prohibits the development or deployment of AI systems that intentionally aim to incite harmful behaviors, such as self-harm or criminal activity.
  • Social Scoring: TRAIGA forbids governmental entities from using AI systems to evaluate or classify individuals based on personal characteristics, with the intent to assign social scores that could lead to unfair treatment.
  • Capture of Biometric Data: The use of AI systems that identify individuals using their biometric data without consent is expressly prohibited.
  • Infringing on Constitutional Rights: The Act seeks to prevent AI systems from infringing upon individual rights guaranteed by the Constitution.
  • Unlawful Discrimination: Discriminatory practices against protected classes are prohibited under TRAIGA.
  • Certain Sexually Explicit Content: The Act prohibits developing or distributing AI systems intended to produce sexually explicit content depicting minors.

Transparency and Consumer Disclosure

TRAIGA mandates that governmental agencies and healthcare services disclose to consumers when they are interacting with an AI system. The disclosure must be made before or at the time of the interaction, ensuring consumers are informed from the outset.

Regulatory Sandbox and Innovation

A notable feature of TRAIGA is the establishment of a regulatory sandbox program, which allows businesses to test innovative AI systems for a limited period without full exposure to the Act's regulatory requirements. This initiative is designed to foster safe experimentation while providing clear guidelines.

Artificial Intelligence Council

The Act also creates the Texas Artificial Intelligence Council, a group of experts tasked with advising on various aspects of AI regulation, including ethics and public safety concerns.

Amendments to Texas’s Biometric Privacy Law

TRAIGA introduces amendments to Texas’s existing biometric privacy law, clarifying consent regarding the capture and storage of biometric identifiers. It emphasizes that consent is not implied solely by the public availability of an image or media.

Enforcement Mechanisms

The Texas attorney general holds exclusive authority to enforce TRAIGA, with civil penalties of up to $12,000 for curable violations and up to $200,000 for uncurable violations. Notably, the Act provides no private right of action, placing enforcement squarely with the state.

Conclusion

TRAIGA represents a significant step towards the regulation of AI technologies in Texas, emphasizing the need for ethical standards, consumer protection, and responsible innovation. As companies prepare for the law’s implementation, they must critically assess their AI systems to ensure compliance and mitigate potential risks associated with AI deployment.
