Texas AI Regulation: Implications for Employers

The Texas Responsible AI Governance Act and Its Potential Impact on Employers

On December 23, 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act), which positions Texas alongside other states in regulating artificial intelligence (AI) in the absence of federal legislation. The Act delineates obligations for developers, deployers, and distributors of specific AI systems within Texas.

The Act’s Regulation of Employers as Deployers of High-Risk Artificial Intelligence Systems

The Act aims to govern the use of high-risk artificial intelligence systems by employers and other deployers in Texas. High-risk systems are defined as those that contribute to or make consequential decisions, which can encompass significant employment-related choices such as hiring, performance evaluations, compensation, disciplinary actions, and terminations.

Interestingly, the Act does not extend its coverage to several common AI technologies, including systems designed to detect decision-making patterns, as well as anti-malware and antivirus programs.

Under the Act, employers will have a duty to exercise reasonable care to prevent algorithmic discrimination. This includes the responsibility to withdraw, disable, or recall any high-risk AI systems that fail to comply with the outlined regulations.

Key Requirements for Employers

To fulfill their responsibilities under the Act, employers must adhere to several requirements:

Human Oversight

Employers are mandated to ensure human oversight of high-risk AI systems. The individuals responsible for that oversight must have adequate competence, training, authority, and organizational support to supervise consequential decisions made by the AI.

Prompt Reporting of Discrimination Risks

Employers are required to report any discrimination risks without delay. They must notify the Artificial Intelligence Council, which will be established under the Act, no later than 10 days after becoming aware of such issues.

Regular AI Tool Assessments

Covered employers must conduct regular assessments of their high-risk AI systems. This includes an annual review to ensure that the system does not contribute to algorithmic discrimination.

Prompt Suspension

If an employer suspects that a system does not comply with the requirements of the Act, the employer must suspend its use and inform the system’s developer of the concern.

Frequent Impact Assessments

Employers must perform impact assessments on a semi-annual basis and within 90 days following any intentional or significant modifications to the system.
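Compliance teams will effectively be tracking two clocks here: a routine semi-annual cycle and a 90-day window triggered by any intentional or significant modification. As a purely illustrative sketch (the function name, the 182-day approximation of "semi-annual," and the assumption that the earlier of the two deadlines controls are ours, not language from the bill), the scheduling logic might look like this:

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical constants -- the Act does not prescribe a calculation method.
SEMI_ANNUAL = timedelta(days=182)       # roughly six months between routine assessments
POST_MODIFICATION = timedelta(days=90)  # 90-day window after a significant modification

def next_assessment_due(last_assessment: date,
                        last_modification: Optional[date] = None) -> date:
    """Return the earlier of the routine semi-annual deadline and,
    if a significant modification occurred, the 90-day post-modification deadline."""
    deadlines = [last_assessment + SEMI_ANNUAL]
    if last_modification is not None:
        deadlines.append(last_modification + POST_MODIFICATION)
    return min(deadlines)
```

A system modified shortly after its last assessment would thus come due on the 90-day post-modification clock well before its next routine semi-annual review.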

Clear Disclosure of AI Use

Prior to or at the time of interaction, employers must provide a disclosure to any Texas-based individual interacting with an AI system. The disclosure must include:

  1. That they are interacting with an AI system.
  2. The purpose of the system.
  3. That the system may or will make a consequential decision affecting them.
  4. The nature of any consequential decision in which the system is or may be a contributing factor.
  5. The factors used in making any consequential decisions.
  6. Contact information of the deployer.
  7. A description of the system.
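Employers building the required notice into an application workflow may want to capture the seven disclosure elements above as a structured record so that no element is omitted. The sketch below is purely illustrative; the field names and the completeness check are our assumptions, not terminology from the bill:

```python
from dataclasses import dataclass, fields
from typing import List

@dataclass
class AIDisclosure:
    """Illustrative record of the seven disclosure elements; field names are hypothetical."""
    interacting_with_ai: bool                  # 1. that they are interacting with an AI system
    system_purpose: str                        # 2. the purpose of the system
    may_make_consequential_decision: bool      # 3. may/will make a consequential decision
    decision_nature: str                       # 4. nature of any consequential decision
    decision_factors: List[str]                # 5. factors used in the decision
    deployer_contact: str                      # 6. contact information of the deployer
    system_description: str                    # 7. a description of the system

def is_complete(d: AIDisclosure) -> bool:
    """Check that every element of the notice has been populated."""
    return all(getattr(d, f.name) not in (None, "", []) for f in fields(d))
```

A check like this could gate the interaction itself, so that the notice is delivered, complete, before or at the moment a Texas-based individual first interacts with the system.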

Takeaways for Employers

The Texas Responsible AI Governance Act is poised to be a significant topic during Texas’s upcoming legislative session, set to commence on January 14, 2025. If enacted, the Act will establish a consumer protection-focused framework for AI regulation.

Employers should monitor the progress of the Act and any amendments to the proposed bill while also preparing for its potential passage. Here are some recommended actions:

  1. Develop policies and procedures governing the use of AI systems for employment decisions, including clear guidelines on the systems’ uses, decision-making processes, and approved users.
Create an AI governance and risk-management framework that includes internal policies, procedures, and systems for reviewing AI tools, flagging risks, and reporting issues.
  3. Ensure human oversight over AI systems and provide training for users and those overseeing the AI systems.
Allocate sufficient resources and budget for managing AI systems and complying with the Act.
  5. Conduct due diligence on AI vendors and developers to ensure compliance with the Act’s requirements regarding high-risk AI systems.

As the regulatory landscape for AI continues to evolve, employers must stay informed and proactive in adapting to new legal requirements.
