Texas AI Regulation: Implications for Employers

The Texas Responsible AI Governance Act and Its Potential Impact on Employers

On December 23, 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act), which positions Texas alongside other states in regulating artificial intelligence (AI) in the absence of federal legislation. The Act delineates obligations for developers, deployers, and distributors of specific AI systems within Texas.

The Act’s Regulation of Employers as Deployers of High-Risk Artificial Intelligence Systems

The Act aims to govern the use of high-risk artificial intelligence systems by employers and other deployers in Texas. High-risk systems are defined as those that contribute to or make consequential decisions, which can encompass significant employment-related choices such as hiring, performance evaluations, compensation, disciplinary actions, and terminations.

Notably, the Act exempts several common AI technologies from coverage, including systems designed only to detect decision-making patterns, as well as anti-malware and antivirus programs.

Under the Act, employers will have a duty to exercise reasonable care to prevent algorithmic discrimination. This includes the responsibility to withdraw, disable, or recall any high-risk AI systems that fail to comply with the outlined regulations.

Key Requirements for Employers

To fulfill their responsibilities under the Act, employers must adhere to several requirements:

Human Oversight

Employers are mandated to ensure human oversight of high-risk AI systems. This oversight must be conducted by individuals possessing adequate competence, training, authority, and organizational support to oversee consequential decisions made by the AI.

Prompt Reporting of Discrimination Risks

Employers are required to report any discrimination risks without delay. They must notify the Artificial Intelligence Council, which will be established under the Act, no later than 10 days after becoming aware of such issues.

Regular AI Tool Assessments

Covered employers must conduct regular assessments of their high-risk AI systems. This includes an annual review to ensure that the system does not contribute to algorithmic discrimination.

Prompt Suspension

An employer that suspects a system does not comply with the Act’s requirements must suspend its use and inform the system’s developer of its concerns.

Frequent Impact Assessments

Employers must perform impact assessments on a semi-annual basis and within 90 days following any intentional or significant modifications to the system.

Clear Disclosure of AI Use

Before or at the time of interaction, employers must provide a disclosure to any Texas-based individual interacting with an AI system. The disclosure must include:

  1. That they are interacting with an AI system.
  2. The purpose of the system.
  3. That the system may or will make a consequential decision affecting them.
  4. The nature of any consequential decision in which the system is or may be a contributing factor.
  5. The factors used in making any consequential decisions.
  6. Contact information of the deployer.
  7. A description of the system.

Takeaways for Employers

The Texas Responsible AI Governance Act is poised to be a significant topic during Texas’s upcoming legislative session, set to commence on January 14, 2025. If enacted, the Act will establish a consumer protection-focused framework for AI regulation.

Employers should monitor the progress of the Act and any amendments to the proposed bill while also preparing for its potential passage. Here are some recommended actions:

  1. Develop policies and procedures governing the use of AI systems for employment decisions, including clear guidelines on the systems’ uses, decision-making processes, and approved users.
  2. Create an AI governance and risk-management framework that includes internal policies, procedures, and systems for reviewing, flagging risks, and reporting.
  3. Ensure human oversight over AI systems and provide training for users and those overseeing the AI systems.
  4. Allocate sufficient resources and budget for managing AI systems and complying with the Act.
  5. Conduct due diligence on AI vendors and developers to ensure compliance with the Act’s requirements regarding high-risk AI systems.

As the regulatory landscape for AI continues to evolve, employers must stay informed and proactive in adapting to new legal requirements.
