The Texas Responsible AI Governance Act and Its Potential Impact on Employers
On December 23, 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act), which positions Texas alongside other states in regulating artificial intelligence (AI) in the absence of federal legislation. The Act delineates obligations for developers, deployers, and distributors of specific AI systems within Texas.
The Act’s Regulation of Employers as Deployers of High-Risk Artificial Intelligence Systems
The Act aims to govern the use of high-risk artificial intelligence systems by employers and other deployers in Texas. High-risk systems are defined as those that contribute to or make consequential decisions, which can encompass significant employment-related choices such as hiring, performance evaluations, compensation, disciplinary actions, and terminations.
Notably, the Act excludes several common AI technologies from its coverage, including systems designed to detect decision-making patterns and anti-malware and antivirus programs.
Under the Act, employers will have a duty to exercise reasonable care to prevent algorithmic discrimination. This includes the responsibility to withdraw, disable, or recall any high-risk AI systems that fail to comply with the outlined regulations.
Key Requirements for Employers
To fulfill their responsibilities under the Act, employers must adhere to several requirements:
Human Oversight
Employers must ensure human oversight of high-risk AI systems. This oversight must be conducted by individuals with adequate competence, training, authority, and organizational support to oversee consequential decisions made by the AI.
Prompt Reporting of Discrimination Risks
Employers are required to report any discrimination risks without delay. They must notify the Artificial Intelligence Council, which will be established under the Act, no later than 10 days after becoming aware of such issues.
Regular AI Tool Assessments
Covered employers must conduct regular assessments of their high-risk AI systems. This includes an annual review to ensure that the system does not contribute to algorithmic discrimination.
Prompt Suspension
If an employer suspects that a high-risk AI system does not comply with the Act’s requirements, the employer must suspend its use and notify the system’s developer of the concern.
Frequent Impact Assessments
Employers must perform impact assessments on a semi-annual basis and within 90 days following any intentional or significant modifications to the system.
Clear Disclosure of AI Use
Prior to or at the time of interaction, employers must provide Texas-based individuals with a disclosure that includes:
- That they are interacting with an AI system.
- The purpose of the system.
- That the system may or will make a consequential decision affecting them.
- The nature of any consequential decision in which the system is or may be a contributing factor.
- The factors used in making any consequential decisions.
- Contact information of the deployer.
- A description of the system.
Takeaways for Employers
The Texas Responsible AI Governance Act is poised to be a significant topic during Texas’s upcoming legislative session, set to commence on January 14, 2025. If enacted, the Act would establish a consumer-protection-focused framework for AI regulation.
Employers should monitor the progress of the Act and any amendments to the proposed bill while also preparing for its potential passage. Here are some recommended actions:
- Develop policies and procedures governing the use of AI systems for employment decisions, including clear guidelines on the systems’ uses, decision-making processes, and approved users.
- Create an AI governance and risk-management framework that includes internal policies, procedures, and systems for reviewing AI tools, flagging risks, and reporting.
- Ensure human oversight over AI systems and provide training for users and those overseeing the AI systems.
- Allocate sufficient resources and budget to manage AI systems and comply with the Act.
- Conduct due diligence on AI vendors and developers to ensure compliance with the Act’s requirements regarding high-risk AI systems.
As the regulatory landscape for AI continues to evolve, employers must stay informed and proactive in adapting to new legal requirements.