What Does the 2025 Artificial Intelligence Legislative and Regulatory Landscape Look Like for Employers?
The landscape of artificial intelligence (AI) legislation and regulation is rapidly evolving, particularly concerning how employers utilize AI systems. In the absence of comprehensive federal regulation, numerous states have enacted or are considering laws aimed at mitigating the risks associated with algorithmic discrimination in employment decisions.
The Concept of Algorithmic Discrimination
Algorithmic discrimination refers to differential treatment or disparate impact, based on protected characteristics such as age, color, ethnicity, disability, national origin, race, religion, veteran status, and sex, that results from the use of AI systems. AI systems can inadvertently produce discriminatory outcomes because of flawed or unrepresentative training data or because they replicate existing human biases. The risk is particularly acute for employers that rely on AI for critical employment decisions.
Current Federal Landscape
Despite an executive order from President Biden addressing AI development and use, there is still no federal legislation that specifically regulates AI systems in the context of employment or algorithmic discrimination. As a result, many states are stepping in to fill the regulatory void.
State Laws and Proposed Legislation
Various states have passed or are considering laws aimed at mitigating the risks of algorithmic discrimination. These laws generally impose a duty of reasonable care on employers, requiring them to assess and mitigate potential biases associated with their AI systems. Key components often include:
- Affirmative Reporting Requirements: Employers must notify individuals affected by AI-driven decisions and often provide options for data correction and appeals.
- Impact Assessments: Many states require assessments to identify biases against protected groups.
Enacted Laws
Colorado: Senate Bill 24-205
The Colorado Artificial Intelligence Act will take effect on February 1, 2026. This law adopts a risk-based approach similar to the European Union’s AI Act. It applies to Colorado businesses using AI systems for employment decisions, imposing reasonable care requirements and mandating disclosures when AI systems interact with consumers.
Illinois: House Bill 3773
Illinois’ HB 3773, which takes effect January 1, 2026, amends the Illinois Human Rights Act to protect employees against discrimination stemming from the use of AI in employment. The law prohibits the use of AI that produces discriminatory outcomes in areas such as recruitment and hiring and requires employers to notify employees when AI is used in employment decision-making.
New York City: Local Law 144 (LL 144)
Effective July 5, 2023, LL 144 requires employers to conduct bias audits of automated employment decision tools (AEDTs) and to notify candidates and employees about their use. The law emphasizes transparency and accountability for AI systems that substantially influence employment decisions.
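For employers trying to picture what a bias audit involves, the sketch below illustrates the impact-ratio arithmetic commonly used in LL 144 audits: each category's selection rate divided by the selection rate of the most-selected category. The figures, category labels, and benchmark shown are hypothetical, and this illustration is no substitute for the independent audit the law requires.

```python
# Illustrative sketch of the impact-ratio arithmetic used in bias audits of
# hiring tools: selection rate per category divided by the selection rate of
# the most-selected category. All figures and category labels are hypothetical;
# LL 144 requires the audit itself to be performed by an independent auditor.

# Hypothetical outcomes of an automated screening tool, by demographic category
outcomes = {
    "Category A": {"selected": 120, "assessed": 400},
    "Category B": {"selected": 45, "assessed": 200},
    "Category C": {"selected": 30, "assessed": 180},
}

# Selection rate = candidates selected / candidates assessed
selection_rates = {
    category: counts["selected"] / counts["assessed"]
    for category, counts in outcomes.items()
}

# Impact ratio = category's selection rate / highest selection rate
highest_rate = max(selection_rates.values())
impact_ratios = {
    category: rate / highest_rate for category, rate in selection_rates.items()
}

for category in outcomes:
    # The 0.80 benchmark reflects the EEOC's "four-fifths" rule of thumb,
    # not a threshold imposed by LL 144 itself.
    flag = " (below 0.80 benchmark)" if impact_ratios[category] < 0.80 else ""
    print(
        f"{category}: selection rate {selection_rates[category]:.2f}, "
        f"impact ratio {impact_ratios[category]:.2f}{flag}"
    )
```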
Pending Bills and Regulatory Developments
California Privacy Protection Agency (CPPA)
The California Privacy Protection Agency has released draft regulations under the California Consumer Privacy Act (CCPA) addressing AI and automated decision-making technology. The proposed rules would require businesses to provide pre-use notices to consumers and to explain how automated decision-making affects decisions about them.
California Civil Rights Council
The California Civil Rights Council’s proposed regulations would prohibit the use of AI that discriminates in hiring based on characteristics such as religious beliefs, disability, or medical condition. They would also require employers to maintain comprehensive records of AI system use and to conduct anti-bias testing.
Texas: 89(R) HB 1709
The proposed Texas Responsible AI Governance Act would impose obligations on developers and distributors of high-risk AI systems, mandating risk assessments and transparency measures while establishing penalties for non-compliance.
Conclusion
The regulatory landscape surrounding AI in employment is shifting, with many states taking proactive measures to address algorithmic discrimination. Employers utilizing AI must remain vigilant regarding the patchwork of emerging laws and prioritize transparency and proactive audits as part of their compliance strategy. As state legislation continues to proliferate, understanding and adapting to these changes will be crucial for effective risk management in AI-driven decision-making.