AI-Assisted Hiring in 2026: Managing Discrimination Risk
For large employers, and for the technology companies that provide their hiring platforms, the risk profile around AI-assisted hiring has shifted from a “future concern” to an immediate issue. The core point is simple: using an algorithm does not relieve an employer of its anti-discrimination duties; if anything, it increases the need for validation, monitoring, documentation, and vendor oversight.
States and even municipalities have moved from general statements to targeted requirements (e.g., audits, notices, recordkeeping, and “high-risk” system duties), creating a patchwork that large, multi-state employers must navigate. Furthermore, existing state and federal anti-discrimination laws remain in effect and apply to AI-assisted employment decisions.
The Federal Baseline: AI Does Not Change the Rules
Even without a single comprehensive federal “AI hiring law,” the existing federal framework already creates significant exposure:
- Title VII and Disparate Impact: The Equal Employment Opportunity Commission (EEOC) has emphasized that employers using software or algorithms as “selection procedures” can face disparate impact liability if outcomes disproportionately exclude protected groups and the employer cannot demonstrate job-relatedness or business necessity.
- Americans with Disabilities Act (ADA) and Accommodations: Algorithmic tools can introduce federal discrimination law risk in three key ways:
- “Screening out” individuals based on disability-related traits;
- Using tools that conduct disability-related inquiries or medical examinations pre-offer;
- Failing to provide reasonable accommodations in an AI-driven process.
In the AI context, the defense posture may depend less on whether discrimination was “intended” and more on whether the hiring system is measurable, monitored, and defensible as job-related.
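To make “measurable” concrete: the basic arithmetic of disparate impact is a comparison of selection rates across groups. The snippet below is a minimal sketch using hypothetical applicant data, with the EEOC’s four-fifths rule treated as an internal screening benchmark rather than a legal threshold; real analyses require carefully defined groups, adequate sample sizes, and statistical testing under counsel’s supervision.

```python
# Illustrative only: a minimal adverse impact check on selection rates.
# Group labels, data, and the 0.8 benchmark are hypothetical or rule-of-thumb
# values, not legal conclusions.

from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected: bool) pairs."""
    applicants, selected = Counter(), Counter()
    for group, was_selected in records:
        applicants[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applicants[g] for g in applicants}

def adverse_impact_flags(rates, benchmark=0.8):
    """Compare each group's rate to the highest rate (EEOC four-fifths rule of thumb)."""
    top = max(rates.values())
    return {g: (r / top, r / top < benchmark) for g, r in rates.items()}

sample = [("A", True), ("A", False), ("A", True), ("B", False),
          ("B", True), ("B", False), ("B", False)]
print(adverse_impact_flags(selection_rates(sample)))
```

Here group B is selected at 25 percent against group A's 67 percent, an impact ratio of roughly 0.38, which is the kind of result that would trigger further review and a job-relatedness analysis.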
The Workday Litigation: Vendor Accountability
Mobley v. Workday is widely viewed as a bellwether because it tests whether an HR technology provider can itself face liability under federal anti-discrimination statutes. The court allowed claims to proceed on the theory that Workday could be treated as an “agent” carrying out hiring functions delegated to it by employers.
Two aspects are particularly relevant for general counsel at large enterprises:
- Delegation Creates Potential Liability: The more a company relies on a system to reject, rank, or route candidates with minimal human intervention, the easier it becomes for plaintiffs to argue that the tool effectively performs a hiring function—making both the employer and vendor accountable.
- Collective/Class Posture Increases Settlement Pressure: The court granted conditional certification of an Age Discrimination in Employment Act (ADEA) collective action, escalating the notice, discovery scope, and downside exposure.
AI-Specific State and Municipal Hiring Laws
New York City: Local Law 144 prohibits employers and employment agencies from using an “automated employment decision tool” (AEDT) unless the tool has undergone a bias audit within the prior year, a summary of the audit results is publicly available, and candidates receive the prescribed notices.
Operational challenges include aligning the audit with how the tool actually functions and is used, identifying which stages of the hiring process the tool covers, and providing the required transparency about the categories of data collected and analyzed.
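For tools that output scores rather than binary decisions, the audit methodology is commonly described in terms of “scoring rates” (the share of each category scoring above the sample median) and impact ratios measured against the highest-rate category. The sketch below uses hypothetical categories and scores and omits the intersectional breakdowns the rules call for; the exact methodology should be confirmed against the current rules issued by the NYC Department of Consumer and Worker Protection.

```python
# Rough sketch of a scoring-rate impact ratio in the style described for
# NYC Local Law 144 bias audits. Categories and scores are hypothetical;
# the official methodology (intersectional categories, small-sample handling)
# governs an actual audit.

from statistics import median

def scoring_rates(scored):
    """scored: list of (category, score). Rate of scoring above the sample median."""
    med = median(score for _, score in scored)
    totals, above = {}, {}
    for category, score in scored:
        totals[category] = totals.get(category, 0) + 1
        if score > med:
            above[category] = above.get(category, 0) + 1
    return {c: above.get(c, 0) / totals[c] for c in totals}

def impact_ratios(rates):
    """Each category's scoring rate divided by the highest category's rate."""
    top = max(rates.values())
    return {c: r / top for c, r in rates.items()}

candidates = [("X", 71), ("X", 64), ("X", 90), ("Y", 55), ("Y", 62), ("Y", 83)]
print(impact_ratios(scoring_rates(candidates)))
```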
California: Regulations on “automated-decision systems” under the Fair Employment and Housing Act (FEHA), effective October 1, 2025, clarify how existing FEHA principles apply to hiring, promotion, and other employment decisions influenced by automated systems.
Illinois: Illinois regulates on two distinct tracks: a statute specific to AI analysis of video interviews and an amendment to the Illinois Human Rights Act addressing the use of AI in employment decisions. In practice, complying with both requires employers to inventory their HR tech stack and identify which functions each tool performs in the hiring process.
Colorado: SB 24-205 establishes obligations for developers and deployers of “high-risk” AI systems used in consequential decisions, focusing on protecting consumers from known or foreseeable risks of algorithmic discrimination.
Clarifications and Existing Laws
Some states have clarified that existing laws apply to algorithmic discrimination:
- New Jersey: The Division on Civil Rights issued guidance stating that the NJ Law Against Discrimination covers algorithmic discrimination.
- Oregon: The Attorney General issued guidance emphasizing that companies using AI must comply with existing consumer protection and privacy laws.
Practical Compliance Programs for Large Employers in 2026
For in-house counsel, the goal is to create a defensible system that reduces the likelihood of discriminatory outcomes:
- Start with an Inventory: Document every tool that screens, ranks, recommends, or routes candidates, classifying each by its influence on outcomes (a simple sketch of such an inventory and a recurring check appears after this list).
- Build a Testing Cycle: Treat adverse impact analysis as a recurring control, documenting thresholds and preserving prior versions of models for transparency.
- Plan Accommodations: Embed reasonable accommodation pathways into workflows, ensuring they are not an afterthought.
- Allocate Risk in Vendor Contracts: Address transparency, audit cooperation, record retention, and jurisdiction-specific disclosure obligations in vendor contracts.
- Maintain a Litigation-Ready Record: Prepare for deep discovery by preserving documentation of model inputs, adverse impact analyses, and vendor communications.
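To illustrate how the inventory and testing-cycle controls can reinforce each other, the sketch below models a hypothetical tool record and a recurring check that flags stale or missing audits and missing adverse impact documentation. Every field name, threshold, and jurisdiction here is illustrative, not a prescribed schema.

```python
# Hypothetical inventory record and a simple recurring control: flag tools
# whose last bias audit is stale or whose documentation is missing.
# Field names, thresholds, and jurisdictions are illustrative only.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class HiringTool:
    name: str
    function: str                          # e.g., "resume screening", "ranking"
    jurisdictions: list = field(default_factory=list)
    last_bias_audit: date | None = None
    adverse_impact_doc: str | None = None  # pointer to the latest analysis

def compliance_flags(tools, max_audit_age_days=365):
    """Return tools needing attention: stale or missing audits, missing documentation."""
    today = date.today()
    flags = {}
    for tool in tools:
        issues = []
        if tool.last_bias_audit is None:
            issues.append("no bias audit on record")
        elif (today - tool.last_bias_audit) > timedelta(days=max_audit_age_days):
            issues.append("bias audit older than the documented cycle")
        if tool.adverse_impact_doc is None:
            issues.append("no adverse impact analysis documented")
        if issues:
            flags[tool.name] = issues
    return flags

inventory = [
    HiringTool("ResumeRanker", "ranking", ["NYC", "CA"], date(2024, 6, 1), None),
    HiringTool("VideoScreen", "video interview analysis", ["IL"], None, "link-to-latest-analysis"),
]
print(compliance_flags(inventory))
```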
Summary
In 2026, the challenge lies in demonstrating governance over automated selection procedures commensurate with their influence on hiring outcomes. The most defensible posture for large employers is to treat AI-assisted hiring as a regulated selection system and to have the supporting documentation in place before, not after, a challenge arises.