California’s AI Hiring Rules Are Now in Effect—and New Lawsuits Expand Employer Risk
Artificial intelligence and automated decision systems (ADS) have become common features of modern recruiting and hiring. As California regulators and courts sharpen their focus on how employers use these tools in employment decisions, the associated legal risks are expanding beyond traditional discrimination claims.
California’s ADS Regulations Are Now in Effect
As anticipated, California’s Civil Rights Council amended the Fair Employment and Housing Act (FEHA) regulations to clarify that the use of ADS—including AI-driven tools in recruiting, hiring, promotion, discipline, and other employment decisions—is subject to California’s anti-discrimination laws. These regulations took effect on October 1, 2025.
Importantly, these regulations do not prohibit the use of AI or ADS; rather, they reinforce several core compliance principles:
- Employers remain responsible for discriminatory outcomes resulting from AI-assisted decisions, even when tools are developed or administered by third-party vendors.
- ADS-driven decisions must be job-related and consistent with business necessity if they result in a disparate impact on protected groups.
- Documentation, testing, and record retention are critical, especially where employers need to explain how automated tools influenced employment decisions.
- Vendors may be treated as agents under FEHA, increasing the importance of vendor diligence and contract governance.
A New Theory of Liability for AI Hiring Tools: The Eightfold AI Consumer Reporting Lawsuit
A recently filed class action against Eightfold AI highlights a different area of exposure for employers using AI hiring tools. In January 2026, job applicants filed a proposed class action in California state court alleging that Eightfold’s AI-generated applicant scores and rankings function as “consumer reports” under the federal Fair Credit Reporting Act (FCRA) and California’s Investigative Consumer Reporting Agencies Act (ICRAA).
Unlike earlier AI hiring cases, such as the pending Workday litigation, which focus on discriminatory outcomes, this lawsuit centers on process and transparency. The complaint alleges that Eightfold's platform assembles and evaluates applicant information, including data beyond the application itself, and provides employers with numerical scores or rankings that influence hiring decisions. The plaintiffs contend that:
- Applicants were not provided with the disclosures required under those statutes.
- Applicants did not authorize the creation of such reports.
- Applicants were not given access to or an opportunity to dispute the information before adverse hiring decisions were made.
If courts accept this theory, the implications could extend well beyond Eightfold. Any third-party AI tool that assembles applicant data and produces evaluative outputs used in hiring decisions could trigger consumer-reporting obligations.
What This Means for Employers in 2026
Together, California’s ADS regulations, the Workday litigation, and the Eightfold lawsuit illustrate that AI-related employment risk now spans multiple legal frameworks, including:
- Anti-discrimination law (FEHA and federal civil rights statutes).
- Consumer-reporting statutes focused on notice, authorization, and dispute rights.
- Broader transparency and documentation expectations tied to automated decision-making.
For employers, this means that AI governance should be both outcome-focused and process-focused. In practical terms, employers should consider:
- Inventorying AI and ADS tools used at any stage of the hiring process.
- Understanding what data those tools collect, evaluate, or infer.
- Evaluating whether AI outputs could be characterized as consumer reports or scores that materially influence employment decisions.
- Reviewing vendor agreements, disclosures, and internal workflows to ensure compliance with both anti-discrimination and procedural obligations.
Bottom Line
AI-enabled hiring tools remain permissible and valuable, but the legal landscape governing their use continues to evolve. California’s ADS regulations confirm that AI is squarely within the scope of employment discrimination law, while emerging litigation like the Eightfold case signals that procedural compliance and transparency may be the next major frontier of AI-related employment risk.
Employers that treat AI governance as a holistic compliance issue—rather than a narrow technology concern—will be best positioned to manage these overlapping risks in 2026 and beyond.