California’s AI Hiring Rules Are Now in Effect—and New Lawsuits Expand Employer Risk

Artificial intelligence and automated decision systems (ADS) have become common features of modern recruiting and hiring. As California regulators and courts increasingly scrutinize how employers use AI tools in employment decisions, the legal risks associated with these technologies continue to expand.

California’s ADS Regulations Are Now in Effect

As anticipated, California’s Civil Rights Council amended the Fair Employment and Housing Act (FEHA) regulations to clarify that the use of ADS—including AI-driven tools in recruiting, hiring, promotion, discipline, and other employment decisions—is subject to California’s anti-discrimination laws. These regulations took effect on October 1, 2025.

Importantly, these regulations do not prohibit the use of AI or ADS but reinforce several core principles for compliance:

  • Employers remain responsible for discriminatory outcomes resulting from AI-assisted decisions, even when tools are developed or administered by third-party vendors.
  • ADS-driven decisions must be job-related and consistent with business necessity if they result in a disparate impact on protected groups.
  • Documentation, testing, and record retention are critical, especially where employers need to explain how automated tools influenced employment decisions.
  • Vendors may be treated as agents under FEHA, increasing the importance of vendor diligence and contract governance.

A New Theory of Liability for AI Hiring Tools: The Eightfold AI Consumer Reporting Lawsuit

A recently filed class action against Eightfold AI highlights a different area of exposure for employers using AI hiring tools. In January 2026, job applicants filed a proposed class action in California state court alleging that Eightfold’s AI-generated applicant scores and rankings function as “consumer reports” under the federal Fair Credit Reporting Act (FCRA) and California’s Investigative Consumer Reporting Agencies Act (ICRAA).

This lawsuit centers on process and transparency, unlike previous cases that focused on discriminatory outcomes. The complaint alleges that Eightfold’s platform assembles and evaluates applicant information—including data beyond the application—and provides employers with numerical scores or rankings that influence hiring decisions. The plaintiffs contend that:

  • Applicants were not provided with the disclosures required before such reports are procured.
  • Applicants did not authorize the creation of such reports.
  • Applicants were not given access to the information, or an opportunity to dispute it, before adverse hiring decisions were made.

If courts accept this theory, the implications could extend beyond Eightfold. Any third-party AI tool that assembles applicant data and produces evaluative outputs could potentially implicate consumer-reporting obligations.

What This Means for Employers in 2026

Together, California’s ADS regulations, the ongoing Workday litigation (a federal class action alleging that an AI vendor’s applicant-screening tools produced discriminatory outcomes), and the Eightfold lawsuit illustrate that AI-related employment risk now spans multiple legal frameworks, including:

  • Anti-discrimination law (FEHA and federal civil rights statutes).
  • Consumer-reporting statutes focused on notice, authorization, and dispute rights.
  • Broader transparency and documentation expectations tied to automated decision-making.

For employers, this means that AI governance should be both outcome-focused and process-focused. In practical terms, employers should consider:

  • Inventorying AI and ADS tools used at any stage of the hiring process.
  • Understanding what data those tools collect, evaluate, or infer.
  • Evaluating whether AI outputs could be characterized as reports or scores that materially influence employment decisions.
  • Reviewing vendor agreements, disclosures, and internal workflows to ensure compliance with both anti-discrimination and procedural obligations.

Bottom Line

AI-enabled hiring tools remain permissible and valuable, but the legal landscape governing their use continues to evolve. California’s ADS regulations confirm that AI is squarely within the scope of employment discrimination law, while emerging litigation like the Eightfold case signals that procedural compliance and transparency may be the next major frontier of AI-related employment risk.

Employers that treat AI governance as a holistic compliance issue—rather than a narrow technology concern—will be best positioned to manage these overlapping risks in 2026 and beyond.
