Employment Law Update: AI Hiring Under Fire
A major class action lawsuit filed in January 2026 is reshaping the legal landscape around AI-powered hiring tools, and notably, its basis is not algorithmic bias. In Kistler et al. v. Eightfold AI Inc., filed in California’s Contra Costa County Superior Court, the plaintiffs allege that Eightfold AI scraped personal data on over one billion workers, scored applicants on a zero-to-five scale, and discarded low-ranked candidates before any human reviewed their applications.
The lawsuit, brought by former EEOC chair Jenny R. Yang and the nonprofit Towards Justice, does not claim the algorithm was biased; rather, it asserts that the algorithm existed in secret. The plaintiffs’ theory rests on the Fair Credit Reporting Act (FCRA), which mandates specific procedures (disclosure, access, and the opportunity to dispute errors) whenever companies compile “consumer reports” for employment decisions. Because the FCRA theory does not require proving discriminatory outcomes, it offers a more accessible avenue for challenge than a conventional discrimination claim. With statutory damages of $100 to $1,000 per willful violation applied to a database of a billion profiles, the financial exposure is astronomical.
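To put that in rough numbers, here is a back-of-envelope sketch, assuming purely for illustration that each profile in the alleged database counts as a single willful violation.

```python
# Back-of-envelope FCRA exposure. Assumes, purely for illustration,
# one willful violation per profile in the alleged billion-record database.
profiles = 1_000_000_000
low, high = 100, 1_000  # FCRA statutory damages per willful violation ($)

print(f"${profiles * low:,} to ${profiles * high:,}")
# $100,000,000,000 to $1,000,000,000,000 -- $100 billion to $1 trillion
```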
The Eightfold case gains further significance alongside Mobley v. Workday, in which a federal judge determined that Workday acted as an “agent” of the employers using its automated screening tools and could therefore face direct liability under the Age Discrimination in Employment Act. Together, these cases form what commentators describe as a “pincer movement”: Workday establishes that the vendor is an agent liable for discrimination, while Eightfold frames the vendor as a consumer reporting agency subject to transparency mandates. One case attacks outcomes; the other attacks process. Both signal that AI hiring vendors can no longer hide behind the claim that they merely provide neutral tools.
For employers, these developments intensify the AI vendor “liability squeeze.” Industry data highlights the risk: 88% of AI vendors cap their own liability, often at the level of monthly subscription fees, while only 17% warrant regulatory compliance. A vendor’s platform may scrape data from unknown sources, score candidates using opaque logic, and filter applicants before any human review, yet the vendor agreement typically caps liability, disclaims compliance warranties, and restricts algorithmic audits.
Employers should take concrete steps now to close the gap between contractual protections and actual legal exposure. Vendor contracts should require transparency about data sources, independent audit rights covering bias and FCRA compliance, training data indemnities, and carve-outs from standard liability caps for regulatory fines, litigation costs, and class-action settlements. Organizations should also build governance infrastructure: AI hiring oversight spanning HR, legal, IT, and compliance; pre-procurement vendor due diligence; and periodic adverse impact analyses under the EEOC’s four-fifths rule (sketched below). Equally important is documentation: AI governance and use policies, impact assessments, vendor due diligence files, and human oversight and override logs. Beyond supporting a compliance defense, this record demonstrates that an organization takes its obligations seriously.
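The four-fifths analysis itself is mechanical enough to automate. Below is a minimal sketch, with hypothetical group labels and counts; a real analysis would also weigh sample sizes and statistical significance.

```python
# Minimal sketch of an adverse impact screen under the EEOC's
# four-fifths (80%) rule. Group labels and counts are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate: selected / screened."""
    return {group: selected / screened
            for group, (selected, screened) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose rate falls below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {group: rate / benchmark < 0.8 for group, rate in rates.items()}

# Hypothetical screening results: (candidates advanced, candidates screened)
results = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_flags(results))
# {'group_a': False, 'group_b': True} -- 0.30 / 0.48 = 0.625, below 0.8
```

Run periodically against each stage of the screening funnel, a check like this turns the EEOC’s rule of thumb into an auditable artifact: exactly the kind of documentation the measures above call for.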
The legal environment surrounding AI hiring tools is rapidly shifting in a direction that places increasing risk on employers. Courts are treating AI vendors as agents and consumer reporting agencies, state AI employment laws are proliferating, and the gap between contractual protection and legal exposure is widening. Employers who assume that their vendor agreements insulate them from this risk may face significant liability. Organizations best positioned to weather this shift are those that can explain how their AI hiring tools work, identify what data feeds them, and demonstrate meaningful oversight.