Workers Challenge ‘Hidden’ AI Hiring Tools in Class Action
Workers are increasingly frustrated with AI-based hiring practices. A new class action lawsuit filed in California alleges that human candidates are being unfairly profiled by “hidden” AI hiring technologies that operate in the background, collecting sensitive and often inaccurate information about unsuspecting job applicants.
Targeting Eightfold AI
The lawsuit specifically targets Eightfold AI, arguing that the tools used by the company should be regulated similarly to credit report bureaus under the Fair Credit Reporting Act (FCRA) and relevant state laws. According to analysts, this case could have significant implications for the increasing use of AI in hiring.
“This lawsuit is a pivot point,” stated a chief analyst at Greyhound Research. “It tells us that AI isn’t just being scrutinized for what it does, but for how it does it and whether people even know it’s happening to them.”
Allegations and Violations
The suit was filed by the New York City-based law firm Outten & Golden LLP on behalf of plaintiffs Erin Kistler and Sruti Bhaumik, who claim they were repeatedly denied employment because of AI-based hiring tools. The class action alleges that Eightfold AI violated federal and state fair credit and consumer reporting laws by collecting data on applicants and selling the resulting reports to employers for use in hiring decisions, which can have profound consequences for job-seekers across the US.
Eightfold positions itself as the “world’s largest, self-refreshing source of talent data,” incorporating over 1.5 billion data points, including job titles and worker profiles from various industries. Its clientele includes major corporations like Microsoft, Morgan Stanley, and Starbucks.
The Mechanics of AI Hiring
The lawsuit claims that Eightfold’s proprietary large language model (LLM) and deep learning technology analyze data from public resources, including career sites and résumé databases. AI algorithms rank candidates’ suitability on a scale from 0 to 5, based on inferences about their culture fit and future career trajectory. However, these reports are “unreviewable” and largely invisible to candidates, who cannot dispute their contents before they are sent to hiring managers.
The plaintiffs argue that lower-ranked candidates are often discarded before a human reviews their applications.
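The pre-filtering dynamic the plaintiffs describe can be illustrated with a minimal sketch. The 0-to-5 scoring scale comes from the complaint, but the threshold value, data structures, and function names below are hypothetical, invented purely for illustration:

```python
# Hypothetical illustration of score-based pre-filtering, as alleged in the
# complaint: candidates ranked on a 0-5 scale, with low scorers discarded
# before any human review. The threshold and all names are invented.

def prefilter_candidates(candidates, threshold=3.0):
    """Split candidates into those a human will see and those silently dropped."""
    forwarded, discarded = [], []
    for name, score in candidates:
        if score >= threshold:
            forwarded.append(name)   # reaches a hiring manager
        else:
            discarded.append(name)   # never reviewed by a person
    return forwarded, discarded

applicants = [("Candidate A", 4.2), ("Candidate B", 2.1), ("Candidate C", 3.7)]
seen, dropped = prefilter_candidates(applicants)
print(seen)     # candidates a human will review
print(dropped)  # candidates rejected without human involvement
```

In this sketch, the candidates in `dropped` never appear in front of a hiring manager, which is precisely the opacity the suit objects to: the person filtered out has no visibility into the score or any way to dispute it.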
Legal Implications
The lawsuit claims that these practices violate longstanding FCRA requirements, which encompass consumer reports used for employment purposes. The FCRA defines consumer reports broadly, covering information bearing on a consumer's character, general reputation, personal characteristics, and mode of living.
According to the lawsuit, while automated screening technology did not exist when the FCRA was enacted in 1970, lawmakers were already concerned about the potential for impersonal data analysis to create unfair barriers to employment. The suit argues that companies like Eightfold must disclose their data practices and provide a mechanism for consumers to review and correct their reports.
Accountability and Governance
The lawsuit highlights a “governance failure” and accountability gap in AI hiring practices. This isn’t an isolated incident; other companies, such as Workday, face similar lawsuits over alleged discrimination by AI hiring tools.
If courts determine that AI evaluations are akin to credit reports, hiring practices may enter a regulated domain. This would require Chief Information Officers (CIOs) to ensure transparency, accountability, and audit rights in their AI systems.
Balancing AI Use in Hiring
While AI can enhance hiring processes, it can also complicate them. AI has proven valuable in talent sourcing, screening, and assessment. However, when AI moves beyond screening into subjective judgments about candidates, questions of bias, transparency, and accountability arise.
Human resources professionals are encouraged to demand human-readable evidence from AI vendors, including decision logs and bias audits. Questions about how the system reaches its decisions and evaluates candidates should be standard practice.
One European manufacturing client of Greyhound restructured its hiring pipeline to require managers to log rationales for their decisions, even if AI had created a shortlist. This improved audit trails and minimized errors, reinforcing the notion that AI should serve as input rather than final judgment.
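In skeletal form, a workflow like the one that client adopted, where AI output is only input and no decision is final without a logged human rationale, might look like the following. The structures and field names here are assumptions for illustration, not the client's actual system:

```python
# Minimal sketch of a human-in-the-loop audit trail: an AI shortlist is
# treated as input only, and no hiring decision is recorded without a
# manager's logged rationale. All names and structures are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    candidate: str
    outcome: str        # e.g. "advance" or "reject"
    rationale: str      # manager's human-readable reason
    decided_at: str     # UTC timestamp for the audit trail

def record_decision(audit_log, candidate, outcome, rationale):
    """Append a decision to the audit log; refuse decisions without a rationale."""
    if not rationale.strip():
        raise ValueError("A human rationale is required for every decision")
    entry = Decision(candidate, outcome, rationale,
                     datetime.now(timezone.utc).isoformat())
    audit_log.append(entry)
    return entry

log = []
record_decision(log, "Candidate A", "advance",
                "Strong domain experience confirmed in phone screen")
# record_decision(log, "Candidate B", "reject", "")  # would raise ValueError
```

The design choice is the point: by making an empty rationale a hard error rather than a warning, the system structurally enforces that AI serves as input rather than final judgment, and every outcome carries an auditable human reason.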
Conclusion
As AI becomes increasingly integrated into hiring practices, restoring the human element as an accountable actor is essential. The future of hiring technology must prioritize collaboration between humans and machines, governed from day one.