Workers Challenge AI Hiring Practices in Landmark Lawsuit

Workers are increasingly frustrated with AI-based hiring practices. A new class action lawsuit filed in California alleges that human candidates are being unfairly profiled by “hidden” AI hiring technologies that operate in the background, collecting sensitive and often inaccurate information about unsuspecting job applicants.

Targeting Eightfold AI

The lawsuit specifically targets Eightfold AI, arguing that the company should be regulated like a consumer reporting agency under the Fair Credit Reporting Act (FCRA) and analogous state laws. According to analysts, the case could have significant implications for the growing use of AI in hiring.

“This lawsuit is a pivot point,” stated a chief analyst at Greyhound Research. “It tells us that AI isn’t just being scrutinized for what it does, but for how it does it and whether people even know it’s happening to them.”

Allegations and Violations

The suit was filed by the New York City-based law firm Outten & Golden LLP on behalf of plaintiffs Erin Kistler and Sruti Bhaumik, who say they were repeatedly denied employment after employers used AI-based hiring tools. The class action alleges that Eightfold AI violated federal and state fair credit and consumer reporting laws by collecting data on applicants and selling reports to companies for use in employment decisions, with profound consequences for job-seekers across the US.

Eightfold positions itself as the “world’s largest, self-refreshing source of talent data,” incorporating over 1.5 billion data points, including job titles and worker profiles from various industries. Its clientele includes major corporations like Microsoft, Morgan Stanley, and Starbucks.

The Mechanics of AI Hiring

The lawsuit claims that Eightfold’s proprietary large language model (LLM) and deep learning technology analyze data from public resources, including career sites and résumé databases. AI algorithms rank candidates’ suitability on a scale from 0 to 5, based on inferences about their culture fit and future career trajectory. However, these reports are “unreviewable” and largely invisible to candidates, who cannot dispute their contents before they are sent to hiring managers.

The plaintiffs argue that lower-ranked candidates are often discarded before a human reviews their applications.
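The kind of automated pre-screening the suit describes can be illustrated with a minimal sketch. The cutoff value, candidate names, and scores below are invented for illustration; only the 0-to-5 ranking scale comes from the complaint:

```python
# Hypothetical illustration of automated pre-screening: candidates whose
# AI-assigned score falls below a cutoff never reach a human reviewer.
# The scores and the threshold itself are invented for illustration.

CUTOFF = 3.0  # hypothetical minimum score on the alleged 0-to-5 scale

candidates = [
    {"name": "A", "ai_score": 4.2},
    {"name": "B", "ai_score": 2.7},
    {"name": "C", "ai_score": 3.1},
]

# Only candidates at or above the cutoff are forwarded to hiring managers;
# the rest are discarded without any human review of their applications.
forwarded = [c for c in candidates if c["ai_score"] >= CUTOFF]
discarded = [c for c in candidates if c["ai_score"] < CUTOFF]

print([c["name"] for c in forwarded])  # ['A', 'C']
print([c["name"] for c in discarded])  # ['B']
```

The point of the sketch is that the filter runs before any human sees an application, which is exactly the step the plaintiffs say they could neither see nor dispute.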

Legal Implications

The lawsuit claims that these practices violate longstanding FCRA requirements, which cover consumer reports used for employment purposes. The FCRA defines consumer reports broadly, encompassing information about a person's habits, morals, and life experiences.

According to the lawsuit, while automated screening technology did not exist when the FCRA was enacted in 1970, lawmakers were already concerned that impersonal data analysis could create unfair barriers to employment. The suit argues that companies like Eightfold must therefore disclose their data practices and provide a mechanism for consumers to review and correct their reports.

Accountability and Governance

The lawsuit highlights a "governance failure" and an accountability gap in AI hiring practices. Nor is it an isolated case: other vendors, such as Workday, face similar lawsuits alleging discrimination by AI hiring tools.

If courts determine that AI evaluations are akin to credit reports, hiring practices may enter a regulated domain. This would require Chief Information Officers (CIOs) to ensure transparency, accountability, and audit rights in their AI systems.

Balancing AI Use in Hiring

While AI can enhance hiring processes, it can also complicate them. AI has proven valuable in talent sourcing, screening, and assessment; problems arise when it moves beyond assisting those tasks and begins making subjective judgments about candidates.

Human Resource professionals are encouraged to demand human-readable evidence from AI vendors, including logs and bias audits. Questions about the decision-making process and candidate evaluations should be standard practice.

One European manufacturing client of Greyhound restructured its hiring pipeline to require managers to log rationales for their decisions, even if AI had created a shortlist. This improved audit trails and minimized errors, reinforcing the notion that AI should serve as input rather than final judgment.
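A requirement like the one that client adopted can be enforced in the hiring pipeline itself. The sketch below is a hypothetical illustration (all class names, fields, and values are invented, not drawn from any vendor's system): it refuses to record a hiring decision unless a manager supplies a written rationale, even when an AI system produced the shortlist:

```python
# Hypothetical sketch of an audit-trail gate: no hiring decision is logged
# without a human-written rationale, even for AI-shortlisted candidates.
# All names and fields here are illustrative.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    candidate_id: str
    outcome: str          # e.g. "advance" or "reject"
    ai_shortlisted: bool  # whether an AI system put the candidate forward
    rationale: str        # the manager's written justification

audit_log = []

def record_decision(decision: Decision) -> None:
    """Append a decision to the audit log, rejecting empty rationales."""
    if not decision.rationale.strip():
        raise ValueError("A human-written rationale is required for every decision.")
    audit_log.append((datetime.now(timezone.utc).isoformat(), decision))

record_decision(Decision("c-101", "advance", ai_shortlisted=True,
                         rationale="Strong match on required certifications."))
print(len(audit_log))  # 1
```

Gating the log this way keeps the AI's ranking as one input among several, while the recorded rationale preserves a human decision-maker as the accountable actor.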

Conclusion

As AI becomes increasingly integrated into hiring practices, restoring the human element as an accountable actor is essential. The future of hiring technology must prioritize collaboration between humans and machines, governed from day one.