AI Hiring Regulations: What HR Must Know to Stay Compliant

New AI Hiring Rules and Lawsuits Put Employers on Notice

Artificial intelligence (AI) is transforming how companies find, evaluate, and hire talent, but it’s also raising red flags among regulators and courts. Recent developments highlight the need for HR teams to closely examine their hiring tools to mitigate legal and compliance risks.

What’s Happening in California?

California is set to implement new civil rights regulations that will likely affect the use of automated decision-making systems (ADSs) in employment and state-supported programs. These rules, expected to take effect by July 1, 2025, aim to prevent discrimination based on protected characteristics such as race, gender, age, disability, or religion.

While these regulations do not outright ban AI tools, they make it unlawful to employ any system, automated or otherwise, that results in discriminatory outcomes.

What Counts as Discriminatory?

The new rules specifically target AI tools that analyze candidates’ voices, facial expressions, personality, or availability, particularly when such tools yield biased outcomes. For example, an AI tool that interprets a lack of smiling during a video interview as a sign of unfriendliness could unfairly penalize candidates from cultures where smiling less is the norm.

If an AI tool produces different outcomes for individuals in protected groups, it could violate the law, even in the absence of intent to discriminate.

What About the Workday Lawsuit?

Meanwhile, a significant collective-action lawsuit against Workday, a popular HR technology provider, is advancing in federal court. The suit alleges that the company’s AI-powered hiring software discriminated against applicants over the age of 40.

The case is led by a Black man over 40 who states that he applied to over 100 jobs using Workday’s systems but was rejected each time. A judge recently ruled that this age discrimination case can proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), potentially involving hundreds of thousands or even millions of job seekers.

This case serves as a wake-up call for employers: even if they did not develop the AI tool themselves, they can still be liable for the discriminatory impacts of third-party algorithms used in their hiring processes.

What Should HR Teams Do Now?

Regardless of location, these developments underscore that AI compliance is now a priority for HR teams. Here’s an actionable plan:

  • Review your tools: Audit your hiring systems, especially those involving AI. Do they analyze résumés, screen video interviews, or assign “fit scores”? If so, request proof they have been bias-tested.
  • Demand transparency from the vendor: If you use third-party platforms like Workday, ask for:
    • Documentation of bias testing.
    • Clear explanations of how decisions are made.
    • Contracts that protect you from legal risk.
  • Keep a human in the loop: Ensure someone in HR reviews and can override automated decisions.
  • Track outcomes: Regularly analyze hiring data. Are there unexplained gaps by age, race, or gender? These may indicate disparate impacts, which could be legal red flags (a simple illustration of this kind of check follows this list).
  • Form an AI governance team: Create a cross-functional team (HR, legal, IT) to set policies, vet systems, and monitor ongoing AI usage in employment.
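The outcome-tracking step can start with something as simple as comparing selection rates across groups. Below is a minimal sketch in Python (using pandas) of the widely cited “four-fifths rule” check, under which a group’s selection rate below roughly 80% of the highest group’s rate is commonly treated as a signal of possible disparate impact worth investigating. The column names, sample data, and threshold handling here are illustrative assumptions, not a legal standard or any vendor’s actual audit method.

    # Illustrative four-fifths (80%) rule check on hypothetical hiring data.
    # Column names ("age_band", "hired") and the sample records are invented
    # for this sketch; real audits would use your own applicant-tracking data.
    import pandas as pd

    applicants = pd.DataFrame({
        "age_band": ["under_40", "under_40", "under_40", "under_40",
                     "40_plus",  "40_plus",  "40_plus",  "40_plus"],
        "hired":    [1, 1, 0, 1, 0, 0, 1, 0],
    })

    # Selection rate per group: hires divided by applicants in that group.
    rates = applicants.groupby("age_band")["hired"].mean()

    # Impact ratio: each group's rate relative to the highest group's rate.
    impact_ratio = rates / rates.max()

    for group, ratio in impact_ratio.items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.0%}, "
              f"impact ratio {ratio:.2f} -> {flag}")

A real audit would involve much larger samples, statistical significance testing, and legal review; this only shows the shape of the calculation and why routine reporting on hiring outcomes is worth building into the process.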

Why It Matters

California’s regulations and the ongoing Workday lawsuit are just the beginning. As the federal government reduces enforcement, states and private lawsuits are filling the gap, increasing legal exposure for companies that use AI without vigilance.

HR is no longer just a user of these tools; it is now the first line of defense against AI-driven bias. AI can enhance hiring efficiency, but it must be employed responsibly and equitably. Organizations should take these changes seriously, get ahead of the curve, and ensure their hiring processes are both efficient and fair.
