AI Hiring Regulations: What HR Must Know to Stay Compliant

New AI Hiring Rules and Lawsuits Put Employers on Notice

Artificial intelligence (AI) is transforming how companies find, evaluate, and hire talent, but it’s also raising red flags among regulators and courts. Recent developments highlight the need for HR teams to closely examine their hiring tools to mitigate legal and compliance risks.

What’s Happening in California?

California is set to implement new civil rights regulations that will likely affect the use of automated decision-making systems (ADSs) in employment and state-supported programs. These rules, expected to take effect by July 1, 2025, aim to prevent discrimination based on protected characteristics such as race, gender, age, disability, or religion.

While these regulations do not outright ban AI tools, they make it unlawful to employ any system, automated or otherwise, that results in discriminatory outcomes.

What Counts as Discriminatory?

The new rules specifically target AI tools that analyze candidates’ voices, facial expressions, personality, or availability, particularly when such tools yield biased outcomes. For example, an AI tool that interprets a lack of smiling during a video interview as a sign of unfriendliness could unfairly penalize candidates from cultures where smiling less is the norm.

If an AI tool produces different outcomes for individuals in protected groups, it could violate the law, even in the absence of intent to discriminate.

What About the Workday Lawsuit?

Simultaneously, a significant collective action lawsuit against Workday, a popular HR tech provider, is advancing in federal court. The complaint alleges that Workday's AI-powered hiring software discriminated against applicants over the age of 40.

The case is led by a Black man over 40 who states that he applied to over 100 jobs using Workday’s systems but was rejected each time. A judge recently ruled that this age discrimination case can proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), potentially involving hundreds of thousands or even millions of job seekers.

This case serves as a wake-up call for employers: even if they did not develop the AI tool themselves, they can still be liable for the discriminatory impacts of third-party algorithms used in their hiring processes.

What Should HR Teams Do Now?

Regardless of location, these developments underscore that AI compliance is now a priority for HR teams. Here’s an actionable plan:

  • Review your tools: Audit your hiring systems, especially those involving AI. Do they analyze résumés, screen video interviews, or assign “fit scores”? If so, request proof they have been bias-tested.
  • Demand transparency from the vendor: If you use third-party platforms like Workday, ask for:
    • Documentation of bias testing.
    • Clear explanations of how decisions are made.
    • Contracts that protect you from legal risk.
  • Keep a human in the loop: Ensure someone in HR reviews and can override automated decisions.
  • Track outcomes: Regularly analyze hiring data. Are there unexplained gaps by age, race, or gender? These may indicate disparate impacts, which could be legal red flags.
  • Form an AI governance team: Create a cross-functional team (HR, legal, IT) to set policies, vet systems, and monitor ongoing AI usage in employment.
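The "track outcomes" step above can be made concrete with a simple disparate-impact check. A common screening heuristic is the EEOC's "four-fifths rule": if one group's selection rate falls below 80% of the highest group's rate, the gap warrants investigation. The sketch below is illustrative only; the data fields (`age_band`, hire outcomes) are hypothetical assumptions, and the four-fifths rule is a screening heuristic, not a legal threshold.

```python
# Minimal sketch: flag potential disparate impact in hiring data using
# the EEOC "four-fifths rule" heuristic. Field names are hypothetical.
from collections import defaultdict

def selection_rates(records):
    """Compute the hire rate for each group label in (group, hired) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def four_fifths_flags(rates):
    """Flag groups whose selection rate is below 80% of the top rate."""
    top = max(rates.values())
    return {g: (r / top) < 0.8 for g, r in rates.items()}

# Hypothetical hiring outcomes: (age_band, was_hired)
records = ([("under_40", True)] * 30 + [("under_40", False)] * 70
           + [("40_plus", True)] * 15 + [("40_plus", False)] * 85)

rates = selection_rates(records)   # under_40: 0.30, 40_plus: 0.15
flags = four_fifths_flags(rates)   # 40_plus: 0.15 / 0.30 = 0.5 < 0.8
```

A flagged group does not by itself prove discrimination, but it is exactly the kind of "unexplained gap" that should trigger a human review of the tool and, if needed, vendor documentation of bias testing.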

Why It Matters

California’s regulations and the ongoing Workday lawsuit are just the beginning. As the federal government reduces enforcement, states and private lawsuits are filling the gap, increasing legal exposure for companies that use AI without vigilance.

HR is no longer just a user of these tools; it is now the first line of defense against AI-driven bias. AI can enhance hiring efficiency, but it must be employed responsibly and equitably. Organizations should take these changes seriously, get ahead of the curve, and ensure their hiring processes are both efficient and fair.

More Insights

Transforming Corporate Governance: The Impact of the EU AI Act

This research project investigates how the EU Artificial Intelligence Act is transforming corporate governance and accountability frameworks, compelling companies to reconfigure responsibilities and...

AI-Driven Cybersecurity: Bridging the Accountability Gap

As organizations increasingly adopt AI to drive innovation, they face a dual challenge: while AI enhances cybersecurity measures, it simultaneously facilitates more sophisticated cyberattacks. The...

Thailand’s Comprehensive AI Governance Strategy

Thailand is drafting principles for artificial intelligence (AI) legislation aimed at establishing an AI ecosystem and enhancing user protection from potential risks. The legislation will remove legal...

Texas Implements Groundbreaking AI Regulations in Healthcare

Texas has enacted comprehensive AI governance laws, including the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) and Senate Bill 1188, which establish a framework for responsible AI...

AI Governance: Balancing Innovation and Oversight

Riskonnect has launched its new AI Governance solution, enabling organizations to manage the risks and compliance obligations of AI technologies while fostering innovation. The solution integrates...

AI Alignment: Ensuring Technology Serves Human Values

Gillian K. Hadfield has been appointed as the Bloomberg Distinguished Professor of AI Alignment and Governance at Johns Hopkins University, where she will focus on ensuring that artificial...

The Ethical Dilemma of Face Swap Technology

As AI technology evolves, face swap tools are increasingly misused for creating non-consensual explicit content, leading to significant ethical, emotional, and legal consequences. This article...

The Illusion of Influence: The EU AI Act’s Global Reach

The EU AI Act, while aiming to set a regulatory framework for artificial intelligence, faces challenges in influencing other countries due to differing legal and cultural values. This has led to the...
