AI Hiring Regulations: What HR Must Know to Stay Compliant

New AI Hiring Rules and Lawsuits Put Employers on Notice

Artificial intelligence (AI) is transforming how companies find, evaluate, and hire talent, but it’s also raising red flags among regulators and courts. Recent developments highlight the need for HR teams to closely examine their hiring tools to mitigate legal and compliance risks.

What’s Happening in California?

California is set to implement new civil rights regulations that will likely affect the use of automated decision-making systems (ADSs) in employment and state-supported programs. These rules, expected to take effect by July 1, 2025, aim to prevent discrimination based on protected characteristics such as race, gender, age, disability, or religion.

While these regulations do not outright ban AI tools, they make it unlawful to employ any system, automated or otherwise, that results in discriminatory outcomes.

What Counts as Discriminatory?

The new rules specifically target AI tools that analyze candidates’ voices, facial expressions, personality, or availability, particularly when such tools yield biased outcomes. For example, an AI tool that interprets a lack of smiling during a video interview as a sign of unfriendliness could unfairly penalize candidates from cultures where smiling less is the norm.

If an AI tool produces different outcomes for individuals in protected groups, it could violate the law, even in the absence of intent to discriminate.

What About the Workday Lawsuit?

Simultaneously, a significant collective action lawsuit against Workday, a popular HR tech provider, is advancing in federal court. The claim alleges that its AI-powered hiring software discriminated against applicants over the age of 40.

The case is led by a Black man over 40 who says he applied to more than 100 jobs through Workday’s systems and was rejected every time. A judge recently ruled that the age discrimination claim can proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), potentially covering hundreds of thousands or even millions of job seekers.

This case serves as a wake-up call for employers: even if they did not develop the AI tool themselves, they can still be liable for the discriminatory impacts of third-party algorithms used in their hiring processes.

What Should HR Teams Do Now?

Regardless of location, these developments underscore that AI compliance is now a priority for HR teams. Here’s an actionable plan:

  • Review your tools: Audit your hiring systems, especially those involving AI. Do they analyze résumés, screen video interviews, or assign “fit scores”? If so, request proof they have been bias-tested.
  • Demand transparency from the vendor: If you utilize third-party platforms like Workday, ask for:
    • Documentation of bias testing.
    • Clear explanations of how decisions are made.
    • Contracts that protect you from legal risk.
  • Keep a human in the loop: Ensure someone in HR reviews and can override automated decisions.
  • Track outcomes: Regularly analyze hiring data. Are there unexplained gaps by age, race, or gender? These may indicate disparate impacts, which could be legal red flags.
  • Form an AI governance team: Create a cross-functional team (HR, legal, IT) to set policies, vet systems, and monitor ongoing AI usage in employment.
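As a starting point for the "track outcomes" step, one common screen is the EEOC's four-fifths (80%) rule: if a group's selection rate falls below 80% of the highest-selected group's rate, the disparity warrants review. The sketch below uses made-up hiring counts by age band purely for illustration; it is a rough first-pass check, not a substitute for a proper statistical or legal analysis.

```python
# Minimal sketch of a four-fifths (80%) rule adverse-impact check.
# All counts below are hypothetical, for illustration only.

def selection_rate(hired, applied):
    """Fraction of applicants in a group who were hired."""
    return hired / applied if applied else 0.0

def impact_ratio(group_rate, reference_rate):
    """Group's selection rate relative to the highest-rate group's."""
    return group_rate / reference_rate if reference_rate else 0.0

# Hypothetical outcomes by age band (illustrative numbers, not real data)
outcomes = {
    "under_40": {"applied": 500, "hired": 100},  # 20% selection rate
    "over_40":  {"applied": 400, "hired": 48},   # 12% selection rate
}

rates = {g: selection_rate(d["hired"], d["applied"]) for g, d in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = impact_ratio(rate, reference)
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: rate={rate:.0%}, ratio={ratio:.2f} -> {flag}")
```

With these numbers, the over-40 group's ratio is 0.60, well under the 0.8 threshold, so the script flags it for review. In practice, HR teams would run a check like this on real applicant-flow data for each protected characteristic and escalate flagged gaps to legal counsel.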

Why It Matters

California’s regulations and the ongoing Workday lawsuit are just the beginning. As the federal government reduces enforcement, states and private lawsuits are filling the gap, increasing legal exposure for companies that use AI without vigilance.

HR is no longer just a user of these tools; it is now the first line of defense against AI-driven bias. AI can enhance hiring efficiency, but it must be employed responsibly and equitably. Organizations should take these changes seriously, get ahead of the curve, and ensure their hiring processes are both efficient and fair.

More Insights

Harnessing Generative AI for Enhanced Risk and Compliance in 2025

In 2025, the demand for Generative AI in risk and compliance certification is surging as organizations face complex regulatory landscapes and increasing threats. This certification equips...

Building Sustainable Generative AI: Mitigating Carbon Emissions

Generative AI is revolutionizing industries, but it comes with a significant environmental cost due to carbon emissions from extensive compute resources. As the demand for large-scale models grows...

AI Regulation: Balancing Innovation and Oversight

Experts discuss the implications of the recently passed H.R. 1, which would pause state and local regulations on artificial intelligence for ten years. The article examines the benefits and drawbacks...

AI Governance in India: Shaping the Future of Technology

This article examines the evolving landscape of AI governance in India, highlighting both the initiatives aimed at promoting AI adoption and the regulatory frameworks being developed to manage...

AI’s Shadow: Exposing and Addressing Harms Against Women and Girls

AI's rapid advancement presents risks, especially for vulnerable populations targeted by cyber-harassment, hate speech, and impersonation. AI systems can amplify biases and be exploited to harm...

AI Readiness Framework for the Pharmaceutical Industry

This article presents an AI readiness assessment framework tailored for the pharmaceutical industry, emphasizing the importance of aligning AI initiatives with regulatory standards and ethical...

AI as a Strategic Partner in Governance

The UAE has announced that a National Artificial Intelligence System will become a non-voting member of all federal and government company boards, marking a significant shift in governance. This...

New Code of Practice for AI Compliance Set for 2025

The European Commission announced that a code of practice to help companies comply with the EU's artificial intelligence rules may only be implemented by the end of 2025. This delay follows calls from...
