AI Hiring Regulations: What HR Must Know to Stay Compliant

New AI Hiring Rules and Lawsuits Put Employers on Notice

Artificial intelligence (AI) is transforming how companies find, evaluate, and hire talent, but it’s also raising red flags among regulators and courts. Recent developments highlight the need for HR teams to closely examine their hiring tools to mitigate legal and compliance risks.

What’s Happening in California?

California is set to implement new civil rights regulations that will likely affect the use of automated decision-making systems (ADSs) in employment and state-supported programs. These rules, expected to take effect by July 1, 2025, aim to prevent discrimination based on protected characteristics such as race, gender, age, disability, or religion.

While these regulations do not outright ban AI tools, they make it unlawful to employ any system, automated or otherwise, that results in discriminatory outcomes.

What Counts as Discriminatory?

The new rules specifically target AI tools that analyze candidates’ voices, facial expressions, personality, or availability, particularly when such tools yield biased outcomes. For example, an AI tool that interprets a lack of smiling during a video interview as a sign of unfriendliness could unfairly penalize candidates from cultures where smiling less is the norm.

If an AI tool produces different outcomes for individuals in protected groups, it could violate the law, even in the absence of intent to discriminate.

What About the Workday Lawsuit?

Simultaneously, a significant collective action lawsuit against Workday, a popular HR tech provider, is advancing in federal court. The suit alleges that Workday's AI-powered hiring software discriminated against applicants over the age of 40.

The case is led by a Black man over 40 who states that he applied to over 100 jobs using Workday’s systems but was rejected each time. A judge recently ruled that this age discrimination case can proceed as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), potentially involving hundreds of thousands or even millions of job seekers.

This case serves as a wake-up call for employers: even if they did not develop the AI tool themselves, they can still be liable for the discriminatory impacts of third-party algorithms used in their hiring processes.

What Should HR Teams Do Now?

Regardless of location, these developments underscore that AI compliance is now a priority for HR teams. Here’s an actionable plan:

  • Review your tools: Audit your hiring systems, especially those involving AI. Do they analyze résumés, screen video interviews, or assign “fit scores”? If so, request proof they have been bias-tested.
  • Demand transparency from the vendor: If you utilize third-party platforms like Workday, ask for:
    • Documentation of bias testing.
    • Clear explanations of how decisions are made.
    • Contracts that protect you from legal risk.
  • Keep a human in the loop: Ensure someone in HR reviews and can override automated decisions.
  • Track outcomes: Regularly analyze hiring data. Are there unexplained gaps by age, race, or gender? These may indicate disparate impacts, which could be legal red flags.
  • Form an AI governance team: Create a cross-functional team (HR, legal, IT) to set policies, vet systems, and monitor ongoing AI usage in employment.
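For the "track outcomes" step, one common heuristic is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if a protected group's selection rate falls below 80% of the highest group's rate, that may indicate disparate impact worth investigating. The sketch below is illustrative only, with made-up group names and counts; it is not legal advice, and a flagged ratio is a prompt for review, not proof of discrimination.

```python
# Illustrative adverse-impact check using the four-fifths rule.
# The group labels and hiring counts here are hypothetical examples,
# not real data; plug in your own hiring outcomes by protected group.

def selection_rates(outcomes):
    """outcomes maps group -> (hired, applied); returns group -> selection rate."""
    return {group: hired / applied for group, (hired, applied) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` (80%)
    of the highest group's rate -- a possible disparate-impact signal."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: (rate / top_rate) < threshold for group, rate in rates.items()}

# Hypothetical data: 30% of under-40 applicants hired vs. 12% of over-40.
outcomes = {
    "under_40": (30, 100),
    "over_40": (12, 100),
}
print(four_fifths_flags(outcomes))
# → {'under_40': False, 'over_40': True}
# over_40 ratio: 0.12 / 0.30 = 0.40, well below the 0.8 threshold.
```

A flag from a check like this would be a reason to pause the tool, involve legal counsel, and dig into which stage of the pipeline produces the gap.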

Why It Matters

California’s regulations and the ongoing Workday lawsuit are just the beginning. As the federal government reduces enforcement, states and private lawsuits are filling the gap, increasing legal exposure for companies that use AI without vigilance.

HR is no longer just a user of these tools; it is now the first line of defense against AI-driven bias. AI can enhance hiring efficiency, but it must be employed responsibly and equitably. Organizations should take these changes seriously, get ahead of the curve, and ensure their hiring processes are both efficient and fair.
