California’s Pioneering AI Employment Regulations: What Employers Must Know

In a first-of-its-kind move, California has finalized groundbreaking regulations that directly address the use of artificial intelligence (AI) and automated decision systems (ADS) in employment. These rules, approved by the California Civil Rights Council in March 2025, send a clear message: while AI tools can be valuable in recruitment, hiring, and workforce management, they must not be used in ways that discriminate against applicants or employees.

These regulations are expected to take effect later this year, once approved by the Office of Administrative Law. Here’s what California employers need to know.

1. Purpose and Scope of the New Regulations

The regulations aim to ensure that the increasing use of technology in employment decisions complies with the Fair Employment and Housing Act (FEHA). In essence, the rules extend traditional anti-discrimination protections to the digital age by:

  • Defining when and how automated systems are covered under California employment law
  • Prohibiting discriminatory impacts stemming from ADS
  • Setting recordkeeping and notice obligations for employers using these technologies

2. What Is an Automated Decision System (ADS)?

The regulations define an ADS as: “A computational process that makes a decision or facilitates human decision-making regarding an employment benefit,” including tools that rely on AI, machine learning, algorithms, or statistics.

Examples of ADS:

  • Resume screeners
  • Automated interview scoring systems that make predictive assessments about applicants or employees; measure skills, abilities, or characteristics; or screen, evaluate, categorize, or recommend applicants or employees
  • Video software that analyzes voice or facial expressions
  • Tools that prioritize or rank candidates
  • Systems that direct job ads to certain groups

Excluded: Basic tools like word processors, spreadsheets, and security software—as long as they don’t make or influence employment decisions.

3. Key Prohibitions and Requirements

No Discrimination:

The regulations provide that, “It is unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on a basis protected by the Act, subject to any available defense.”

Specific High-Risk Areas – Criminal Background Checks:

Employers may not use ADS to screen for criminal history before a conditional offer. Even after an offer, they must perform individualized assessments and cannot rely solely on automated outputs.

Duty to Provide Accommodations:

If an AI tool may disadvantage a candidate with a disability or protected characteristic, the employer must offer a reasonable accommodation (e.g., alternative assessment formats).

Third-Party Vendors May Create Liability:

If a vendor or recruiter uses an ADS tool on your behalf, you may still be legally responsible. Contracts should clarify compliance responsibilities and include indemnification provisions.

4. Documentation and Compliance Requirements

Employers using ADS must:

  • Retain relevant data, including results of automated decisions and demographic data, for at least four years
  • Keep records separate from personnel files
  • Conduct and document anti-bias testing on AI tools (a minimal adverse-impact check is sketched after this list)
  • Respond appropriately to testing outcomes
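The regulations do not prescribe a specific testing methodology. As a purely illustrative sketch of what a basic adverse-impact check might look like, the snippet below compares ADS selection rates across demographic groups. The group names and pass/fail data are hypothetical, and the 0.8 threshold is the familiar "four-fifths rule" from the EEOC's Uniform Guidelines, used here only as a benchmark rather than a requirement of the California rules.

    from collections import defaultdict

    # Hypothetical ADS screening outcomes: (demographic_group, passed_screen).
    # Real testing would use the employer's retained ADS results and demographic data.
    results = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
    ]

    # Tally how many applicants in each group passed the automated screen.
    counts = defaultdict(lambda: {"passed": 0, "total": 0})
    for group, passed in results:
        counts[group]["total"] += 1
        counts[group]["passed"] += int(passed)

    # Selection rate per group, compared against the highest-selected group.
    rates = {g: c["passed"] / c["total"] for g, c in counts.items()}
    highest_rate = max(rates.values())

    # An impact ratio below 0.8 (the "four-fifths rule") is a common flag
    # for potential adverse impact that warrants closer review.
    for group, rate in sorted(rates.items()):
        impact_ratio = rate / highest_rate
        flag = "REVIEW" if impact_ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} [{flag}]")

A flagged group does not by itself establish a violation, but documenting the test, the result, and the employer's response is exactly the kind of record the regulations contemplate retaining.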

5. Next Steps for Employers

If the regulations take effect as expected, employers should:

  • Review All ADS and AI Tools in Use – Conduct an audit of technologies used in recruiting, hiring, promotions, and discipline.
  • Engage Legal Counsel or Compliance Experts – Evaluate whether the tools are likely to have a discriminatory impact or violate FEHA.
  • Request Transparency from Vendors – Ask for information on bias testing, training data, and system logic.
  • Implement Notice and Accommodation Policies – Clearly inform applicants when ADS will be used and how they can request an accommodation.
  • Use Human Oversight – Do not rely exclusively on AI for employment decisions. A human should review and approve final decisions.

If these regulations take effect, California will join jurisdictions such as New York City, Illinois, and Colorado in regulating workplace AI. While the federal government is still developing its approach, states are moving ahead with their own rules for how AI can be used in employment decisions.

Employers operating in California must treat AI and automation with the same care and diligence as any other employment practice subject to anti-discrimination laws.
