California’s AI Employment Regulations: What Employers Need to Know

Navigating California’s New and Emerging AI Employment Regulations

The California Civil Rights Council and the California Privacy Protection Agency have recently adopted regulations that impose specific requirements on employers that use automated-decision systems or automated decision-making technology in employment processes. These regulatory efforts matter because they spell out employer obligations and strengthen protections for applicants and employees.

California Civil Rights Council – ADS Regulations

The California Civil Rights Council's (CCRC) regulations on automated-decision systems (ADS) took effect on October 1, 2025. The regulations address potential discrimination arising from the use of AI tools in personnel decisions and apply to employers with five or more employees in California.

Key provisions include:

  • Covered Technologies: ADS is defined as a computational process that makes a decision, or facilitates human decision-making, regarding an employment benefit. Examples include:
    • Screening resumes for specific terms or patterns.
    • Targeting job advertisements to specific demographics.
    • Analyzing facial expressions or voice during online interviews.
    • Utilizing computer-based tests to assess skills or characteristics.
    • Analyzing third-party data on applicants or employees.
  • Employment Discrimination: The regulations clarify that it’s unlawful under the California Fair Employment and Housing Act (FEHA) to employ ADS that leads to discrimination against protected classes, such as race, gender, or disability. Employers may face liability even without intent to discriminate if there’s a disparate impact on these groups.
  • Anti-bias Testing: Employers can use anti-bias testing to defend against discrimination claims. Conversely, a lack of such testing may be used as evidence in claims against them.
  • Records Retention: Employers must maintain ADS-related records for four years, including selection criteria and audit findings.

California Privacy Protection Agency – ADMT Regulations

The California Privacy Protection Agency (CPPA) has approved regulations on the use of automated decision-making technology (ADMT) in significant decisions about consumers, including employment decisions. Businesses must comply by January 1, 2027, including with respect to ADMT already in use.

Notable provisions include:

  • Covered Technologies: ADMT is defined as technology that processes personal information and uses computation to replace, or substantially replace, human decision-making.
  • Significant Decision: This includes employment opportunities like hiring, compensation, and termination.
  • Pre-use Notice: Before using ADMT, businesses must give consumers notice describing how the technology will be used and their right to opt out.
  • Opt-out: Consumers must be able to opt out of ADMT used for significant decisions, subject to exceptions, including where the business offers an appeal to a human reviewer.
  • Access: Upon request, businesses must inform consumers about the purpose and logic of the ADMT and how it was applied to them.
  • Risk Assessments: Before using ADMT for significant decisions, businesses must conduct risk assessments that detail the potential impacts on applicants and employees.

Legislation Updates

Recently, the California Legislature passed SB 7, known as the "No Robo Bosses Act," which would require notice and human oversight when ADS is used for employment-related decisions. Meanwhile, SB 53, the Transparency in Frontier Artificial Intelligence Act, has been signed into law; it introduces whistleblower protections for employees at large AI companies who raise public health and safety concerns.

Next Steps for Employers

To prepare for these regulations and new laws, employers should:

  • Inventory the technologies their HR departments use that may trigger these legal requirements.
  • Develop policies to ensure compliance when implementing AI in employment contexts.
  • Provide timely notices to applicants and employees.
  • Conduct necessary risk assessments and bias audits.
  • Adhere to recordkeeping requirements.

For additional best practices on AI usage in the workplace, refer to previous discussions on this topic.
