California’s AI Employment Regulations: What Employers Need to Know

Navigating California’s New and Emerging AI Employment Regulations

The California Civil Rights Council and the California Privacy Protection Agency have each adopted regulations that impose specific requirements on employers using automated-decision systems or automated decision-making technology in employment processes. These regulatory efforts matter because they spell out employer obligations and strengthen employee protections.

California Civil Rights Council – ADS Regulations

The California Civil Rights Council’s regulations concerning automated-decision systems (ADS) took effect on October 1, 2025. These regulations aim to address potential discrimination arising from the use of AI tools in personnel decisions. They apply to employers with five or more employees in California.

Key provisions include:

  • Covered Technologies: ADS is defined as a computational process that makes a decision, or facilitates human decision-making, regarding an employment benefit. Examples include:
    • Screening resumes for specific terms or patterns.
    • Targeting job advertisements to specific demographics.
    • Analyzing facial expressions or voice during online interviews.
    • Utilizing computer-based tests to assess skills or characteristics.
    • Analyzing third-party data on applicants or employees.
  • Employment Discrimination: The regulations clarify that it is unlawful under the California Fair Employment and Housing Act (FEHA) to use an ADS that discriminates against applicants or employees on a protected basis, such as race, gender, or disability. Employers may face liability even without discriminatory intent if the ADS has a disparate impact on protected groups.
  • Anti-bias Testing: Employers can point to anti-bias testing as a defense against discrimination claims; conversely, the absence of such testing may be used as evidence against them.
  • Records Retention: Employers must maintain ADS-related records for four years, including selection criteria and audit findings.

California Privacy Protection Agency – ADMT Regulations

The California Privacy Protection Agency (CPPA) has approved regulations governing the use of automated decision-making technology (ADMT) to make significant decisions about consumers, including employment decisions. Businesses must comply by January 1, 2027, including for ADMT already in use.

Notable provisions include:

  • Covered Technologies: ADMT is defined as any technology that processes personal information and uses computation to replace, or substantially replace, human decision-making.
  • Significant Decision: This includes employment opportunities like hiring, compensation, and termination.
  • Pre-use Notice: Businesses must give consumers advance notice describing the use of ADMT and their right to opt out.
  • Opt-out: Consumers must be able to opt out of ADMT used for significant decisions, subject to exceptions, including where the business offers a human appeal process.
  • Access: Upon request, businesses must explain the purpose and logic of the ADMT and how it was applied to the consumer.
  • Risk Assessments: Businesses must conduct risk assessments prior to using ADMT for significant decisions, detailing potential impacts on employees.

Legislation Updates

The California Legislature recently passed SB 7, the “No Robo Bosses Act,” which would require notice and human oversight when ADS is used for employment-related decisions. Separately, SB 53, the Transparency in Frontier Artificial Intelligence Act, has been signed into law; it introduces whistleblower protections for employees at large AI companies who raise public health and safety concerns.

Next Steps for Employers

To prepare for these regulations and new laws, employers should:

  • Inventory the technologies their HR departments use that may trigger these legal requirements.
  • Develop policies to ensure compliance when implementing AI in employment contexts.
  • Provide timely notices to applicants and employees.
  • Conduct necessary risk assessments and bias audits.
  • Adhere to recordkeeping requirements.

For additional best practices on AI usage in the workplace, refer to previous discussions on this topic.
