California’s No Robo Bosses Act: Regulating AI in Employment Decisions

“No Robo Bosses Act” Proposed in California

A new bill in California, known as the No Robo Bosses Act, has been introduced to limit and regulate the use of artificial intelligence (AI) in employment decision-making. The bill, designated SB 7, aims to address growing concern over reliance on automated systems in critical areas such as hiring, promotions, disciplinary actions, and terminations.

Definition of Automated Decision Systems

The act applies a broad definition to the term "automated decision system" (ADS). Under the proposal, an ADS is any computational process derived from machine learning, statistical modeling, data analytics, or AI that produces simplified outputs, such as scores, classifications, or recommendations, which are used to assist or replace human decision-making in ways that significantly impact individuals in the workforce.
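To make the breadth of that definition concrete, the toy sketch below shows how even a simple weighted-scoring script that turns applicant data into a score and a recommendation would appear to fall within it. The feature names, weights, and threshold are invented for illustration and do not come from the bill.

```python
# Toy illustration of what could qualify as an "automated decision system" (ADS)
# under SB 7's broad definition: a computational process that turns applicant
# data into a simplified output (a score and a recommendation) used to assist
# or replace human decision-making. All feature names and weights are hypothetical.

def score_applicant(applicant: dict) -> dict:
    # Hypothetical weights, as if derived from data analytics on past hires.
    weights = {"years_experience": 0.5, "skills_match": 0.3, "assessment_score": 0.2}
    score = sum(weights[k] * applicant.get(k, 0.0) for k in weights)

    # Simplified output: a classification/recommendation based on the score.
    recommendation = "advance to interview" if score >= 6.0 else "reject"
    return {"score": round(score, 2), "recommendation": recommendation}


if __name__ == "__main__":
    candidate = {"years_experience": 7, "skills_match": 8, "assessment_score": 5}
    print(score_applicant(candidate))
    # e.g. {'score': 6.9, 'recommendation': 'advance to interview'}
```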

Key Provisions of SB 7

SB 7 contains several key provisions designed to enhance transparency and fairness in the use of ADS:

  1. Employers must provide a plain-language, standalone notice to employees, contractors, and applicants about the use of ADS in employment-related decisions at least 30 days prior to the introduction of the system.
  2. Employers are required to maintain a list of all ADS in use and include this list in the notice provided to employees, contractors, and applicants (see the compliance sketch following this list).
  3. The act prohibits employers from relying primarily on ADS for hiring, promotion, discipline, or termination decisions.
  4. Employers cannot use ADS that prevent compliance with existing laws or regulations, infer a protected status, conduct predictive behavior analysis, or take action against workers for exercising their legal rights.
  5. Workers will have the right to access the data collected by ADS and correct any errors.
  6. Workers can appeal employment-related decisions made by ADS, and a human reviewer must evaluate each appeal.
  7. The bill establishes enforcement measures against any discharges, discrimination, or retaliation against workers for exercising rights granted under SB 7.
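To illustrate how provisions 1 and 2 might look in practice, here is a minimal, hypothetical compliance sketch: a record type for an employer's ADS inventory plus a check that the standalone notice went out at least 30 days before the system is introduced. The field names, helper functions, and the way the 30-day rule is encoded are my assumptions, not language from the bill.

```python
# Hypothetical sketch of an ADS inventory with a 30-day notice check,
# loosely modeled on SB 7's notice and inventory provisions (items 1 and 2).
# Field names and the compliance logic are illustrative assumptions only.
from dataclasses import dataclass
from datetime import date, timedelta

NOTICE_LEAD_TIME = timedelta(days=30)

@dataclass
class ADSRecord:
    name: str                 # e.g. "resume screening tool"
    purpose: str              # the employment-related decision it supports
    notice_sent: date         # when the plain-language notice went out
    deployment_date: date     # when the system is introduced

    def notice_is_timely(self) -> bool:
        # Notice must precede deployment by at least 30 days.
        return self.deployment_date - self.notice_sent >= NOTICE_LEAD_TIME

def inventory_for_notice(records: list[ADSRecord]) -> list[str]:
    # The list of ADS in use, suitable for inclusion in the worker notice.
    return [f"{r.name}: {r.purpose}" for r in records]

if __name__ == "__main__":
    ads = ADSRecord(
        name="resume screening tool",
        purpose="ranking applicants for interviews",
        notice_sent=date(2025, 6, 1),
        deployment_date=date(2025, 7, 15),
    )
    print(inventory_for_notice([ads]))
    print("notice timely:", ads.notice_is_timely())  # True (44 days of lead time)
```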

Complementary Regulations by the California Civil Rights Council

In conjunction with SB 7, the California Civil Rights Council has proposed regulations aimed at protecting employees from discrimination, harassment, and retaliation resulting from employers' use of ADS. The regulations single out tools such as predictive assessments that measure skills or personality traits and resume-screening software, which may inadvertently discriminate against employees or applicants based on protected characteristics.

The proposed rule and SB 7 are intended to work in tandem, strengthening the legal framework surrounding the use of ADS in employment if both are ultimately adopted.
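As one concrete example of the kind of disparity these rules are concerned with, an employer auditing a resume-screening ADS might compare selection rates across groups using the familiar four-fifths (80%) adverse-impact rule. The sketch below is a generic illustration of that check; it is not a methodology prescribed by SB 7 or the Civil Rights Council, and the sample counts are invented.

```python
# Generic adverse-impact check (the "four-fifths rule") an employer might run
# on an ADS's selection outcomes. Illustration only; neither SB 7 nor the
# proposed Civil Rights Council regulations prescribe this exact test.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def adverse_impact_ratio(rates: dict[str, float]) -> tuple[float, str, str]:
    # Ratio of the lowest group selection rate to the highest.
    lowest = min(rates, key=rates.get)
    highest = max(rates, key=rates.get)
    return rates[lowest] / rates[highest], lowest, highest

if __name__ == "__main__":
    # Invented counts: (selected, applicants) per group advanced by an ADS.
    outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}

    ratio, low, high = adverse_impact_ratio(rates)
    print(f"impact ratio ({low} vs {high}): {ratio:.2f}")
    if ratio < 0.8:
        print("below the 4/5 threshold: potential adverse impact to investigate")
```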

Current Status and Future Implications

As of this writing, the No Robo Bosses Act is in its early stages, with its first committee hearing scheduled for April 9, 2025. The bill may change substantially before it becomes law, if it passes at all. Given its broad implications and the likelihood that other states will adopt similar legislation, SB 7 warrants close attention from stakeholders in the labor and technology sectors.

In summary, the No Robo Bosses Act represents a significant step towards regulating the intersection of artificial intelligence and employment practices, aiming to protect workers’ rights and ensure fairness in decision-making processes within organizations.
