Regulating AI in the Workplace: The Need for Federal Standards

Across the country, state legislatures are moving quickly to regulate artificial intelligence in the workplace. California’s proposed SB 947, the Automated Decision Systems in the Workplace bill, introduced on February 2, 2026, is one prominent example. The bill is part of a broader trend aimed at governing how employers may adopt, deploy, and rely on AI-driven tools when making employment-related decisions.

Other proposed AI-related legislation underscores how rapidly this movement is accelerating. For instance, SB 951, the California Worker Technological Displacement Act, would require employers to provide at least 90 days’ advance notice before layoffs caused by “technological displacement.” Additionally, the California Labor Federation has publicly stated its intent to sponsor or support more than two dozen bills this year focused on the impact of artificial intelligence on workers in California.

While these bills are framed as worker-protection measures, they also reflect a deeper policy tension: whether AI in the workplace should be regulated piecemeal at the state level, or whether regulation must occur at the federal level to avoid a patchwork of rules that could hinder innovation and economic growth.

Case Study: SB 947

SB 947 makes a useful case study because many of these proposals proceed from similar assumptions and raise the same structural problems. Bills like SB 947 seek to regulate employers’ use of automated decision systems (ADS), a term defined broadly enough to encompass AI-driven tools, analytics software, and other technologies used to assist with employment decisions. The scope typically extends beyond hiring and firing to include scheduling, compensation, performance evaluation, work assignments, and discipline.

Under SB 947, employers would be prohibited from relying solely on an automated system for disciplinary or termination decisions; instead, a human would be required to conduct an “independent investigation” to corroborate any AI-generated output. Similar proposals restrict the types of data that may be used, prohibit predictive behavior analysis, and bar the use of systems that could infer protected characteristics.

These bills commonly create new notice and disclosure obligations. For instance, if an AI-assisted tool is used in connection with discipline or termination, employers may be required to provide written post-use notices, identify vendors, explain human review processes, and produce data inputs, outputs, corroborating materials, and impact assessments upon request.

Enforcement mechanisms are expansive. Under SB 947, for example, compliance would be enforced not only by labor agencies and public prosecutors but also through private civil actions, with attorneys’ fees and punitive damages available. The result is a new, litigation-driven compliance regime layered on top of an already complex body of employment law.

Challenges of State-Level AI Regulation

Layered onto this regulatory push is a fundamental uncertainty: the actual impact of AI on jobs remains unclear. Yet many of these bills assume the answers are already settled. Below are five reasons why state-level efforts to regulate employer adoption of AI, as illustrated by SB 947, are misaligned with the evolving AI policy conversation.

1. Federal Consensus on Uniform Standards

At the federal level, there is increasing bipartisan agreement that a state-by-state approach to AI regulation is incompatible with innovation and economic growth. Although Congress has not yet enacted comprehensive AI legislation, federal policymakers emphasize the need for a national framework, particularly for technologies deployed at scale.

2. A Patchwork of State Laws

A patchwork of state laws does not necessarily protect workers; rather, it may discourage responsible AI adoption. If employers are discouraged from using standardized, data-driven tools due to legal risks, decision-making becomes more subjective, potentially reducing fairness rather than enhancing it.

3. Uncertainty About AI’s Impact on Jobs

State-level AI regulation often assumes that AI is primarily a job-elimination tool that must be constrained. This assumption is premature. History shows that productivity-enhancing technologies often create new categories of work and increase demand in unexpected areas; the spread of ATMs, for example, coincided with decades of growth in bank-teller employment as branches became cheaper to open.

4. Risks of Overregulation

Overregulation may drive AI use underground rather than making it transparent. If deploying AI triggers extensive notice obligations and litigation exposure, employers may still rely on AI, but in less visible and less documented ways, leaving workers worse off.

5. Future Federal Standards

States risk becoming outliers if, as appears likely, federal AI standards emerge. Employers could then find themselves saddled with state-specific compliance regimes that quickly become obsolete.

The Bottom Line

SB 947 exemplifies the broader trend of state efforts to regulate AI in the workplace before its impacts are fully understood. These proposals often assume harm before the evidence is in, substitute procedural mandates for substantive outcomes, and overlook the growing federal consensus in favor of uniform standards.

AI will undoubtedly change work, but how, how fast, and for whom remain open questions. A national framework focused on outcomes rather than fear is far more likely to protect workers and encourage responsible innovation than a growing patchwork of state experiments.

What Employers Should Do Now

Regardless of how AI regulation develops, AI is already present in the workplace, often before employers realize it. The real risk for California employers is not AI itself, but using it without clear policies, training, and legal guardrails.

To help employers navigate this evolving landscape, a one-hour masterclass is being hosted, focusing on the practical, real-world use of AI in the California workplace.

Masterclass: AI in the California Workplace — Practical Tools, Real Use Cases, and Legal Guardrails

This session will cover how employers are using AI today—from hiring and scheduling to performance management and documentation—along with the key legal and compliance issues to understand, including wage-and-hour exposure, discrimination risk, privacy concerns, and PAGA implications. Attendees will leave with practical guidance on how to use AI responsibly and reduce risk.

Wednesday, February 25, 2026 | 10:00 a.m. PT – Register here.
