AI Regulation: States vs. Federal Government in the Workplace

The landscape of artificial intelligence (AI) in the workplace is evolving rapidly as businesses brace for an influx of new regulations. States are tightening their rules on AI while the federal government pushes for deregulation, producing a complex patchwork of legislative efforts aimed at governing how AI is used in hiring and other employment decisions.

State vs. Federal Regulations

President Donald Trump has initiated a federal push towards deregulating AI. However, states like California, New York, Connecticut, and Texas are considering legislation that aims to regulate AI systems involved in hiring and firing processes. This divergence in regulatory approaches could create friction as states implement stricter laws while the federal administration seeks to reduce oversight.

High-Risk AI Systems

Proposed legislation categorizes workplace AI systems that influence hiring and firing as “high risk.” Such classifications would impose special requirements on businesses, including:

  • Mandatory disclosure of AI usage to job candidates.
  • Restrictions on how heavily employers may rely on automated decision-making for significant employment actions.

Reena Richtermeyer, an attorney at CM Law, emphasizes that the real test of any new legislation lies in its enforcement. The ambiguity and complexity of some laws could complicate their implementation.

Enforcement and Its Implications

The enforcement of these new laws will play a critical role in shaping how businesses respond to AI regulations. Richtermeyer notes that the initial enforcement cases will set a precedent for corporate compliance. For instance, California’s proposed No Robo Bosses Act seeks to ensure human oversight in AI-driven hiring, promotion, and termination decisions.

State Senator Jerry McNerney has articulated the core principle behind this legislation: “AI must remain a tool controlled by humans, not the other way around.” This sentiment is echoed in similar legislation proposed in Connecticut, which also demands human oversight for consequential AI decisions made in human resources.

The Role of States in AI Governance

As the federal government’s stance on AI regulation evolves, experts believe that states will take the lead in establishing guidelines and standards. David Trier, vice president of product at ModelOp, stresses that businesses must prepare for these impending rules: companies operating in states that regulate AI will have to comply with those local laws.

Federal Deregulation Efforts

Amid these state-level efforts, the Trump administration has called for a reduction of barriers to AI development. An executive order requested public input on AI policy ideas, signaling a shift toward less regulation.

While the Equal Employment Opportunity Commission (EEOC) previously asserted that existing federal laws could safeguard against AI-induced employment discrimination, the recent shift toward a focus on “bias” rather than “discrimination” introduces new challenges. Robert Taylor, an attorney at Carstens, Allen & Gourley, warns that the varying definitions of bias across state laws could expose companies to broader legal risks.

Conclusion

The regulatory environment surrounding AI in the workplace is in a state of flux. As states move forward with legislation, the implications for businesses will be significant. Companies must remain vigilant and proactive in understanding and adapting to these changes to mitigate legal risks associated with AI technologies in employment practices.
