State Efforts to Regulate AI Hiring Pivot After Trump Pushback

State lawmakers are strategizing over how to win passage of bills regulating AI use in employment decisions, despite opposition from the tech industry and the Trump administration.

Legislative Measures and Challenges

Legislators from New York to Texas have pushed measures ranging from requiring employers to mitigate potential bias caused by artificial intelligence hiring tools to mandating disclosure and appeals of AI-generated decisions for job applicants.

Despite resistance from a well-resourced tech industry that has sidelined some legislation, and new threats from Congress and the White House to override state AI laws altogether, the movement to regulate remains strong, even if bills are narrowed in scope, lawmakers from several states said.

“I’m not intimidated, but we also have to be practical,” said Virginia state delegate Michelle Maldonado (D), whose AI bill was vetoed this year by Gov. Glenn Youngkin (R).

Federal Government Response

President Donald Trump released an AI Action Plan in July that threatened federal funding cuts and possible Federal Communications Commission preemption of state AI laws the administration deems overly restrictive. The Senate also seriously considered language this summer that would have banned state laws regulating AI.

“This is going to require us to think about, is there a different approach?” Maldonado said, suggesting targeting legislation narrowly on transparency and disclosures.

State-Specific Developments

Colorado passed the nation’s broadest AI bias law to date, but lawmakers voted in August to delay it and try to revise it before it takes effect.

Connecticut Sen. James Maroney (D) said even blue-state lawmakers who attempted to imitate Colorado’s measure might be ready to narrow their focus in 2026, either via industry-specific restrictions or a transparency-only approach.

The latter would require employers to disclose to job applicants and employees when and how they’re using AI tools, but stop short of mandatory bias audits or detailed risk management plans.

Transparency Details

Even if state lawmakers adopt a transparency focus, the details could vary widely. An Illinois law set to take effect Jan. 1, 2026, requires employers to give workers notice when using AI for employment decisions, but offers no specifics on what the notice should include.

By contrast, Colorado lawmakers considered an AI Sunshine Act to replace their broader AI bias law. The measure would have required businesses to notify individuals of up to 20 factors that AI tools considered before rejecting them, and to give those individuals the opportunity to correct inaccurate data.

The tech industry balked at the Colorado bill, particularly its language seeking to impose joint liability for discrimination claims on AI technology developers alongside the companies using the tools.

“We’re encouraged to see more lawmakers considering transparency-focused approaches,” said David Edmonson, senior vice president of state policy at the industry association TechNet. “While the details matter, transparency can often be a more workable path than some of the more onerous mandates that have been proposed.”

Broader Job Bias Protections

Not everyone is ready to surrender efforts at broader job bias protections. Colorado Rep. Brianna Titone (D), a cosponsor of the law delayed to June 2026, said she still sees hope for legislation forcing technology developers to share liability for discrimination claims, rather than assigning it all to the businesses that use the tools to boost hiring efficiency.

“I still get denied my job. I still get denied my health care. I still get denied my insurance policy or whatever it is, but I have no recourse,” she said.

Even in California, often first in the nation on pro-worker legislation, supporters of AI bills affecting employment had mixed success in 2025. State legislators passed a bill targeting AI-powered workforce management while letting a comprehensive proposal focused on discrimination die.

Compliance Challenges for Employers

Absent action from Washington, divergent state requirements can make compliance more difficult as they evolve. Some states press employers to take steps such as bias testing, documentation, and other system controls.

Others, in contrast, focus more on transparency, data privacy, or notice obligations around automated decision-making, or on offering rights of appeal or opt-out.

“Employers now have an obligation to really dig in and develop a deep understanding of the software they are using,” said Lauren Hicks, a shareholder at Ogletree Deakins. “That’s extremely critical now, so that they can work to meet these compliance obligations that are going to vary state by state.”

Preemption Threat

How Congress might address preemption in future artificial intelligence legislation remains to be seen, creating risk for states that advance AI protections. Trump’s action plan instructs federal agencies to deny AI-related funding to states whose laws undermine the funds’ purpose, such as promoting AI industry growth.

However, the plan doesn’t clarify which kinds of state laws would draw federal scrutiny. “Conditioning federal grants on a state’s AI regulatory climate is inherently malleable and thus hard to predict,” said Mackenzie Arnold, director of US policy at the Institute for Law & AI.

For some employment laws, like the Fair Labor Standards Act, federal rules act as a floor, and states are able to set higher standards. However, in other areas, the US Supreme Court has recognized the primacy of federal law, preempting state statutes regulating conduct covered by the National Labor Relations Act.

Maldonado, who plans to reintroduce her legislation next session, expressed her determination: “We should put something in place, and if it gets preempted, then fine,” she said, “but more likely than not, it may not.”
