AI Hiring Regulations Face Political Pushback

State Efforts to Regulate AI Hiring Pivot After Trump Pushback

State lawmakers are strategizing on how to win passage of bills regulating AI use in employment decisions despite opposition from the tech industry and the Trump administration.

Legislative Measures and Challenges

Legislators from New York to Texas have pushed measures ranging from requiring employers to mitigate potential bias caused by artificial intelligence hiring tools to mandating disclosure and appeals of AI-generated decisions for job applicants.

Despite resistance from a well-resourced tech industry that has sidelined some legislation, and new threats from the federal government and White House to override state AI laws altogether, the movement to regulate remains strong, even if bills are narrowed in scope, according to lawmakers from several states.

“I’m not intimidated, but we also have to be practical,” said Virginia state delegate Michelle Maldonado (D), whose AI bill was vetoed this year by Gov. Glenn Youngkin (R).

Federal Government Response

President Donald Trump released an AI Action Plan in July that included threats to federal funding and possible preemption by the Federal Communications Commission for state AI laws the administration deems overly restrictive. The Senate also seriously considered legislative language this summer banning state laws that regulate AI.

“This is going to require us to think about, is there a different approach?” Maldonado said, suggesting targeting legislation narrowly on transparency and disclosures.

State-Specific Developments

Colorado passed the nation’s broadest AI bias law to date, but lawmakers voted in August to delay it and try to revise it before it takes effect.

Connecticut Sen. James Maroney (D) said that even blue-state lawmakers who attempted to imitate Colorado’s measure may be ready to narrow their focus in 2026, either through industry-specific restrictions or a transparency-only approach.

The latter would require employers to disclose to job applicants and employees when and how they’re using AI tools, but stop short of mandatory bias audits or detailed risk management plans.

Transparency Details

Even if state lawmakers adopt a transparency focus, the details could vary widely. An Illinois law set to take effect Jan. 1, 2026, requires employers to give workers notice when using AI for employment decisions, but offers no specifics on what the notice should include.

By contrast, Colorado lawmakers considered an AI Sunshine Act to replace their broader AI bias law. This act would have required businesses to notify individuals of up to 20 factors that AI tools considered before rejecting them, plus the opportunity to correct inaccurate data.

The tech industry balked at the Colorado bill, including the measure’s language seeking to impose joint liability for discrimination claims on AI technology developers alongside the companies using the tools.

“We’re encouraged to see more lawmakers considering transparency-focused approaches,” said David Edmonson, senior vice president of state policy at the industry association TechNet. “While the details matter, transparency can often be a more workable path than some of the more onerous mandates that have been proposed.”

Broader Job Bias Protections

Not everyone is ready to surrender efforts at broader job bias protections. Colorado Rep. Brianna Titone (D), a cosponsor of the law delayed to June 2026, said she still sees hope for legislation forcing technology developers to share liability for discrimination claims, rather than assigning it all to businesses using the tools to boost hiring efficiency.

“I still get denied my job. I still get denied my health care. I still get denied my insurance policy or whatever it is, but I have no recourse,” she said.

Even in California, which is often first in the nation on pro-worker legislation, supporters of AI bills affecting employment had mixed success in 2025. State legislators passed a bill targeting AI-powered workforce management while letting a comprehensive proposal focused on discrimination die.

Compliance Challenges for Employers

Absent action from Washington, evolving and divergent state requirements can complicate compliance. Some states press employers to take concrete steps such as bias testing, maintaining documentation, and implementing other system controls.

Others focus more on transparency, data privacy, and notice obligations around automated decision-making, or on offering rights of appeal or opt-out.

“Employers now have an obligation to really dig in and develop a deep understanding of the software they are using,” said Lauren Hicks, a shareholder at Ogletree Deakins. “That’s extremely critical now, so that they can work to meet these compliance obligations that are going to vary state by state.”

Preemption Threat

How Congress might address preemption in future artificial intelligence legislation remains to be seen, creating risk for states that advance AI protections. Trump’s action plan instructs federal agencies to deny AI-related funding to states whose laws undermine the funds’ purpose, such as promoting AI industry growth.

However, the plan doesn’t clarify which kinds of state laws are prone to federal scrutiny. “Conditioning federal grants on a state’s AI regulatory climate is inherently malleable and thus hard to predict,” said Mackenzie Arnold, director of US policy at the Institute for Law & AI.

For some employment laws, like the Fair Labor Standards Act, federal rules act as a floor, and states are able to set higher standards. However, in other areas, the US Supreme Court has recognized the primacy of federal law, preempting state statutes regulating conduct covered by the National Labor Relations Act.

Maldonado, who plans to reintroduce her legislation next session, expressed her determination: “We should put something in place, and if it gets preempted, then fine,” she said, “but more likely than not, it may not.”
