State AI Regulations and Their Impact on Employer Wage Decisions

State Lawmakers Seek to Regulate Employer Use of AI for Wage Decisions

As employers explore new ways to use artificial intelligence (AI) tools and software to enhance business operations, state legislators have taken notice. Lawmakers are increasingly scrutinizing the use of AI and automated decision tools to set or influence employee compensation. Their primary objectives are to curb the discriminatory impacts that can arise from algorithmic wage-setting and to improve transparency for employees and applicants about how such technologies are applied.

Recent State Legislative Activity

Several states, including California, Colorado, Georgia, and Illinois, introduced bills in 2025 that aim to establish parameters on AI-driven compensation decisions. In January 2026, New York and Maryland lawmakers followed suit with similar legislative proposals.

While the proposed laws are not identical, they share several common features. Notably, they define “automated decision systems” as systems, software, or processes—often employing machine learning or AI techniques—designed to assist or replace human decision-making. In the employment context, these definitions encompass automated human resources tools and software systems that utilize predefined rules to process data through algorithms and facilitate HR functions. This can range from basic rule-based systems to more sophisticated technologies powered by generative AI.

Moreover, most of these proposed state laws identify conduct that would not be considered unlawful algorithmic wage-setting. Examples of these exclusions include:

  • Offering individualized wages based on data related to the services performed by workers.
  • Providing clear disclosures to employees and applicants about the use of automated decision systems, including the data considered and the methodology employed.
  • Establishing procedures to ensure the accuracy of data utilized by automated decision systems in wage determination.

Legal Risks Associated With AI-Driven Compensation Decisions

Lawmakers advocating for these proposed regulations emphasize that the unregulated application of AI in compensation decisions may lead to discriminatory outcomes. Employers’ AI-driven compensation decisions could be actionable under various employment laws, including:

  • Title VII of the Civil Rights Act
  • The Americans with Disabilities Act
  • The Age Discrimination in Employment Act
  • The Equal Pay Act
  • Applicable state and local laws

The inherent nature of automated decision systems poses unique legal risks for employers, particularly in the realm of employee compensation. A significant challenge arises from the lack of transparency regarding how these tools arrive at their conclusions or recommendations. While human decision-makers can typically articulate the rationale behind compensation decisions, it may be difficult—or even impossible—to discern the reasoning of certain AI systems. This lack of clarity makes employers vulnerable to legal challenges concerning compensation outcomes determined by AI tools or software. The potential liability may increase if these systems are used to influence the compensation of a large number of employees or applicants.

Takeaways for Employers

For the time being, employers should ensure compliance with applicable federal and state laws set to take effect in 2026. This includes, at a minimum, identifying each AI tool currently utilized in employment decision-making and evaluating whether those tools fall under any state or local regulations.

Employers are also advised to develop and implement a comprehensive AI policy that sets internal procedures for using AI, provides required notices to employees and applicants about AI usage, and mandates human oversight of AI-driven recommendations.

Looking ahead, employers should actively monitor developments in federal, state, and local legislation and agency regulations that govern AI’s role in employee compensation and other employment terms. As states rapidly work to establish boundaries for AI in workplace decision-making, employers who proactively assess their AI-related practices and prioritize transparent human involvement in decision-making processes—including compensation decisions—will be better positioned to mitigate legal risks and adapt to evolving regulatory requirements.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...