Congressional Witnesses Split on AI Regulation: State Laws Struggle
If your organization is navigating AI regulation through a patchwork of state laws, you’re not alone in feeling the strain. Congress recently heard testimony suggesting that those very laws may not be working as intended.
The House Education & Workforce Committee’s Health, Employment, Labor and Pensions Subcommittee is contemplating whether new federal laws are needed to regulate artificial intelligence in the workplace.
Key Takeaways for HR Leaders
Here are the key takeaways for HR leaders from the hearing, titled “Building an AI-Ready America: Adopting AI at Work.”
State AI Laws Struggling in Practice
Brad Kelley, a shareholder at Littler and a former chief counsel to an EEOC commissioner, testified that rushed state legislation is creating an “increasingly unworkable regulatory environment” for employers. He cited Colorado’s AI Act as a cautionary tale: the law, originally scheduled to take effect in early 2026, has been delayed until June 2026 so that significant flaws can be addressed.
Similarly, New York City’s AI law, which took effect in July 2023, has been criticized as ineffective. An audit by the New York City Department of Consumer and Worker Protection found that only two automated employment decision tool (AEDT) complaints had been filed over the two-year period it covered. These examples illustrate the pitfalls of legislation that outpaces practical experience with the technology.
Existing Laws May Be Sufficient
Kelley argued that the U.S. already has a “well-established technology-neutral legal framework” capable of addressing most AI-related misconduct. Laws such as Title VII of the Civil Rights Act of 1964 and the Fair Labor Standards Act of 1938 have proven resilient even as technology has evolved.
He noted that, in his experience, employers typically use AI to enhance efficiency rather than to interfere with union activity, and he asserted that extreme hypotheticals, such as deploying AI to suppress organizing efforts, remain largely theoretical.
Union Considerations
U.S. Rep. Summer Lee (D-Pa.) countered this perspective, citing reports of Whole Foods using an AI-driven “heat map” to track stores at risk of union activity. Additionally, she mentioned the National Eating Disorders Association’s decision to replace helpline staff with an AI chatbot shortly after those workers voted to unionize, though the bot was later shut down for providing harmful advice.
Lack of Data on AI’s Impact
Revana Sharfuddin, a labor economist and research fellow, pointed out that federal statistical agencies currently lack the means to measure AI’s real impact on American workers. Existing labor statistics, she explained, focus on job counts rather than on how work is performed.
Sharfuddin recommended three achievable steps to improve data collection: adding an AI supplement to federal surveys, linking firm-level adoption data to worker outcomes, and coordinating annual reporting across federal agencies. Good policy requires good data, she emphasized, and the data currently available falls short.
Algorithmic Management is Pervasive
Tanya Goldman, a fellow at The Workshop, presented a different view of AI adoption, citing an OECD survey indicating that 90% of U.S. employers use algorithmic management tools. She described how AI allows for invasive surveillance measures, such as tracking bathroom breaks and monitoring mouse movements, which can create undue pressure on employees.
AI in Employment Decisions
Goldman noted that a significant share of employers use AI tools for recruitment and screening, often with little transparency. That opacity can leave workers without recourse to challenge decisions affecting their employment. She also criticized algorithmic wage-setting practices that can produce pay disparities.
Challenges in Enforcement
Witnesses acknowledged that existing employment laws are only as effective as the mechanisms that enforce them. Goldman pointed out that the EEOC now has significantly fewer investigators than in the past while facing a growing workload, which hampers its ability to enforce the law.
Self-Governance vs. Blanket Restrictions
David Walton, a partner at Fisher Phillips, argued for “robust self-governance” rather than blanket restrictions. He outlined governance frameworks that many employers are establishing to ensure transparency and worker involvement in AI-related decisions.
The Future of AI Legislation
The committee did not announce next steps on potential AI workplace legislation. Both Subcommittee Chairman Rick Allen (R-Ga.) and Ranking Member Mark DeSaulnier (D-Calif.) stressed the importance of bipartisan collaboration, with DeSaulnier urging the committee to address worker protections proactively rather than waiting until workers suffer further harm.