Approaching AI Risk as the Staffing Industry Evolves
In today’s rapidly changing staffing industry, the only constant is change. Markets fluctuate, geopolitical challenges arise, and technological innovation surges like a tidal wave, particularly with the advent of artificial intelligence (AI).
AI-Specific Regulation
Despite the explosive growth of AI tools in areas like talent acquisition, scheduling, productivity tracking, and workforce analytics, there remains a significant regulatory vacuum at the federal level in the United States. In that void, a patchwork of state and local regulations has emerged, complicating compliance for staffing companies operating across jurisdictions.
Jurisdictions such as California, Illinois, Colorado, and New York City have begun to implement their own regulations. These regulations generally encompass four key themes:
- Disclosure to candidates when automated decision tools are utilized.
- Transparency regarding how these tools function and the data they rely on.
- Bias and impact assessments to evaluate potential discriminatory outcomes.
- Ongoing auditing and governance requirements to ensure compliance over time.
On December 11, 2025, the White House issued an executive order aimed at creating a uniform federal AI policy, which may eventually consolidate the varying state requirements. However, the timeline for any such change remains uncertain.
Existing Law
Even in the absence of a comprehensive AI legal framework, staffing companies must navigate existing equal employment opportunity laws. These laws prohibit both intentional discrimination and practices that disproportionately affect protected groups. Key federal statutes include:
- Title VII of the Civil Rights Act
- The Americans with Disabilities Act
- The Age Discrimination in Employment Act
These principles are technology-agnostic: if an AI algorithm screens out candidates based on gender, age, disability, or race, even inadvertently, the employer remains liable, irrespective of the technology used.
Moreover, many states provide broader protections than federal law. For example, California’s Fair Employment and Housing Act mandates affirmative actions to prevent discrimination, requiring staffing companies to apply the same compliance standards to AI tools as they would to human decision-makers.
Ethics
The ethical implications of using AI in employment decisions extend beyond statutory compliance. The principles guiding employment law are rooted in core human values that prioritize growth, explanation, accommodation, and redemption. This means that candidates with unconventional backgrounds should not be permanently excluded from opportunities due to rigid AI algorithms.
Unfortunately, poorly designed or governed AI systems risk perpetuating past biases and eroding human judgment. When decisions become opaque and automated, the danger is not just disparate impact but also the potential loss of discretion when it matters most.
The essential takeaway for staffing companies is that AI should augment, not replace, lawful, fair, and humane decision-making. A thoughtful approach to AI risk recognizes that compliance involves not only avoiding penalties but also preserving the fundamental values that underpin employment relationships.