How HR Can Navigate the EU AI Act
The adoption of AI in HR presents vast potential to streamline processes, enhance decision-making, and drive innovation. However, realizing these benefits necessitates addressing the complexities of the EU AI Act, particularly as the regulatory approaches diverge between the EU and the US. This divergence underscores the critical need for robust AI governance to ensure ethical and safe adoption across borders.
For companies that move data back and forth across the Atlantic—most modern businesses—it’s essential to understand that what is acceptable in the US might not align with European regulations, especially when data concerning EU or UK citizens is involved. Even in the US, the absence of AI-specific regulations does not equate to a lack of risk. Many states enforce local data protection and AI regulations, and there is always the risk of litigation over issues such as intellectual property infringement, discrimination, and deceptive practices.
To thrive in a regulatory environment that demands agility and accountability, HR should adopt the following practices to build a future-proof AI strategy:
1. Understand the Risk Tiers
The EU AI Act categorizes AI tools into four risk levels: minimal, limited, high, and unacceptable. Each tier carries specific compliance obligations, with the heaviest falling on the high-risk category, which includes technologies used in recruitment and employee management. These systems require stringent measures to ensure transparency, mitigate bias, and uphold robust data governance.
To comply, organizations should conduct a comprehensive audit of current and planned AI systems to determine their risk tier. It is also essential to establish whether the organization qualifies as an ‘AI provider’ under the Act, a role that carries additional responsibilities.
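In practice, that audit can start as a simple, structured inventory of systems and their assigned tiers. Below is a minimal sketch in Python; the class names, fields, and example entries are illustrative assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AISystemRecord:
    """One entry in the organization's AI system inventory (illustrative fields)."""
    name: str
    vendor: str
    hr_use_case: str        # e.g. "CV screening", "shift scheduling"
    risk_tier: RiskTier
    acts_as_provider: bool  # does the organization qualify as an 'AI provider' for this system?
    review_due: str         # ISO date of the next compliance review


# Hypothetical inventory entries, for demonstration only.
inventory = [
    AISystemRecord("ResumeRanker", "Acme AI", "CV screening",
                   RiskTier.HIGH, acts_as_provider=False, review_due="2025-06-30"),
    AISystemRecord("PolicyChat", "In-house", "Employee policy Q&A",
                   RiskTier.LIMITED, acts_as_provider=True, review_due="2025-09-30"),
]

# High-risk systems carry the heaviest obligations, so surface them first.
for system in (s for s in inventory if s.risk_tier is RiskTier.HIGH):
    print(f"{system.name}: high-risk, next review {system.review_due}")
```

Keeping the inventory in a structured form like this makes it straightforward to report which systems fall into which tier and when each is due for review.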
2. Conduct Ongoing Audits
Unlike traditional software, which can be installed and then largely left alone, AI tools are non-deterministic: their behavior can change over time and across contexts. New input data can yield unexpected responses, introducing new forms of bias, misinformation, or even risks to data protection and information security. Ongoing evaluation of AI systems is therefore crucial to avoid compromising users or damaging the company’s reputation.
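One way to make that evaluation routine is to recompute simple fairness indicators over recent outcomes at a fixed cadence. The sketch below applies one common heuristic: flag the system if any group’s selection rate drops below 80% of the highest group’s rate. The data, group labels, and threshold are illustrative assumptions, not requirements of the Act, and real audits should be designed with legal and data protection input.

```python
from collections import Counter

# Hypothetical screening outcomes from the latest audit window:
# each record is (applicant_group, passed_ai_screen).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

# Selection rate per group = share of applicants the AI screen passed.
totals, passed = Counter(), Counter()
for group, was_passed in outcomes:
    totals[group] += 1
    passed[group] += was_passed
rates = {group: passed[group] / totals[group] for group in totals}

# Flag groups whose selection rate falls below 80% of the best-performing group's rate.
best_rate = max(rates.values())
flagged = {group: rate for group, rate in rates.items() if rate < 0.8 * best_rate}

print("Selection rates:", rates)
if flagged:
    print("Disparity detected; review needed for:", flagged)
```

Running a check like this on a schedule, and logging the results, turns ongoing evaluation from a principle into an operational habit.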
3. Train HR Teams
AI governance is not solely a technical challenge; it also involves cultural and educational dimensions. Providing training on AI governance and ethics empowers HR professionals to implement AI responsibly. Tailored training programs equip HR professionals to act as informed custodians of AI systems, rather than merely passive users of software. The goal is to enhance collaboration between humans and AI, ensuring that the combined output is superior to what either could achieve independently.
4. Build Trust Through Transparency
Clear disclosures about AI use in recruitment and workforce management foster trust among employees and candidates. This trust is further reinforced when organizations demonstrate a commitment to ethical AI practices, establishing themselves as fair and forward-thinking employers.
Employees who understand how AI impacts their interactions are more likely to embrace its benefits and contribute to its improvement. Regular communication should cover:
- How AI-driven decisions are made and monitored;
- Steps taken to improve the accuracy and fairness of these systems;
- Mechanisms in place for addressing errors or concerns.
5. Monitor the Regulatory Environment
Change is the only constant, particularly in the AI regulatory landscape, which is shaped as much by politics as by technology. Collaborating with legal, data protection, and information security teams is essential to stay ahead of updates. Organizations should consider adopting AI systems with built-in compliance features to simplify future regulatory adjustments. Observing how other organizations respond can also provide valuable lessons, from both their successes and their missteps.
As a standard practice, organizations should:
- Maintain detailed records of AI system operations and compliance measures (see the sketch after this list);
- Engage with regulators and industry bodies to stay informed about upcoming changes;
- Align AI initiatives with broader organizational goals to ensure they deliver measurable value.
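To make the first of those points concrete, the sketch below appends each AI-assisted decision to a simple audit log. The function name, fields, and JSON Lines format are hypothetical choices for illustration; the records an organization actually needs to keep should be agreed with legal and data protection teams.

```python
import json
from datetime import datetime, timezone


def log_ai_decision(system_name: str, decision: str, model_version: str,
                    human_reviewer: str, path: str = "ai_decision_log.jsonl") -> None:
    """Append one AI-assisted decision to an audit trail (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "decision": decision,
        "model_version": model_version,
        "human_reviewer": human_reviewer,  # who validated or overrode the output
    }
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")


# Example: record a screening recommendation together with the reviewer who checked it.
log_ai_decision("ResumeRanker", "advance_to_interview", "v2.3", "hr.analyst@example.com")
```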
Outsourcing some of these responsibilities can also be beneficial, helping to accelerate change, manage risk, and drive competitive advantage.