The EU AI Act from an HR Perspective
The EU AI Act came into force on 1 August 2024, establishing the world’s first comprehensive set of rules for artificial intelligence. Its aim is to standardize requirements for the development and use of AI in the European Union, significantly impacting organizations, especially in the HR sector.
Framework Conditions of the AI Act
The AI Act regulates the placing on the market, putting into service, and use of AI systems in the EU. It employs a horizontal, risk-based approach focused on product safety, classifying AI systems into five risk categories:
- Unacceptable risk: Prohibited AI
- High risk: High-risk AI
- Systemic risk: General-purpose AI with systemic risks
- Limited risk: Specific and general-purpose AI
- Low risk: All other AI systems
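The five categories above can be illustrated as a simple triage table. This is a hypothetical sketch for internal inventory purposes only; the use-case mapping is an assumption, and actual classification requires legal analysis of Article 5 and Annex III of the Act.

```python
from enum import Enum


class RiskCategory(Enum):
    """The five risk categories of the EU AI Act."""
    UNACCEPTABLE = "prohibited AI"
    HIGH = "high-risk AI"
    SYSTEMIC = "general-purpose AI with systemic risks"
    LIMITED = "specific and general-purpose AI"
    LOW = "all other AI systems"


# Illustrative mapping of common HR use cases to categories (assumption,
# not legal advice; the real assessment is case-by-case).
HR_USE_CASE_RISK = {
    "emotion recognition at the workplace": RiskCategory.UNACCEPTABLE,
    "cv screening for recruitment": RiskCategory.HIGH,
    "promotion and termination decisions": RiskCategory.HIGH,
    "chatbot answering hr policy questions": RiskCategory.LIMITED,
}


def classify(use_case: str) -> RiskCategory:
    """Return the mapped risk category; default to low risk if unknown."""
    return HR_USE_CASE_RISK.get(use_case.lower(), RiskCategory.LOW)
```

Such an inventory helps organizations record a provisional designation for each system before the formal legal assessment.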
Impact of the AI Act on HR AI Systems
AI systems are prevalent in HR, particularly in recruitment. Organizations must classify their AI systems according to the relevant risk group and prepare for compliance based on their designation:
AI with Unacceptable Risk
The AI Act prohibits emotion recognition systems in the workplace unless they are deployed for medical or safety reasons. Systems that detect fatigue in pilots or lorry drivers, for instance, may fall under this exemption, but they are then classified as high-risk AI.
High-Risk AI in HR
Many AI systems in the employment sector qualify as high-risk, including those used for:
- Recruitment or selection of candidates
- Employment condition decisions, terminations, or promotions
- Task assignments based on personal traits
- Performance evaluation of employees
Requirements for Providers and Deployers of High-Risk AI
The most stringent obligations fall on providers of AI systems, but deployers (organizations using AI systems) also face significant requirements, particularly when the system is used in a professional context.
Providers of High-Risk AI Systems
Providers must:
- Establish a risk management system
- Test AI systems for compliance before operation
- Ensure human oversight by natural persons
- Disclose interactions with AI systems
Deployers of High-Risk AI Systems
Deployers, typically employers using AI in HR, must:
- Implement technical and organizational protective measures
- Supervise AI systems with trained personnel
- Monitor AI use in accordance with guidelines
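The deployer obligations listed above lend themselves to a tracked checklist. The following is a minimal sketch, assuming a simple boolean status per obligation; the field names paraphrase the three duties from the list and are not terminology from the Act itself.

```python
from dataclasses import dataclass


@dataclass
class DeployerChecklist:
    """Hypothetical compliance tracker for one high-risk HR AI system."""
    protective_measures_in_place: bool = False   # technical/organizational measures
    oversight_staff_trained: bool = False        # supervision by trained personnel
    monitoring_per_guidelines: bool = False      # ongoing monitoring of AI use

    def open_items(self) -> list[str]:
        """Names of obligations not yet fulfilled."""
        return [name for name, done in vars(self).items() if not done]

    def compliant(self) -> bool:
        """True only when every tracked obligation is fulfilled."""
        return not self.open_items()
```

In practice such a tracker would be one input to a broader governance process, not a substitute for it.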
Practical Tips for Compliance
To meet compliance requirements:
- Develop AI literacy within the organization, mandatory from 2 February 2025.
- Ensure high-quality input data to prevent bias.
- Inform employees and representatives about AI use in the workplace.
Exemptions for High-Risk AI Systems
Some AI systems may be exempt from high-risk classification if they do not pose significant risks to health or rights. For example, an AI system performing narrow tasks like CV analysis may qualify for exemption under Article 6(3).
AI with Limited or Low Risk
Organizations using limited- or low-risk AI systems face lighter obligations: they must still ensure that staff understand AI and meet transparency requirements when creating AI-generated content.
AI and Data Protection
When using AI, compliance with GDPR is mandatory. Key challenges include:
- Implementing data subject rights, such as the right to erasure.
- Ensuring lawful data processing with the necessary justifications.
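Implementing the right to erasure means personal data must be removable from every store an HR AI system touches. The sketch below is an assumption-laden illustration: the `CandidateStore` class and its two stores are hypothetical, and real systems typically also involve backups, logs, and training data, which complicate erasure considerably.

```python
class CandidateStore:
    """Hypothetical store of candidate data used by an HR AI system."""

    def __init__(self) -> None:
        self.profiles: dict[str, dict] = {}      # candidate_id -> CV data
        self.model_inputs: dict[str, dict] = {}  # candidate_id -> features fed to the AI

    def erase(self, candidate_id: str) -> bool:
        """Delete all personal data for a candidate (GDPR Art. 17).

        Returns True if any data was found and removed.
        """
        found = candidate_id in self.profiles or candidate_id in self.model_inputs
        self.profiles.pop(candidate_id, None)
        self.model_inputs.pop(candidate_id, None)
        return found
```

The difficulty in practice is less the deletion call itself than locating every copy of the data, which is why data-mapping exercises matter before AI deployment.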
Conclusion
The EU AI Act presents significant challenges for deployers and providers in the HR sector, especially regarding high-risk AI systems and personal data processing. Effective collaboration among HR, legal, and data protection functions is essential to meet the Act’s requirements.