Navigating the EU AI Act: Implications for HR and Workforce Management

The EU AI Act from an HR Perspective

The EU AI Act came into force on 1 August 2024, establishing the world’s first comprehensive set of rules for artificial intelligence. Its aim is to standardize requirements for the development and use of AI in the European Union, significantly impacting organizations, especially in the HR sector.

Framework Conditions of the AI Act

The AI Act regulates the market launch, commissioning, and use of AI systems in the EU. It employs a horizontal, risk-based approach that focuses on product safety, classifying AI systems into five risk categories:

  • Unacceptable risk: prohibited AI practices
  • High risk: high-risk AI systems
  • Systemic risk: general-purpose AI models with systemic risk
  • Limited risk: AI systems subject to specific transparency obligations
  • Minimal risk: all other AI systems
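The first compliance step is simply knowing which tier each system falls into. As an illustrative sketch, an internal AI inventory might record the tiers like this; the use-case-to-tier assignments below are assumptions for illustration, not legal determinations, and a real classification requires legal review against the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's risk tiers (illustrative labels, not legal definitions)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    SYSTEMIC = "general-purpose with systemic risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Hypothetical mapping of HR use cases to tiers for an internal inventory.
HR_USE_CASE_TIERS = {
    "emotion recognition at work": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "promotion decision support": RiskTier.HIGH,
    "spell-checking job adverts": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; default to HIGH so unlisted HR systems get reviewed."""
    return HR_USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)
```

Defaulting unknown entries to the high-risk tier is a deliberately conservative design choice: an unreviewed HR system is flagged for scrutiny rather than silently treated as low risk.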

Impact of the AI Act on HR AI Systems

AI systems are prevalent in HR, particularly in recruitment. Organizations must classify each of their AI systems into the relevant risk category and prepare the compliance measures that classification entails:

AI with Unacceptable Risk

The AI Act prohibits emotion recognition systems in the workplace unless they are deployed for medical or safety reasons. Systems that detect fatigue in pilots or lorry drivers, for instance, fall under this exemption, but are then classified as high-risk AI.

High-Risk AI in HR

Many AI systems in the employment sector qualify as high-risk, including those used for:

  • Recruitment or selection of candidates
  • Employment condition decisions, terminations, or promotions
  • Task assignments based on personal traits
  • Performance evaluation of employees

Requirements for Providers and Deployers of High-Risk AI

The most stringent obligations fall on providers of AI systems, but deployers (those using AI systems under their own authority) also face significant requirements, particularly when the AI system is used in a professional context.

Providers of High-Risk AI Systems

Providers must:

  • Establish a risk management system
  • Test AI systems for compliance before operation
  • Ensure supervision by natural persons
  • Disclose interactions with AI systems

Deployers of High-Risk AI Systems

Deployers, typically employers using AI in HR, must:

  • Implement technical and organizational protective measures
  • Supervise AI systems with trained personnel
  • Monitor AI use in accordance with guidelines

Practical Tips for Compliance

To meet compliance requirements:

  • Develop AI literacy within the organization, mandatory from 2 February 2025.
  • Ensure high-quality input data to prevent bias.
  • Inform employees and representatives about AI use in the workplace.
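One way to act on the data-quality tip above is to monitor a screening tool's outcomes for disparate selection rates across groups. The four-fifths rule applied below is a heuristic from US employment practice, not an AI Act requirement; it is shown here only as a hypothetical bias check on recruitment data:

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected: bool) pairs from a screening tool."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(outcomes):
    """Ratio of lowest to highest group selection rate (four-fifths rule)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results: group_a selected at 40%, group_b at 20%.
outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)
```

A ratio below 0.8 would be a signal to review the tool's input data and decision logic, not proof of unlawful bias.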

Exemptions for High-Risk AI Systems

Some AI systems may be exempt from high-risk classification if they do not pose a significant risk to health, safety, or fundamental rights. For example, an AI system performing only a narrow procedural task, such as sorting or structuring CVs, may qualify for the exemption under Article 6(3).

AI with Limited or Low Risk

Organizations using lower-risk AI systems face lighter obligations, but must still ensure that staff understand the AI they work with and meet transparency requirements when creating AI-generated content.

AI and Data Protection

When using AI, compliance with GDPR is mandatory. Key challenges include:

  • Implementing data subject rights, such as the right to erasure.
  • Ensuring lawful data processing with the necessary justifications.
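A minimal sketch of the erasure challenge, assuming a hypothetical CandidateStore interface: real HR landscapes would also need to cover backups, logs, and data embedded in trained models.

```python
class CandidateStore:
    """Hypothetical store of candidate records (e.g. an ATS database table)."""

    def __init__(self):
        self._records = {}

    def add(self, candidate_id, record):
        self._records[candidate_id] = record

    def erase(self, candidate_id):
        # Idempotent: erasing an unknown ID is not an error.
        self._records.pop(candidate_id, None)

    def has(self, candidate_id):
        return candidate_id in self._records

def handle_erasure_request(candidate_id, stores):
    """Honour a data subject's erasure request across every registered store."""
    for store in stores:
        store.erase(candidate_id)
```

The key design point is the central registry of stores: an erasure request only works if every system holding the candidate's data is known and reachable from one place.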

Conclusion

The EU AI Act presents significant challenges for deployers and providers in the HR sector, especially regarding high-risk AI systems and personal data processing. Effective collaboration among HR, legal, and data protection functions is essential to meet the Act’s requirements.
