Understanding the Implications of the AI Act for HR and Management

AI in the Workplace: What HR and Management Need to Know About the AI Act

The use of AI in HR processes is no longer a future scenario but a current challenge. Employers must prepare for the European AI Act, which applies in stages, with the rules on high-risk systems taking effect on 2 August 2026. This article outlines essential information for employers to navigate these changes.

When is AI Considered “High Risk”?

The AI Act classifies systems according to risk. In the employment context, high-risk AI systems are those used for recruitment or selection, or for decisions that affect the career prospects, working conditions, or evaluation of applicants and employees. Examples include:

  • Automated applicant screening
  • Algorithms that determine bonuses or promotions
  • Automated workforce scheduling
  • AI-driven disciplinary decisions, such as dismissals and warnings

Many AI systems used in HR will therefore qualify as high risk, and employers need to prepare for the new obligations that apply from 2 August 2026.

It’s crucial to note that some AI systems may even be prohibited. This includes systems that recognize emotions in the workplace or infer sensitive information about employees, such as race, sexual orientation, or political beliefs, from biometric data.

What the AI Act Will Require from Employers Using High-Risk Systems

Follow the Instructions

Employers must use AI tools according to the developer’s instructions and user manual. Proper training for staff is essential to ensure compliance with these guidelines.

Human Oversight is Mandatory

AI cannot be the sole decision-maker. A human must always be able to intervene, correct, or explain decisions made by AI. For instance, if an algorithm rejects a candidate, the recruiter must provide an explanation and have the ability to adjust or override that decision.
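
As an illustration of what such oversight can look like in practice, the sketch below (in Python, with hypothetical names and fields that are not prescribed by the AI Act) treats the AI output as a recommendation that only becomes a decision once a named human reviewer confirms or overrides it and records a justification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical structures: names and fields are illustrative, not prescribed by the AI Act.

@dataclass
class AiRecommendation:
    candidate_id: str
    outcome: str          # e.g. "reject" or "advance"
    model_version: str
    rationale: str        # explanation produced by or for the model

@dataclass
class HumanReview:
    reviewer: str
    final_outcome: str
    justification: str
    reviewed_at: str

def finalize_decision(rec: AiRecommendation, reviewer: str,
                      final_outcome: str, justification: str) -> HumanReview:
    """The AI output is only a recommendation; a named human confirms or overrides it."""
    if not justification.strip():
        raise ValueError("A human justification is required, especially when overriding the AI.")
    return HumanReview(
        reviewer=reviewer,
        final_outcome=final_outcome,
        justification=justification,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )

# Usage: the recruiter overrides an automated rejection after reviewing the file.
rec = AiRecommendation("cand-042", "reject", "screener-v1.3", "low keyword match")
review = finalize_decision(rec, reviewer="j.devries",
                           final_outcome="advance",
                           justification="Relevant experience not captured by the keyword model.")
```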

Risk Management: Demonstrate Risk Assessment

Employers must show that:

  • The datasets used are representative and relevant for the intended purpose.
  • Efforts have been made to identify and mitigate bias.

For example, an AI tool that selects candidates might be trained on seemingly neutral data, but this data could unintentionally discriminate against part-time workers or those with career breaks, often impacting women. Employers must proactively identify and mitigate such biases, and this is a continuous obligation rather than a one-off check.
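
One simple, illustrative way to surface this kind of disparity is to compare selection rates across groups, for example with the "four-fifths" heuristic sketched below. The field names and the 0.8 threshold are assumptions for illustration; a real bias assessment would involve far more than a single ratio.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of dicts with 'group' and 'selected' (bool)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        selected[r["group"]] += r["selected"]
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the highest rate."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Illustrative historical screening outcomes.
history = [
    {"group": "full-time", "selected": True},
    {"group": "full-time", "selected": True},
    {"group": "full-time", "selected": False},
    {"group": "part-time", "selected": False},
    {"group": "part-time", "selected": True},
    {"group": "part-time", "selected": False},
]
rates = selection_rates(history)
print(flag_disparity(rates))  # e.g. {'part-time': 0.5}: a signal to investigate, not a verdict
```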

Transparency: Informing Employees and Applicants

Under the AI Act, affected individuals have the right to:

  • Know which AI system is in use and its purpose.
  • Receive an explanation of how the AI influences decisions.
  • Access a complaints procedure if they dispute the outcome.

Data protection and equal treatment laws may apply in parallel, so individuals can also request information or file complaints under the GDPR or equal treatment legislation.

Logging: Ensuring Traceability of Decisions

Employers must maintain records showing how AI decisions were made. This includes logging the input data, the processes applied, and the resulting outputs. If a human reviews the decision, that must also be documented to ensure transparency and accountability.
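
A minimal sketch of such a decision record is shown below. High-risk systems must log events automatically, so this deployer-side record is only an illustration of how an employer might keep the input, output, and human review traceable; all field names are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(logfile, *, system_name, model_version, input_data,
                 output, human_reviewer=None, human_outcome=None):
    """Append one traceability record per AI-assisted decision (illustrative schema)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "model_version": model_version,
        # Hash the input rather than storing raw personal data in the log.
        "input_hash": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": human_reviewer,
        "human_outcome": human_outcome,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # append-only JSON Lines

log_decision("hr_ai_decisions.jsonl",
             system_name="cv-screener", model_version="1.3",
             input_data={"candidate_id": "cand-042"},
             output="reject",
             human_reviewer="j.devries", human_outcome="advance")
```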

The Role of the Works Council

Employee participation is crucial when introducing high-risk systems. Employers must inform employees and their representatives, such as a works council or trade union, prior to implementation. In many instances, the introduction of such systems will also require the works council's consent.

Practical Steps: How to Take Action as an Organization

  • Mapping: Identify which AI systems are currently in use within the organization and which qualify as high risk (a minimal inventory sketch follows this list).
  • Documenting: Record decision-making processes, risk analyses, and oversight mechanisms.
  • Involving the Works Council: Ensure timely involvement when AI is intended for HR purposes.
  • Training: Equip HR professionals and managers with the AI literacy they need to review, explain, and correct AI-driven decisions.
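
The sketch below illustrates the mapping step with a hypothetical inventory record. The classification rule is deliberately simplified for illustration and does not replace a legal assessment of whether a system is high risk.

```python
from dataclasses import dataclass

# Illustrative shortlist of HR uses that typically point towards high risk under the AI Act.
HR_HIGH_RISK_USES = {"recruitment", "promotion", "task allocation",
                     "monitoring", "termination"}

@dataclass
class AiSystemRecord:
    name: str
    vendor: str
    purpose: str                      # e.g. "recruitment"
    owner: str                        # accountable department or role
    works_council_informed: bool = False
    notes: str = ""

    def likely_high_risk(self) -> bool:
        """Rough first-pass flag; the legal classification needs expert review."""
        return self.purpose in HR_HIGH_RISK_USES

inventory = [
    AiSystemRecord("CV screener", "ExampleVendor", "recruitment", "HR",
                   works_council_informed=True),
    AiSystemRecord("Shift planner", "ExampleVendor", "task allocation", "Operations"),
]
for system in inventory:
    print(system.name, "-> likely high risk:", system.likely_high_risk())
```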
