AI Act: New Employer Obligations in the EU

The AI Act marks a significant shift in how the European Union regulates artificial intelligence systems. As the legislation takes effect, it introduces a series of obligations for employers that apply in stages through August 2, 2026.

Overview of the AI Act

Designed to enhance the safety and trustworthiness of AI applications, the AI Act categorizes AI systems into four levels of risk: Unacceptable, High, Limited, and Minimal. This classification determines how stringently each system is regulated:

  • Unacceptable risk: Practices posing an unacceptable level of risk are prohibited outright. These include social scoring, in which AI evaluates individuals based on their behavior or personality traits, a practice that can lead to discriminatory outcomes. Of particular note for employers, the Act also prohibits AI systems that infer employees’ emotions in the workplace, except for medical or safety reasons.
  • High risk: These systems face stringent requirements directed primarily at providers (developers). Deployers, including employers, also carry obligations, such as ensuring human oversight and using the systems as intended.
  • Limited risk: These systems are subject to lighter transparency obligations, chiefly ensuring that users are informed they are interacting with AI.
  • Minimal risk: The majority of AI applications in use today fall into this category and remain largely unregulated under the Act.

Employer Responsibilities Under the AI Act

Employers deploying high-risk AI systems in the workplace must take proactive measures to comply with the AI Act. These include:

  • Informing workers’ representatives and affected employees before a high-risk AI system is put into service or used in the workplace.
  • Ensuring that AI systems used for recruitment, job-application screening, and performance monitoring meet the Act’s requirements for high-risk systems.

Implications of High-Risk AI Systems

In the employment context, the AI Act treats as high-risk those systems that influence employment-related decisions, such as systems for:

  • Recruiting and selecting candidates, including targeted job advertisements.
  • Evaluating job applications and candidates.
  • Making decisions regarding promotions, terminations, and task allocations based on individual behaviors or traits.

Employers should thoroughly evaluate the risk classification of any AI system they deploy and ensure compliance with the Act’s provisions. This includes establishing effective communication channels with employees and assigning appropriate human oversight to mitigate risks.

Conclusion

The AI Act’s comprehensive approach to regulating AI underscores the importance of ethical considerations in technology deployment. By adapting proactively to these rules, employers can not only ensure compliance but also foster a safer and more equitable workplace. The ongoing debate around the Act reflects the balance the EU is seeking between innovation and regulation as it navigates the complexities of AI governance.
