AI Regulations and Worker Rights in the EU

EU AI Act Brief – Part 4: AI at Work

The EU AI Act is a landmark regulation governing the deployment of AI systems across a wide range of contexts, including the workplace. This brief examines the Act's implications for workers, focusing on the protections it affords them and the obligations it places on employers.

Introduction

In recent years, the use of algorithmic management and decision-making systems has surged in workplaces. A recent OECD survey revealed that over 70% of managers reported utilizing at least one automated tool to instruct, monitor, or evaluate employees. This trend has raised significant concerns regarding the rights and protections of workers.

Worker Sentiment and the EU AI Act

A survey conducted by the European Commission highlighted overwhelming support among workers for rules governing the use of AI in the workplace, including calls for a dedicated Directive addressing the challenges posed by algorithmic management systems.

The EU AI Act recognizes the risks that AI systems pose in employment and introduces specific obligations aimed at safeguarding workers. This approach includes outright prohibitions on AI practices deemed to carry unacceptable risks.

Prohibited AI Systems

The AI Act adopts a risk-based approach, prohibiting several AI practices considered to pose unacceptable risks in the workplace. Key prohibitions include:

1. Biometric Categorization

The Act entirely prohibits AI systems that categorize individuals on the basis of their biometric data in order to infer attributes such as race or trade union membership. This prohibition protects workers from adverse treatment linked to such inferred characteristics.

2. Emotion Recognition

Emotion recognition systems are mostly prohibited in employment settings. The Act acknowledges the unreliability of these systems and permits their use only for medical or safety reasons. Exceptions must be strictly necessary and accompanied by adequate safeguards.

3. Social Scoring

The AI Act prohibits social scoring systems that lead to detrimental treatment based on social behavior. This is particularly relevant in employment, where unfair treatment based on inferred personality traits can occur.

4. Real-Time Biometric Identification

The Act prohibits real-time remote biometric identification only in publicly accessible spaces for law enforcement purposes; its use in other contexts remains permitted, potentially allowing employers to surveil workers.

Limited Protections from High-Risk Systems

The majority of the AI Act focuses on regulating high-risk AI systems, which are allowed but subject to strict safeguards. An AI system is classified as high-risk if it falls under specific categories outlined in the Act, including recruitment and performance evaluation.

Employers deploying high-risk AI systems are bound by obligations that enhance transparency and accountability. These include ensuring human oversight and providing notice to affected workers before implementation.

Obligations for Deployers

General Obligations

The AI Act imposes general obligations on all deployers of high-risk AI systems, including:

  • Ensuring human oversight of AI systems.
  • Informing workers’ representatives and affected workers about the deployment of AI systems.

Obligations for Public Authorities

Deployers that are public authorities face heightened obligations, including conducting a fundamental rights impact assessment (FRIA) before deploying a high-risk AI system. This requirement also extends to private entities providing public services.

Remedies and Enforcement

The AI Act offers limited remedies for individuals affected by high-risk AI systems, such as the right to an explanation of decisions made on the basis of a system's output. Individuals can also lodge complaints with national authorities if they believe the Act has been violated.

Oversight mechanisms, including the designation of fundamental rights authorities, facilitate the enforcement of the AI Act, ensuring that workers’ rights are protected in the context of AI deployment.

Conclusion

While the EU AI Act marks a significant advancement in regulating AI in the workplace, its effectiveness in protecting workers’ rights is limited. The Act introduces essential restrictions on AI practices, notably on emotion recognition, and mandates notice and explanation requirements for workers. However, significant loopholes remain, particularly regarding the obligations of private employers. Ongoing monitoring of the Act’s implementation and its impact on workers will be crucial to ensure that their rights are adequately safeguarded as AI technology evolves.
