AI Act Exemptions: A Threat to Rights and Security?

All Eyes on AI Act Exemptions as Ban on Unacceptable-Risk AI Systems Nears

Despite being celebrated as the world’s first comprehensive AI legislation, the European Union’s AI Act has left some questions open, particularly regarding exemptions that allow law enforcement and border control agencies to use otherwise banned AI applications.

On February 2, 2025, the ban on AI systems that pose “unacceptable risk” will take effect. The existence of national security exemptions raises the question of whether the AI rulebook can truly safeguard rights.

Banned AI Applications

AI applications banned for posing an “unacceptable risk” include:

  • Biometric categorization systems based on sensitive characteristics
  • Emotion recognition in the workplace and schools
  • Social scoring
  • Predictive policing
  • Applications that manipulate human behavior

The European Association for Biometrics (EAB) organized a talk with legal experts and industry stakeholders to discuss the use of biometric data in these applications. Abdullah Elbi, a legal researcher at the Centre for IT & IP Law (CiTiP) at KU Leuven, emphasized, “It’s not an absolute prohibition, so it requires a good understanding of the rules.”

The most important element will be well-reasoned guidelines from the European Commission, market surveillance authorities, and data protection authorities.

Balancing Security and Rights

While the AI rulebook establishes standards for AI applications, it leaves the balancing of security and rights protections to EU countries. Governments can decide whether to introduce exceptions allowing real-time remote biometric identification in cases such as the investigation of serious crimes or the prevention of serious threats like terror attacks.

Elbi noted the risk of fragmentation across member states in the use of remote biometric identification systems.

Criticism from Rights Groups

The law enforcement carve-outs have drawn criticism from rights groups, who argue they dilute protections against potential abuse. Irina Orssich from the EU’s AI Office stated that EU member states cannot negotiate aspects of the AI Act’s implementation: while individual countries can introduce stricter regulations, they cannot relax them.

“You still have a tiny bit of margin in practice because these rules will be enforced by member states’ authorities,” Orssich said.

Compliance Assessments

Another exception awaiting regulation concerns conformity assessments. Providers must subject their high-risk AI systems, including biometric ones, to a conformity assessment procedure. However, the AI Act allows certain systems to be placed on the market without a prior conformity assessment in exceptional situations, such as on public security grounds.

Lydia Belkadi, another researcher at KU Leuven, explained that market surveillance authorities may authorize such use by law enforcement and civil protection authorities, but the authorization is limited and the conformity assessment procedure must still be completed afterward.

The Role of Standardization

To ensure compliance with the AI Act, standardization will play a crucial role. The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are working to make the standards available by the end of 2025.

Belkadi concluded, “Overall, high-risk AI systems are now subject to a more comprehensive oversight system that includes providers and deployers. These requirements are key to respecting fundamental rights.”

France’s Role in AI Act Exemptions

A recent investigation revealed that the loopholes and national security exemptions in the AI Act are the result of a campaign led primarily by France. The French administration, under President Emmanuel Macron, strategically engineered amendments to allow law enforcement and border agencies to bypass the ban on remote biometric identification in public spaces.

Countries such as Italy, Hungary, Romania, Sweden, the Czech Republic, Lithuania, Finland, and Bulgaria expressed support for France’s maneuvers.

This carve-out could allow climate demonstrations or political protests to come under biometric surveillance if police cite national security concerns. France experimented with AI-based surveillance during the 2024 Paris Summer Olympics.

Consequences for Vulnerable Populations

While some experts believe these exemptions will have little real-world impact, others caution that the largest effects could be felt by vulnerable populations, who may lack the power to complain. Rosamunde van Brakel, an assistant professor at the Vrije Universiteit Brussel, stated, “In most cases, regulation and oversight only kicks in after the violation has taken place; they do not protect us before.”
