AI Act Exemptions: A Threat to Rights and Security?

All Eyes on AI Act Exemptions as Ban on Unacceptable-Risk AI Systems Nears

Despite being celebrated as the world’s first comprehensive AI legislation, the European Union’s AI Act has left some questions open, particularly regarding exemptions that allow law enforcement and border control agencies to use otherwise banned AI applications.

On February 2, 2025, the ban on AI systems that pose “unacceptable risk” takes effect. The existence of national security exemptions raises the question of whether the AI rulebook will actually be able to safeguard rights.

Banned AI Applications

Banned AI applications with “unacceptable risk” levels include:

  • Biometric categorization systems based on sensitive characteristics
  • Emotion recognition in the workplace and schools
  • Social scoring
  • Predictive policing
  • Applications that manipulate human behavior

The European Association for Biometrics (EAB) organized a talk that brought together legal experts and industry stakeholders to discuss the use of biometric data in these applications. Abdullah Elbi, a legal researcher at the Centre for IT & IP Law (CiTiP) at KU Leuven, emphasized: “It’s not an absolute prohibition, so it requires a good understanding of the rules.”

The most important element would be well-reasoned guidance from the European Commission, market surveillance authorities, and data protection authorities.

Balancing Security and Rights

Although the AI rulebook sets out standards for AI applications, it leaves the balancing of security and rights protections to EU member states. Governments can decide whether to introduce exceptions allowing real-time remote biometric identification in cases such as serious crimes or the prevention of serious threats like terror attacks.

Elbi noted the possibility of fragmentation in different member states regarding the use of remote biometric identification systems.

Criticism from Rights Groups

The law enforcement carve-outs have drawn criticism from rights groups, who argue they dilute protections against potential abuse. Irina Orssich from the EU’s AI Office stated that EU member states cannot renegotiate aspects of the AI Act’s implementation: while individual countries can introduce stricter regulations, they cannot relax them.

“You still have a tiny bit of margin in practice because these rules will be enforced by member states’ authorities,” Orssich said.

Compliance Assessments

Another exception awaiting further regulation concerns how AI systems are assessed for compliance. Providers must subject their high-risk AI systems, including biometric ones, to a conformity assessment procedure before placing them on the market. However, the AI Act allows certain systems to be marketed without a prior conformity assessment in exceptional situations, such as for reasons of public security.

Lydia Belkadi, another researcher at KU Leuven, explained that market surveillance authorities may authorize the use of such systems, but that authorization is limited: law enforcement and civil protection authorities must still complete the conformity assessment procedure afterward.

The Role of Standardization

Standardization will play a crucial role in ensuring compliance with the AI Act. The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are working to make the relevant standards available by the end of 2025.

Belkadi concluded, “Overall, high-risk AI systems are now subject to a more comprehensive oversight system that includes providers and deployers. These requirements are key to respecting fundamental rights.”

France’s Role in AI Act Exemptions

A recent investigation revealed that the loopholes and national security exemptions in the AI Act are the result of a campaign led primarily by France. The French administration, under President Emmanuel Macron, strategically engineered amendments to allow law enforcement and border agencies to bypass the ban on remote biometric identification in public spaces.

Countries such as Italy, Hungary, Romania, Sweden, the Czech Republic, Lithuania, Finland, and Bulgaria expressed support for France’s maneuvers.

This carve-out could allow climate demonstrations or political protests to come under biometric surveillance if police invoke national security concerns. France has already experimented with AI-based surveillance, notably during the 2024 Paris Summer Olympics.

Consequences for Vulnerable Populations

While some experts believe these exemptions will have little real-world impact, others caution that the largest effects could be felt by vulnerable populations, who may lack the power to complain. Rosamunde van Brakel, an assistant professor at the Vrije Universiteit Brussel, stated, “In most cases, regulation and oversight only kicks in after the violation has taken place; they do not protect us before.”

