AI Under National Security: Eroding Accountability and Oversight

When National Security Becomes a Shield for Evading AI Accountability

As artificial intelligence (AI) becomes embedded in state security and surveillance across Europe, the legal safeguards meant to constrain its use are increasingly being left behind. EU member states are turning to AI to automate decision-making, expand surveillance, and consolidate state power. Yet many of these applications, particularly biometric surveillance and algorithmic risk assessments, remain largely unregulated when it comes to national security.

Indeed, broad carve-outs and exemptions for national security in existing AI legislation, including Article 2 of the EU AI Act and Article 3(2) of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, have created significant regulatory gaps. Compounding this issue, “national security” itself is so loosely defined that states can invoke it to bypass fundamental rights protections while deploying AI with minimal oversight.

Against the backdrop of a rapidly shifting geopolitical environment and rising authoritarianism, invocations of national security are becoming a convenient cover for unchecked surveillance and expanded executive authority. This dynamic is setting a dangerous precedent: EU governments and candidate countries are invoking national security to justify AI deployments that evade regulatory scrutiny, particularly in surveillance and counterterrorism.

Legal Framework and Counterterrorism

Upholding the jurisprudence of the Court of Justice of the European Union (CJEU) is critical because it provides a legal compass for defining national security and setting clear thresholds for when states may override fundamental rights. Without it, Europe risks building a security architecture powered by AI but shielded from accountability.

While existing EU law lacks a concrete definition of national security, the CJEU has provided some guidance on this matter. According to CJEU case law (La Quadrature du Net and Others, Joined Cases C‑511/18 and C‑512/18), national security corresponds to the “primary interest in protecting the essential functions of the State and the fundamental interests of society through the prevention and punishment of activities capable of seriously destabilizing the fundamental constitutional, political, economic or social structures of a country and, in particular, of directly threatening society, the population or the State itself, such as terrorist activities.”

This interpretation was reinforced in Commissioner of An Garda Síochána and Others (C‑140/20) and in SpaceNet and Telekom Deutschland (Joined Cases C‑793/19 and C‑794/19). By citing the prevention of terrorism as a key example of activities capable of destabilizing national structures, the CJEU closely associates counterterrorism with national security. Under this legal framework, EU member states may seek to justify any counterterrorism initiative in the name of national security.

However, the court has also imposed limits. In SpaceNet and Telekom Deutschland, it held that, to justify even the indiscriminate retention of data for a limited period, a national security threat “must be genuine and present or foreseeable, which presupposes that sufficiently concrete circumstances have arisen.” Member states are therefore subject to conditions when invoking a national security justification.

Case Studies: Governments’ Use of AI under National Security Justifications

Identifying government use of AI for national security purposes is challenging, as such initiatives are often classified. Below, we examine AI-driven surveillance and security programs that governments may justify under national security exemptions, alongside cases where national security has been invoked in ways that may sidestep oversight and compliance requirements.

France

Since the 2015 Intelligence Act granted intelligence services broad powers to conduct algorithmic analysis of large metadata sets, France has used AI to identify potential terrorist activity. Over the past decade, authorities have expanded the scope of these algorithmic systems to include the monitoring of websites, messaging apps, and web searches for signs of extremist activity. The precise scope and safeguards of this experimental regime remain opaque, raising concerns that France is normalizing algorithmic surveillance under the banner of national security.
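To make concrete why this opacity worries civil society, consider a minimal sketch of what a metadata-based filter of this kind might look like. Everything in it is hypothetical: the watchlist, the flagging threshold, and the connection records are invented for illustration, since the actual French algorithms remain classified.

```python
# Hypothetical sketch of a metadata filter of the kind the 2015 Intelligence
# Act permits. The watchlist, threshold, and records are invented; the real
# systems are classified.
from collections import Counter
from dataclasses import dataclass

WATCHLIST = {"example-extremist-forum.net", "example-propaganda.org"}  # assumed
FLAG_THRESHOLD = 3  # hypothetical: watchlist visits before a user is flagged

@dataclass
class ConnectionRecord:
    user_id: str
    domain: str  # metadata only; no message content is inspected

def flag_users(records: list[ConnectionRecord]) -> set[str]:
    """Flag users whose visits to watchlisted domains reach the threshold."""
    hits = Counter(r.user_id for r in records if r.domain in WATCHLIST)
    return {user for user, count in hits.items() if count >= FLAG_THRESHOLD}

records = [
    ConnectionRecord("alice", "news-site.fr"),
    ConnectionRecord("bob", "example-extremist-forum.net"),
    ConnectionRecord("bob", "example-extremist-forum.net"),
    ConnectionRecord("bob", "example-propaganda.org"),
]
print(flag_users(records))  # {'bob'}
```

Even this toy version exposes the accountability problem: a journalist or researcher who visits the same sites would be flagged identically, and without access to the watchlist or threshold, no outside body can assess the rate of such false positives.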

France has continued to broaden these powers. The Foreign Interference Law, adopted in 2024, authorizes the deployment of an experimental algorithm to “monitor suspicious activity” linked to foreign interference. The parliamentary intelligence committee described foreign interference as an “omnipresent and lasting threat,” justifying algorithmic surveillance as necessary to protect national security.

Border control agencies and travel authorities have also adopted AI-based risk assessments. Since 2016, French travel authorities have used Passenger Name Record (PNR) risk assessments supplied by Idemia to flag travelers deemed suspicious based on their travel routes and/or payment methods. Idemia markets its advanced data analytics and AI capabilities as tools to “detect risks and threat patterns in real time from a huge and growing volume of passenger data.” While PNR risk assessment algorithms are prohibited from explicitly considering protected personal characteristics, they may still reproduce bias against marginalized groups through proxy variables, especially when training data is unrepresentative, as the sketch below illustrates.
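The mechanism behind such proxy bias is straightforward to demonstrate. The sketch below is purely illustrative: the features, weights, and threshold are invented, and Idemia's actual model is not public. It shows how a score computed only from travel and payment data can still single out groups for whom cash payment or particular routes are common.

```python
# Illustrative PNR-style risk score. Features, weights, and the cut-off are
# hypothetical; no protected characteristic appears anywhere in the model.
HYPOTHETICAL_WEIGHTS = {
    "cash_payment": 2.0,            # proxy: correlates with unbanked groups
    "one_way_ticket": 1.5,
    "route_via_flagged_hub": 2.5,   # proxy: penalizes certain regions
}
RISK_THRESHOLD = 3.0  # assumed cut-off for secondary screening

def risk_score(passenger: dict[str, bool]) -> float:
    """Sum the weights of the PNR features present for a passenger."""
    return sum(w for feat, w in HYPOTHETICAL_WEIGHTS.items() if passenger.get(feat))

# Two identical itineraries, differing only in payment method: the cash
# payer crosses the threshold, the card payer does not.
traveler_a = {"cash_payment": True, "route_via_flagged_hub": True}
traveler_b = {"route_via_flagged_hub": True}
for name, p in (("A", traveler_a), ("B", traveler_b)):
    score = risk_score(p)
    print(name, score, "flagged" if score >= RISK_THRESHOLD else "cleared")
```

Because the proxies, not the protected attributes, carry the signal, auditing the training data and feature set matters far more than checking whether ethnicity or religion fields were removed.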

Another high-profile case arose during the 2024 Paris Olympics, when the government authorized AI-powered “smart cameras” to monitor crowds and detect “abnormal behavior.” Officials justified the deployment as a matter of ensuring public security during the Olympics. This case illustrates how similar justifications could be used to defend future AI-driven surveillance initiatives.

Hungary

Earlier this year, Hungary criminalized participation in Pride events and deployed facial recognition technology against protesters, enabling real-time remote biometric identification in public spaces, a move that may directly breach Article 5 of the EU AI Act. Civil society organizations are urging the European Commission to launch infringement proceedings against Hungary for violating the EU AI Act and the EU Charter of Fundamental Rights. While Hungary has not invoked the national security exemption to justify its use of facial recognition technology, such practices could easily be defended under the pretext of national security.

Serbia

Serbia purchased facial recognition technology from a Swedish company that claims its software can identify individuals based on eye-related features. Given Serbia’s past misuse of national security exemptions, there is a credible risk that authorities could invoke national security to justify mass surveillance.

For example, Serbia implemented a law in 2021 that introduced algorithmic decision-making into its social services. The law has been heavily criticized for its processing of personal data and potential discrimination against vulnerable communities. Following complaints, the Office of the Commissioner for the Protection of Equality of the Republic of Serbia sought access to the social services algorithm, but the government rejected the request on national security grounds.

Spain

In 2017, Spain implemented an algorithmic decision-making system, BOSCO, to determine the distribution of social vouchers for electricity. Civil society organizations scrutinized the public benefits algorithm and requested access to its source code, but the government denied the request, citing national security. In 2025, the Supreme Court of Spain sided with civil society, rejecting the national security justification and ordering the government to provide access to BOSCO’s source code.

Open Questions about Legal and Regulatory Challenges

The EU AI Act, EU Charter of Fundamental Rights, and existing CJEU case law can serve as useful instruments to constrain government power and surveillance. Important questions for civil society to consider include:

  • How far does existing case law extend? Landmark CJEU decisions establish strict conditions on invoking national security to justify indiscriminate data retention and mass surveillance, but can these rulings be extended to AI systems?
  • Who builds national security AI systems? States may develop algorithmic decision-making systems internally or procure them from private firms, both of which carry responsibilities under the EU AI Act.
  • Do AI regulatory oversight mechanisms apply? If governments classify these systems as national security tools, they may circumvent safeguards entirely.

Recommendations for Accountability and Oversight

To protect fundamental rights in AI-enabled national security initiatives, civil society may consider the following recommendations:

  1. Collaborate with investigative journalists to uncover information about AI-based national security initiatives. Key questions include: Is information publicly available? What are the objectives of the AI system? What datasets are used to train the algorithms?
  2. Pursue strategic litigation advocating for strict application of CJEU conditions on national security justifications invoked under the EU AI Act.
  3. Engage institutional stakeholders to understand their perspectives and inform advocacy strategies.
  4. Build cross-border coalitions with civil society organizations to share best practices and understand emerging trends.

Conclusions

Article 2 of the EU AI Act effectively gives governments a free pass: by classifying AI systems as national security tools, they can bypass transparency, oversight, and fundamental rights safeguards. This loophole risks turning national security into a blanket excuse for mass surveillance and unchecked algorithmic decision-making. Closing it will require clear limits on the Article 2 exemption, strict application of CJEU standards, and active engagement by civil society and courts to hold states accountable. Without these measures, AI in the name of security will continue to expand behind a shield of secrecy and impunity, eroding both rights and democratic accountability.
