Hungary’s New Biometric Surveillance Laws Violate the AI Act: Legal Analysis
The recent amendments to Hungary’s biometric surveillance laws have raised significant legal concerns, particularly regarding their compatibility with the EU AI Act and the Charter of Fundamental Rights of the EU. These changes, enacted in March 2025, have expanded the use of facial recognition technology (FRT) in ways that many argue are intrusive and detrimental to civil liberties.
Background of the Amendments
In March 2025, the Hungarian Parliament passed three amendments that criminalise LGBTQIA+ demonstrations and significantly expand biometric surveillance. The amendments were rushed through without public debate and entered into force on 15 April. The changes allow FRT to be applied in contexts previously outside its scope, such as minor infractions and peaceful assemblies, including events like Budapest Pride.
Expanded Use of Facial Recognition Technology
The amendments permit the Hungarian police to use facial recognition technology for all types of infractions, not only serious offences. Previously, FRT was limited to cases where an infraction was punishable by a custodial sentence; it can now be used for minor violations such as jaywalking, or to identify individuals attending banned protests.
Real-Time Biometric Identification and Regulation
The EU’s Artificial Intelligence Act, adopted in 2024, specifically regulates real-time remote biometric identification (RBI) in publicly accessible spaces. RBI means identifying individuals from their biometric data as they move through public areas, often without their knowledge or consent. It is a highly intrusive practice: it creates a sense of constant surveillance that can deter people from exercising their rights, in particular the right to protest.
Under Article 5(1)(h) of the AI Act, the use of real-time RBI in publicly accessible spaces for law enforcement purposes is prohibited except in narrowly defined cases, such as the targeted search for victims of serious crimes or the prevention of specific, imminent threats. Even in those cases, strict safeguards apply, including prior authorisation before such measures can be deployed.
Legal Violations of the AI Act by Hungary
Although the Hungarian system primarily works from still images (such as CCTV frames) rather than live video feeds, it enables automatic comparison against government databases to identify individuals in infraction proceedings in real or near-real time. This creates a significant risk of rapid identification during protests, undermining the safeguards laid down in the AI Act.
Under the AI Act, identification that occurs without significant delay, including with only limited short delays, still qualifies as “real-time”: what matters is that individuals can be identified while an event is still unfolding, shaping their behaviour on the spot. The Hungarian system’s design, particularly in protest scenarios, fits this definition and therefore falls under the AI Act’s prohibition of real-time biometric surveillance.
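To make the “near-real-time” point concrete, the sketch below is a purely illustrative Python example; the embedding function, watchlist, and similarity threshold are hypothetical and do not describe the actual Hungarian system or any specific product. It shows how a single captured still can be compared against a reference database within a fraction of a second, fast enough to single someone out while a protest is still under way.

```python
# Purely illustrative sketch: matching one captured still image against a
# reference database in "near-real-time". All names (embed_face, WATCHLIST,
# MATCH_THRESHOLD) are hypothetical and stand in for real components.
import time
import numpy as np

MATCH_THRESHOLD = 0.6  # hypothetical similarity cut-off for declaring a match

# Hypothetical reference database: name -> face embedding (128-d vector)
rng = np.random.default_rng(seed=0)
WATCHLIST = {name: rng.standard_normal(128) for name in ("person_A", "person_B")}


def embed_face(image_bytes: bytes) -> np.ndarray:
    """Stand-in for a real face-embedding model; returns a 128-d vector."""
    local_rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    return local_rng.standard_normal(128)


def identify(image_bytes: bytes) -> str | None:
    """Compare one still image against every entry in the reference database."""
    probe = embed_face(image_bytes)
    for person, reference in WATCHLIST.items():
        cosine = np.dot(probe, reference) / (
            np.linalg.norm(probe) * np.linalg.norm(reference)
        )
        if cosine >= MATCH_THRESHOLD:
            return person
    return None


start = time.monotonic()
result = identify(b"still frame captured at a public assembly")
elapsed = time.monotonic() - start

# The capture-to-comparison loop finishes in well under a second: a short
# processing delay does not stop identification while the assembly is ongoing.
print(f"match: {result}, elapsed: {elapsed:.4f} s")
```

The point of the sketch is only the timing: whether the comparison runs on a live video stream or on a still image sent moments later, the person can be identified before the event is over, which is precisely the effect the “real-time” definition is meant to capture.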
Implications for Rights and Freedoms
The deployment of FRT in Hungary poses a direct threat to fundamental rights, notably the freedoms of assembly and expression. Knowing that they can be identified and punished for taking part in a peaceful protest may have a chilling effect, dissuading people from exercising their rights.
This chilling effect contravenes the objectives of both the AI Act and the EU Charter, which aim to protect individual freedoms from excessive surveillance measures. By permitting real-time biometric surveillance for low-level infractions, Hungary is not only violating legal standards but also undermining the spirit of democratic engagement.
Next Steps and Recommendations
Because Hungary’s new legislation permits intrusive surveillance of peaceful protesters and of people suspected of minor infractions, it stands in stark contradiction to the AI Act. Such uses of AI technology threaten free speech, public participation, and overall trust in democratic processes.
It is imperative for the European Union to scrutinise this legislation closely. The newly established AI Office, responsible for overseeing the AI Act’s implementation, must show that the Act’s protections are enforceable in practice. The situation in Hungary is a critical test of the EU’s commitment to upholding its own AI rules and protecting the fundamental rights of its citizens.