The EU AI Act’s Cybersecurity Gamble: Hackers Don’t Need Permission
As AI development advances, its use in cybersecurity is becoming inevitable – it can help detect and prevent cyber threats in unprecedented ways.
However, the coin has another side: bad actors can leverage the same technology to build more sophisticated attacks, and criminals are not bound by any rules on how they use it.
As the EU forges ahead with the AI Act, questions arise: will this regulation actually enhance security in Europe, or will it become an obstacle, creating new challenges for businesses trying to leverage artificial intelligence for protection?
The AI Act’s Cybersecurity Measures
The EU AI Act is the first major regulatory framework to set clear rules for AI development and deployment. Among its many provisions, it directly addresses cybersecurity risks by introducing measures to ensure AI systems are secure and used responsibly.
One central feature of the AI Act is its risk-based classification of AI applications, with compliance obligations scaled to each tier: unacceptable-risk practices are banned outright, high-risk systems carry the heaviest duties, limited-risk systems face mainly transparency requirements, and minimal-risk systems are left largely untouched. Naturally, the high-risk tier, which covers systems that could harm people's health, safety, or fundamental rights, faces the strictest security and transparency demands.
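To make the tiering concrete, here is a minimal sketch of how an organization might triage its own use cases against the Act's structure. The tier names mirror the Act; the example use cases and the abbreviated obligation lists are simplifications chosen for illustration, not the legal text.

```python
# Illustrative only: a rough triage of AI use cases into the AI Act's risk tiers.
# The obligation summaries are abbreviated assumptions, not the legal requirements.

OBLIGATIONS = {
    "unacceptable": ["prohibited: may not be placed on the EU market"],
    "high": [
        "risk management system",
        "technical documentation and event logging",
        "human oversight measures",
        "accuracy, robustness and cybersecurity requirements",
        "conformity assessment before deployment",
    ],
    "limited": ["transparency duties, e.g. disclosing AI-generated content"],
    "minimal": ["no mandatory obligations; voluntary codes of conduct"],
}

def classify(use_case: str) -> str:
    """Very rough tier assignment for a handful of example use cases."""
    if use_case in {"social scoring", "manipulative subliminal techniques"}:
        return "unacceptable"
    if use_case in {"remote biometric identification", "credit scoring", "hiring"}:
        return "high"
    if use_case in {"customer chatbot", "deepfake generation"}:
        return "limited"
    return "minimal"

tier = classify("credit scoring")
print(tier, OBLIGATIONS[tier])
```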
Additionally, high-risk AI systems must undergo regular security testing to identify vulnerabilities and reduce the chances of exploitation by cybercriminals. At the same time, the Act establishes clearer transparency and reporting obligations. These are solid first steps toward bringing structure and legitimacy to the industry.
However, this approach has its complications and downsides. Requiring AI systems to undergo numerous checks and certifications means that, in practice, the release of security updates is slowed down considerably. If each modification to AI-based security measures requires a lengthy approval process, attackers have ample time to exploit known weaknesses while target businesses are entangled in red tape.
The issue of transparency is also a double-edged sword. The AI Act requires developers to disclose technical details about their AI systems to government bodies to ensure accountability. Accountability is a valid goal, but the disclosure creates another critical exposure: if that information leaks, it hands bad actors a map of how the systems work and where to probe them. Defenders should never rely on secrecy alone, but there is no reason to make an attacker's reconnaissance free.
Compliance as the Source of Vulnerability?
Another layer of risk is the compliance-first mindset. As regulations become stricter, security teams may focus more on meeting legal checkboxes than addressing real-world threats. This could result in AI systems that are technically compliant but operationally brittle.
Systems designed primarily for compliance will inevitably share patterns, making it easier for malicious actors to engineer exploits that work against all of them. The result is a monoculture: an attack that breaks one compliant system is likely to break many.
Furthermore, since the Act requires human oversight of AI decisions, it opens a potential avenue for social engineering. Attackers may target the human reviewers, who in high-volume environments such as transaction monitoring can slip into rubber-stamping AI-generated decisions over time. Signs of this are already visible in banking compliance, where oversight fatigue can lead to lapses in judgment.
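One way to catch that drift early is to watch how often reviewers actually override the AI: an override rate that collapses toward zero in a high-volume queue suggests oversight has become a formality. The sketch below illustrates the idea; the class name, window size, and threshold are hypothetical choices, not values taken from the Act or from any specific product.

```python
from collections import deque

class OversightMonitor:
    """Illustrative sketch: flag possible rubber-stamping by tracking how often
    human reviewers override AI-generated decisions. Window and threshold are
    arbitrary assumptions for the example."""

    def __init__(self, window: int = 500, min_override_rate: float = 0.02):
        self.decisions = deque(maxlen=window)  # True = reviewer overrode the AI
        self.min_override_rate = min_override_rate

    def record(self, reviewer_overrode_ai: bool) -> None:
        self.decisions.append(reviewer_overrode_ai)

    def fatigue_suspected(self) -> bool:
        if len(self.decisions) < self.decisions.maxlen:
            return False  # not enough data yet
        override_rate = sum(self.decisions) / len(self.decisions)
        return override_rate < self.min_override_rate

# Example: a reviewer who approves everything eventually trips the alert.
monitor = OversightMonitor()
for _ in range(500):
    monitor.record(reviewer_overrode_ai=False)
print(monitor.fatigue_suspected())  # True
```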
Restrictions on facial recognition technology, while intended to protect citizens' privacy, also limit law enforcement's ability to track and apprehend criminals using advanced surveillance methods. The tension is even sharper for dual-use technologies, systems built for both civilian and military applications, where rules written for one context can constrain the other.
The Challenges Businesses Will Face
On the business side, the AI Act presents significant compliance hurdles, and they weigh most heavily on small and medium-sized enterprises (SMEs), which rarely have the resources of larger corporations.
Security testing, compliance audits, and legal consultations all require substantial investment, raising the risk that many companies scale back AI adoption and the sector's progress slows. Some may simply relocate operations to friendlier jurisdictions.
This rollback poses significant dangers. Criminals, unfettered by compliance, can innovate with AI at unprecedented speeds, quickly outpacing legitimate businesses.
The process of discovering and exploiting vulnerabilities could soon be accomplished in a matter of hours, if not minutes, while defenders struggle to re-certify their systems for days or weeks before security updates can be implemented.
Social engineering threats are also set to become more dangerous, as AI empowers attackers to mine employee data from public profiles, craft targeted phishing attacks, or even generate real-time deepfake phone calls to exploit human vulnerabilities in security systems.
Integrating AI Act Guidelines Without Losing Ground
Despite its imperfections, the AI Act cannot be ignored. A proactive approach is necessary: building AI systems with regulations in mind from the outset rather than retrofitting later.
This includes leveraging AI-based tools to automate compliance monitoring (a simple example is sketched below) and engaging regularly with regulatory bodies to stay informed. Participating in industry-wide events to share best practices and emerging trends in cybersecurity and compliance is also valuable.
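As a deliberately simple illustration of what automated compliance monitoring can look like, the sketch below checks that the documentation artifacts a high-risk system is expected to carry actually exist before a release goes out. The file names and checklist are assumptions made for the example, not requirements lifted from the Act.

```python
# Hypothetical pre-release check: verify that required compliance evidence exists.
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "risk_assessment.md": "documented risk management review",
    "model_card.md": "technical documentation for the deployed model",
    "logging_config.yaml": "event logging configuration",
    "human_oversight_procedure.md": "human oversight procedure",
}

def compliance_gaps(evidence_dir: str) -> list[str]:
    """Return a description of each missing artifact in the evidence folder."""
    root = Path(evidence_dir)
    return [
        f"missing {name}: {purpose}"
        for name, purpose in REQUIRED_ARTIFACTS.items()
        if not (root / name).exists()
    ]

if __name__ == "__main__":
    for gap in compliance_gaps("./ai-system-evidence"):
        print(gap)
```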
Ultimately, the AI Act aims to bring order and responsibility to AI development. However, when it comes to cybersecurity, it introduces serious friction and risk. Regulation must evolve as quickly as the technology it seeks to govern to ensure a secure future.