Risk without Borders: The Malicious Use of AI and the EU AI Act’s Global Reach
The EU’s Artificial Intelligence Act (AI Act) stands as one of the first binding AI regulations worldwide, crafted with the intention of serving as a blueprint for global AI governance. This ambition relies on the so-called Brussels Effect: the tendency of EU rules to become de facto global standards as companies active in the EU market apply them across their worldwide operations.
The Importance of Regulatory Quality
In a swiftly evolving domain such as AI, regulatory quality is essential to influencing global standards. Quality here means comprehensive coverage of the most critical risks associated with the use, deployment, and adoption of AI technologies.
Understanding Malicious Use Risks
Among the various risks identified, malicious use, the intentional application of AI capabilities to inflict harm, is particularly concerning. An analysis of the AI Act reveals uneven coverage of these risks: some are addressed directly, while others are managed only indirectly, through supplementary EU or national regulations or through international initiatives.
By leaving significant gaps, the AI Act risks diminishing its value as a global model. Relying on domestic and sectoral regulation to fill these gaps is coherent from an internal perspective, since it avoids regulatory overlap, but it assumes that comparable principles are widely accepted or will be adopted internationally, a premise that may not hold.
Recommendations for Improvement
EU policymakers should use the periodic revisions of the AI Act to strengthen and complete its regulatory coverage. Recent initiatives such as the Digital Omnibus point in the opposite direction, suggesting a narrowing of the Act’s scope that could cause reputational damage. Concurrently, the EU must engage internationally, adopting a narrative that acknowledges the AI Act’s limited exportability in its current form.
AI Safety Efforts Amid Competitive Pressures
Amid geopolitical competition, the race for AI dominance among states and corporations prioritizes technological leadership over safety and risk management. This is evident in the policies, investments, and breakthroughs of the key geopolitical players.
The U.S. Approach
In the summer of 2025, the U.S. released America’s AI Action Plan, which aims to establish American AI as the global standard. The strategy pursues a largely hands-off regulatory approach, including revoking previous executive orders on safe AI and seeking to block state-level AI regulation. This approach has primarily benefited the U.S. private sector, which hosts many of the leading AI firms and led global private AI investment in 2024 with nearly US$110 billion, far surpassing Europe.
The Chinese Strategy
Similarly, China is striving for global AI leadership by 2030, focusing on advancements across the AI value chain. This includes a coordinated industrial policy aimed at enhancing capabilities in energy, talent, data, algorithms, hardware, and applications, positioning AI as a solution to economic, social, and security challenges. Goldman Sachs projects that Chinese AI providers will invest US$70 billion in data centers in 2026, backed by substantial state support.
The EU’s Response
Recognizing this competitive landscape, the EU launched the AI Continent Action Plan in April 2025, which aims to mobilize computing infrastructure, data, talent, and regulation. The EU has since announced multiple AI initiatives, including 19 AI Factories and 5 AI Gigafactories in collaboration with the European Investment Bank. Upcoming discussions are expected to cover further AI-related initiatives, including the Cloud and AI Development Act.
The Role of AI Regulatory Frameworks
These competitive dynamics make robust AI regulatory frameworks all the more necessary, as safeguards against the catastrophic risks that growing AI capabilities and rapid deployment can create. Even as decision-makers prioritize competitiveness, the AI community remains focused on trust, safety, and risk management.
The EU’s AI Act distinguishes itself as one of the first binding regulations in this field, in contrast to the broad, non-binding principles most other governments have issued so far. By regulating concrete use cases according to their anticipated risk, the AI Act offers a significant legal innovation with global implications.
In conclusion, while the AI Act is a crucial step towards comprehensive AI governance, ongoing effort is needed to address its limitations if it is to remain a viable model for global AI regulation.