Global Implications of the EU AI Act on Malicious AI Use

Risk without Borders: The Malicious Use of AI and the EU AI Act’s Global Reach

The EU’s Artificial Intelligence Act (AI Act) stands as one of the first binding AI regulations globally, crafted with the intention of serving as a blueprint for global AI governance. This ambition relies on what is known as the Brussels Effect: the tendency of EU rules to become de facto global standards as companies operating in the EU market extend compliance to their operations worldwide.

The Importance of Regulatory Quality

In a swiftly evolving domain such as AI, regulatory quality is essential for influencing global standards. Quality here means comprehensive coverage of the most critical risks associated with the use, deployment, and adoption of AI technologies.

Understanding Malicious Use Risks

Among the various risks identified, malicious use, the intentional application of AI capabilities to inflict harm, is particularly concerning. An analysis of the AI Act reveals uneven coverage of these risks: some are directly addressed, while others are only indirectly managed through supplementary EU or national regulations, or through international initiatives.

By leaving significant gaps, the AI Act risks diminishing its value as a global model. Relying on domestic and sectoral regulation to fill these gaps, although internally coherent as a way to avoid regulatory overlap, assumes that comparable principles are widely accepted or will be adopted internationally, a premise that may not hold.

Recommendations for Improvement

EU policymakers should use the Act’s periodic revisions to strengthen and complete its regulatory coverage. Recent initiatives, such as the Digital Omnibus, instead suggest a narrowing of the Act’s scope, which could damage its reputation abroad. Concurrently, the EU must engage internationally, adopting a narrative that acknowledges the AI Act’s limited exportability in its current form.

AI Safety Efforts Amid Competitive Pressures

In the context of geopolitical competition, the race for AI dominance among states and corporations places emphasis on technological leadership rather than on safety and risk management. This is evident in the policies, investments, and breakthroughs of key geopolitical players.

The U.S. Approach

The U.S. released America’s AI Action Plan in the summer of 2025, aiming to establish American AI as the global standard. This strategy is pursued through a largely hands-off regulatory approach, which includes revoking previous executive orders on safe AI and blocking state-level AI regulations. This approach has primarily benefited the U.S. private sector, which hosts many leading AI firms and led global private AI investment in 2024 with nearly US$110 billion, significantly surpassing Europe.

The Chinese Strategy

Similarly, China is striving for global AI leadership by 2030, focusing on advancements across the AI value chain. This includes a coordinated industrial policy aimed at enhancing capabilities in energy, talent, data, algorithms, hardware, and applications, positioning AI as a solution to economic, social, and security challenges. Goldman Sachs projects that Chinese AI providers will invest US$70 billion in data centers in 2026, backed by substantial state support.

The EU’s Response

Recognizing the competitive landscape, the EU launched the AI Continent Action Plan in April 2025, aiming to mobilize resources such as computing infrastructure, data, talent, and regulation. The EU has announced multiple AI initiatives, including 19 AI Factories and 5 AI Gigafactories in collaboration with the European Investment Bank. Upcoming discussions are expected to cover further AI-related initiatives, including the Cloud and AI Development Act.

The Role of AI Regulatory Frameworks

Such intense competition makes robust AI regulatory frameworks all the more necessary, as they establish safeguards against the catastrophic risks that advanced AI capabilities and rapid deployment can entail. Even as decision-makers prioritize competitiveness, the AI community continues to emphasize trust, safety, and risk management.

The EU’s AI Act distinguishes itself as one of the first binding regulations in this field, in contrast to other governments that have issued only broad, non-binding principles. By regulating concrete use cases according to their anticipated risk, the AI Act offers a significant legal innovation with global implications.

In conclusion, while the AI Act serves as a crucial step towards comprehensive AI governance, ongoing efforts are needed to address its limitations, ensuring it remains a viable model for global AI regulation.
