The EU’s AI Policy Shift: Balancing Sovereignty and Competitiveness

The EU AI Policy Pivot: Adaptation or Capitulation?

The landscape of Brussels’ AI policy has shifted dramatically in recent months. Just eleven months earlier, following the adoption of the AI Act, the EU was proudly celebrating its status as a regulatory superpower. With additional rules in the pipeline, including Codes of Practice and an AI Liability Directive, the expectation was of a robust framework that would guide companies towards legal compliance.

However, as the year began, the situation took a turn. The enforcement of the Digital Services Act encountered significant delays, and the AI Liability Directive was announced to be “on ice.” Furthermore, EU Commissioner for Tech Sovereignty, Security, and Democracy, Henna Virkkunen, indicated that the AI Codes of Practice would primarily serve to assist AI companies rather than impose restrictions.

A Question of Legitimacy

This pivot in policy raises critical questions: Is this a capitulation to the aggressive tech policies of the new U.S. administration, or is it a strategic adaptation designed to better align with an evolving economic landscape? The implications of this shift are profound, touching on the legitimacy of the European project and its political autonomy.

Commissioner Virkkunen’s proactive media engagements reflect a desire to counter the narrative of yielding to American pressure. Acknowledging capitulation would undermine Europe’s credibility and its claims of regulatory autonomy against global powers like the U.S. and China.

Regulatory Fatigue

Over the years, a sentiment of regulatory fatigue has permeated Brussels. Politicians traditionally sympathetic to business interests, together with tech lobbyists, have long expressed skepticism towards stringent regulation. Even among those who initially supported robust rules, disillusionment has grown, revealing how hard it is to operationalize ambitious laws effectively.

The challenges are particularly evident in complex sectors like AI, where the future remains unpredictable. The decision to halt the AI Liability Directive signifies not a concerted move towards deregulation but rather an inability to mobilize support for its advancement, highlighting a climate of dissensus rather than consensus.

The Changing Dynamics of EU Tech Policy

Despite the challenges, there is a burgeoning recognition of the need to build European technological capacities. At the recent AI summit in Paris, U.S. Vice President JD Vance criticized “onerous international rules” that affect American companies, illustrating the heightened sensitivity surrounding U.S.-EU relations.

The growing reliance on U.S. tech firms raises concerns of its own. With critical digital infrastructure managed by companies like Microsoft, the potential for extortion looms large should the U.S. decide to leverage that dependency against Europe.

The Forces Shaping EU AI Policy

Three distinct forces are currently influencing EU AI policy:

  • First, the champions of tech competitiveness advocate deregulation to enhance market dynamics.
  • Second, the crusaders for digital sovereignty argue for developmental measures that reduce dependence on U.S. firms.
  • Third, the inherent inertia of the Brussels machinery complicates efforts to forge a unified policy response.

As these forces contend for dominance, the inertia embedded in the EU’s institutional design poses significant challenges to achieving a coherent and forceful tech policy.

The Path Forward

The future trajectory of EU AI policy remains uncertain. Whether the approach will lean towards a neo-liberal framework or a hawkish realist stance—or perhaps no cohesive policy at all—will depend on the political priorities set forth by EU leadership. The upcoming White House initiative to penalize foreign governments taxing U.S. tech firms will serve as a critical test of Europe’s resolve.

As the geopolitical landscape evolves, digital sovereignty must rise to the top of the EU’s political priorities. Yet heightened tensions elsewhere, such as the evolving crisis with Russia, may divert attention from technological strategy, complicating the quest for a digitally sovereign Europe.

In conclusion, as EU policymakers navigate these turbulent waters, the balance between fostering a competitive digital environment and maintaining regulatory integrity will be paramount in shaping the future of technology in Europe.
