The EU’s AI Policy Shift: Balancing Sovereignty and Competitiveness

The EU AI Policy Pivot: Adaptation or Capitulation?

The landscape of Brussels’ AI policy has shifted dramatically in recent months. Just eleven months earlier, following the adoption of the AI Act, the EU was proudly embracing its status as a regulatory superpower. With additional rules in the pipeline, including Codes of Practice and an AI Liability Directive, the expectation was a robust framework that would guide companies towards legal compliance.

However, as the year began, the situation took a turn. Enforcement of the Digital Services Act encountered significant delays, and the AI Liability Directive was declared “on ice.” Furthermore, Henna Virkkunen, EU Commissioner for Tech Sovereignty, Security, and Democracy, indicated that the AI Codes of Practice would serve primarily to assist AI companies rather than to impose restrictions.

A Question of Legitimacy

This pivot in policy raises critical questions: Is it a capitulation to the aggressive tech policies of the new U.S. administration, or a strategic adaptation designed to better align with the evolving economic landscape? The implications of this shift are profound, touching on the legitimacy of the European project and its political autonomy.

Commissioner Virkkunen’s proactive media engagements reflect a desire to counter the narrative of yielding to American pressure. Acknowledging capitulation would undermine Europe’s credibility and its claims of regulatory autonomy against global powers like the U.S. and China.

Regulatory Fatigue

Over the years, a sentiment of regulatory fatigue has permeated Brussels. Politicians traditionally favorable to business interests, along with tech lobbyists, have long expressed skepticism towards stringent regulations. Even among those who initially supported robust rules, disillusionment has grown, revealing how difficult it is to operationalize ambitious laws effectively.

The challenges are particularly evident in complex sectors like AI, where the future remains unpredictable. The decision to halt the AI Liability Directive signifies not a concerted move towards deregulation but rather an inability to mobilize support for its advancement, highlighting a climate of dissensus rather than consensus.

The Changing Dynamics of EU Tech Policy

Despite the challenges, there is a burgeoning recognition of the need to build European technological capacities. At the recent AI summit in Paris, U.S. Vice President JD Vance criticized “onerous international rules” that affect American companies, illustrating the heightened sensitivity surrounding U.S.-EU relations.

The growing reliance on U.S. tech firms raises concerns. With critical digital infrastructure managed by companies like Microsoft, the potential for extortion would loom large should the U.S. choose to leverage this dependency against Europe.

The Forces Shaping EU AI Policy

Three distinct forces are currently influencing EU AI policy:

  • First, the champions of tech competitiveness advocate for deregulation to enhance market dynamics.
  • Second, the crusaders for digital sovereignty argue for developmental measures to reduce dependence on U.S. firms.
  • Third, the inherent inertia of the Brussels machinery complicates efforts to forge a unified policy response.

As these forces contend for dominance, the inertia embedded in the EU’s institutional design poses significant challenges to achieving a coherent and forceful tech policy.

The Path Forward

The future trajectory of EU AI policy remains uncertain. Whether the approach will lean towards a neo-liberal framework or a hawkish realist stance—or perhaps no cohesive policy at all—will depend on the political priorities set forth by EU leadership. The upcoming White House initiative to penalize foreign governments taxing U.S. tech firms will serve as a critical test of Europe’s resolve.

As the geopolitical landscape evolves, digital sovereignty must rise to the top of the EU’s political priorities. Heightened tensions, such as the evolving crisis with Russia, may divert attention from technology strategy, complicating the quest for a digitally sovereign Europe.

In conclusion, as EU policymakers navigate these turbulent waters, the balance between fostering a competitive digital environment and maintaining regulatory integrity will be paramount in shaping the future of technology in Europe.
