The Risks of Expanding the DMA to AI

The EU Should Resist Calls to Regulate AI Under the DMA

A growing number of European policymakers are advocating for the extension of the Digital Markets Act (DMA) to new industries, notably AI and cloud services. Countries such as France, Germany, and the Netherlands have issued a joint statement supporting this expansion, despite the DMA’s mixed results in its current application.

However, broadening the DMA’s scope could impose unnecessary constraints on AI firms, deter investment, and weaken the West’s competitive position in the global AI race against China. This push contradicts the messaging from the recent Paris AI Summit, where leading European politicians warned that excessive regulation could stifle innovation.

Potential Consequences of DMA Expansion

Proposals to classify AI-driven services as Core Platform Services under Article 2(2) of the DMA would subject AI providers to restrictive obligations designed for entrenched gatekeepers. This could limit how companies deploy their AI services. For example, the DMA’s restrictions on how gatekeepers may use business users’ data could limit the data firms can draw on to train their models, hampering AI development.

Furthermore, interoperability mandates could force certain AI developers to provide open access to their models, eroding competitive differentiation and discouraging innovation. Self-preferencing restrictions might hinder tech platforms’ ability to integrate AI-driven recommendations effectively.

Evaluating Market Needs

Before extending the DMA to AI, the EU should first establish that such a move would deliver benefits that outweigh its costs. So far, the DMA has produced few clear advantages for consumers; instead, users face a less integrated Google experience and intrusive choice screens on Apple and Android devices.

Moreover, the DMA’s current implementation grants third parties free access to costly interoperability services, jeopardizing the security and simplicity of ecosystems such as Apple’s.

AI Industry Dynamics

The AI industry is still in its infancy, characterized by intense competition and no evident market failures that would justify DMA intervention. Rapidly scaling European AI startups, such as Aleph Alpha in Germany and Mistral AI in France, exemplify the market’s dynamism. A rigid, one-size-fits-all approach like the DMA is ill-suited for an industry that thrives on adaptability and rapid iteration.

European leaders cannot credibly claim they want to win the AI race while simultaneously advocating some of the world’s most restrictive AI regulations.

Regulatory Strategy

Even where market failures exist, the EU should explore competition enforcement before resorting to regulation. Unlike in search and smartphones, the EU has not yet brought competition cases against AI or cloud services. It should first attempt to apply its existing competition rules before introducing new ex ante regulation.

Notably, the EU’s AI Act, which entered into force in August 2024, already prohibits certain AI practices, with its obligations phasing in over the following two to three years. Article 5 of the Act, for example, bans manipulative and exploitative AI practices. Layering DMA restrictions on top of these rules would create unnecessary regulatory overlap, complicating compliance for companies already navigating multiple digital regulation frameworks.

Conclusion

Rather than imposing additional rules, Brussels should adopt a light-touch approach to competition concerns in AI. The first step is to observe how AI markets develop; in dynamic, disruptive industries, competition often resolves such concerns on its own. If anticompetitive conduct does emerge, the EU can then apply its robust competition law regime.

Europe has already fallen behind in digital innovation and cannot afford to repeat the same mistake with AI. Heavy-handed regulations like the DMA could cripple the EU’s nascent AI sector and hand a decisive advantage to Chinese startups. As the landscape of global AI leadership evolves, the EU must act prudently to drive innovation and preserve transatlantic ties.
