Brussels Spring: Progress and Challenges of the AI Act and DMA

As spring blooms in Brussels, bringing vibrant flowers and a sense of renewal, European policymaking is advancing significantly on digital regulation. The Digital Markets Act (DMA) and the Artificial Intelligence Act (AI Act) are now actively shaping Europe's technology landscape.

Key Developments in the Digital Markets Act

Recent decisions by the European Commission's competition team on the Digital Markets Act mark an important milestone. Executive Vice-President for a Clean, Just and Competitive Transition Teresa Ribera highlighted that "gatekeepers" are adapting their business models, producing tangible benefits for European consumers. The Commission has not only designated these gatekeepers but also responded to changes in their ecosystems; for instance, it removed Facebook Marketplace's designation as a core platform service after the service no longer met the necessary criteria.

Ribera emphasized the Commission's commitment to enforcing the DMA, including ongoing investigations into major tech players such as Apple and Google. This enforcement is crucial for maintaining a competitive digital market and for ensuring that the benefits of regulation actually reach consumers.

Implications of the Artificial Intelligence Act

In parallel, the Artificial Intelligence Act, which came into force in August 2024, has established ambitious timelines for the AI Office to deliver a comprehensive range of outputs. This includes up to 60 deliverables encompassing guidelines, methodologies, and standards addressing various aspects of AI implementation.

The Act aims to provide concrete requirements for AI systems, particularly high-risk applications. Among the anticipated outputs is the General-Purpose AI Code of Practice, ahead of the general-purpose AI obligations that take effect on August 2, 2025. This code is designed to translate the AI Act's requirements into actionable steps for providers of general-purpose AI models.

However, the consultation process for this code has faced criticism, with stakeholders expressing concerns over the limited opportunities for substantive input and the perceived dilution of the tech community’s concerns during discussions.

Looking Ahead

The anticipated Code of Practice is expected to be finalized by May 2, 2025, leaving only a brief window for AI model providers to align with the new requirements. The importance of this regulation cannot be overstated, as it aims to ensure that AI technologies are developed and deployed responsibly and transparently.

With these regulatory frameworks in place, Europe is poised to navigate the evolving landscape of digital technology with a focus on consumer protection, competitive markets, and ethical AI deployment. As policymakers continue to refine these initiatives, the impact on the technology sector and consumers alike will be closely monitored.
