Brussels Spring: Progress and Challenges of the AI Act and DMA

As spring blooms in Brussels, bringing a sense of renewal to the city, European policymaking is delivering significant advances in digital regulation. The Digital Markets Act (DMA) and the Artificial Intelligence Act (AI Act) are now actively shaping Europe's technology landscape.

Key Developments in the Digital Markets Act

Recent decisions by the European Commission's competition enforcers have marked an important milestone for the Digital Markets Act. Executive Vice-President for a Clean, Just and Competitive Transition Teresa Ribera highlighted that “gatekeepers” are adapting their business models, resulting in tangible benefits for European consumers. The Commission has not only designated these gatekeepers but has also responded to changes in their ecosystems; for instance, it de-designated Facebook Marketplace as a core platform service after the service no longer met the necessary criteria.

Ribera emphasized the Commission’s commitment to enforcing the DMA, including ongoing investigations into major tech players like Apple and Google. This enforcement is crucial for maintaining a competitive digital market and ensuring that the benefits of regulation reach consumers.

Implications of the Artificial Intelligence Act

In parallel, the Artificial Intelligence Act, which came into force in August 2024, has established ambitious timelines for the AI Office to deliver a comprehensive range of outputs. This includes up to 60 deliverables encompassing guidelines, methodologies, and standards addressing various aspects of AI implementation.

The act aims to set concrete requirements for AI systems, particularly high-risk applications. Among the anticipated outputs is the General-Purpose AI Code of Practice, which supports obligations that take effect on August 2. The code is designed to translate the AI Act's requirements into actionable steps for providers of general-purpose AI models.

However, the consultation process for the code has drawn criticism, with stakeholders pointing to limited opportunities for substantive input and a perceived dilution of the tech community's concerns during the discussions.

Looking Ahead

The Code of Practice is expected to be finalized by May 2, leaving a brief window for AI model providers to align with the new requirements before they apply. The importance of this regulation cannot be overstated: it aims to ensure that AI technologies are developed and deployed responsibly and transparently.

With these regulatory frameworks in place, Europe is poised to navigate the evolving landscape of digital technology with a focus on consumer protection, competitive markets, and ethical AI deployment. As policymakers continue to refine these initiatives, the impact on the technology sector and consumers alike will be closely monitored.
