EU’s AI Oversight: A Race Against Time

Three Months Before Deadline: EU Countries Not Ready for AI Oversight

With just under three months to go before the 27 EU member states must appoint a regulator to oversee compliance with the AI Act, it is still unclear in at least half of them which authority will be designated, a recent Euronews analysis shows. Member states have until August 2 to notify the European Commission of their chosen market surveillance authorities.

In addition to appointing authorities, member states must adopt an implementing law that sets out penalties and gives those authorities their powers. The stakes are high: the AI Act, which regulates AI tools according to the risks they pose to society, entered into force in August 2024 and is due to be fully applicable by 2027.

Current Status of Member States

At the latest meeting of the AI Board, the body that coordinates cooperation among member states, most countries were represented by officials from various ministries rather than by national regulators. Only a few, including Denmark, Greece, Italy, Portugal, and Romania, sent national regulators. The EU executive remains tight-lipped about which countries are ready, but an official from the AI Office suggested that the process could be delayed in some member states, particularly those, such as Germany, that have recently held elections.

The official stated that member states are engaged in “intense discussions” within the AI Board, navigating the various approaches to establishing the oversight structure. Each country has the discretion to decide whether to appoint one or multiple regulators.

“I think 95% of them have certainly chosen the structure that they want to have and started the process to appoint the authorities. We will see whether on August 2 things will be finalized or not,” the official commented, acknowledging the unpredictability of parliamentary processes.

Implications of Delays

A delay in appointing oversight bodies could create uncertainty for businesses that need to comply with the forthcoming rules. Some member states, such as Spain, have already set up entirely new regulators: Spain's AESIA, an agency operating under the Ministry for Digital Transformation, is expected to take on the role.

Poland, meanwhile, is still setting up a new body, the Committee on Development and Security of AI, to serve as its market surveillance authority, while Denmark has designated its existing Agency for Digital Government for the role.

Germany is expected to entrust oversight to the Federal Network Agency (Bundesnetzagentur), while other countries, including the Netherlands, may extend the remit of their existing privacy watchdogs to enforce the AI Act, building on the legal framework established by the General Data Protection Regulation (GDPR).

In July, Europe's privacy regulators urged member states to put data protection authorities in charge of high-risk systems, arguing that areas such as biometric identification, law enforcement, and migration and border control demand particularly stringent oversight.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly with the EU's AI Act, which mandates that all staff must be AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...