Enforcing the AI Act: Challenges and Structures Ahead

The European Union Artificial Intelligence Act (AI Act) entered into force on August 1, 2024. It introduces a risk-based framework for regulating AI, categorizing AI systems by risk level and prohibiting outright certain practices deemed unacceptable, such as social scoring and the manipulation of human behavior.

One of the fundamental challenges that the AI Act faces is its enforcement. The Act delineates both centralized and decentralized enforcement mechanisms, engaging various actors including national market surveillance authorities, the European Commission via the AI Office, and the European Data Protection Supervisor (EDPS).

1. Market Surveillance Authorities

The enforcement of the AI Act heavily relies on the role of Member States, each of which must designate at least one notifying authority and one market surveillance authority to act as the national competent authorities.

  • Notifying Authorities: These bodies operate before AI systems reach the market. They set up and carry out the procedures for assessing, designating and monitoring the conformity assessment bodies (notified bodies) that certify high-risk AI systems.
  • Market Surveillance Authorities: Once an AI system is placed on the market or put into service, these authorities supervise its operation within their jurisdiction and, unlike notifying authorities, can impose sanctions for non-compliance.

Market surveillance authorities are endowed with the investigative powers of Regulation (EU) 2019/1020 and can impose administrative fines for a range of infringements, including:

  • Engaging in prohibited AI practices, punishable by fines of up to EUR 35 million or 7% of the offender’s total worldwide annual turnover, whichever is higher.
  • Non-compliance with the obligations listed in Article 99(4) of the AI Act, punishable by fines of up to EUR 15 million or 3% of total worldwide annual turnover.
  • Supplying incorrect, incomplete or misleading information to authorities, punishable by fines of up to EUR 7.5 million or 1% of total worldwide annual turnover.
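Each tier above sets its ceiling as the higher of a fixed amount and a percentage of worldwide annual turnover, so the applicable maximum is simple arithmetic. A minimal Python sketch (the helper name `max_fine` and the example turnover figure are illustrative, not from the Act's text):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Upper bound of an AI Act administrative fine: the fixed cap or a
    percentage of total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

# Prohibited-practice tier (EUR 35M / 7%) for a firm with EUR 2bn turnover:
# 7% of turnover (EUR 140M) exceeds the fixed cap, so it sets the ceiling.
fine_ceiling = max_fine(turnover_eur=2e9, fixed_cap_eur=35_000_000, pct_of_turnover=0.07)
print(f"EUR {fine_ceiling:,.0f}")  # EUR 140,000,000
```

For smaller operators the fixed cap dominates: with EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million cap is the ceiling.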

Complaints about suspected infringements may be submitted by any natural or legal person, which broadens the scope of accountability.

2. European Commission and AI Office

The European Commission has exclusive competence to supervise the obligations of providers of general-purpose AI models, and it delegates this task to the AI Office. The Office can act on its own initiative or in response to complaints from downstream providers of general-purpose AI models.

Equipped with investigative powers, the AI Office can:

  • Request documentation and information from AI model providers
  • Conduct compliance evaluations and investigate systemic risks
  • Impose fines for non-compliance of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.

The AI Office also supervises compliance for AI systems built on a general-purpose AI model where the same provider develops both the model and the system.

3. European Data Protection Supervisor (EDPS)

The EDPS acts as the market surveillance authority for Union institutions, bodies, offices and agencies, with powers similar to those of national authorities but lower financial penalty ceilings. For instance:

  • Administrative fines up to EUR 1.5 million for non-compliance with prohibited practices.
  • Fines of up to EUR 750,000 for other violations.

4. Cooperation and Coordination

Cooperation among national authorities and the Commission is crucial for effective enforcement. Key mechanisms include:

  • Mandatory reporting of non-compliance with cross-border effects.
  • Provisional measures to limit the use of non-compliant AI systems.
  • Union safeguard procedures where the Commission intervenes in disputes among Member States.

5. Challenges to Implementation

The enforcement framework of the AI Act presents several challenges:

  • Lack of a one-stop shop mechanism: Operators face the burden of navigating multiple authorities across different Member States.
  • Harmonization issues: Variability in national laws raises concerns regarding procedural aspects and compliance deadlines.
  • Double role of the AI Office: Balancing enforcement duties with the development of expertise may compromise impartiality.
  • Uneven expertise: Differing levels of technical and legal expertise among Member States could lead to inconsistent enforcement of the Act.

As the landscape of artificial intelligence continues to evolve, addressing these challenges will be crucial for the successful enforcement of the AI Act and ensuring the responsible use of AI technologies in the European Union.
