Enforcing the AI Act: Challenges and Structures Ahead

Enforcement of the AI Act: A Comprehensive Overview

The European Union Artificial Intelligence Act (AI Act) entered into force on August 1, 2024. It introduces a risk-based framework for regulating AI, classifying AI systems by risk level and prohibiting certain practices deemed unacceptable, such as social scoring and the manipulation of human behavior.

One of the fundamental challenges that the AI Act faces is its enforcement. The Act delineates both centralized and decentralized enforcement mechanisms, engaging various actors including national market surveillance authorities, the European Commission via the AI Office, and the European Data Protection Supervisor (EDPS).

1. Market Surveillance Authorities

Enforcement of the AI Act relies heavily on the Member States, each of which must designate at least one notifying authority and at least one market surveillance authority to serve as its national competent authorities.

  • Notifying Authorities: These bodies intervene before high-risk AI systems reach the market. They set up and carry out the procedures for assessing, designating, and monitoring the conformity assessment bodies (notified bodies) that certify high-risk AI systems.
  • Market Surveillance Authorities: Once an AI system is placed on the market or put into service, these authorities supervise its operation within their jurisdiction. Unlike notifying authorities, they can impose sanctions for non-compliance.

Market surveillance authorities hold the investigative powers provided for in Regulation (EU) 2019/1020 and can impose administrative fines for various infringements, including:

  • Engaging in prohibited AI practices, with fines of up to EUR 35 million or 7% of the offender’s total worldwide annual turnover, whichever is higher.
  • Non-compliance with the obligations listed in Article 99(4) of the AI Act, with fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.
  • Supplying incorrect, incomplete, or misleading information to authorities, with fines of up to EUR 7.5 million or 1% of total worldwide annual turnover, whichever is higher.
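
To make the tiered ceilings concrete, the short sketch below works through the “whichever is higher” rule for a hypothetical operator. It is an illustration only: the function name, tier labels, and the EUR 2 billion turnover figure are assumptions, not text drawn from the Act.

```python
# Hypothetical illustration of the AI Act's tiered fine ceilings.
# The tier labels and the example turnover are invented for this sketch.

def fine_cap(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the maximum administrative fine: the higher of the fixed amount
    and the given share of total worldwide annual turnover (general rule;
    the Act applies the lower of the two for SMEs and start-ups)."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

# Tiers as listed above: (fixed cap in EUR, share of worldwide annual turnover).
TIERS = {
    "prohibited practices (Art. 99(3))":   (35_000_000, 0.07),
    "obligations under Art. 99(4)":        (15_000_000, 0.03),
    "misleading information (Art. 99(5))": (7_500_000, 0.01),
}

if __name__ == "__main__":
    turnover = 2_000_000_000  # hypothetical EUR 2 bn worldwide annual turnover
    for infringement, (fixed_cap, share) in TIERS.items():
        print(f"{infringement}: cap = EUR {fine_cap(turnover, fixed_cap, share):,.0f}")
```

For an operator of this size the turnover-based ceiling exceeds the fixed amount in every tier; for smaller operators the fixed amount would govern instead.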

Complaints about suspected infringements may be submitted by any natural or legal person, which broadens the scope of accountability.

2. European Commission and AI Office

The European Commission holds exclusive powers to supervise the obligations of providers of general-purpose AI models, entrusting these tasks to the AI Office. The AI Office can act on its own initiative or in response to complaints, for instance from downstream providers building on general-purpose models.

Equipped with investigative powers, the AI Office can:

  • Request documentation and information from AI model providers
  • Conduct compliance evaluations and investigate systemic risks
  • Impose fines for non-compliance of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.

The AI Office also supervises compliance of AI systems built on a general-purpose AI model developed by the same provider, ensuring that such providers adhere to the obligations that apply to them.

3. European Data Protection Supervisor (EDPS)

The EDPS acts as the market surveillance authority for Union institutions, bodies, offices, and agencies, with powers similar to those of national authorities but lower financial penalty ceilings. For instance:

  • Administrative fines up to EUR 1.5 million for non-compliance with prohibited practices.
  • Fines of up to EUR 750,000 for other violations.

4. Cooperation and Coordination

Cooperation among national authorities and the Commission is crucial for effective enforcement. Key mechanisms include:

  • Mandatory reporting of non-compliance that extends beyond national borders.
  • Provisional measures to limit the use of non-compliant AI systems.
  • Union safeguard procedures where the Commission intervenes in disputes among Member States.

5. Challenges to Implementation

The enforcement framework of the AI Act presents several challenges:

  • Lack of a one-stop-shop mechanism: Operators must navigate multiple authorities across different Member States.
  • Harmonization issues: Differences in national procedural rules and compliance deadlines raise concerns about consistent application.
  • Dual role of the AI Office: Balancing enforcement duties with its role in developing expertise and guidance may compromise impartiality.
  • Varying expertise: Differing levels of technical and regulatory expertise among Member States could lead to inconsistent enforcement of the Act.

As the landscape of artificial intelligence continues to evolve, addressing these challenges will be crucial for the successful enforcement of the AI Act and ensuring the responsible use of AI technologies in the European Union.
