Enforcing the AI Act: Challenges and Structures Ahead

The European Union Artificial Intelligence Act (AI Act) entered into force on August 1, 2024. It introduces a risk-based framework for regulating AI, categorizing AI systems by risk level and outright prohibiting certain practices deemed unacceptable, such as social scoring and the manipulation of human behavior.

One of the fundamental challenges that the AI Act faces is its enforcement. The Act delineates both centralized and decentralized enforcement mechanisms, engaging various actors including national market surveillance authorities, the European Commission via the AI Office, and the European Data Protection Supervisor (EDPS).

1. Market Surveillance Authorities

The enforcement of the AI Act heavily relies on the role of Member States, each of which must designate at least one notifying authority and one market surveillance authority to act as the national competent authorities.

  • Notifying Authorities: These entities intervene before an AI system reaches the market. They set up and carry out the procedures for assessing and designating the conformity assessment bodies (notified bodies) that certify high-risk AI systems.
  • Market Surveillance Authorities: Once an AI system has been placed on the market or put into service, these authorities supervise its operation within their jurisdiction. Unlike notifying authorities, they can impose sanctions for non-compliance.

Market surveillance authorities exercise the investigative powers provided for in Regulation (EU) 2019/1020 and can impose administrative fines for various infringements, including:

  • Engaging in prohibited AI practices, with penalties of up to EUR 35 million or 7% of the offender's total worldwide annual turnover, whichever is higher.
  • Non-compliance with the obligations listed in Article 99(4) of the AI Act, subject to fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.
  • Supplying incorrect, incomplete, or misleading information to authorities, incurring fines of up to EUR 7.5 million or 1% of total worldwide annual turnover, whichever is higher.

Complaints regarding potential infringements can be submitted by any individual who suspects non-compliance, which broadens the scope of accountability.
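The fine ceilings above all follow the same pattern: the applicable maximum is the higher of a fixed amount and a percentage of total worldwide annual turnover. A minimal sketch of that arithmetic (illustrative only, not legal advice; the tier names and the `max_fine` helper are hypothetical, with figures taken from the penalty tiers described above):

```python
# Illustrative model of the AI Act administrative fine ceilings.
# Tier names and this helper are hypothetical; the (fixed cap, share
# of turnover) pairs reflect the tiers described in the text.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # EUR cap, turnover share
    "article_99_4_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return the ceiling for a tier: the higher of the fixed cap
    and the percentage of total worldwide annual turnover."""
    fixed_cap, share = FINE_TIERS[tier]
    return max(fixed_cap, share * worldwide_annual_turnover)

# For a provider with EUR 2 billion turnover, 7% (EUR 140 million)
# exceeds the EUR 35 million fixed cap, so the percentage governs.
print(max_fine("prohibited_practices", 2_000_000_000))  # → 140000000.0
```

For smaller operators the fixed cap is usually the binding ceiling; the percentage-based ceiling only takes over once turnover is large enough that the share exceeds the fixed amount.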

2. European Commission and AI Office

The European Commission holds exclusive powers to supervise the obligations of providers of general-purpose AI models, delegating these tasks to the AI Office. The AI Office can act on its own initiative or in response to complaints, including those lodged by downstream providers building on general-purpose models.

Equipped with investigative powers, the AI Office can:

  • Request documentation and information from AI model providers
  • Conduct compliance evaluations and investigate systemic risks
  • Impose fines for non-compliance of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.

The AI Office also supervises compliance for AI systems built on a general-purpose AI model where the same provider developed both the model and the system, ensuring that such providers adhere to the relevant obligations.

3. European Data Protection Supervisor (EDPS)

The EDPS serves as the market surveillance authority for EU institutions, bodies, offices, and agencies, with powers similar to those of national authorities but subject to lower fine ceilings. For instance:

  • Administrative fines up to EUR 1.5 million for non-compliance with prohibited practices.
  • Fines of up to EUR 750,000 for other violations.

4. Cooperation and Coordination

Cooperation among national authorities and the Commission is crucial for effective enforcement. Key mechanisms include:

  • Mandatory reporting of non-compliance with cross-border effects.
  • Provisional measures to limit the use of non-compliant AI systems.
  • Union safeguard procedures where the Commission intervenes in disputes among Member States.

5. Challenges to Implementation

The enforcement framework of the AI Act presents several challenges:

  • Lack of a one-stop shop mechanism: Operators face the burden of navigating multiple authorities across different Member States.
  • Harmonization issues: Variability in national laws raises concerns regarding procedural aspects and compliance deadlines.
  • Dual role of the AI Office: Balancing enforcement duties with the development of expertise may compromise impartiality.
  • Varying expertise: Differing levels of expertise among Member States could lead to inconsistent enforcement of the Act.

As the landscape of artificial intelligence continues to evolve, addressing these challenges will be crucial for the successful enforcement of the AI Act and ensuring the responsible use of AI technologies in the European Union.
