EU AI Act: Milestones and Compliance Challenges Ahead

The European Union Artificial Intelligence Act (EU AI Act) is fundamentally transforming the regulatory environment for AI development and deployment, both within Europe and on a global scale. This article provides an overview of the EU AI Act’s phased implementation, compliance challenges, and future implications for organizations involved in AI technologies.

Phased Rollout: Understanding the Timeline

The EU AI Act is being implemented in several significant stages:

  • February 2, 2025: The first obligations took effect, covering AI literacy and banning AI practices deemed to pose unacceptable risk.
  • May 2, 2025: The Code of Practice for general-purpose AI (GPAI) models was due to be published, but significant pushback from industry leaders has delayed its finalization.
  • August 2, 2025: Governance rules and obligations for GPAI models on the market will come into force.
  • August 2, 2026: The majority of the EU AI Act’s requirements will become fully enforceable.
  • 2030: Final implementation steps, especially for the public sector, will be completed.

This phased approach allows organizations time to adapt but also creates a complex compliance environment.

The EU AI Act in a Nutshell

  • World’s first comprehensive AI regulation: The EU AI Act sets a global precedent, with its ultimate impact yet to be fully realized.
  • Dense legislation: The Act comprises over 450 pages, including 68 new definitions and nearly 200 recitals.
  • Risk-based approach: Obligations scale with the risk level of the AI system, ranging from prohibited practices through high-risk systems down to limited- and minimal-risk systems.
  • Wide applicability: The Act applies to developers, deployers, affected individuals, importers, and distributors, regardless of their geographical location.
  • Severe sanctions: Fines can reach up to 7% of global annual turnover or €35 million, whichever is higher, surpassing the penalties available under the GDPR.
  • Dual enforcement: Both national supervisory authorities and the new EU AI Office will have enforcement powers.
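The sanction ceiling in the list above follows a simple "whichever is higher" rule. A minimal sketch of that calculation (the function name `max_fine_eur` is a hypothetical helper, not terminology from the Act):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious violations under the
    EU AI Act: the higher of EUR 35 million or 7% of worldwide annual
    turnover. Illustrative only; actual fines are set case by case."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies, the fixed €35 million floor dominates: 7% of a €100 million turnover is only €7 million, so the cap remains €35 million.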

Early Compliance: What’s Happened Since February 2025?

The initial obligations concerning AI literacy and prohibited practices have ignited significant activity across organizations:

  • AI literacy: Companies are implementing training programs to ensure staff understand AI risks and regulatory requirements.
  • Prohibited practices: Organizations are conducting audits to ensure compliance and avoid engaging in prohibited activities.

Defining ‘AI System’: Persistent Challenges

A persistent challenge is determining whether a given solution qualifies as an "AI system" under the EU AI Act. The European Commission recommends a holistic, case-by-case assessment against several criteria, which has fuelled concerns about "AI washing": labelling products as AI-enabled purely for marketing purposes.

GPAI Models and the Code of Practice

Regulating general-purpose AI models, such as large language models, is a primary focus of the Act:

  • GPAI models: These are core AI technologies capable of a broad range of tasks (e.g., GPT-4).
  • AI systems: These are applications built on GPAI models, tailored for specific use cases (e.g., ChatGPT).

Obligations differ for GPAI model providers versus AI system providers, with the Code of Practice designed to facilitate compliance. Despite its voluntary nature, adherence to the Code may influence enforcement decisions.

Transparency Obligations: A Shared Responsibility

Transparency is a cornerstone of the EU AI Act. GPAI model providers must maintain up-to-date documentation and share it with both the EU AI Office and downstream system providers. In turn, system providers are required to inform users about the capabilities and limitations of the AI technologies they utilize.

Enforcement: When Do the Teeth Come Out?

While certain obligations already apply, enforcement mechanisms, including fines and penalties, only become active from August 2025 (with a later date for GPAI models). National supervisory authorities are still being designated, but affected individuals and entities can already seek injunctions in national courts.

Key Takeaways

  • The EU AI Act is complex, far-reaching, and continues to evolve.
  • Initial obligations focus on improving AI literacy and prohibiting harmful practices.
  • Defining what counts as an “AI system” remains a challenging task.
  • The upcoming Code of Practice for GPAI models is a critical but currently delayed aspect of the regulation.
  • Transparency obligations impact both GPAI model and AI system providers.
  • Enforcement will significantly increase from mid-2025.

Organizations operating in or engaging with customers in the EU must proactively engage in compliance efforts to navigate this new regulatory landscape effectively.
