EU AI Act: Milestones and Compliance Challenges Ahead

The European Union Artificial Intelligence Act (EU AI Act) is fundamentally transforming the regulatory environment for AI development and deployment, both within Europe and on a global scale. This article provides an overview of the EU AI Act’s phased implementation, compliance challenges, and future implications for organizations involved in AI technologies.

Phased Rollout: Understanding the Timeline

The EU AI Act is being implemented in several significant stages:

  • February 2, 2025: The first obligations took effect, focusing on AI literacy and prohibiting certain high-risk AI practices.
  • May 2, 2025: The Code of Practice for general-purpose AI (GPAI) models was due to be published, but significant pushback from industry leaders has postponed its finalization.
  • August 2, 2025: Governance rules and obligations for GPAI models on the market will come into force.
  • August 2, 2026: The majority of the EU AI Act’s requirements will become fully enforceable.
  • 2030: Final implementation steps, especially for the public sector, will be completed.

This phased approach allows organizations time to adapt but also creates a complex compliance environment.

The EU AI Act in a Nutshell

  • World’s first comprehensive AI regulation: The EU AI Act sets a global precedent, though its ultimate impact remains to be seen.
  • Dense legislation: The Act comprises over 450 pages, including 68 new definitions and nearly 200 recitals.
  • Risk-based approach: Obligations scale with the risk level of the AI system, categorized from prohibited practices to high-risk and low-risk categories.
  • Wide applicability: The Act applies to developers, deployers, affected individuals, importers, and distributors, regardless of their geographical location.
  • Severe sanctions: Fines can reach up to €35 million or 7% of global annual turnover, whichever is higher, surpassing penalties under the GDPR.
  • Dual enforcement: Both national supervisory authorities and the new EU AI Office will have enforcement powers.

Early Compliance: What’s Happened Since February 2025?

The initial obligations concerning AI literacy and prohibited practices have ignited significant activity across organizations:

  • AI literacy: Companies are implementing training programs to ensure staff understand AI risks and regulatory requirements.
  • Prohibited practices: Organizations are conducting audits to ensure compliance and avoid engaging in prohibited activities.

Defining ‘AI System’: Persistent Challenges

A significant challenge remains in determining whether a solution qualifies as an “AI system” under the EU AI Act. The European Commission emphasizes a holistic, case-by-case assessment based on various criteria, leading to concerns about “AI washing”, where products are overlabelled as AI-enabled for marketing purposes.

GPAI Models and the Code of Practice

Regulating general-purpose AI models, such as large language models, is a primary focus of the Act:

  • GPAI models: These are core AI technologies capable of a broad range of tasks (e.g., GPT-4).
  • AI systems: These are applications built on GPAI models, tailored for specific use cases (e.g., ChatGPT).

Obligations differ for GPAI model providers versus AI system providers, with the Code of Practice designed to facilitate compliance. Despite its voluntary nature, adherence to the Code may influence enforcement decisions.

Transparency Obligations: A Shared Responsibility

Transparency is a cornerstone of the EU AI Act. GPAI model providers must maintain up-to-date documentation and share it with both the EU AI Office and downstream system providers. In turn, system providers are required to inform users about the capabilities and limitations of the AI technologies they utilize.

Enforcement: When Do the Teeth Come Out?

While certain obligations already apply, enforcement mechanisms, including fines and penalties, only become active from August 2025 (with a later date for GPAI models). National supervisory authorities are still being designated, but affected individuals and entities can already seek injunctions in national courts.

Key Takeaways

  • The EU AI Act is complex, far-reaching, and continues to evolve.
  • Initial obligations focus on improving AI literacy and prohibiting harmful practices.
  • Defining what counts as an “AI system” remains a challenging task.
  • The upcoming Code of Practice for GPAI models is a critical but currently delayed aspect of the regulation.
  • Transparency obligations impact both GPAI model and AI system providers.
  • Enforcement will significantly increase from mid-2025.

Organizations operating in or engaging with customers in the EU must proactively engage in compliance efforts to navigate this new regulatory landscape effectively.
