Implementation Timeline of the AI Act

Summary


  • August 1, 2024: EU AI Act officially comes into force.
  • February 2, 2025: Ban on AI systems with unacceptable risks (e.g., subliminal manipulation, social scoring).
  • May 2, 2025: Deadline for finalizing the General-Purpose AI (GPAI) codes of practice.
  • August 2, 2025: Governance framework for GPAI providers becomes enforceable.
  • February 2, 2026: European Commission adopts the template for post-market monitoring plans for high-risk AI systems.
  • August 2, 2026: Full applicability of rules for high-risk AI systems, including fines for non-compliance.
  • August 2, 2027: Compliance deadline for GPAI models placed on the market before August 2, 2025, and full applicability of rules for Annex I high-risk AI systems.

Overview

The EU AI Act represents a groundbreaking regulatory framework aimed at ensuring ethical, safe, and transparent AI systems across the European Union. Its phased implementation provides clear guidance to stakeholders while promoting innovation and compliance. Below is a detailed timeline of key milestones, their implications, and preparatory actions for organizations.

Effective Date

August 1, 2024: EU AI Act comes into force

  • Event: The Act enters into force 20 days after its publication in the EU Official Journal on July 12, 2024.
  • Objective: Establishes the legal foundation for the Act. While it becomes law, most provisions remain unenforceable, allowing a transition period for phased implementation.
  • Immediate Implications:
    • Organizations must initiate internal audits and gap analyses to assess alignment of current AI systems with future obligations.
    • Industries reliant on AI—such as healthcare, finance, and law enforcement—should prioritize mapping use cases based on the risk categories defined in the Act.

Initial Implementation Phase

February 2, 2025: Ban on AI systems with unacceptable risk

  • Scope of banned AI systems:
    • AI using subliminal techniques to manipulate behavior or decisions.
    • AI exploiting vulnerabilities of specific groups (e.g., children, elderly, disabled).
    • Social scoring systems that evaluate or rank people based on behavior or personal characteristics, prohibited without exception.
    • Real-time remote biometric identification in publicly accessible spaces, with narrowly defined exceptions for law enforcement.
    • Emotion recognition systems in workplaces or educational settings.
    • AI systems predicting an individual’s likelihood of committing criminal offenses based solely on profiling or personality traits.
  • Compliance Focus:
    • AI Literacy: Organizations must enhance employee awareness of ethical AI practices and ensure training for those involved in AI system design, deployment, or management.
    • Policy Revisions: Companies using biometric tools or sentiment analysis systems must reevaluate these systems for compliance or deactivate them.

May 2, 2025: Deadline for General-Purpose AI (GPAI) codes of practice

  • Facilitation Role: The EU AI Office coordinates the drafting process, involving industry stakeholders, academia, and member states.
  • Content Scope: Expected to include transparency guidelines, risk assessment protocols, and operational best practices.
  • Key Considerations:
    • Participatory Governance: Stakeholders must actively contribute to ensure industry-specific nuances are addressed.
    • Safeguard Clause: If the codes of practice cannot be finalized or prove inadequate, the European Commission may establish common rules through implementing acts.

August 2, 2025: GPAI obligations become enforceable

  • Governance Framework for GPAI Providers: Obligations for GPAI providers, including transparency and risk mitigation requirements, become enforceable.
  • National Preparations:
    • Member states must designate competent authorities responsible for AI regulation oversight and enforcement.
    • The European Commission initiates annual reviews of prohibited AI practices to address evolving risks.

Main Enforcement Period

February 2, 2026: Template for post-market monitoring plans

  • The European Commission adopts an implementing act establishing a template for the post-market monitoring plans of high-risk AI providers.
  • Providers must implement mechanisms to monitor and report real-world performance, ensuring ongoing compliance and safety.

August 2, 2026: Full applicability of rules for high-risk AI systems

  • Full Applicability for High-Risk AI Systems: High-risk AI systems defined in Annex III must comply with the Act’s provisions. These include systems used in:
    • Biometrics: Facial recognition, iris analysis, and voice analysis tools.
    • Critical Infrastructures: AI ensuring safety in transport or energy sectors.
    • Education: AI assessing student performance or aptitude.
    • Employment: Automated recruitment tools or employee monitoring systems.
    • Public Services: AI determining eligibility for social benefits, healthcare, or housing.
    • Law Enforcement and Justice: Predictive policing tools or evidence evaluation systems.
  • New Obligations:
    • Penalties and fines for non-compliance become enforceable.
    • AI Regulatory Sandboxes: Each member state must establish at least one operational sandbox to foster innovation while ensuring compliance.
    • The Commission reviews and updates the list of high-risk AI systems annually to maintain regulatory relevance.

Extended Compliance Deadlines

August 2, 2027: Final compliance deadline for pre-existing GPAI models

  • High-risk AI systems covered by Annex I (products subject to existing EU harmonisation legislation) must achieve full compliance.
  • GPAI models placed on the EU market before August 2, 2025, must meet the Act’s requirements.

General Implications for Stakeholders

Compliance Preparedness:

  • Risk Mapping: Organizations must classify their AI systems into risk categories (minimal, limited, high, or unacceptable) as per the Act.
  • Documentation and Transparency: Maintain comprehensive records on data provenance, model design, and intended use cases.
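In practice, risk mapping often starts with a simple internal inventory that tags each AI use case with one of the Act's four tiers. The sketch below is a minimal, hypothetical illustration of such a registry; the system names and their tier assignments are invented examples, not legal classifications, and any real mapping requires case-by-case legal analysis.

```python
# Hypothetical sketch of an internal AI risk-mapping inventory.
# The four tiers mirror the Act's risk categories; the example
# use cases and their assignments are illustrative assumptions only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited from February 2, 2025
    HIGH = "high"                  # Annex III obligations from August 2, 2026
    LIMITED = "limited"            # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"            # no specific obligations under the Act

# Illustrative inventory: use-case name -> assigned risk tier.
inventory = {
    "social-scoring-pilot": RiskTier.UNACCEPTABLE,
    "cv-screening-tool": RiskTier.HIGH,          # employment (Annex III)
    "customer-support-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def systems_in_tier(tier: RiskTier) -> list[str]:
    """Return the names of inventoried systems assigned to a given tier."""
    return [name for name, t in inventory.items() if t is tier]

print(systems_in_tier(RiskTier.HIGH))  # -> ['cv-screening-tool']
```

Even a registry this simple gives compliance teams a starting point for prioritization: unacceptable-tier systems need immediate decommissioning, while high-risk entries feed into documentation and post-market monitoring workstreams.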

Investment in Innovation:

  • AI Sandboxes: Provide safe environments to test and refine AI systems under regulatory supervision.
  • R&D Prioritization: Companies can explore AI applications in low-risk categories with minimal regulatory constraints.

Global Ripple Effects:

  • Non-EU companies offering AI products/services in the EU must comply, influencing global AI governance trends.

Ethical AI Leadership:

  • The EU positions itself as a global leader in AI governance, setting a precedent for other jurisdictions such as the U.S., Canada, and Japan.

Conclusion

The EU AI Act represents a paradigm shift in AI governance, balancing innovation with ethical safeguards. Its phased implementation ensures organizations have the necessary time to adapt while addressing pressing risks early on. Stakeholders across all sectors must act proactively, leveraging the transition period to align strategies with compliance requirements, build trust, and ensure the sustainability of their AI systems in the European market.