Essential Deadlines for Compliance with the EU AI Act

EU AI Act Deadlines and Compliance Overview

The European Union’s Artificial Intelligence Act (EU AI Act) is a landmark piece of legislation regulating AI systems across the EU, designed to ensure safety, transparency, and accountability in the deployment of AI technologies. The Act entered into force on August 1, 2024, and its obligations take effect in stages over several years, with staggered deadlines that companies must meet.

Key Deadlines and Compliance Requirements

Understanding the critical deadlines outlined in the EU AI Act is essential for businesses involved in AI development and deployment. Below is a detailed breakdown of these deadlines along with compliance requirements:

1. February 2, 2025: Prohibited Practices

What to Comply With: The EU AI Act prohibits certain AI practices deemed harmful or manipulative, including systems that exploit vulnerabilities or materially distort human behavior, real-time remote biometric identification in publicly accessible spaces, and social scoring. Companies must ensure their AI systems do not engage in any of these prohibited practices.

2. August 2, 2025: General-Purpose AI (GPAI)

What to Comply With: Providers of general-purpose AI models must meet specific transparency obligations, including maintaining comprehensive documentation of their models and training data. Developers of large language models (LLMs) and generative AI (genAI) foundation models fall under this requirement.
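In practice, this documentation is often kept as structured, machine-readable records. The sketch below shows one way a provider might organize such a record in Python; the field names and values are illustrative assumptions, not terminology taken from the Act itself:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class GPAIModelRecord:
    """Illustrative documentation record for a general-purpose AI model.

    Field names here are hypothetical; the Act and related guidance
    define the actual required content of model documentation.
    """
    model_name: str
    provider: str
    training_data_summary: str
    intended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

    def to_dict(self) -> dict:
        # Serialize the record for export or audit.
        return asdict(self)

# Hypothetical example record
record = GPAIModelRecord(
    model_name="example-llm-7b",
    provider="Example AI GmbH",
    training_data_summary="Public web text and licensed corpora (summary).",
    intended_uses=["text generation", "summarisation"],
    known_limitations=["may produce inaccurate output"],
)
```

Keeping such records as typed objects rather than free-form prose makes it easier to export a consistent summary for regulators or downstream deployers.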

3. August 2, 2026: High-Risk AI Systems

What to Comply With: High-risk AI systems, such as those used in healthcare or transportation, will be subject to stricter regulations. Compliance includes implementing cybersecurity measures, establishing incident-response and reporting protocols, and maintaining evaluation records for AI models.
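An incident-response protocol typically rests on an append-only, timestamped log of events. A minimal sketch of such a log follows; the schema and the example system name are assumptions for illustration, not a format prescribed by the Act:

```python
import datetime

def log_incident(log: list, system_id: str, severity: str, description: str) -> dict:
    """Append a timestamped incident entry to an in-memory log.

    The entry schema is illustrative; a real deployment would also
    persist entries durably and notify the responsible parties.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "severity": severity,
        "description": description,
    }
    log.append(entry)
    return entry

# Hypothetical usage for a high-risk system identifier
incidents: list = []
log_incident(incidents, "triage-assist-v2", "serious",
             "Unexpected output observed in an edge case")
```

Storing incidents with UTC timestamps keeps records comparable across deployments in different time zones, which simplifies later audits.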

4. August 2, 2026: Limited-Risk AI Systems

What to Comply With: Limited-risk AI systems, such as chatbots and other customer-service applications, will face milder transparency requirements. Companies must label AI-generated outputs as artificially generated, provide a summary of the data used, and prepare a ‘model card’ detailing the model utilized.
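The labeling and model-card obligations can be sketched in a few lines of code. The disclosure wording, model name, and model-card fields below are hypothetical; the exact label text and card contents a company must use will depend on guidance accompanying the Act:

```python
def label_ai_output(text: str) -> str:
    """Prefix generated text with an AI-generated disclosure label.

    The label wording and placement here are illustrative only.
    """
    return "[AI-generated] " + text

# Hypothetical minimal model card for a customer-service bot
MODEL_CARD = {
    "model": "support-bot-v1",
    "provider": "Example AI GmbH",
    "data_summary": "Anonymised customer-service transcripts (summary).",
}

reply = label_ai_output("Your order has shipped.")
```

Applying the label at the output boundary, rather than inside the model, keeps the disclosure consistent regardless of which model produced the text.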

5. August 2, 2027: Additional High-Risk AI Requirements

What to Comply With: Further requirements for high-risk AI systems will come into effect, particularly for systems that serve as safety components of regulated products, such as medical devices and toys.

Conclusion

As the EU AI Act unfolds, organizations must remain vigilant and proactive in understanding their obligations under this legislation. By staying informed about deadlines and compliance requirements, companies can navigate the complexities of AI regulation and ensure their systems are aligned with EU standards.
