Understanding the AI Act: Key Changes and Compliance Steps

The AI Act Comes into Effect

The European Regulation on Artificial Intelligence (the AI Act, also known by its French acronym RIA), approved by the Council of the European Union on May 21, 2024, was published in the Official Journal of the EU on July 12, 2024. The regulation aims to ensure that AI systems and models placed on the market in the European Union are ethical, safe, and respectful of fundamental rights.

While upholding the values of the Union enshrined in the Charter of Fundamental Rights of the European Union, and protecting individuals, businesses, democracy, the rule of law, and the environment, the AI Act’s explicit goal is to stimulate innovation and employment and to make the EU a leading player in the adoption of trustworthy AI.

A Risk-Based Approach

The Scope of the AI Act

The AI Act applies to AI systems: machine-based systems designed to operate with varying levels of autonomy that may adapt after deployment. For explicit or implicit objectives, these systems infer from the inputs they receive how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3(1) of the regulation).

Depending on the risk level of the AI system, providers, importers, and professional users, referred to as “deployers,” will be subject to various obligations.

Risk Levels of AI Systems

The Regulation on Artificial Intelligence defines four risk levels for AI systems, illustrated in the short sketch that follows this list:

  • Unacceptable-risk systems, which are prohibited, such as real-time remote biometric identification systems used in publicly accessible spaces for law enforcement, outside a few narrowly defined cases.
  • High-risk systems, which pose significant risks to health, safety, or fundamental rights and are therefore subject to enhanced requirements.
  • Limited-risk systems, such as chatbots or deepfakes, which are primarily subject to transparency obligations.
  • Minimal-risk systems, which represent the majority of AI systems and are not subject to any specific obligations.
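
As a purely illustrative aid, and not a legal mapping, this risk-based logic can be pictured as a small lookup table. The tier names and one-line summaries in the following Python sketch paraphrase the regulation:

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's four risk tiers (labels paraphrased, not official terms)."""
        UNACCEPTABLE = "unacceptable"  # prohibited practices
        HIGH = "high"                  # enhanced requirements
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # no specific obligations

    # Headline consequence per tier, paraphrased from the regulation.
    TIER_SUMMARY = {
        RiskTier.UNACCEPTABLE: "Prohibited, save narrow law-enforcement exceptions",
        RiskTier.HIGH: "Enhanced requirements: documentation, monitoring, audits",
        RiskTier.LIMITED: "Transparency duties, e.g. disclose chatbots, label deepfakes",
        RiskTier.MINIMAL: "No specific obligations",
    }

    for tier in RiskTier:
        print(f"{tier.value}: {TIER_SUMMARY[tier]}")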

General-purpose AI models are also subject to regulation. Their providers must comply with several obligations, including transparency and respect for copyright, and are required to publish a sufficiently detailed summary of the content used to train the model.

High-risk AI systems and general-purpose AI models must comply with the requirements of the AI Act. In case of non-compliance, providers (and to a lesser extent, deployers) face severe penalties: fines of up to EUR 35 million or 7% of the company’s worldwide annual turnover, whichever is higher, for the most serious infringements, as well as the potential withdrawal of AI systems from the European market.

Although the AI Act establishes strict rules and monitoring mechanisms, it is designed to encourage innovation through enhanced legal certainty and controlled experimentation environments such as regulatory sandboxes. The aim is to ensure safe and ethical AI while stimulating technological development.

Implementation and Gradual Application of the AI Regulation

The regulation entered into force on the twentieth day following its publication in the Official Journal of the EU, that is, on August 1, 2024. It becomes fully applicable on August 2, 2026 (Article 113 of the regulation), with certain provisions phased in earlier or later, as follows.

Step 1: February 2, 2025 (6 months after entry into force)

Application of:

  • All prohibitions on AI systems presenting unacceptable risks, in order to prevent potential harm.
  • The AI literacy obligation, which requires providers and deployers to ensure that staff who use or maintain AI systems are sufficiently trained.

These provisions apply soon after the regulation’s publication in order to ensure safety and the protection of fundamental rights.

Step 2: August 2, 2025 (12 months after entry into force)

The rules governing general-purpose AI models become applicable, and member states must designate their national competent authorities.

Step 3: August 2, 2026 (24 months after entry into force)

From this date, the following will apply:

  • All provisions of the AI regulation, in particular the rules on high-risk AI systems listed in Annex III (AI systems used in biometrics, critical infrastructure, education, employment, access to essential private and public services, law enforcement, migration, and the administration of justice).
  • The requirement for each member state to have established at least one regulatory sandbox.

Step 4: August 2, 2027 (36 months after entry into force)

This final step applies the rules for high-risk AI systems covered by the EU harmonization legislation listed in Annex I (toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, agricultural vehicles, etc.).

By December 31, 2030, the last transition periods granted to AI systems and general-purpose AI models already on the market will expire.

Preparing for Compliance

To comply with the AI regulation published on July 12, 2024, AI system providers must begin identifying the risks associated with their technologies and ensuring that these respect the fundamental rights of the EU. They must provide transparent documentation on how their systems function and maintain high standards of quality and reliability. Continuous monitoring and regular audits by independent third parties are essential for detecting and correcting deviations.

Professional users, or “deployers,” must now identify and classify their main use cases while anticipating their compliance with the AI Act, as sketched below.
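
As one hedged illustration of what such a use-case inventory might look like in practice (the schema, names, and classifications below are hypothetical examples, not prescribed by the AI Act):

    from dataclasses import dataclass

    # Tiers ordered from most to least regulated, mirroring the AI Act's taxonomy.
    TIERS = ["unacceptable", "high", "limited", "minimal"]

    @dataclass
    class UseCase:
        """One entry in a deployer's AI use-case inventory (hypothetical schema)."""
        name: str
        description: str
        tier: str   # one of TIERS
        owner: str  # team accountable for compliance follow-up

    inventory = [
        UseCase("cv-screening", "Ranks incoming job applications", "high", "HR"),
        UseCase("support-chatbot", "Answers customer questions", "limited", "Support"),
        UseCase("spam-filter", "Filters inbound email", "minimal", "IT"),
    ]

    # Review the most heavily regulated use cases first.
    for uc in sorted(inventory, key=lambda u: TIERS.index(u.tier)):
        print(f"{uc.name}: {uc.tier} risk -- owner: {uc.owner}")

Keeping such an inventory current makes it easier to map each use case to the obligations and deadlines described above.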

By following ethical principles, training staff, and collaborating with authorities, businesses can promote safe, ethical, and innovative AI, while benefiting from the legal certainty offered by the regulation.
