Understanding the AI Act: Key Changes and Compliance Steps

The AI Act Comes into Effect

The European Regulation on Artificial Intelligence (known as the AI Act, or RIA after its French acronym), approved by the Council of the European Union on May 21, 2024, was published in the Official Journal of the EU on July 12, 2024. The regulation aims to ensure that AI systems and models placed on the market in the European Union are ethical, safe, and respectful of fundamental rights.

While upholding the values of the Union enshrined in the Charter of Fundamental Rights of the European Union and protecting individuals, businesses, democracy, the rule of law, and the environment, the RIA explicitly aims to stimulate innovation and employment and to make the EU a leading player in the adoption of trustworthy AI.

A Risk-Based Approach

The Scope of the AI Act

The AI Act applies to AI systems: machine-based systems designed to operate with varying levels of autonomy and that may adapt after deployment. For explicit or implicit objectives, these systems infer from the inputs they receive how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3(1) of the regulation).

Depending on the risk level of the AI system, providers, importers, and professional users (referred to as “deployers”) are subject to different obligations.

Risk Levels of AI Systems

The Regulation on Artificial Intelligence defines four risk levels for AI systems (a brief illustration follows the list):

  • Unacceptable-risk systems, which are prohibited, such as real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except in narrowly defined cases.
  • High-risk systems, which pose risks to health, safety, or fundamental rights and are therefore subject to enhanced requirements.
  • Limited-risk systems, such as chatbots or deepfakes, which are primarily subject to transparency obligations.
  • Minimal-risk systems, which represent the majority of AI systems and are not subject to any specific obligations.
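
For organizations building an internal inventory of their AI systems, these four tiers translate naturally into a small classification structure. The sketch below is purely illustrative and not prescribed by the regulation; the `RiskTier` and `AISystem` names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # enhanced requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

    @property
    def allowed_on_eu_market(self) -> bool:
        # Unacceptable-risk systems may not be placed on the EU market.
        return self.tier is not RiskTier.UNACCEPTABLE


inventory = [
    AISystem("cv-screening", "ranks job applicants", RiskTier.HIGH),
    AISystem("support-bot", "customer-facing chatbot", RiskTier.LIMITED),
]
for system in inventory:
    print(f"{system.name}: {system.tier.value}, marketable={system.allowed_on_eu_market}")
```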

General-purpose AI models are also covered by the regulation. Their providers must comply with several obligations, including transparency and respect for copyright, and are required to publish a sufficiently detailed summary of the content used to train the model.

High-risk AI systems and general-purpose AI models must comply with the requirements of the AI Act. In case of non-compliance, providers (and, to a lesser extent, deployers) face severe penalties: fines of up to EUR 35 million or 7% of the company’s worldwide annual turnover, whichever is higher, as well as the potential withdrawal of AI systems from the European market.
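
As a back-of-the-envelope illustration of that exposure (the turnover figure below is invented for the example), the penalty ceiling for the most serious infringements works out as follows:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Penalty ceiling for the most serious infringements: EUR 35 million
    or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)


# Hypothetical company with EUR 2 billion in turnover: the 7% cap dominates.
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```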

Although the AI Act establishes strict rules and monitoring mechanisms, it is designed to encourage innovation through enhanced legal certainty and controlled experimentation environments. The aim is to ensure safe and ethical AI while stimulating technological development.

Implementation and Gradual Application of the AI Regulation

The regulation entered into force on August 1, 2024, the twentieth day following its publication in the Official Journal of the EU. It will be fully applicable from August 2, 2026 (Article 113 of the regulation), subject to certain exceptions that apply in stages.

Step 1: February 2, 2025 (6 months after entry into force)

Entry into application of:

  • All prohibitions on AI systems presenting unacceptable risks, in order to prevent potential harm.
  • The AI literacy obligation, which requires all concerned parties to train the employees who use or maintain AI systems.

These provisions apply soon after the regulation’s publication to ensure safety and the protection of fundamental rights from the outset.

Step 2: August 2, 2025 (12 months after entry into force)

The rules for general-purpose AI models and the appointment of competent authorities at the member state level will come into effect.

Step 3: August 2, 2026 (24 months after entry into force)

From this date, the following will apply:

  • All provisions of the AI regulation, particularly the application of rules regarding high-risk AI systems outlined in Annex III (AI systems in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and justice administration).
  • The obligation for member states to establish at least one AI regulatory sandbox at national level.

Step 4: August 2, 2027 (36 months after entry into force)

This final step will involve the application of rules concerning high-risk AI systems outlined in Annex I (toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, agricultural vehicles, etc.).

By the end of 2030, the transitional periods granted to AI systems and general-purpose AI models already on the market will have expired.
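
Teams tracking these deadlines may find it useful to encode the schedule directly. The sketch below is a minimal, illustrative deadline checker; the milestone labels are simplified summaries of the steps above, not legal text.

```python
from datetime import date

# Application milestones summarized from the schedule above.
MILESTONES = {
    date(2025, 2, 2): "prohibitions on unacceptable-risk systems; AI literacy",
    date(2025, 8, 2): "general-purpose AI model rules; national authorities",
    date(2026, 8, 2): "full application, incl. Annex III high-risk systems",
    date(2027, 8, 2): "Annex I high-risk systems (regulated products)",
}


def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestone descriptions already applicable on a given date."""
    return [label for day, label in sorted(MILESTONES.items()) if day <= as_of]


print(obligations_in_force(date(2026, 1, 1)))
# ['prohibitions on unacceptable-risk systems; AI literacy',
#  'general-purpose AI model rules; national authorities']
```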

Preparing for Compliance

To comply with the AI regulation published on July 12, 2024, AI system providers must begin identifying the risks associated with their technologies and ensure that these technologies respect fundamental rights in the EU. They must provide transparent documentation on how their systems work and maintain high standards of quality and reliability. Continuous monitoring, complemented by regular audits from independent third parties, is essential to detect and correct deviations.

Professional users, or “deployers,” should already be identifying and classifying their main use cases and anticipating their obligations under the AI Act.
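
A simple first pass is to screen each use case against the Annex III areas listed earlier: any match flags the use case for a detailed high-risk assessment. The tag-matching logic below is a deliberately simplified assumption for illustration, not a legal test.

```python
# Simplified screening against the Annex III areas listed earlier.
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}


def needs_high_risk_review(use_case_tags: set[str]) -> bool:
    """Flag a use case for detailed high-risk assessment if it touches
    any Annex III area; a match triggers review, not a verdict."""
    return bool(use_case_tags & ANNEX_III_AREAS)


print(needs_high_risk_review({"employment", "analytics"}))  # True
print(needs_high_risk_review({"marketing"}))                # False
```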

By following ethical principles, training staff, and collaborating with authorities, businesses can promote safe, ethical, and innovative AI, while benefiting from the legal certainty offered by the regulation.
