Understanding the AI Act: Key Changes and Compliance Steps

The AI Act Comes into Effect

The European Regulation on Artificial Intelligence (the AI Act, also known by its French acronym RIA), approved by the Council of the European Union on May 21, 2024, was published in the Official Journal of the EU on July 12, 2024. The regulation aims to ensure that AI systems and models placed on the European Union market are used ethically, safely, and in a way that respects fundamental rights.

While upholding the values of the Union enshrined in the Charter of Fundamental Rights of the European Union, and protecting individuals, businesses, democracy, the rule of law, and the environment, the RIA's explicit goal is to stimulate innovation and employment and to make the EU a leading player in the adoption of trustworthy AI.

A Risk-Based Approach

The Scope of the AI Act

The AI Act applies to AI systems: machine-based systems designed to operate with varying levels of autonomy and capable of adapting after deployment. Working toward explicit or implicit objectives, these systems infer from the inputs they receive how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments (Article 3(1) of the regulation).

Depending on the risk level of the AI system, providers, importers, and professional users (referred to as "deployers") will be subject to various obligations.

Risk Levels of AI Systems

The Regulation on Artificial Intelligence defines four risk levels for AI systems:

  • Unacceptable-risk systems, which are prohibited, such as the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in narrowly defined cases.
  • High-risk systems, those that pose significant risks to health, safety, or fundamental rights, which are subject to enhanced requirements.
  • Limited-risk systems, such as chatbots or deepfakes, which are primarily subject to transparency obligations.
  • Minimal-risk systems, which represent the majority of AI systems and are not subject to any specific obligations.
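
To make the four tiers concrete, here is a minimal sketch, in Python, of how an organization might tag entries in an internal AI inventory by risk tier. The tier names mirror the list above, but the RiskTier enum, the UseCase record, and the sample entries are purely illustrative assumptions, not anything defined by the regulation, and real classification requires legal analysis.

    from dataclasses import dataclass
    from enum import Enum, auto

    class RiskTier(Enum):
        """The four AI Act risk tiers described above."""
        UNACCEPTABLE = auto()  # prohibited practices
        HIGH = auto()          # enhanced requirements apply
        LIMITED = auto()       # transparency obligations (e.g. chatbots, deepfakes)
        MINIMAL = auto()       # no specific obligations

    @dataclass
    class UseCase:
        """Hypothetical entry in an internal AI-system inventory."""
        name: str
        tier: RiskTier

    # Illustrative entries only; actual tiers depend on legal analysis.
    inventory = [
        UseCase("CV-screening assistant", RiskTier.HIGH),       # employment use
        UseCase("Customer-support chatbot", RiskTier.LIMITED),  # transparency duty
        UseCase("Spam filter", RiskTier.MINIMAL),
    ]

    # Flag anything recorded as an outright prohibited practice.
    prohibited = [uc.name for uc in inventory if uc.tier is RiskTier.UNACCEPTABLE]
    print(prohibited or "No prohibited use cases recorded.")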

General-purpose AI models are also regulated. Their providers must comply with several obligations, including transparency and respect for copyright, and are required to publish a sufficiently detailed summary of the content used to train the model.

High-risk AI systems and general-purpose AI models must now comply with the requirements of the AI Act. In case of non-compliance, providers (and, to a lesser extent, deployers) face severe penalties: fines of up to 7% of worldwide annual turnover for the most serious infringements, as well as the possible withdrawal of AI systems from the European market.

Although the AI Act establishes strict rules and monitoring mechanisms, it is designed to encourage innovation through enhanced legal certainty and controlled experimentation environments (regulatory sandboxes). The aim is to ensure safe and ethical AI while stimulating technological development.

Implementation and Gradual Application of the AI Regulation

The regulation entered into force on the twentieth day following its publication in the Official Journal of the EU, that is, on August 1, 2024. It will become fully applicable from August 2, 2026 (Article 113 of the regulation), subject to the exceptions staged below.

Step 1: February 2, 2025 (6 months after entry into force)

The following provisions become applicable:

  • All prohibitions on AI systems presenting unacceptable risks, to prevent potential harm.
  • The AI literacy obligation: providers and deployers must ensure that the staff who use or maintain AI systems are adequately trained.

These provisions apply soon after the regulation's publication to ensure safety and the protection of fundamental rights.

Step 2: August 2, 2025 (12 months after entry into force)

The rules governing general-purpose AI models become applicable, and member states must have designated their competent national authorities.

Step 3: August 2, 2026 (24 months after entry into force)

From this date, the following will apply:

  • All provisions of the AI regulation, particularly the application of rules regarding high-risk AI systems outlined in Annex III (AI systems in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration, and justice administration).
  • The obligation for each member state to have established at least one operational AI regulatory sandbox.

Step 4: August 2, 2027 (36 months after entry into force)

This final step brings into application the rules on high-risk AI systems embedded in products covered by the EU harmonization legislation listed in Annex I (toys, radio equipment, in vitro diagnostic medical devices, civil aviation safety, agricultural vehicles, etc.).

By the end of 2030, grace periods for existing AI systems and general-purpose AI models will expire.
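
Because the obligations arrive in stages, a date-lookup sketch can help compliance teams see what already applies at any point. The dates below are taken from the timeline above; the MILESTONES table and the applicable_milestones helper are hypothetical names invented for this illustration, not anything defined by the regulation.

    from datetime import date

    # Key application dates from the AI Act timeline above.
    MILESTONES = {
        date(2025, 2, 2): "Prohibitions on unacceptable-risk systems; AI literacy duty",
        date(2025, 8, 2): "Rules for general-purpose AI models; national authorities in place",
        date(2026, 8, 2): "Full application, incl. Annex III high-risk systems and sandboxes",
        date(2027, 8, 2): "Rules for Annex I high-risk systems",
    }

    def applicable_milestones(today: date) -> list[str]:
        """Return the obligations already applicable on the given date."""
        return [label for d, label in sorted(MILESTONES.items()) if d <= today]

    # Example: in early 2026, the first two milestones apply, but not yet
    # the full August 2026 package.
    for label in applicable_milestones(date(2026, 1, 1)):
        print(label)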

Preparing for Compliance

To comply with the AI regulation published on July 12, 2024, AI system providers must begin identifying the risks associated with their technologies and ensuring that they respect the fundamental rights protected in the EU. They must provide transparent documentation on how their systems function and maintain high standards of quality and reliability. Continuous monitoring and regular audits by independent third parties are essential to detect and correct deviations.
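
As a sketch only, the provider duties just described could be tracked in a simple checklist structure; the ProviderChecklist class and its field names are invented for this illustration and do not mirror any official template.

    from dataclasses import dataclass

    @dataclass
    class ProviderChecklist:
        """Illustrative tracker for the provider duties described above."""
        risks_identified: bool = False        # risk identification and assessment
        documentation_ready: bool = False     # transparent technical documentation
        quality_standards_met: bool = False   # quality and reliability standards
        monitoring_in_place: bool = False     # continuous monitoring
        audited_by_third_party: bool = False  # regular independent audits

        def outstanding(self) -> list[str]:
            """List the duties not yet satisfied."""
            duties = {
                "risk identification": self.risks_identified,
                "documentation": self.documentation_ready,
                "quality standards": self.quality_standards_met,
                "monitoring": self.monitoring_in_place,
                "independent audit": self.audited_by_third_party,
            }
            return [name for name, done in duties.items() if not done]

    # Example: only risk identification done so far; four duties remain.
    print(ProviderChecklist(risks_identified=True).outstanding())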

Professional users, or "deployers," must now identify and classify their main AI use cases and anticipate the obligations the AI Act will place on them.

By following ethical principles, training staff, and collaborating with authorities, businesses can promote safe, ethical, and innovative AI, while benefiting from the legal certainty offered by the regulation.
