The EU AI Act: A Practical Guide to Compliance

As artificial intelligence rapidly integrates into our lives, a comprehensive regulatory framework becomes essential. The EU AI Act marks a pivotal step towards responsible AI innovation, but its sweeping changes demand clarity and understanding. This guide to the EU’s landmark AI legislation offers practical insights into the complex obligations it imposes, designed to help businesses and legal professionals fully understand and implement the EU AI Act.

Supervision and Enforcement

The EU AI Act establishes a multi-tiered system for supervision and enforcement, involving both Member State and EU-level bodies. At the Member State level, each country must designate at least one “notifying authority” responsible for assessing and notifying conformity assessment bodies, and at least one Market Surveillance Authority (MSA) to oversee overall compliance. These national competent authorities must be equipped with adequate resources to fulfill their tasks, which include providing guidance, conducting investigations, and enforcing the AI Act’s provisions. The MSAs, in particular, wield significant power under the Market Surveillance Regulation, including the authority to demand information, conduct inspections, issue corrective orders, and ultimately impose penalties for violations.

At the EU level, the AI Office serves as a central body for developing expertise, coordinating implementation, and supervising general-purpose AI models. The AI Office plays a key role in providing guidance and support and acts as a supervisory authority for general-purpose AI models and their providers; it encourages compliance with the AI Act by drafting guidelines that help companies better understand their obligations. A new AI Board, composed of representatives from each Member State, advises and assists the EU Commission and the Member States in ensuring consistent and effective application of the AI Act across the Union. The EU Commission itself retains responsibilities such as preparing documentation, adopting delegated acts, and maintaining the high-risk AI system database.

Enforcement Powers and Penalties

The penalty framework in the AI Act provides for substantial fines scaled to the severity and type of infringement. Non-compliance with the rules on prohibited AI practices can result in fines of up to €35 million or 7% of total worldwide annual turnover. Other breaches may incur penalties of up to €15 million or 3% of turnover, while the supply of incorrect or misleading information can lead to fines of up to €7.5 million or 1% of turnover. In each case, the applicable cap is whichever amount is higher; for SMEs and start-ups, it is whichever is lower. Importantly, these fines apply in addition to any other penalties or sanctions prescribed by Member State laws. This dual-layered approach raises the stakes considerably: companies must account not only for EU-level fines but also for any penalties imposed under the laws of individual Member States.
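
To make the tiered caps concrete, here is a minimal Python sketch of how the maximum fine might be computed for each infringement tier; the tier names and the function are our own illustrative shorthand, not terminology from the Act.

```python
# Illustrative sketch only: computes the maximum possible fine under the
# AI Act's penalty tiers (Article 99), assuming the "whichever is higher"
# rule for large companies and "whichever is lower" for SMEs/start-ups.

PENALTY_TIERS = {
    "prohibited_practice":    (35_000_000, 0.07),  # Art. 5 violations
    "other_obligation":       (15_000_000, 0.03),  # e.g. high-risk requirements
    "misleading_information": (7_500_000, 0.01),   # incorrect info to authorities
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum fine cap for a given infringement tier."""
    fixed_cap, turnover_pct = PENALTY_TIERS[tier]
    turnover_cap = turnover_pct * worldwide_turnover_eur
    # Large undertakings: the higher of the two caps applies.
    # SMEs and start-ups: the lower of the two caps applies.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Example: a large company with EUR 1bn turnover violating a prohibition.
print(f"EUR {max_fine('prohibited_practice', 1_000_000_000):,.0f}")  # EUR 70,000,000
```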

The EU AI Act guide is designed to assist legal professionals.

This guide is designed as a practical resource for in-house legal professionals navigating the complexities of the EU AI Act. It focuses on providing readily applicable information and strategies to help businesses understand and comply with the new regulations. The guide outlines key areas of the AI Act that are most likely to affect businesses, paying close attention to the distinct obligations placed on AI providers and deployers. By structuring its content around the practical compliance steps businesses should consider, it allows legal professionals to efficiently translate the AI Act’s requirements into actionable internal policies and procedures.

Within each section, the guide not only explains the legal requirements imposed by the AI Act, but also explores the practical implications for businesses. A central feature of the guide is its attention to detail in outlining compliance measures businesses may consider taking, thereby bridging the gap between legal understanding and practical implementation. Furthermore, recognizing the intricate relationship between the AI Act and the GDPR, the guide incorporates specific discussion of their interplay, allowing legal professionals to leverage existing compliance programs and potentially streamlining the compliance process.

Practical Compliance Steps and GDPR Interplay

Each section offers dedicated “Practical Compliance Steps” boxes, providing concrete actions businesses can take to meet their obligations. Moreover, “Interplay” boxes highlight the relationship between the AI Act and the GDPR, offering insights into existing GDPR compliance programs that can be adapted for AI Act compliance. Icons are used to visually indicate the level of overlap, including substantial overlap (requiring minimal additional effort), moderate overlap (serving as a starting point), and no overlap (requiring entirely new measures). This approach allows legal professionals to efficiently assess the impact of the AI Act on their organizations and develop appropriate compliance strategies.

The EU AI Act introduces significant regulatory changes for businesses.

The EU AI Act introduces substantial regulatory changes affecting businesses operating within the European Union, as well as those deploying or providing AI systems or models within the EU market, irrespective of their physical location. The Act imposes obligations on various actors in the AI ecosystem, including providers, deployers, importers, and distributors of AI systems. Depending on the type of AI system employed, the Act specifies compliance timelines ranging from 6 to 36 months after its entry into force, requiring in-house legal teams to swiftly assess the Act’s potential impact and develop effective implementation strategies. Failure to proactively address these new obligations may leave businesses unprepared, potentially leading to resource constraints during the compliance implementation phase.

The impact on businesses hinges on their role as providers or deployers of particular types of AI systems and/or general-purpose AI models. Certain AI systems are now deemed prohibited outright due to their inherent risks to fundamental rights. Other AI systems, like those used in critical infrastructure, employment, or public services, are classified as high-risk and are thus subject to rigorous requirements detailed in the AI Act. Furthermore, the Act includes regulations specific to general-purpose AI models depending on whether they are deemed to present a systemic risk. Businesses must therefore carry out a thorough analysis of their AI systems to determine whether they are subject to the regulations. This includes conducting appropriate risk assessments, implementing technical safeguards, ensuring human oversight, and maintaining transparent documentation.

Practical Obligations for All Businesses

Beyond the sector-specific and risk-tiered requirements, the EU AI Act mandates a fundamental change for all businesses, irrespective of their specific involvement with AI development or deployment, by establishing AI literacy obligations. Article 4 stipulates that both providers and deployers of AI systems must ensure an adequate level of AI literacy amongst their personnel, taking into account their respective roles and the intended use of AI systems. This includes but is not limited to providing ongoing training and education to staff members. The intention is to foster an understanding of AI technologies, their potential risks, and the regulatory requirements imposed by the AI Act. This ensures responsible development, deployment, and use of AI systems, safeguarding health, safety, and fundamental rights.

Key milestones of the AI Act timeline are clearly defined.

The EU AI Act’s journey from proposal to enforcement has been marked by clearly defined milestones. The European Commission formally published the AI Act proposal on April 21, 2021, laying out the framework for AI regulation; the accompanying public consultation closed on August 6, 2021, setting the stage for legislative action. The Council adopted its common position/general approach on December 6, 2022, indicating alignment among Member States, and the EU Parliament adopted its negotiating position on June 14, 2023, signaling its priorities for the Act. Crucially, on December 9, 2023, the EU Parliament and the Council reached a provisional agreement. These milestones reflect the collaborative and iterative nature of EU lawmaking and gave all involved parties a structured path toward AI governance.

Implementation and Enforcement Dates

After political agreement was reached, formal steps towards implementation included the launch of the AI Office on February 21, 2024. The EU Parliament gave its final approval on March 13, 2024, followed by the Council’s endorsement on May 24, 2024. The AI Act was published in the Official Journal of the EU on July 12, 2024, before formally entering into force on August 1, 2024. The different AI system categories have staggered application dates, allowing businesses to adjust strategically. The prohibitions on certain AI practices and the AI literacy obligations took effect on February 2, 2025. General-purpose AI model obligations took effect on August 2, 2025, followed by most remaining obligations (including those for Annex III high-risk AI systems) two years after the Act’s entry into force, on August 2, 2026. Obligations for high-risk AI systems covered by Annex I take effect on August 2, 2027. These deadlines provide a phased approach tailored to providers and deployers, depending on the type of AI system.
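
For planning purposes, the staggered application dates can be captured in a simple lookup table. The sketch below is illustrative only; the category keys are our own shorthand, and the dates reflect those listed above.

```python
# Illustrative sketch: a lookup of the AI Act's staggered application dates,
# keyed by obligation category. Category names are our shorthand, not terms
# from the Act itself.
from datetime import date

APPLICATION_DATES = {
    "prohibited_practices": date(2025, 2, 2),  # Art. 5 bans + AI literacy (Art. 4)
    "gpai_obligations":     date(2025, 8, 2),  # general-purpose AI models
    "annex_iii_high_risk":  date(2026, 8, 2),  # most remaining obligations
    "annex_i_high_risk":    date(2027, 8, 2),  # high-risk systems in regulated products
}

def deadline_for(category: str) -> date:
    """Return the application date for an obligation category."""
    return APPLICATION_DATES[category]

print(deadline_for("annex_iii_high_risk"))  # 2026-08-02
```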

Understanding the scope of the AI Act is essential for businesses.

The AI Act’s territorial scope extends beyond the EU’s physical borders, impacting businesses globally. The territorial applicability is determined by three key criteria: the location of the entity, the market placement of the AI system or model, and the geographic usage of the AI system’s output. If a business is established or located within the EU and deploys an AI system, the AI Act applies. Furthermore, the Act encompasses providers—irrespective of their location—that place AI systems on the EU market or put them into service within the EU. This includes AI systems employed by product manufacturers under their own brand. Critically, the AI Act also targets both providers and deployers whose AI systems’ outputs are used within the EU, regardless of their establishment location, and protects individuals within the EU affected by the use of AI systems. This broad scope necessitates that businesses, irrespective of their geographic base, meticulously ascertain their obligations under the AI Act to ensure compliance, avoid potential penalties, and maintain operational integrity within the EU market.
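
As a rough illustration of how these three territorial triggers combine, consider the following sketch; the field names and the simplified any-of-three logic are our own assumptions, and a real scope assessment requires legal analysis.

```python
# A minimal sketch of the three territorial criteria described above.
# Field names are illustrative; this is not a substitute for legal review.
from dataclasses import dataclass

@dataclass
class Actor:
    established_in_eu: bool    # entity established or located in the EU
    places_on_eu_market: bool  # AI system/model placed on the EU market
                               # or put into service within the EU
    output_used_in_eu: bool    # the AI system's output is used in the EU

def ai_act_may_apply(actor: Actor) -> bool:
    """True if any of the three territorial triggers is met."""
    return (actor.established_in_eu
            or actor.places_on_eu_market
            or actor.output_used_in_eu)

# A non-EU provider whose system's output is used in the EU is still in scope.
print(ai_act_may_apply(Actor(False, False, True)))  # True
```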

Beyond the territorial scope, understanding the AI Act’s personal and material scope is crucial. The Act’s primary targets are providers and deployers of AI systems, each carrying distinct responsibilities. However, the definition of ‘provider’ can extend to importers, distributors, and even product manufacturers under certain conditions. Specifically, if these entities affix their branding to a high-risk AI system, substantially modify it, or alter its intended purpose such that it becomes high-risk, they assume the obligations of a provider. Nonetheless, the Act carves out several exceptions, such as for deployers who are natural persons using AI systems for purely personal, non-professional activities, and for AI systems developed solely for scientific research and development. The Act’s staged temporal scope also shapes compliance planning: duties are phased in between six and thirty-six months from the Act’s entry into force, contingent upon the AI system’s type. High-risk AI systems already on the market before August 2, 2026, only fall under the Act’s purview following significant changes to their design. This complexity underscores the need for businesses to carefully assess their roles and applicable timelines for comprehensive compliance planning.

Definitions of crucial concepts within the AI Act are provided.

The AI Act hinges on a set of precisely defined concepts that determine its scope and application. The term “AI system” is defined as a machine-based system designed to operate with varying levels of autonomy, which may adapt after deployment and which, for explicit or implicit objectives, infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This broad definition captures a wide range of technologies, necessitating careful analysis to determine whether a particular system falls within the Act’s purview. “Intended purpose” is also critical; it refers to the use for which an AI system is designed by the provider, including the specific context and conditions of use, as detailed in the instructions for use, promotional materials, and technical documentation. This definition emphasizes the provider’s role in defining the scope of acceptable uses and highlights the importance of clear and accurate communication to deployers. Understanding the “reasonably foreseeable misuse” of an AI system, defined as use not aligned with its intended purpose but resulting from reasonably foreseeable human behavior or interaction with other systems, is also a crucial element framing risk management and liability concerns.

Further refining the legal landscape are concepts related to market availability. “Making available on the market” refers to the supply of an AI system for distribution or use within the EU, whether for payment or free of charge, in the course of a commercial activity. “Placing on the market” denotes the first instance of making an AI system available within the EU. “Putting into service” signifies the supply of an AI system for its first use directly to the deployer, or for the provider’s own use, within the EU in accordance with its intended purpose. These definitions clarify the point at which various obligations under the AI Act are triggered, particularly for providers and importers. Finally, the Act carefully defines what constitutes a “serious incident,” which is subject to specific reporting obligations: it includes events such as death, serious harm to health, major infrastructure disruptions, violations of fundamental rights, or significant property or environmental damage caused by an AI system malfunction. These definitions establish a framework for assessing risk and determining corresponding regulatory obligations within the AI ecosystem.
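
The “serious incident” categories lend themselves to a simple internal triage check, sketched below; the enum labels paraphrase the Act’s wording and are not statutory text.

```python
# Hedged sketch: enumerating the "serious incident" categories named above
# to drive an internal reporting-triage check.
from enum import Enum, auto

class IncidentKind(Enum):
    DEATH = auto()
    SERIOUS_HEALTH_HARM = auto()
    CRITICAL_INFRASTRUCTURE_DISRUPTION = auto()
    FUNDAMENTAL_RIGHTS_VIOLATION = auto()
    SERIOUS_PROPERTY_OR_ENVIRONMENTAL_DAMAGE = auto()
    OTHER = auto()

def is_serious_incident(kind: IncidentKind) -> bool:
    """Flag incidents that plausibly trigger the Act's reporting duties."""
    return kind is not IncidentKind.OTHER

print(is_serious_incident(IncidentKind.FUNDAMENTAL_RIGHTS_VIOLATION))  # True
```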

Practical applications for businesses are outlined.

The EU AI Act introduces a new paradigm for businesses developing, deploying, or using AI systems. Understanding the obligations arising from this legislation is crucial for strategic planning and risk mitigation. By recognizing the specific roles and responsibilities assigned to providers and deployers of AI, businesses can proactively address the compliance requirements and integrate them into their existing frameworks. This section provides a detailed overview of practical applications for businesses and a systematic breakdown of measures that can be taken for practical compliance.

Businesses should begin by conducting a comprehensive AI audit to identify all AI systems in use and classify them based on their risk level. This involves determining whether each AI system is prohibited, high-risk, or subject only to transparency requirements. Next, AI systems should be further classified based on whether the business qualifies as an AI provider or an AI deployer for each system. Understanding the interplay between provider and deployer responsibilities is essential, as organizations may assume both roles. This careful classification delineates the precise obligations that apply and informs the specific strategies required for each organization’s unique context.
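
One lightweight way to structure such an audit is a simple inventory of systems, risk tiers, and roles, as in the following sketch; the tier and role labels are our shorthand for the Act’s categories.

```python
# A minimal inventory sketch for the audit step described above: record each
# AI system, its risk tier, and the organization's role for that system.
from dataclasses import dataclass
from typing import Literal

RiskTier = Literal["prohibited", "high_risk", "transparency_only", "minimal"]
Role = Literal["provider", "deployer", "both"]

@dataclass
class AISystemRecord:
    name: str
    tier: RiskTier
    role: Role
    notes: str = ""

inventory = [
    AISystemRecord("cv-screening-tool", "high_risk", "deployer",
                   "Annex III: employment use case"),
    AISystemRecord("marketing-chatbot", "transparency_only", "provider"),
]

# Surface the systems that carry the heaviest obligations first.
for rec in sorted(inventory, key=lambda r: r.tier != "high_risk"):
    print(rec.name, rec.tier, rec.role)
```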

Provider Obligations

High-risk AI system providers must take a proactive and holistic approach. They must establish a comprehensive risk management system to identify and mitigate reasonably foreseeable risks throughout the AI system’s lifecycle, covering data quality, technical documentation, record-keeping, and corrective actions. They must also ensure transparency towards deployers and define human oversight measures. Post-market monitoring and incident management are likewise essential for providers. Transparency requirements extend to informing individuals when they are interacting with an AI system and to marking AI-generated content accordingly. Providers of general-purpose AI models must additionally put in place a policy to comply with EU copyright law. Complying with harmonized standards helps demonstrate conformity with the AI Act’s requirements.
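
As one hypothetical approach to machine-readable marking of AI-generated content, a provider might attach provenance metadata to each output, as sketched below; the Act requires marking in a machine-readable format but does not prescribe this particular scheme or these field names.

```python
# Illustrative only: one way a provider might attach a machine-readable
# "AI-generated" marker to content metadata. Field names are hypothetical.
import json
from datetime import datetime, timezone

def mark_ai_generated(content: str, model_name: str) -> str:
    """Wrap content with provenance metadata flagging it as AI-generated."""
    return json.dumps({
        "content": content,
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(mark_ai_generated("Draft product description...", "example-model-v1"))
```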

Deployer Obligations

Deployers must ensure alignment with the provider’s instructions for use and assign competent human oversight. If a deployer has control over input data, it must ensure the data is appropriate. Deployers must monitor AI systems and report incidents appropriately. When deploying AI systems in the workplace, transparency with employees is mandatory. Moreover, transparency requirements extend to informing individuals when they are interacting with an AI system and when it is used to assist in making decisions that affect them. In some instances, deployers must also carry out fundamental rights impact assessments or otherwise make their AI-related data processing practices clear.

Practical steps all businesses can take to comply with the law are offered.

The EU AI Act mandates several practical steps for all businesses, regardless of their specific role as providers or deployers of AI systems. A foundational requirement is achieving a sufficient level of AI literacy among staff and relevant personnel. This necessitates measures to ensure that individuals involved in the operation and use of AI systems possess the necessary technical knowledge, experience, education, and training. The appropriate level of literacy will depend on the context in which the AI systems are used and the potential impact on individuals or groups. Therefore, businesses must actively invest in AI literacy programs tailored to different roles and responsibilities within the organization, promoting an understanding of both the potential benefits and the inherent risks associated with AI technologies.

Businesses should evaluate which AI systems they are providing or deploying and make sure that their staff are adequately prepared to work with these systems. A robust AI training program, kept up to date, will be essential to comply with this provision. Existing GDPR education and training programs may be leveraged but will likely require updates or new processes (some technical) to ensure compliance with the AI Act’s specific data quality criteria for training, validation, and testing data used in the development of AI systems. Businesses may also be able to leverage certain GDPR accountability efforts when preparing technical documentation, such as the description of applicable cybersecurity measures. The logging requirements partially overlap with the data security requirements and best practices under the GDPR, but are more specific and require technical implementation; organizations should expect to develop technical capabilities and dedicated processes to comply. Providers will also need to account for log retention periods in their data protection retention schedules.
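
A minimal sketch of log retention handling follows, assuming the Act’s floor of at least six months for automatically generated logs of high-risk systems (subject to other applicable EU or national law); the class design itself is purely illustrative.

```python
# Hedged sketch: retain automatically generated logs for at least the
# statutory minimum of six months; the default below is that floor, not a
# recommendation, and longer periods may be required by other law.
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # at least six months

class EventLog:
    def __init__(self, retention: timedelta = MIN_RETENTION):
        # Never allow a retention period below the statutory minimum.
        self.retention = max(retention, MIN_RETENTION)
        self.entries: list[tuple[datetime, str]] = []

    def record(self, event: str) -> None:
        self.entries.append((datetime.now(timezone.utc), event))

    def purge_expired(self) -> None:
        cutoff = datetime.now(timezone.utc) - self.retention
        self.entries = [(t, e) for t, e in self.entries if t >= cutoff]

log = EventLog()
log.record("high-risk system inference: decision_id=42")
log.purge_expired()
print(len(log.entries))  # 1
```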

Practical obligations for businesses acting as providers under the AI Act are presented.

The EU AI Act places significant practical obligations on businesses that are considered “providers” of AI systems. These obligations vary based on the risk level of the AI system, with the most stringent requirements applied to high-risk AI systems. Providers are responsible for ensuring that their AI systems comply with the AI Act’s requirements before placing them on the market or putting them into service. The obligations cover a broad spectrum, including risk management, data governance, technical documentation, transparency, human oversight, and cybersecurity. A core theme is the implementation of robust quality management systems (QMS) to ensure continuous compliance throughout the AI system’s lifecycle. Failure to meet these obligations can result in substantial fines and reputational damage, emphasizing the need for a proactive and comprehensive approach to AI governance.

For high-risk AI systems, providers must establish a risk management system to identify, analyze, and mitigate potential risks to health, safety, and fundamental rights. This involves data quality controls, rigorous testing, and cybersecurity measures. The AI Act mandates the creation of technical documentation detailing the system’s design, functionality, and performance, which must be comprehensive and continuously updated to reflect any changes or modifications to the AI system. Providers must also ensure adequate transparency by supplying clear and accessible instructions for use, detailing the AI system’s characteristics, limitations, and potential risks. To further strengthen user trust and reliability, providers must build in appropriate levels of human oversight, allowing operators to overrule the system when required.
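
The following sketch shows one hypothetical human-override hook, in which a reviewer can confirm or overrule the system’s output before it takes effect; the Act requires effective oversight measures but does not mandate any particular design.

```python
# Hedged sketch of a human-override hook: the system's output is only acted
# on after a human reviewer confirms or overrides it.
from typing import Callable, Optional

def decide_with_oversight(model_output: str,
                          reviewer: Callable[[str], Optional[str]]) -> str:
    """Return the reviewer's override if given, otherwise the model output."""
    override = reviewer(model_output)
    return override if override is not None else model_output

# Example reviewer that blocks automatic denials (guarding against
# automation bias by forcing a manual check).
def cautious_reviewer(output: str) -> Optional[str]:
    return "needs manual review" if output == "deny" else None

print(decide_with_oversight("deny", cautious_reviewer))  # needs manual review
```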

General-Purpose AI Models

For general-purpose AI models, providers must comply with the key conditions in Article 53. First, they must keep complete and up-to-date technical documentation for the model, including details of its training, evaluation, and testing. Relevant documentation and information must also be passed along to providers of AI systems that intend to integrate the general-purpose AI model. Providers must additionally implement policies to comply with EU copyright law and publish summaries of the content used to train the model. General-purpose AI models that present systemic risk are subject to more stringent obligations, including assessment and mitigation of potential systemic risks at the EU level, adequate cybersecurity protection for the model, and reporting relevant information about serious incidents and possible corrective measures without undue delay.
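
These Article 53 documentation duties can be tracked internally with a simple record like the one sketched below; the field names are illustrative assumptions, not terms drawn from the Act.

```python
# A sketch of the documentation duties described above, modeled as a record
# a provider might maintain and share with downstream integrators.
from dataclasses import dataclass

@dataclass
class GPAIModelDocs:
    model_name: str
    training_summary: str          # public summary of training content
    evaluation_notes: str          # relevant evaluation and testing details
    copyright_policy_url: str      # policy for complying with EU copyright law
    downstream_guidance: str = ""  # information for integrating AI systems
    systemic_risk: bool = False    # triggers the stricter systemic-risk regime

    def required_sections_complete(self) -> bool:
        # All core documentation sections must be non-empty.
        return all([self.training_summary, self.evaluation_notes,
                    self.copyright_policy_url])

docs = GPAIModelDocs("example-gpai-v1", "Summary of data sources...",
                     "Benchmark results...", "https://example.com/copyright")
print(docs.required_sections_complete())  # True
```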

Practical obligations for businesses acting as deployers under the AI Act are described.

The AI Act places several practical obligations on deployers of AI systems, particularly those classified as high-risk. One of the foremost requirements is adherence to the provider’s instructions for use. Deployers must implement appropriate technical and organizational measures to ensure that high-risk AI systems are operated in accordance with the instructions provided by the system’s provider. This entails a thorough understanding of the system’s capabilities, limitations, and intended purpose, as well as the implementation of controls to prevent misuse or deviation from the prescribed operational parameters. To facilitate compliance, deployers should maintain a register of the high-risk systems in use, along with the corresponding instructions. Assigning responsibility for monitoring compliance within the organization can also significantly bolster accountability. Moreover, it is crucial for deployers to stay informed of any updates or changes to the instructions through established communication channels with the AI system providers.

Beyond adherence to provider instructions, deployers are obligated to implement human oversight. This involves assigning individuals with the necessary competence, training, authority, and support to oversee the use of high-risk AI systems. These individuals must possess a comprehensive understanding of the system’s capacities and limitations, remain vigilant against automation bias, accurately interpret the system’s outputs, and have the authority to disregard, override, or reverse the system’s output when necessary. Furthermore, deployers who exercise control over the input data for high-risk AI systems must ensure that the data is relevant and sufficiently representative in light of the system’s intended purpose. This necessitates a pre-screening process to evaluate the data’s fitness, including its accuracy and freedom from bias, so the system operates effectively and fairly. Continuing obligations for deployers include ongoing monitoring of the AI systems and, in some cases, completing fundamental rights impact assessments.
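
As an illustration of pre-screening input data for fitness, the sketch below rejects batches with too many missing required fields; the specific checks and the 5% tolerance are assumptions for the example, not thresholds from the Act.

```python
# Illustrative pre-screening sketch for deployers who control input data:
# a basic completeness check before feeding data to a high-risk system.
def prescreen_inputs(records: list[dict], required_fields: list[str],
                     max_missing_ratio: float = 0.05) -> bool:
    """Reject batches with missing required fields above a set tolerance."""
    if not records:
        return False
    missing = sum(1 for r in records
                  if any(r.get(f) in (None, "") for f in required_fields))
    return missing / len(records) <= max_missing_ratio

batch = [{"age": 34, "role": "analyst"}, {"age": None, "role": "engineer"}]
print(prescreen_inputs(batch, ["age", "role"]))  # False: 50% missing > 5%
```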

Specific Transparency Requirements

The Act also includes transparency requirements for deployers using systems such as emotion recognition and biometric categorization, and those that generate or manipulate images, audio, video, or text. Article 50(5) of the AI Act requires that this information be provided to the individuals concerned in a clear and distinguishable manner, at the latest at the time of the first interaction or exposure.
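
A minimal sketch of the first-interaction disclosure might look as follows; the wording and flow are illustrative, not prescribed by the Act.

```python
# Illustrative sketch: the user is told they are interacting with an AI
# system at the latest before the first response is shown.
class DisclosingChatbot:
    DISCLOSURE = "You are chatting with an AI system, not a human."

    def __init__(self):
        self._disclosed = False

    def reply(self, user_message: str) -> str:
        prefix = ""
        if not self._disclosed:  # at the latest on the first interaction
            prefix = self.DISCLOSURE + "\n"
            self._disclosed = True
        return prefix + f"Echo: {user_message}"  # placeholder response logic

bot = DisclosingChatbot()
print(bot.reply("Hello"))  # disclosure shown once, up front
```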

The framework for supervision and enforcement of the AI Act is established.

The supervision and enforcement of the AI Act will be managed at both the Member State and EU levels, with distinct roles and responsibilities assigned to various authorities. At the Member State level, each country is required to designate at least one “notifying authority” responsible for assessing and monitoring conformity assessment bodies, and at least one “Market Surveillance Authority” (MSA) to oversee compliance by providers, deployers, and other entities within the AI value chain. The MSAs are granted extensive powers of market surveillance, investigation, and enforcement, including the ability to require information, conduct on-site inspections, issue compliance orders, take corrective actions, and impose penalties. Member States are also mandated to establish a single point of contact for the AI Act to facilitate communication and coordination. All businesses subject to the AI Act must cooperate fully with these national competent authorities.

At the EU level, the AI Office, established by the EU Commission, plays a key role in developing expertise, contributing to the implementation of EU law on AI, and overseeing compliance in respect of general-purpose AI models. The AI Board, composed of representatives from each Member State, advises and assists the EU Commission and Member States to ensure consistent and effective application of the AI Act. The EU Commission also has a number of responsibilities, such as preparing documentation, adopting delegated acts, and managing the database for high-risk AI systems. The enforcement powers available to the MSAs include significant fines for non-compliance, structured according to the severity of the infringement, with the highest penalties applicable to prohibited AI practices. Moreover, individuals and organizations have the right to submit complaints to their national MSA if they believe an infringement of the AI Act has occurred, furthering the Act’s accountability mechanisms.

A glossary of terms used in the AI Act is compiled.

The EU AI Act introduces numerous technical and legal terms. Understanding these definitions is crucial for businesses to comply with the Act’s provisions. While a complete glossary is beyond the scope of this section, key terms like “AI system,” “general-purpose AI model,” “intended purpose,” “reasonably foreseeable misuse,” “making available on the market,” “placing on the market,” “putting into service,” and “serious incident” are central to interpreting and applying the Act’s requirements. These definitions, as laid out in Article 3 of the AI Act, delineate the boundaries of its scope and the obligations of various actors. Consistent interpretation of these terms across the EU is necessary for uniform application and enforcement of the AI Act, ensuring a harmonized market for AI technologies.

Distinguishing between key actors such as “providers” and “deployers” is also essential, as each role carries distinct responsibilities under the AI Act. A provider is generally the entity that develops or places an AI system on the market, while a deployer uses the AI system in its operations. The responsibilities of each, articulated throughout the Act, are intrinsically linked. For instance, clear communication of the intended purpose of an AI system from the provider to the deployer is crucial for the deployer to use the system in compliance with both the provider’s instructions and the Act’s broader requirements. Furthermore, the concept of “intended purpose” is itself a defined term, emphasizing the importance of interpreting AI systems and their use in accordance with the information supplied by the provider.

Key Definitions and Concepts

To promote AI literacy and facilitate further clarity, the EU Commission has published guidelines that further specify the definition of “AI system” under the AI Act. In addition to the precise delineation provided by these definitions, a broader understanding of the underlying concepts—like the risk-based approach, transparency, and human oversight—is important. Without a firm grasp of these critical definitions and concepts, compliance with the AI Act remains an elusive goal. Regularly reviewing evolving guidance from the EU Commission, the AI Office, and the AI Board will be instrumental in maintaining an accurate interpretation of the Act’s terms and definitions.

Contact sections are listed for leadership members and the EU/UK teams.

This document culminates with comprehensive contact sections designated for both leadership personnel and members of the EU/UK teams. These sections serve as essential resources for individuals seeking specific expertise or assistance related to AI Act compliance. By providing direct access to key individuals within the respective teams, the document aims to facilitate clear and efficient communication, enabling stakeholders to navigate the complexities of AI regulation with greater ease. Proper communication is paramount in this nascent phase of regulatory application, as authorities and organizations alike begin to grapple with the practical implications of the EU AI Act.

The inclusion of detailed contact information for leadership members underscores the importance of executive oversight in AI Act compliance. These individuals often possess the strategic vision and decision-making authority necessary to guide organizations through the regulatory landscape. Meanwhile, the listing of EU/UK team members recognizes the distinct regional nuances of AI regulation, acknowledging that compliance strategies may need to be tailored to specific jurisdictions. This duality ensures that stakeholders have access to both high-level strategic guidance and boots-on-the-ground expertise relevant to their particular circumstances. Taken together, these sections prioritize both high-level and practical information to guide users through their legal and implementation questions regarding the Act.

Navigating the complexities of the EU AI Act requires a proactive and informed approach. Businesses must swiftly assess their role in the AI ecosystem, understanding whether they act as providers, deployers, or both, and meticulously classify their AI systems based on risk. This classification is not merely a bureaucratic exercise; it’s the foundation upon which effective compliance strategies are built. Investing in AI literacy programs is no longer optional, but a fundamental requirement to ensure responsible AI development, deployment, and utilization. Ultimately, success hinges on integrating these new obligations into existing frameworks, fostering a culture of transparency, and prioritizing the safeguarding of fundamental rights in the age of artificial intelligence.
