AI Act Compliance: Key Challenges and Regulatory Developments

The AI Act establishes a legal framework regulating the use of AI systems within the European Union. Since its adoption, the text has been supplemented by guidelines, harmonized standards, and timeline adjustments intended to facilitate operational implementation by businesses.

As stakeholders navigate the AI landscape, they must contend with a dynamic regulatory framework that, while complex, is essential for the future of responsible innovation.

1. Official Elements Already in Effect

Scope of the Regulation and Key Definitions

The European Commission has published official guidelines to provide a stable legal interpretation of the AI Act. These documents clarify:

  • The definition of an AI system, based on the system’s capacity to infer, generate outputs, or make decisions influencing physical or digital environments;
  • Prohibited AI practices as defined by Regulation (EU) 2024/1689, including activities that infringe on fundamental rights (cognitive manipulation, social scoring, exploitation of vulnerabilities).

These texts serve as stable legal references for interpretation by businesses and national regulatory authorities.

National Governance and Role of Competent Authorities

The AI Act requires each member state to designate at least one notifying authority and at least one market surveillance authority responsible for monitoring its application and enforcement. The regulation leaves member states considerable freedom in organizing these authorities. Their roles include:

  • Market Surveillance Authorities: Oversee AI systems placed on the market or put into service nationally, particularly those classified as high-risk.
  • Notifying Authorities: Build the national conformity-assessment ecosystem by designating notified bodies.

The European Whistleblower Tool

To enhance post-market surveillance, the European Commission has established a secure reporting platform accessible to all concerned parties (employees, users, providers, third parties). Reports can address:

  • Violations of obligations outlined by the AI Act (including risk concerns);
  • Serious incidents posing risks to health, safety, fundamental rights, or the environment.

The platform guarantees anonymous and secure handling, supporting rapid risk prevention and correction.

Position of the Commission on AI Agents

AI agents capable of acting autonomously without continuous human supervision fall fully under the scope of the AI Act. The Commission confirms that:

  • An AI agent may be classified as a high-risk AI system if it meets the criteria of Article 6;
  • Applicable obligations depend on the usage context, particularly in sensitive sectors such as public safety, financial services, or human resource management.

The autonomy of a system is not grounds for exemption; rather, it can aggravate the risks the system poses.

Guidelines for General Purpose AI Models (GPAI)

Providers of general-purpose AI models must comply with enhanced requirements from August 2, 2025. The guidelines clarify:

  • The qualification criteria for GPAI models and those presenting systemic risks;
  • Obligations of providers: risk management, comprehensive technical documentation, risk monitoring;
  • Exemptions and obligations for providers of open-source models.

Exemptions for Open Source Models

Open-source AI models benefit from targeted exemptions, particularly regarding technical documentation, information provision for model integrators, and designation of representatives for non-EU providers. These exemptions apply as long as:

  • The model is distributed under a free and open-source license without direct monetization;
  • Its parameters are public.

However, certain obligations remain, especially concerning copyright and transparency on training data.
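The exemption conditions above can be sketched as simple decision logic. The function below is an illustrative simplification, not legal advice: the function name and the four boolean conditions are assumptions made for readability, and the systemic-risk carve-out reflects the guidelines' position that systemic-risk models do not benefit from the exemption.

```python
def gpai_open_source_exemption_applies(
    open_license: bool,       # distributed under a free and open-source license
    monetized: bool,          # direct monetization of the model
    parameters_public: bool,  # model parameters are publicly available
    systemic_risk: bool,      # model qualifies as presenting systemic risk
) -> bool:
    """Illustrative check of the open-source exemption conditions.

    A simplification for reasoning only: the actual assessment under the
    AI Act is richer than four booleans, and some obligations (copyright,
    training-data transparency) remain even when the exemption applies.
    """
    if systemic_risk:
        return False  # systemic-risk GPAI models do not benefit from the exemption
    return open_license and not monetized and parameters_public
```

Encoding the conditions this way makes the cumulative nature of the test explicit: failing any one condition is enough to lose the exemption.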

Code of Good Practice for General Purpose AI Models

Validated by the European Commission in July 2025, the GPAI Code of Practice serves as a voluntary alignment tool for providers to comply with the AI Act. It consists of three chapters:

  • Transparency: Structuring the information to be provided;
  • Copyright: Compliance with European copyright legislation;
  • Safety and Security: Enhanced requirements for high-impact or systemic risk models.

Although not legally binding, the code serves as a strategic reference for demonstrating a proactive compliance approach.

2. Elements Under Clarification or Consultation

Proposed Code of Practice on Marking and Labeling AI-Generated Content

On December 17, 2025, the European Commission published a draft Code of Practice on marking and labeling AI-generated or manipulated content, as part of implementing Article 50 of the AI Act. This voluntary code aims to assist providers of generative AI and professional deployers in anticipating future transparency obligations, particularly regarding machine-readable marking and labeling of deepfakes. A consultation is open until January 23, 2026, with final adoption expected by June 2026, before legal obligations begin in August 2026.
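Since the draft code is still under consultation, no binding technical format for machine-readable marking exists yet. The sketch below shows one hypothetical way a provider might attach a machine-readable provenance record to generated content as a JSON sidecar; the field names (`ai_generated`, `generator`, `sha256`, `created_at`) are assumptions for illustration, not terms taken from the draft.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content: bytes, generator: str) -> dict:
    """Build a machine-readable provenance record for AI-generated content.

    Hypothetical schema: the AI Act's Article 50 requires machine-readable
    marking but the draft Code of Practice has not fixed a format.
    """
    return {
        "ai_generated": True,  # explicit machine-readable flag
        "generator": generator,  # identifies the producing system
        "sha256": hashlib.sha256(content).hexdigest(),  # binds the record to the exact output
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

record = build_provenance_record(b"example synthetic image bytes", "demo-model-v1")
print(json.dumps(record, indent=2))
```

A production implementation would more likely embed the marker in the content itself (file metadata, a watermark, or a standardized provenance manifest) so that it travels with the asset rather than alongside it.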

Currently Developing Guidelines

Several guidelines are still in development, including:

  • Guidelines on transparency of AI systems subject to specific obligations, expected by mid-2026;
  • Thematic consultations concerning high-risk AI systems, likely extending until August 2026.

Announced but Unpublished Guidelines

The European Commission has announced forthcoming guidelines, including:

  • Practical implementation of the classification of high-risk systems;
  • Specific reporting modalities for incidents by AI system providers;
  • Practical implementation of obligations concerning high-risk system providers and deployers.

Details on these texts are pending finalization.

Harmonized Standards and Technical Challenges

Harmonized standards aim to translate requirements into technical specifications. Some are under consultation, such as:

  • Cybersecurity: As of November 7, 2025, the Commission indicated that the draft standard did not yet provide sufficiently clear and operational specifications to meet Article 15(5) requirements of the AI Act. A revision is underway.
  • Quality Management Systems (QMS): Currently in public inquiry since October 2025.

3. Uncertain Points: Timeline and Proposed Delays

AI Act Timeline and Potential Changes

The initial timeline envisaged compliance obligations for high-risk systems starting in 2026. However, the European Commission recently proposed a targeted delay, providing for:

  • 2026: Entry into force of obligations for systems listed in Annex III;
  • 2027: Entry into force of obligations for systems in Annex I.

This proposal awaits approval by the European Parliament and the EU Council.

Towards Compliance: Key Next Steps

Compliance will not be a one-time exercise but a continuous process. AI stakeholders must monitor regulatory evolutions and take necessary measures to anticipate the application of the AI Act while optimizing security, risk management, and transparency.

Reporting mechanisms, harmonized standards, and codes of good practice will play a central role in this dynamic.

Anticipate Today to Secure Tomorrow

Organizations are encouraged to begin their compliance journey now. With the right technical and regulatory expertise, they can work through the AI Act's requirements, from risk management to demonstrating the compliance of their AI systems.

Contact us for a personalized diagnosis and tailored support to successfully navigate toward AI compliance.
