AI Act and Harmonized Standards: Progress and Implementation Insights

The AI Act adopts a risk-based approach: the greater the risk an AI system poses to health, safety, or fundamental rights, the stricter the legal obligations it must meet. This tiered logic forms the foundation of the new European trust framework for AI.

To operationalize these obligations, the regulation combines two complementary levels:

  • On one side, the European AI Regulation (AI Act) defines the essential requirements that AI systems must meet, particularly concerning safety and quality.
  • On the other side, it refers to technical specifications and rules, known as harmonized standards, which detail the concrete implementation of these requirements and, where possible, translate certain qualitative concepts (such as accuracy) into measurable criteria.

The relationship between regulatory requirements and harmonized standards is thus the central mechanism allowing stakeholders to demonstrate compliance with the obligations of the AI Act.

What Are Harmonized Standards?

According to Article 2(1)(c) of Regulation (EU) No 1025/2012, a harmonized standard is a European standard adopted on the basis of a request made by the Commission for the application of Union harmonization legislation. The implementation of the AI Act therefore necessarily relies on the development of such standards.

Currently under development, these standards will play a key role in the implementation of the AI Act. They will define:

  • How to identify and manage risks related to AI
  • How to establish and operate an effective quality management system
  • How to measure the accuracy and other relevant performance characteristics of AI systems
  • How to ensure that AI systems remain trustworthy throughout their lifecycle

By ensuring a harmonized application of requirements across the European Union, these standards aim to guarantee that AI systems are designed and used according to common standards of safety, reliability, and trust, regardless of where they are deployed. They are thus an essential lever for demonstrating compliance with the AI Act and for giving economic actors legal certainty in a regulatory environment that is still taking shape.

Phases of Developing European AI Standards

The development of European AI standards generally follows these steps:

  1. Request for Standardization by the European Commission
    The process begins with a standardization request issued by the European Commission, which defines what the standards should cover. In the case of the AI Act, this request concerns in particular the obligations applicable to high-risk AI systems. It is then sent to the European Standardization Organizations (ESOs): CEN, CENELEC, and ETSI.
  2. Drafting a Standard
    After a favorable opinion on the standardization request, drafting work can begin. For the standards relating to the AI Act, this work is conducted within CEN and CENELEC, in the Joint Technical Committee JTC 21, which is organized into five working groups (WGs). Each draft standard is entrusted to one of these groups, where technical experts from national standardization bodies (NSBs) collaborate with other stakeholders to write the text.
  3. Public Consultation
    When a draft is deemed sufficiently developed and consistent with the standardization request, it is sent to the NSBs for the public consultation phase. During this stage, NSBs organize national consultations, conduct votes, and collect detailed comments from their stakeholders.
  4. Formal Vote
    Once revisions are made, the updated draft is submitted for formal voting by the NSBs. A positive vote leads to approval of the standard at the European level, after which only minor editorial corrections remain possible.
  5. Publication by CEN/CENELEC
    When the formal vote is positive, CEN/CENELEC publishes the standard. The final version is then made available, typically via the online shops of the NSBs.
  6. Commission Evaluation and Citation in the Official Journal
    In the final phase, the European Commission evaluates the published standard, ensuring it meets the AI Act requirements and corresponds to the standardization request. If the evaluation is positive, the Commission adopts an implementing act and cites the standard in the Official Journal of the European Union (OJEU).

Harmonized Standards Under Development

Here is an overview of the harmonized standards currently under development in the framework of the AI Act, listed with the AI Act articles they support:

  • Articles 17(1), 11(1), and 72: prEN 18286 Quality Management System for the European AI Regulation
  • Article 9: prEN 18228 Risk Management Related to AI
  • Article 10: prEN 18284 Quality and Governance of Datasets in AI
  • Article 10(2)(f) and (g): prEN 18283 Concepts, Measures, and Requirements for Managing Bias in AI Systems
  • Articles 12 to 14: prEN 18229-1 AI Reliability Framework – Part 1: Logging, Transparency, and Human Control
  • Article 15: prEN 18229-2 AI Reliability Framework – Part 2: Accuracy and Robustness
  • Article 15: prEN 18282 Cybersecurity Specifications for AI Systems
  • Article 43: prEN 18285 AI Compliance Assessment Framework

Progress of Standards

Originally scheduled for 2025, these standards are now delayed compared to the AI Act timeline, with publication pushed to 2026.

  • The QMS standard remains the most advanced; its publication is anticipated by the third quarter of 2026. It is currently in the public consultation stage, open since October 30 for a period of 12 weeks.
  • The cybersecurity standard, which should already have entered public consultation, must be revised following negative feedback from the European Commission: the draft does not provide sufficiently clear and operational technical specifications for Article 15.
  • The other standards are expected to enter the public consultation phase from February 2026, with publication targeted for the end of 2026. The data-related standards are not expected to reach this stage until mid-2026, with publication anticipated for the second quarter of 2027.

It is important to note that the publication of a standard by CEN/CENELEC does not automatically entail its citation in the Official Journal of the European Union; citation may take several additional weeks or even months. Only once a standard is cited does it confer a presumption of conformity with the legal requirements it covers.

In addition to these harmonized standards, a further standard under development, “Overview and Architecture of Standards Supporting the AI Regulation,” will provide a structured overview of the full set.

Take Action Now

The AI Act applies even before the publication of harmonized standards: anticipating their content is essential to avoid costly redesigns and secure your compliance.

We support AI stakeholders in complying with the AI Act, structuring quality management systems, managing risks and data, and preparing for audits and compliance assessments. Contact us to secure your AI systems and benefit from structured guidance toward compliance.
