AI Act and Harmonized Standards: Role, Development Process, and Progress of European AI Standards
The AI Act adopts a risk-based approach: the greater the risk an AI system poses to health, safety, or fundamental rights, the stricter the legal obligations that apply to it. This tiered logic forms the foundation of the new European framework for trustworthy AI.
To operationalize these obligations, the regulation combines two complementary levels:
- On the one hand, the European AI Regulation (AI Act) defines the essential requirements that AI systems must meet, particularly concerning safety and quality.
- On the other hand, it refers to technical specifications and rules, known as harmonized standards, which detail how these requirements are to be implemented in practice and, where possible, translate qualitative concepts (such as accuracy) into measurable criteria.
The relationship between regulatory requirements and harmonized standards is thus the central mechanism allowing stakeholders to demonstrate compliance with the obligations of the AI Act.
What Are Harmonized Standards?
According to Article 2(1)(c) of Regulation (EU) No 1025/2012, a harmonized standard is a European standard adopted on the basis of a request made by the Commission for the application of Union harmonization legislation. The implementation of the AI Act therefore necessarily relies on the development of such standards.
Currently under development, these standards will play a key role in the implementation of the AI Act. They will define:
- How to identify and manage risks related to AI
- How to establish and operate an effective quality management system
- How to measure the accuracy and other relevant performance characteristics of AI systems (see the illustrative sketch after this list)
- How to ensure that AI systems remain trustworthy throughout their lifecycle
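To make the idea of "measurable criteria" concrete, here is a minimal, purely illustrative sketch of how the qualitative notion of accuracy can be turned into a documented, threshold-based check. It is not drawn from any draft standard; the metric choice and the 0.95 threshold are assumptions made for the example.

```python
# Illustrative only: expressing "accuracy" as a measurable acceptance criterion.
# The metric and the 0.95 threshold are assumptions for this sketch, not values
# taken from the AI Act or any draft harmonized standard.

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Share of predictions that match the reference labels."""
    if not labels or len(predictions) != len(labels):
        raise ValueError("predictions and labels must be non-empty and aligned")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def meets_accuracy_criterion(predictions, labels, threshold=0.95) -> bool:
    """Checks the measured accuracy against a declared threshold."""
    score = accuracy(predictions, labels)
    print(f"measured accuracy: {score:.3f} (declared threshold: {threshold})")
    return score >= threshold

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 1, 0]
    refs = [1, 0, 1, 0, 0, 1, 1, 0]
    meets_accuracy_criterion(preds, refs)  # 7/8 = 0.875 -> below threshold
```

The point of such a check is simply that "accurate" becomes a number measured against a declared threshold, which is the kind of translation from qualitative concept to measurable criterion that the future standards are expected to formalize.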
By ensuring a harmonized application of requirements across the European Union, these standards aim to guarantee that AI systems are designed and used according to common levels of safety, reliability, and trust, regardless of where they are deployed. They thus constitute an essential lever for demonstrating compliance with the AI Act and for giving economic actors legal certainty in a regulatory environment that is still taking shape.
Phases of Developing European AI Standards
The development of European standards for AI generally follows the steps below:
- Request for Standardization by the European Commission
The process begins with a standardization request issued by the European Commission, which defines what the standards must cover. In the case of the AI Act, this request concerns in particular the obligations applicable to high-risk AI systems. It is then addressed to the European Standardization Organizations (ESOs): CEN, CENELEC, and ETSI.
- Drafting of the Standard
After a favorable opinion on the standardization request, drafting work can begin. For the standards relating to the AI Act, this work is conducted within CEN and CENELEC, in the Joint Technical Committee JTC 21, which is organized into five working groups (WGs). Each draft standard is entrusted to one of these groups, where technical experts from the national standardization bodies (NSBs) collaborate with other stakeholders to write the text.
- Public Consultation
When a draft is considered sufficiently mature and consistent with the standardization request, it is sent to the NSBs for the public consultation phase. During this stage, the NSBs organize national consultations, conduct votes, and collect detailed comments from their stakeholders.
- Formal Vote
Once revisions have been made, the updated draft is submitted to the NSBs for a formal vote. A positive vote leads to the approval of the standard at the European level; after that point, only minor editorial corrections remain possible.
- Publication by CEN/CENELEC
Following a positive formal vote, the standard is published by CEN and CENELEC. The final version is then made available, typically via the online shops of the national standardization bodies.
- Commission Evaluation and Citation in the Official Journal
In the final phase, the European Commission assesses the published standard, verifying that it meets the requirements of the AI Act and corresponds to the standardization request. If the assessment is positive, the Commission adopts an implementing act and cites the standard in the Official Journal of the European Union (OJEU).
Harmonized Standards Under Development
Here is an overview of the harmonized standards currently under development in the framework of the AI Act:
| Relevant AI Act Articles | Corresponding Standard |
| --- | --- |
| Article 17(1); Article 11(1); Article 72 | prEN 18286 Quality Management System for the European AI Regulation |
| Article 9 | prEN 18228 Risk Management Related to AI |
| Article 10 | prEN 18284 Quality and Governance of Datasets in AI |
| Article 10(2)(f) and (g) | prEN 18283 Concepts, Measures, and Requirements for Managing Bias in AI Systems |
| Articles 12-14 | prEN 18229-1 AI Trustworthiness Framework – Part 1: Logging, Transparency, and Human Oversight |
| Article 15 | prEN 18229-2 AI Trustworthiness Framework – Part 2: Accuracy and Robustness |
| Article 15 | prEN 18282 Cybersecurity Specifications for AI Systems |
| Article 43 | prEN 18285 AI Conformity Assessment Framework |
Progress of Standards
Originally scheduled for 2025, these standards are now running behind the AI Act timeline, with publication pushed back to 2026.
- The QMS standard remains the most advanced: it entered public consultation on October 30 for a period of 12 weeks, and its publication is anticipated by the third quarter of 2026.
- The cybersecurity standard, which should already have entered public consultation, must be revised following negative feedback from the European Commission, as the draft does not provide sufficiently clear and operational technical specifications with respect to Article 15.
- The other standards are expected to enter the public consultation phase from February 2026, with publication targeted for the end of 2026. The data-related standards are not expected to reach this stage until mid-2026, with publication anticipated in the second quarter of 2027.
It is important to note that publication of a standard by CEN/CENELEC does not automatically mean its citation in the Official Journal of the European Union; that step may take several additional weeks or even months. Only once a standard is cited does it confer a presumption of conformity with the legal requirements it covers.
In addition to these harmonized standards, a further standard under development, "Overview and Architecture of Standards Supporting the AI Regulation," provides a structured view of how they fit together.
Take Action Now
The AI Act applies even before the publication of harmonized standards: anticipating their content is essential to avoid costly redesigns and secure your compliance.
We support AI stakeholders in complying with the AI Act, structuring quality management systems, managing risks and data, and preparing for audits and compliance assessments. Contact us to secure your AI systems and benefit from structured guidance toward compliance.