AI Act and Harmonised Standards: Role, Development Process, and State of Progress
The AI Act adopts a risk-based approach: the greater the risks an AI system poses to people’s health, safety, or fundamental rights, the stricter the legal obligations it must comply with. This graduated logic forms the foundation of the new European framework of trust for AI.
To make these obligations operational, the regulation combines two complementary levels.
Defining Essential Requirements
On one hand, the regulation defines the essential requirements with which AI systems must comply, particularly concerning safety and quality.
Technical Specifications and Harmonised Standards
On the other hand, it refers to technical specifications and rules—harmonised standards—which detail the concrete implementation of these requirements and, where possible, translate certain qualitative concepts (for example, accuracy) into measurable criteria.
The articulation between regulatory requirements and harmonised standards thus constitutes the central mechanism enabling stakeholders to demonstrate their compliance with the obligations of the AI Act.
What are Harmonised Standards?
According to Article 2(1)(c) of Regulation (EU) No 1025/2012, harmonised standards are European standards adopted based on a request issued by the Commission for the application of Union harmonisation legislation. The implementation of the AI Act relies heavily on the development of such standards.
Currently under development, these standards will play a key role in the implementation of the AI Act. They will notably define:
- How to identify and manage AI-related risks
- How to set up and operate an effective quality management system
- How to measure accuracy and other relevant performance metrics of AI systems
- How to ensure that AI systems remain trustworthy throughout their lifecycle
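To make the idea of measurable performance criteria concrete, the sketch below shows one way a qualitative requirement such as "an appropriate level of accuracy" could be turned into a testable acceptance criterion. This is purely illustrative and not drawn from any draft standard; the 0.95 threshold and the use of a confidence-interval lower bound are assumptions chosen for the example.

```python
# Illustrative sketch (not from any standard): turning a qualitative
# accuracy requirement into a measurable pass/fail criterion.
# The 0.95 threshold is a hypothetical assumption for this example.

from math import sqrt

def accuracy_with_interval(correct: int, total: int, z: float = 1.96):
    """Point accuracy plus a normal-approximation 95% confidence interval."""
    acc = correct / total
    margin = z * sqrt(acc * (1 - acc) / total)
    return acc, max(0.0, acc - margin), min(1.0, acc + margin)

def meets_threshold(correct: int, total: int, threshold: float = 0.95) -> bool:
    """Pass only if the *lower* confidence bound clears the threshold,
    so a small test set cannot pass on a lucky point estimate."""
    _, lower, _ = accuracy_with_interval(correct, total)
    return lower >= threshold

# Example: 980 correct predictions out of 1000 test samples.
acc, lo, hi = accuracy_with_interval(980, 1000)
print(f"accuracy={acc:.3f}, 95% CI=[{lo:.3f}, {hi:.3f}]")
print("meets 0.95 threshold:", meets_threshold(980, 1000))
```

Requiring the lower confidence bound, rather than the raw accuracy, to clear the threshold is one way such a criterion could account for test-set size, which is the kind of detail the forthcoming standards are expected to pin down.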
By ensuring a harmonised application of requirements across the European Union, these standards aim to guarantee that AI systems are designed and used according to common benchmarks of safety, reliability, and trust, regardless of where they are deployed. They constitute an essential lever for facilitating the demonstration of compliance with the AI Act and for giving economic actors certainty in a regulatory environment that is still being structured.
Phases of Development of European AI Standards
The development of European AI standards is structured around the following steps:
1. Standardisation Request from the European Commission
The process begins with a standardisation request issued by the European Commission, defining what the standards must cover, particularly concerning high-risk AI systems. This request is sent to the European Standardisation Organisations (ESOs): CEN, CENELEC, and ETSI.
2. Development of a Draft Standard
After a favourable opinion on the standardisation request, drafting work can begin. For standards related to the AI Act, this work is carried out within CEN and CENELEC in the Joint Technical Committee JTC 21, organised into five working groups (WG). A draft standard is assigned to one of these groups, where technical experts from national standardisation bodies (NSBs) collaborate to draft the text.
Under the responsibility of a Project Leader, the work is conducted according to a consensus-based approach. During this phase, experts may refer to existing international standards, such as ISO/IEC standards, to support their work. A Working Draft circulates for information and comments, which are then discussed and resolved within the working group before the text is sent for public enquiry.
3. Public Enquiry
When a draft is deemed sufficiently mature, it is transmitted to the NSBs for the public enquiry phase. During this stage, NSBs organise national consultations, conduct votes, and transmit detailed comments collected from stakeholders. The experts review this feedback, propose amendments, and attempt to resolve the comments by seeking consensus, including where national positions diverge.
4. Formal Vote
Once the revisions are made, the updated draft is submitted to a formal vote of the NSBs. A positive vote leads to the approval of the standard at the European level, with only minor editorial corrections remaining possible. A negative vote requires corrective action based on the feedback received.
5. Publication by CEN/CENELEC
Following a positive formal vote, the standard is published by CEN/CENELEC, and the final version becomes available through the online shops of the NSBs.
6. Evaluation by the Commission and Citation in the Official Journal
In the final phase, the European Commission evaluates the published standard, verifying compliance with the AI Act and consistency with the standardisation request. If the evaluation is positive, the Commission adopts an implementing act and cites the standard in the Official Journal of the European Union (OJEU). From this moment, the standard becomes a harmonised standard, providing a presumption of conformity with the corresponding legal requirements.
Harmonised Standards Under Development
Here is an overview of the harmonised standards currently under development within the framework of the AI Act:
- Article 17.1; Article 11.1; Article 72: prEN 18286 Quality management system for the European AI regulation
- Article 9: prEN 18228 AI risk management
- Article 10: prEN 18284 Quality and governance of datasets in AI
- Article 10.2(f–g): prEN 18283 Concepts, measures, and requirements for bias management in AI systems
- Articles 12–14: prEN 18229-1 AI trustworthiness framework – Part 1: Logging, transparency, and human oversight
- Article 15: prEN 18229-2 AI trustworthiness framework – Part 2: Accuracy and robustness
- Article 15: prEN 18282 Cybersecurity specifications for AI systems
- Article 43: prEN 18285 AI conformity assessment framework
State of Progress of the Standards
Initially planned for 2025, these standards have been delayed, with publication postponed to 2026. The QMS standard remains the most advanced, with publication envisaged by the third quarter of 2026. It is currently at the public enquiry stage, which opened on October 30 and runs for 12 weeks.
The cybersecurity standard, which was expected to enter the public enquiry phase, must be reworked following a negative opinion from the European Commission: the draft does not provide sufficiently clear and operational technical specifications with respect to Article 15.
The other standards are expected to enter the public enquiry phase from February 2026, with publication targeted towards the end of 2026. Data-related standards are not anticipated to reach this stage until mid-2026, with publication likely expected in the second quarter of 2027.
It is important to note that the publication of a standard by CEN/CENELEC does not automatically mean its citation in the Official Journal of the European Union. This may occur several weeks or even months later. Only from that moment does the standard confer a presumption of conformity for the legal requirements it covers.
In addition to these harmonised standards, a further standard under development, “Overview and architecture of standards supporting the AI regulation,” maps how these deliverables fit together.
Act Now to Prepare for Compliance
The AI Act applies even before the publication of harmonised standards: anticipating their content is essential to avoid costly redesigns and to secure compliance.
Support is available for AI stakeholders in compliance with the AI Act, the structuring of quality management systems, risk and data management, and preparation for audits and conformity assessments. Contact us to secure your AI systems and benefit from structured support towards compliance.