Standardization Strategies for Compliance with the EU AI Act

The European Union’s Artificial Intelligence Act (AI Act) aims to establish a comprehensive framework for the regulation of AI technologies. A crucial element of this framework is the role of standardization, which is expected to provide technical solutions to ensure compliance with the Act’s complex requirements.

The Importance of Standardization

Standardization is seen as a key mechanism for achieving compliance with the AI Act. As various stakeholders engage in developing the relevant standards, the timeline remains tight: standards must be ready for implementation by August 2026, when the majority of the AI Act’s provisions take effect. The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are tasked with making these standards available by the end of 2025.

Standards and Specifications

Within the AI Act, several types of standards are defined:

  • Harmonized Standards: High-risk AI systems that comply with harmonized standards published in the Official Journal of the EU will be presumed to meet the corresponding requirements of the AI Act. Harmonized standards may also cover the general transparency obligations under Article 50 of the Act.
  • Common Specifications: The Commission has the authority to adopt common specifications covering the requirements for high-risk AI systems and the Article 50 transparency obligations. However, harmonized standards take precedence; common specifications may be adopted only where no suitable harmonized standards exist.

Certificates and Compliance Assessments

Providers of high-risk AI systems must ensure their products undergo a conformity assessment procedure before being placed on the European market. The applicable procedure depends on the type of system and may involve either internal control by the provider or an external review by a Notified Body (NB).

Upon successful assessment, the NB issues an EU technical documentation assessment certificate, which demonstrates compliance with the AI Act. These systems must also bear the CE marking to signify their conformity.

Role of Notifying Authorities and Notified Bodies

The AI Act mandates EU Member States to designate at least one Notifying Authority (NA) responsible for overseeing the assessment and notification of conformity assessment bodies (CABs). NAs must ensure that CABs can conduct independent and objective evaluations of high-risk AI systems.

Notified Bodies play a vital role in this ecosystem, verifying the conformity of AI systems according to the established assessment procedures. They must meet rigorous requirements to ensure impartiality and confidentiality in their assessments.

Conclusion

As the EU moves forward with the implementation of the AI Act, the development of standards and specifications will be crucial in meeting regulatory requirements. Stakeholders must engage in collaborative efforts to ensure that these standards are not only developed but also effectively integrated into the compliance frameworks of AI technologies.

This comprehensive approach aims to foster innovation while safeguarding public interests, thus balancing the advancement of AI technologies with necessary regulatory oversight.
