Artificial intelligence is rapidly transforming our world, but its responsible development and deployment hinge on establishing clear and effective standards. The process of defining these benchmarks, especially within the European Union, is complex and multifaceted. This exploration delves into the key elements shaping the EU’s AI standardization landscape: the major players involved, the evolving standards themselves, and the critical challenges that must be addressed to ensure a level playing field for all. We examine the current state of play and the practical implications of these standards across various sectors, revealing both the opportunities and the potential pitfalls that lie ahead.
What are the key characteristics of the AI standardization landscape?
The European AI standardization landscape is currently being shaped by several key factors. Harmonized standards under the EU AI Act are designed to provide a clear route to CE marking and facilitate EU market access for AI systems. However, the standardization process involves complex stakeholder dynamics, technical implementation hurdles, and potentially significant costs. Technical standards are reshaping global AI competition and can act as market entry barriers, particularly for startups and SMEs, which face resource limitations and unequal participation in standard-setting processes.
Key Stakeholders and Committees
The landscape is populated by multiple key stakeholders, including standardization bodies, industry players, civil society groups, and scientific organizations. Key standardization committees include:
- ISO/IEC JTC 1/SC 42 (AI): An international committee with numerous published and developing standards.
- IEEE AI Standards Committee: Another significant player with existing and upcoming standards.
- CEN-CENELEC JTC 21 (AI): A joint European committee working on standards development in line with the EU AI Act.
Additionally, national standardization bodies, like DIN in Germany, collaborate with international bodies to balance national and international efforts.
Horizontal vs. Vertical Standards
The key standardization challenges depend on the type of standard under consideration:
- Horizontal Standards: The industry-agnostic standards outlined in the AI Act apply across all sectors. However, varying interpretations across sectors and member states create ambiguity and complexity in compliance.
- Vertical Standards: Sector-specific standards may be required in addition to horizontal ones depending on the existing legislation. This is particularly relevant for machinery, medical devices, and other industries with established sector-specific regulations.
The interplay between horizontal and vertical standards presents significant compliance challenges, especially regarding transparency, interoperability, and differing secrecy requirements among EU member states.
Challenges & Concerns Regarding the AI Act Standardization Process
Several challenges impede the standardization process, including:
- Critical Timelines: Tight deadlines set by the European Commission may be difficult to meet given the complexity of consensus-building and the need to align with both global and sector-specific standardization needs.
- Complex Stakeholder Dynamics: Large enterprises, particularly US tech and consulting giants, often dominate standardization committees, leading to under-representation of SMEs, startups, and civil society organizations.
- Access Costs: The cost of accessing standards raises concerns, especially in light of the Malamud case, which addresses whether harmonized standards, as part of EU law, should be freely accessible.
- Operationalisation Hurdles: Translating standards into actionable steps is difficult. Machine-readable "Smart Standards" could help by enabling automated compliance testing.
The short implementation window between final standard publication and the AI Act’s application is a significant concern.
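To make the Smart Standards idea above concrete, here is a minimal sketch of what a machine-readable requirement could look like: a clause encoded as structured data, checked automatically against a system’s self-declared properties. All field names, clause identifiers, and thresholds are illustrative assumptions, not taken from any published standard.

```python
# Hypothetical sketch of a machine-readable "Smart Standard" clause.
# Clause IDs, metrics, and thresholds below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Requirement:
    clause: str      # clause identifier within a (hypothetical) standard
    metric: str      # property the AI system must document
    minimum: float   # threshold the documented value must meet

def check_conformity(requirements, system_properties):
    """Return (clause, passed) pairs, one per requirement."""
    results = []
    for req in requirements:
        value = system_properties.get(req.metric)
        passed = value is not None and value >= req.minimum
        results.append((req.clause, passed))
    return results

requirements = [
    Requirement("5.1", "logging_retention_months", 6),
    Requirement("7.2", "test_coverage_ratio", 0.8),
]
system = {"logging_retention_months": 12, "test_coverage_ratio": 0.75}

results = check_conformity(requirements, system)
```

The point of the sketch is that once requirements are data rather than prose, conformity checks become repeatable and automatable, which is precisely what the operationalisation hurdle calls for.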
What is the current state of the European AI standardization process?
The EU AI Act relies heavily on technical standards to operationalize its high-risk requirements, but the standardization process is facing hurdles. The European Commission issued a standardization request to CEN and CENELEC in May 2023, aiming to define actionable requirements for AI systems. The deadline, originally set for April 2025, has been extended to August 2025, and even with this extension timely delivery is uncertain. Once finalized, the standards must undergo a further review before they can be published in the Official Journal of the EU (OJEU), currently expected for the beginning of 2026. That would leave AI providers only approximately 6–8 months to implement them before the August 2026 application date.
As of now, more than 300 experts from over 20 EU member states are working to specify the AI Act’s high-risk requirements, and CEN-CENELEC JTC 21 is currently pursuing roughly 35 standardization activities to fulfill the request. Most work items are based on or co-developed with ISO/IEC standards, but many aspects of the AI Act require new European standards to ensure alignment with EU values and fundamental rights protection.
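The squeeze described above can be verified with a quick date calculation. The exact OJEU publication date is an assumption here (the expectation is only "beginning of 2026"); the August 2026 application date comes from the AI Act’s transition schedule.

```python
# Back-of-the-envelope check of the implementation window described above.
# The publication date is assumed ("beginning of 2026"); August 2026 is
# when the AI Act's high-risk obligations are set to apply.
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole calendar months from start to end."""
    return (end.year - start.year) * 12 + (end.month - start.month)

publication = date(2026, 1, 1)   # assumed OJEU publication
application = date(2026, 8, 2)   # AI Act high-risk rules apply

window = months_between(publication, application)
```

With a January publication the window is seven months; shifting the assumed publication a month in either direction yields the 6–8 month range cited in the text.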
The European Commission’s standardization request outlined ten essential deliverables addressing key regulatory requirements – from risk management to conformity assessment. Deliverables include:
- Risk Management for AI Systems
- Governance and Data Quality of Datasets
- Record Keeping Through Built-In Logging Capabilities
- Transparency and Information to Users
- Human Oversight Over AI Systems
- Accuracy Specifications for AI Systems
- Robustness Specifications for AI Systems
- Cybersecurity Specifications for AI Systems
- Quality Management for Providers of AI Systems, Including Post Market Monitoring Process
- Conformity Assessment for AI Systems
Work is in progress, but voting dates for a substantial share of work items are forecast for mid-2026, exceeding the standardization request deadline by more than a year. While there has been some progress towards setting AI standards, delays could impact AI providers’ ability to deploy safe and compliant systems.
What conclusions can be drawn from cross-sector and industry-specific implications of AI standards?
Based on interviews with EU organizations developing and deploying AI systems, several cross-sectoral and industry-specific implications of AI standards emerge, primarily stemming from the EU AI Act’s forthcoming regulations.
Cross-Sector Findings
Several overarching challenges and opportunities cut across different industries:
- Ambiguity and Complexity in Compliance: Defining compliance boundaries is difficult, especially when systems integrate multiple components or third-party models. Divergent secrecy requirements across EU member states exacerbate these issues, creating operational conflicts. Classification ambiguity (e.g., systems evolving from design support to operational control) is also a critical concern. Even organizations familiar with existing regulatory frameworks struggle to align with the AI Act’s requirements.
- Resource Demands: The AI Act demands significant resources. AI providers anticipate annual compliance costs of around €100,000 for personnel, plus 10–20% of management time. Certification costs can exceed €200,000 in sectors such as medical tech and legal tech. These costs particularly burden startups that voluntarily seek certification to minimize regulatory uncertainty.
- Market Reputation Impact: While established players in healthcare and legal tech see regulation as potentially beneficial for market trust, others fear standardization-based innovation barriers. SMEs fear compliance requirements disproportionately affect their ability to scale, potentially causing them to lose ground to jurisdictions with lower regulatory burdens.
- Asymmetric Participation in Standards Setting: Limited participation in standardization among SMEs and startups means that smaller companies could be at a disadvantage. Standardization work within JTC 21 is often dominated by larger corporations.
- Fragmented Jurisdictions: Discrepancies between regulatory frameworks delay EU market entry, making other markets (e.g., the US) more attractive. Varying interpretations across EU member states create implementation challenges. Companies express worry about delays and inconsistencies in certification processes based on past experiences.
- Short Implementation Timelines: Companies see the August 2026 deadline as impractical and estimate needing 12 months per standard. Meeting these timelines could divert significant resources from development activities.
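The resource figures in the list above can be combined into a rough first-year cost estimate. The personnel, management-time, and certification numbers come from the interview estimates in the text; the management salary used to monetize the time share is an illustrative assumption.

```python
# Back-of-the-envelope first-year compliance cost from the figures above.
# PERSONNEL_COST and CERTIFICATION_COST are from the interview estimates;
# MANAGEMENT_SALARY is an assumed figure used only for illustration.
PERSONNEL_COST = 100_000       # annual compliance personnel (from text)
CERTIFICATION_COST = 200_000   # upper-bound certification cost (from text)
MANAGEMENT_SALARY = 150_000    # assumed annual cost of one manager

def first_year_cost(mgmt_share: float, certified: bool) -> int:
    """Total first-year cost for a management-time share of 0.10-0.20."""
    cost = PERSONNEL_COST + int(mgmt_share * MANAGEMENT_SALARY)
    if certified:
        cost += CERTIFICATION_COST
    return cost

low = first_year_cost(0.10, certified=False)    # no certification sought
high = first_year_cost(0.20, certified=True)    # certification pursued
```

Even under these simple assumptions the spread is wide, from roughly €115,000 to over €300,000, which illustrates why the burden falls hardest on pre-revenue startups.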
Industry-Specific Findings
Certain sectors face particular challenges and derive specific benefits:
- Healthcare and MedTech: These sectors are leveraging existing MDR compliance experience. They also see value in standardization’s potential to enhance interoperability.
- Manufacturing: Manufacturers anticipate close alignment between the technical standards, ISO 9001, ISO 31000, and Industry 4.0 protocols. Comprehensive documentation is needed for AI-driven decisions.
- Legal Tech: Maintaining detailed audit trails for AI outputs is resource-intensive, especially when handling sensitive client data. Legal tech firms nevertheless foresee that complying with high-risk standards can establish them as leaders in ethical AI and improve client trust.
- FinTech: FinTech firms worry that overly prescriptive requirements could favor established institutions, drawing parallels to their experiences with the PSD2 implementation. Standardization is viewed as trust-building, but smaller firms fear that complex compliance requirements could overburden them.
Furthermore, technical standards will affect the mobility/automotive and defense sectors even though parts of these sectors fall outside the AI Act’s direct scope. AI providers in mobility see the standards as a double-edged sword.
The defense sector, excluded for national security reasons, faces indirect pressure through ecosystem impacts. While not directly regulated, defense companies closely monitor the standards’ impacts on open-source AI model availability and general AI standards.
In conclusion, while AI standards offer opportunities for improved transparency, safety, and interoperability, their effective implementation requires careful consideration of the challenges faced by smaller organizations, the need for clearer guidance, and the potential for regulatory fragmentation to hinder innovation in the EU AI ecosystem.
What policy recommendations are presented for addressing the challenges posed by the European AI Act?
The European AI Act, groundbreaking as it is, presents significant hurdles for AI developers, particularly startups and SMEs. A key takeaway from recent analysis is a need for practical, actionable policies to smooth the path to compliance. Here’s a breakdown of the recommendations:
Timeline Adjustments: More Breathing Room
The current deadlines are unrealistic. The gap between the expected publication of harmonized standards (early 2026) and the compliance deadline (August 2026) leaves a mere 6–8 months for implementation. Many companies estimate needing at least 12 months per standard. The recommendation is clear: the EU legislator should postpone implementation deadlines to provide more realistic timeframes. This is crucial for enabling companies to choose their optimal compliance approach, whether it’s relying on harmonized standards, common specifications, or expert opinions. Reducing the complexity and number of technical standards referenced is also recommended.
Lowering Participation Barriers: A Seat at the Table for All
Stakeholder engagement, especially from SMEs and startups, is vital. However, the standardization process tends to be dominated by larger enterprises. Here’s what needs to happen: Subsidies for smaller organizations to participate in committees are essential. Increased transparency and accessibility for existing subsidy programs are needed. Collaborative standardization efforts between large and small players, fostered through inclusive working groups, can help create a more balanced and representative process. Moreover, standardization bodies should be restructured to be more transparent and user-friendly, simplifying entry processes for newcomers.
Practical Implementation Aid: A Helping Hand to Navigate Complexity
The EU AI Office and national supervisory authorities should provide pragmatic guidance tools for AI Act compliance, specifically targeting SMEs. The recommendations include: Issuing clear, sector-specific implementation toolkits and evaluation frameworks. Building expert networks based on two-way communication channels with high-risk AI industries. Offering support in institutionalized environments, such as regulatory sandboxes. The goal is to make the compliance process more manageable and understandable, especially for those with limited resources.
Financial Support: Funding the Future of AI Compliance
Direct financial support is critical for pre-revenue startups pursuing AI Act compliance. The proposed programs should provide funding to address compliance costs before companies start generating revenue. This support can be facilitated through participation in regulatory sandboxes, enabling startups and regulators to learn from practical experiences.
Technical Implementation Guidelines: Clarity Where It’s Needed
Fast, practical, industry-specific implementation guidance is essential, especially for small startups struggling to determine if they fall under high-risk AI Act categories. The recommended actions include: Developing detailed, sector-specific guidance documents with concrete examples and real-world scenarios. Standardization bodies should also aim to design standards that don’t require further operationalization, focusing on threshold-based requirements and easier digital access.
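The kind of self-assessment helper this guidance calls for can be sketched in a few lines: a startup checks whether its declared application areas overlap the AI Act’s high-risk categories. The area list below is a simplified, non-exhaustive paraphrase of Annex III; real classification requires legal review, and the function names and example use cases are invented for illustration.

```python
# Simplified sketch of a high-risk self-assessment helper.
# The area set is a non-exhaustive paraphrase of the AI Act's Annex III
# categories; a match here only means "needs deeper legal review".
HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def possibly_high_risk(use_case_areas: set) -> bool:
    """Flag a system for legal review if any declared application
    area overlaps the paraphrased Annex III list."""
    return bool(use_case_areas & HIGH_RISK_AREAS)

cv_screening = possibly_high_risk({"employment and worker management"})
chess_engine = possibly_high_risk({"entertainment"})
```

A threshold-based, digitally accessible version of the real categories, as the recommendation suggests, would let small teams run exactly this kind of first-pass triage before engaging counsel.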
Structured Integration of SMEs: Direct Consultation Channels
Establish the advisory forum and scientific panel as outlined by Art. 67 and Art. 68 of the AI Act, ensuring these bodies include representation from startups, SMEs, and sectoral industry experts. Develop direct consultation channels between AI startups/SMEs and regulatory bodies, supported by clear EU-level contact points. These measures are intended to ensure that the perspectives and challenges of smaller players are considered in the implementation guidance and ongoing discussions. EU bodies should more actively reach out to startups/SMEs.
Standards Alignment: Consistency Is Key
Finally, it’s recommended that standardization bodies align industry-specific vertical standards with Art. 40 of the AI Act for high-risk AI systems. European and international AI standards should be aligned as closely as possible to streamline compliance efforts for companies. International, European, and national standardization bodies must cooperate more closely. Furthermore, negative conformity presumptions should be avoided, allowing for necessary deviations from standard catalogs while still ensuring product safety. Leverage existing standards to facilitate entry into international markets by maintaining consistency and interoperability.
Ultimately, the current trajectory of AI standardization in Europe presents a complex landscape. While the intention to cultivate trustworthy AI through harmonized standards is commendable, the realities on the ground reveal significant challenges. Burdens disproportionately affect startups and SMEs, raising concerns about stifled innovation and competitive disadvantages. Clearer guidance, reduced participation barriers, and realistic timelines are essential to ensure a level playing field. Failure to address these issues risks creating a fragmented regulatory environment, potentially diverting resources and hindering the EU’s ambition to lead in responsible AI deployment. The future success of the AI Act hinges on a proactive and inclusive approach that considers the diverse needs and capabilities of all stakeholders.