Artificial intelligence is rapidly changing our world, prompting a need for clear guidelines and regulations. The European Union’s AI Act represents a bold step towards shaping the development and deployment of these powerful technologies. A central pillar of this legislation involves the creation and implementation of technical standards. These standards aim to translate the AI Act’s high-level principles into concrete, actionable steps for businesses. However, the process of defining these standards and ensuring their effective adoption presents a unique set of challenges and opportunities that could determine the future of AI innovation and competition within the EU and beyond. Examining the objectives, key stakeholders, and practical implications of these standards is crucial to understanding the AI Act’s potential impact.
What are the fundamental objectives of the EU AI Act?
The European AI Act aims to establish harmonized legal rules for the safe development and deployment of AI within the EU. Technical standardization plays a vital role, translating abstract legal requirements into concrete, actionable guidelines. This is intended to reduce legal uncertainties and strengthen competitiveness in the internal market.
Notably, the standards aim to:
- Establish consistent competitive conditions.
- Streamline processes to reduce regulatory implementation costs.
- Facilitate more efficient product development and operations.
Key Regulatory Concerns
The EU AI Act seeks to operationalize high-risk legal requirements, making them more prescriptive through technical standards. Alignment with these standards provides a ‘presumption of conformity,’ simplifying compliance and reducing the need for customized, resource-intensive solutions. However, it mandates rigorous adherence to requirements covering areas like risk management, data governance, transparency, and cybersecurity.
Practical Implications
To realize the goals of the AI Act, the impact on various industries and market participants needs consideration. Technical standards under the AI Act could also act as market entry barriers, particularly for startups and SMEs that lack the resources to participate in standardization processes. This could reshape AI competition and necessitate policy adjustments to ensure fair access and prevent undue burden on smaller players.
What key stakeholders are involved in the process of AI standardization?
The global process of AI standardization involves multiple key stakeholders. These can mostly be classified as standardization bodies, industry players, civil society groups and scientific organizations.
Standardization Bodies
There are three notable committees focusing on AI standardization:
- ISO/IEC JTC 1/SC 42 (AI) – Organized by the International Organization for Standardization (ISO) in collaboration with the International Electrotechnical Commission (IEC). Has published 34 standards, with 40 still under development.
- IEEE AI Standards Committee – Organized within the Institute of Electrical and Electronics Engineers (IEEE). Has produced 12 standards and is working on 59 additional standards.
- CEN-CENELEC JTC 21 (AI) – A joint committee of the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). Has published 10 standards, with 33 still under development.
At the EU member state level, national standardization bodies have established working committees that largely mirror the work of ISO/IEC JTC 1/SC 42 (AI) and CEN-CENELEC JTC 21 (AI). This helps balance national efforts with overarching European and international work and ensures the coordinated implementation of European standards across EU member states.
Industry Players and Scientific Organizations
Industry players and scientific organizations contribute to AI standardization, particularly through industry standards, AI audit catalogs, and testing frameworks. Notable examples include:
- AI HLEG ALTAI – The High-Level Expert Group on AI’s Assessment List for Trustworthy AI, operationalizing the AI HLEG Ethics Guidelines.
- NIST AI RMF – The US National Institute of Standards and Technology’s framework to manage AI risks.
- Mission KI Standard – A German initiative developing a voluntary quality standard for AI applications.
- Fraunhofer IAIS catalog – The AI Assessment Catalog by the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). Offers a structured guideline for translating abstract AI quality standards into application-specific assessment criteria covering six dimensions of trustworthiness.
- BSI AIC4 – The German Federal Office for Information Security’s AI Cloud Service Compliance Catalogue, specifying minimum requirements for secure machine learning in cloud services.
What are the core functions of harmonized standards under the AI Act?
Harmonized standards are crucial for effective compliance with the EU AI Act. They offer an accessible route to meet regulatory requirements and reduce legal uncertainties, ultimately boosting competition and growth in the EU market. They aim to create a level playing field for AI system design and development.
Here’s a breakdown of the core functions:
- Operationalizing legal requirements: The AI Act’s high-risk requirements are deliberately abstract. Harmonized standards provide the technical specifications needed to make them prescriptive and actionable.
- Presumption of conformity: Meeting these harmonized standards gives AI systems a presumption of compliance with the relevant AI Act requirements.
- CE marking and market access: These standards pave the way for CE (conformité européenne) marking, simplifying access to the EU market.
- Reducing regulatory costs: Well-designed standards streamline processes, potentially avoiding the need for custom R&D, thus making product development more efficient.
Article 40(1) of the AI Act lays the groundwork for harmonized standards and the presumption of conformity. These standards apply to high-risk AI systems as defined in Article 6 and Annexes I and III of the Act.
The European Commission has tasked CEN and CENELEC with developing these standards (Article 40(2) AI Act). These standards will serve as the foundation for presumption of conformity, simplifying compliance, providing legal certainty, and, ideally, reducing the administrative burden for AI providers.
It is important to note that while international standards are considered, the AI Act often requires new European standards to address fundamental rights protection and societal impacts, ensuring alignment with EU values.
How do vertical AI standards influence the implementation of the AI Act?
While the AI Act is designed to be an industry-agnostic, horizontal regulation, the European Commission has considered specific vertical specifications for particular sectors. Here’s the crux of how these vertical standards impact the AI Act’s roll-out:
Industry Alignment is Key: The involvement of stakeholders from sectors with existing technical requirements – such as machinery, medical devices, aviation, automotive, and finance – is critical for the successful development of harmonized standards under the AI Act. This collaborative approach ensures the new regulations are well-informed and practically implementable.
Delegated Acts Will Incorporate AI Act Requirements: Article 102 et seq. of the AI Act mandates that its high-risk AI requirements are integrated into existing market access regulations for sectors like automotive, aviation, and railway. How will this be done? Through delegated acts leveraging technical specifications. Expect standardization bodies to be tasked with adding AI Act stipulations into existing sector-specific standards related to homologation.
Sector-Specific Standards are Emerging: While most sectors currently lack AI-specific standards, some are pioneering efforts. Examples include BS 30440:2023 by the British Standards Institute (BSI) for AI in healthcare and ISO/PAS 8800 for safety-critical AI systems in road vehicles. The latter could be pivotal in incorporating AI Act requirements via the Type Approval Act (Regulation (EU) No. 168/2013), as per Article 104 AI Act. Similarly, the Society of Automotive Engineers (SAE) is developing SAE ARP6983 for AI-driven aeronautical safety products.
Defense Sector Considerations: Even though largely excluded from the AI Act’s direct scope (Art. 2(3) AI Act), the defense sector recognizes the need for sector-specific AI standards and is actively working towards them.
Voluntary Compliance: Sectors not directly covered by AI Act standards or sector-specific AI standards are showing interest. Some intend to voluntarily comply with corresponding standards, anticipating that their customers may become subject to high-risk AI Act requirements in the future.
Spillover Effects
Both the mobility/automotive and defense sectors, while partially outside the AI Act’s direct scope (as defined by Art. 103 et seq. AI Act), anticipate major implications from the AI Act. AI providers in mobility see standards as a double-edged sword, offering transparency and safety gains but imposing significant operational burdens, especially for complex systems demanding advanced explainability and cybersecurity features.
Defense companies, though explicitly excluded for national security reasons, face indirect pressure through ecosystem impacts and dual-use considerations. These companies closely monitor the AI Act’s impact on open-source AI model availability and general AI standards, often adhering to strict safety standards comparable to civilian applications.
Some mobility companies are considering markets with lower regulatory burdens due to financial and operational challenges. Defense companies, conversely, see potential competitive advantages in adopting high-risk standards, fostering civilian-military collaboration and trust in AI-human collaboration systems.
What are the main challenges in the timeline for AI Act standardization?
Technical standards are crucial for AI Act compliance, but the standardization process faces critical timeline pressures, complex stakeholder dynamics, and concerns over cost and operationalization.
Critical Timeline Compression
The initial deadline for standards development was April 2025, but that is likely to be extended to August 2025. Even with this extension, the timeline remains tight, potentially leaving only 6-8 months for companies to comply after the standards are published in the Official Journal of the European Union, expected in early 2026. This is significantly less than the minimum 12 months most companies, especially startups and SMEs, say they need for effective implementation.
Stakeholder Representation Imbalance
The composition of standardization committees, such as CEN-CENELEC JTC 21, is heavily skewed towards large enterprises, with major US technology and consulting companies often holding a majority presence. This creates an imbalance, limiting the participation and influence of SMEs, startups, civil society organizations, and academia. This disparity can lead to standards that don’t adequately address the needs and concerns of smaller market participants, potentially creating disproportionate barriers to entry.
Cost Concerns
Companies also face challenges related to the cost of understanding and implementing applicable technical standards. The European Court of Justice’s “Malamud” case has put the spotlight on whether harmonized standards should be freely accessible, raising questions about copyright and monetization. Depending on future court cases, European standardization could lose critical contributions from international standardization bodies. If companies cannot afford to purchase relevant technical standards, they risk a “negative conformity presumption,” meaning their alternative compliance efforts might be viewed with bias by supervisory authorities. Non-compliance can result in hefty fines (up to €35 million), restricted market access, and reputational harm.
Operationalization Hurdles
A significant challenge is the need for greater operationalization of technical standards. Currently, these standards are mainly available as PDFs, requiring manual interpretation and application. To address this, German standardization bodies such as DIN and DKE are working toward machine-readable, -interpretable, -executable, and -controllable “SMART Standards.” Whether this effort succeeds will largely determine how far the costs of applying standards can be reduced.
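To make the idea of a machine-readable standard concrete, the sketch below shows, purely as an illustration, how a single requirement could be expressed as structured data and checked automatically against a system's declared properties. The field names, values, and checking logic are assumptions for illustration and are not drawn from any DIN/DKE SMART Standards specification.

```python
# Illustrative sketch only: a hypothetical machine-readable requirement record
# and an automated check against a system's declared properties. Field names
# and values are invented and do not come from any SMART Standards artifact.

requirement = {
    "id": "REQ-LOG-001",                      # hypothetical identifier
    "text": "Event logs must be retained for at least 180 days.",
    "parameter": "log_retention_days",
    "operator": ">=",
    "threshold": 180,
}

system_profile = {"log_retention_days": 365}  # declared system property

def check(req: dict, profile: dict) -> bool:
    """Evaluate a single machine-interpretable requirement against a profile."""
    value = profile.get(req["parameter"])
    if value is None:
        return False  # property not declared, so conformity cannot be presumed
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b, "==": lambda a, b: a == b}
    return ops[req["operator"]](value, req["threshold"])

print(check(requirement, system_profile))  # True
```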
How do stakeholder dynamics affect the AI Act standardization process?
Standardization efforts for the AI Act involve over 1,000 experts across national mirror committees, revealing a structural challenge in stakeholder representation. Predominantly, large enterprises, including major US tech and consulting firms, dominate these committees. This creates a disparity, impacting SMEs, startups, civil society, independent institutions, and academia.
Participation in standard-setting offers firms strategic advantages via knowledge transfer and relationship building. However, under-representation of smaller stakeholders stems from the resources required for committee participation. Industry associations have emerged as intermediaries, aggregating and representing these stakeholders’ interests within standardization bodies.
This structural imbalance generates competitive advantages for larger enterprises in the EU market, allowing them to influence technical standards development. The substantial influence of US companies also raises concerns about the representation of EU values and perspectives. Limited participation from smaller entities potentially excludes crucial knowledge, compromising comprehensive safety development.
Challenges in Stakeholder Dynamics:
- Asymmetric Participation: Smaller players are often overshadowed by larger corporations.
- Resource Constraints: SMEs and startups struggle to allocate necessary resources for participation.
- Value Representation: Concerns exist over the adequate representation of EU values given US company influence.
The lack of inclusivity underscores the need for more balanced standardization processes that effectively incorporate diverse perspectives and expertise to avoid standards which disproportionately affect smaller market participants.
What are the potential financial implications related to accessing AI standards?
The landscape of AI standards isn’t just about technical specifications; it’s deeply intertwined with financial considerations that could significantly impact AI providers, particularly startups and SMEs. Here’s a breakdown of the key financial implications:
Direct Costs of Compliance
Although the standards documents themselves are expected to be free, thanks to a ruling by Europe’s highest court, the cost to implement those standards is far from negligible. Companies anticipate significant financial burdens:
- Dedicated Personnel: Firms might need to allocate around €100,000 annually for dedicated compliance personnel.
- Management Time: Founders and management could spend 10-20% of their time on standards-related matters.
- Certification Costs: Some estimates put AI system certification expenses above €200,000.
Indirect Costs and Market Access
The financial ramifications extend beyond the obvious:
- Potential impact on time-to-market and competitive market share
Companies that fail to meet compliance deadlines risk fines of up to 7% of global turnover or €35 million, which could be crippling, especially for smaller companies. Non-compliance can also restrict access to the EU market, putting compliant firms in a stronger position.
Reputational Risk
Beyond direct financial penalties, there’s a risk of reputational damage. Negative media coverage and loss of customer trust can jeopardize long-term business relationships, especially in risk-averse sectors.
Asymmetric Participation in Standardization
Smaller companies often lack the resources to participate effectively in standardization committees. Larger enterprises can influence standards development to their advantage, potentially leading to higher compliance costs for SMEs:
- Influence: Larger players can “bake in” technical standards favoring their business goals.
- Knowledge: Asymmetric influence may mean crucial perspectives get left out of market-defining standards.
The Malamud Case and Standards Accessibility
The “Malamud” case requires that harmonized standards be freely accessible. However, ISO and IEC are challenging this in court, raising concerns about the potential monetization of technical standards. This has also raised major concerns about the financial sustainability of the European standardization organizations, which rely on income from standards sales to fund their operations.
What is the significance of operationalizing technical standards in the context of the AI Act?
The EU AI Act, as the first comprehensive set of rules governing AI development and deployment, relies on technical standardization as a key implementation tool. High-risk AI requirements in the AI Act are deliberately abstract, making them difficult for organizations to implement directly. Technical standards are vital for operationalizing these legal requirements, turning them into more prescriptive and actionable guidelines.
Harmonized Standards and Presumption of Conformity
Harmonized standards, developed by European standardization organizations like CEN, CENELEC, and ETSI based on European Commission requests, represent a critical framework for AI Act compliance. These standards, after publication in the Official Journal of the European Union (OJEU), provide a “presumption of conformity” for high-risk AI systems that meet them. This simplifies compliance, offers legal certainty, and ideally reduces the administrative burden on AI providers.
Impact on Market Access and Competition
Properly designed and implemented technical standards can help to:
- Establish a level playing field for AI system design and development.
- Reduce regulatory implementation costs.
- Streamline processes and potentially eliminate the need for custom R&D solutions.
- Make product development and operations more efficient.
However, the rise of AI standards also reshapes global AI competition and could raise market entry barriers, particularly for startups and small-to-medium enterprises (SMEs).
Challenges and Concerns
While technical standards are designed to ease AI Act compliance, several challenges need to be addressed:
- Timeline Pressure: The current timeline for standards development and implementation may be too ambitious, leaving providers insufficient time to adjust.
- Stakeholder Representation: Large enterprises, including US tech and consulting firms, often dominate standardization committees, resulting in under-representation of SMEs, startups, and civil society.
- Accessibility and Cost: The cost of identifying applicable technical standards and accessing them could put smaller companies at a disadvantage. A pending European Court of Justice case could change whether harmonized standards must be freely accessible.
- Operationalization: Technical standards need to be further operationalized to streamline compliance and ensure that organizations can efficiently apply the standards to their specific use cases.
What is the current status of the AI Act standardization process?
The European Commission’s standardization request for the AI Act has outlined ten essential deliverables addressing key regulatory requirements. These deliverables are the basis for standardization work at CEN-CENELEC JTC 21, with most ongoing work items focused on fulfilling this mandate. Where possible, the work items are based on or co-developed with ISO/IEC standards (approximately 2/3 of work items).
Key Areas of Standardization
The standardization work builds on around 35 standards, with most addressing individual standardization request deliverables. Other deliverables, such as the Artificial Intelligence Trustworthiness Framework and supporting standards on terminology, touch upon multiple SR deliverables. These standards will form an integrated framework through various interrelationships, like hierarchical integration or operational dependencies.
Here’s a breakdown of the draft standardization work status as things stood in late 2024:
- Risk Management: ISO/IEC 23894 is already published, but a “home-grown” European AI risk management standard is under development (WI: JT021024), with a forecasted voting date of September 30, 2026. This European standard will address the shortcomings of the ISO/IEC standard, especially regarding AI Act compliance.
- Governance and Quality of Datasets: Six work items are in progress, including both ISO/IEC standards and European standards. The standards still under drafting/approval will center on quantifiable measures of data quality and statistical properties throughout the AI system lifecycle.
- Record Keeping: Two work items are running: ISO/IEC 24970 on AI System Logging and the European Artificial Intelligence Trustworthiness Framework (WI: JT021008).
- Transparency and Information to Users: Two CEN-CENELEC JTC 21 work items are planned; ISO/IEC 12792 is already in the consultation process.
- Human Oversight: Addressed by the Artificial Intelligence Trustworthiness Framework alone (WI: JT021008).
- Accuracy Specifications for AI Systems: Seven work items are underway, including both ISO/IEC and “home-grown” standards, which will establish requirements beyond basic performance metrics.
- Robustness Specifications for AI Systems: CEN-CENELEC JTC 21 has assigned four work items, covering both ISO/IEC and European “home-grown” standards.
- Cybersecurity Specifications for AI Systems: At least two “home-grown” standards are planned.
- Quality Management System: Addresses the Art. 17 AI Act (quality management system) requirements and is expected to be covered by two standards.
- Conformity Assessment for AI Systems: With five standards or documents forecast, this deliverable complements existing work with the specifics required by the EU AI Act.
However, it’s worth noting that some work items had forecasted voting dates in mid-2026, exceeding the original standardization request deadline by more than a year.
Challenges and Criticisms
The European Commission has already expressed criticism regarding the standardization work of CEN-CENELEC, particularly concerning the scope and the number of referenced standards. The status of work underscores the ambitious nature of the AI Act standardization process and the challenges in meeting the mandated timelines. More recent assessments are expected in the Summer and Fall of 2025.
What is the core intent of standards addressing risk management?
As the EU AI Act gears up for enforcement, AI providers are grappling with its risk management requirements. Harmonized standards are at the heart of this regulation, offering a pathway for establishing conformity and reducing legal uncertainties. But what’s the core intent behind these standards, particularly when managing risk?
Compliance with the AI Act
The intent is to translate the broad, legally binding requirements of the AI Act into actionable, technically-defined procedures. These standards aim to:
- Ensure Individual Rights Protection: Emphasize the protection of individual rights through a product-centric approach, aligning with the AI Act’s focus on safeguarding health, safety, and fundamental rights.
- Provide a Clear Framework: Offer specifications for a risk management system tailored to AI systems.
- Mandate Testing: Make the testing of AI systems mandatory, as set out in Articles 9(6) and 9(8) of the AI Act.
The effort involves two key standards:
- ISO/IEC 23894: This standard gives general risk management guidance regarding AI but is limited by its organization-centric view and a definition of risk that is misaligned with Article 3 of the AI Act.
- AI Risk Management (WI: JT021024): This is a “home-grown” standard currently under development to specifically address the shortcomings of existing standards by providing a product-centric approach aligned with the AI Act. This is expected to be completed by September 2026.
Organizations aiming to comply with the AI Act need to understand the nuances of these standards, ensuring that their risk management practices reflect the Act’s emphasis on individual rights and safety.
What is the purpose of standardization regarding governance and data quality?
Standardization plays a crucial role in ensuring robust governance and high-quality data within AI systems. Technical standards offer a clear and accessible route to meet regulatory demands and mitigate legal ambiguities, fortifying competitiveness and fostering growth within the internal market.
The EU AI Act emphasizes statistical validation and bias prevention when it comes to governing data and guaranteeing its quality. The requirements are detailed in Art. 10 of the AI Act (Data and Data Governance) and address the handling of unwanted bias and the assurance of data quality.
CEN-CENELEC JTC 21, in collaboration with ISO/IEC, lays out the path for approaching data governance in AI:
- ISO/IEC TS 12791 provides guidance on treating unwanted bias in classification and regression machine learning tasks.
- ISO/IEC 8183 lays the foundation for the AI Data Life Cycle Framework.
- ISO/IEC 5259-1 to 5259-4 provide guidance on Data Quality for Analytics and Machine Learning (ML).
The path continues with “home-grown” standards:
- AI – Concepts, Measures and Requirements for Managing Bias in AI Systems (WI: JT021036)
- AI – Quality and Governance of Datasets in AI (WI: JT021037)
- CEN/CLC/TR 18115 Data Governance and Quality for AI in the European Context.
Furthermore, the standards still under drafting / approval will center on quantifiable measures of data quality and statistical properties throughout the AI system lifecycle. Particularly significant is the Art. 10 AI Act requirement for empirical validation of bias mitigation techniques and the ability to demonstrate the effectiveness of quality assurance measures.
This emphasis on measurable outcomes represents a methodological shift from descriptive to prescriptive standardization, requiring organizations to implement verifiable controls for data representativeness, correctness and completeness.
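As a purely illustrative sketch of what such quantifiable measures could look like, the snippet below computes two simple indicators, completeness and a representativeness gap, for a small labelled dataset. The metric definitions and data are assumptions for illustration and are not taken from Art. 10 AI Act or the ISO/IEC 5259 series.

```python
# Minimal sketch (assumed metrics): two quantifiable data-quality indicators of
# the kind such standards may require providers to track, i.e. completeness and
# representativeness of a labelled dataset.
from collections import Counter

records = [
    {"age_group": "18-30", "label": 1},
    {"age_group": "31-50", "label": 0},
    {"age_group": None,    "label": 1},   # missing attribute
    {"age_group": "51-70", "label": 0},
]

def completeness(rows, field):
    """Share of records where the field is populated."""
    return sum(r[field] is not None for r in rows) / len(rows)

def representation_gap(rows, field, expected_shares):
    """Largest absolute deviation between observed and expected group shares."""
    observed = Counter(r[field] for r in rows if r[field] is not None)
    total = sum(observed.values())
    return max(abs(observed.get(g, 0) / total - share)
               for g, share in expected_shares.items())

print(completeness(records, "age_group"))  # 0.75
print(representation_gap(records, "age_group",
                         {"18-30": 0.33, "31-50": 0.33, "51-70": 0.34}))
```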
How do standards address requirements for record keeping?
The European AI Act mandates record-keeping for high-risk AI systems, specifically focusing on traceability and capturing events that could impact system performance or pose risks.
The standards landscape is addressing this requirement through two primary work items:
- ISO/IEC 24970 – AI System Logging: This standard, currently under development in collaboration with ISO/IEC, focuses on defining requirements for logging plans. These plans need to strike a balance between comprehensive event capture and operational efficiency, accommodating varying system architectures and performance demands. For example, high-frequency trading systems, where millisecond-level transaction logging is critical, will have different requirements than less time-sensitive applications.
- Artificial Intelligence Trustworthiness Framework (WI: JT021008): This framework provides an overarching structure that complements the ISO/IEC standard.
The ISO/IEC standard will provide more granular specifications, emphasizing the need to define requirements that allow for system-specific needs. This is critical for consistent verification capabilities across different AI applications.
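As an illustration of the trade-off such a logging plan must strike, the sketch below always records risk-relevant events while sampling routine inference events to bound log volume. The event types, sampling rate, and record structure are assumptions for illustration, not specifications from the ISO/IEC 24970 draft.

```python
# Illustrative sketch only: an event logger that records every high-severity
# event but samples routine inference events, balancing traceability against
# log volume as a logging plan might require.
import json, logging, random, time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_system")

ROUTINE_SAMPLE_RATE = 0.01  # assumed value; a real plan would have to justify this

def log_event(event_type: str, payload: dict, severity: str = "routine") -> None:
    """Always log risk-relevant events; sample routine ones to limit volume."""
    if severity == "routine" and random.random() > ROUTINE_SAMPLE_RATE:
        return
    log.info(json.dumps({
        "ts": time.time(),
        "type": event_type,
        "severity": severity,
        "payload": payload,
    }))

log_event("prediction", {"input_id": "x-17", "score": 0.91})              # sampled
log_event("threshold_breach", {"metric": "drift", "value": 0.4}, "high")  # always kept
```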
Here are the critical data points for record-keeping standards:
- Goal: Traceability of AI system operations/performance.
- Status: ISO/IEC standard under drafting.
- Balance: Between event capture and operational efficiency.
- Flexibility: Accommodate sector-specific needs to ensure reliable verification capabilities.
What are the main features of standards related to transparency for AI system users?
Technical standards are being developed to support the Article 13 requirements of the EU AI Act, which focus on transparency and providing information to users. The standardization efforts are intended to address the “black box” problem, where the internal decision-making processes of AI systems are opaque.
Key Standards in Development
- ISO/IEC 12792 (Transparency Taxonomy of AI Systems): This standard establishes requirements for transparency artifacts to ensure related information is comprehensive, meaningful, accessible, and understandable for intended audiences.
- Artificial Intelligence Trustworthiness Framework (WI: JT021008): This framework provides an overarching framework for trustworthiness and transparency requirements.
For ISO/IEC 12792, specific attention is given to European regulatory requirements. These standards aim to make the outputs of AI systems understandable to users by specifying what information should be revealed, and how accessible it should be.
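To illustrate what a user-facing transparency artifact might contain, the sketch below defines a simple, hypothetical disclosure record. The field names and example values are assumptions for illustration and do not reproduce the ISO/IEC 12792 transparency taxonomy.

```python
# Hypothetical sketch of a transparency artifact handed to users; field names
# are illustrative assumptions, not the ISO/IEC 12792 taxonomy.
from dataclasses import dataclass, asdict
import json

@dataclass
class TransparencyRecord:
    system_name: str
    intended_purpose: str
    capabilities: list[str]
    known_limitations: list[str]
    human_oversight_measures: list[str]
    instructions_for_use_url: str

record = TransparencyRecord(
    system_name="Example credit-scoring assistant",
    intended_purpose="Support, not replace, human credit decisions.",
    capabilities=["ranks applications by estimated default risk"],
    known_limitations=["lower accuracy for thin-file applicants"],
    human_oversight_measures=["all rejections reviewed by a credit officer"],
    instructions_for_use_url="https://example.org/instructions",
)

print(json.dumps(asdict(record), indent=2))  # user-facing disclosure artifact
```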
What is the role of standards in ensuring human oversight of AI systems?
Standards play a pivotal role in specifying the requirements of Article 14 of the EU AI Act, which focuses on human oversight of AI systems. These requirements are primarily addressed by the Artificial Intelligence Trustworthiness Framework (Work Item: JT021008) under development by CEN-CENELEC JTC 21.
Here’s a breakdown of the key aspects:
The overarching goal is to ensure effective human control over AI systems across diverse operational contexts:
- In manufacturing, standards must enable human intervention without sacrificing production efficiency.
- In finance, oversight mechanisms are crucial for algorithmic systems operating at speeds beyond human reaction times. This involves setting up monitoring interfaces and control mechanisms, as well as organizational measures like training protocols.
More specifically, these standards must establish clear criteria for selecting appropriate oversight measures that are aligned with an AI system’s intended use and identified risks.
Key considerations include:
- Technical measures: monitoring interfaces and control mechanisms.
- Organizational measures: training protocols.
- Verification procedures: ensuring that human oversight mechanisms are effective.
Standards must also define verifiable outcomes regarding system oversight. Natural persons should be able to effectively maintain operational control and intervene when necessary, even with increasingly complex and fast AI systems.
In short, standards aim to provide a framework for ensuring that humans retain meaningful control and intervention capabilities over AI systems, regardless of their application or complexity.
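As a minimal, hypothetical sketch of such an oversight measure, the code below routes low-confidence or high-impact automated decisions to a human reviewer for approval or override. The thresholds, decision labels, and interface are assumptions for illustration, not requirements taken from the Trustworthiness Framework.

```python
# Minimal human-oversight sketch (assumed thresholds and interfaces): automated
# decisions below a confidence floor, or in a high-impact category, are routed
# to a natural person for approval or override.
CONFIDENCE_FLOOR = 0.85   # assumed value; would be set per intended use and risk
HIGH_IMPACT = {"loan_denial", "account_closure"}

def requires_human_review(decision: str, confidence: float) -> bool:
    return confidence < CONFIDENCE_FLOOR or decision in HIGH_IMPACT

def decide(decision: str, confidence: float, human_approve=None):
    """Return the final decision, deferring to a human reviewer when required."""
    if requires_human_review(decision, confidence):
        if human_approve is None:
            return "escalated_to_human"
        return decision if human_approve(decision, confidence) else "overridden"
    return decision

print(decide("loan_approval", 0.97))                    # handled automatically
print(decide("loan_denial", 0.99))                      # escalated_to_human
print(decide("loan_denial", 0.99, lambda d, c: False))  # overridden by reviewer
```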
What is the focus of accuracy specifications in AI systems?
Accuracy specifications within AI systems, as mandated by Article 15(1) and (3) of the EU AI Act, aren’t just about hitting performance benchmarks. The focus is on ensuring those measurements are demonstrably appropriate and effective in addressing the Act’s regulatory objectives.
Here’s what that means in practical terms:
Defining Appropriate Metrics and Thresholds
Companies can expect standards to offer precise instructions on selecting accuracy metrics and setting clear thresholds. Expect rigorous testing protocols and detailed documentation practices.
Benchmarking for General Use
The emerging standards specify processes and assessment frameworks for evaluating AI models against standardized tasks, particularly in areas like general benchmarking, which can significantly impact practical applicability and reduce regulatory uncertainties.
Metrics and Risk Mitigation
The key, as these standards shape up, will be demonstrably linking accuracy metrics to risk mitigation strategies. This involves selecting, measuring, and validating metrics based on the AI system’s intended use and identified risks.
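For illustration only, the sketch below shows how declared accuracy metrics and use-case thresholds could be verified on a held-out test set. The metric choices, threshold values, and data are assumptions and are not drawn from any of the work items mentioned here.

```python
# Illustrative sketch: declare accuracy metrics with use-case thresholds and
# verify them on a held-out test set before release (assumed metrics/values).
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0

# Thresholds documented per intended use and identified risks (hypothetical values).
declared_metrics = {"accuracy": (accuracy, 0.90), "recall": (recall, 0.95)}

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 1, 1]

results = {name: (fn(y_true, y_pred), threshold)
           for name, (fn, threshold) in declared_metrics.items()}
conforms = all(value >= threshold for value, threshold in results.values())
print(results, "meets declared thresholds:", conforms)
```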
Currently, CEN-CENELEC JTC 21, the joint committee working on AI standards, has allocated seven work items to this deliverable. These include several standards that are being co-developed with ISO/IEC, as well as several “home-grown” standards. These standards are expected to be finalised in late 2025 or early 2026.
What are the key focuses of robustness specifications for AI systems?
Robustness specifications for AI systems are a key focus of the EU AI Act, aiming to ensure that these systems are resistant to various types of risks and vulnerabilities. Article 15(1) and (4) of the AI Act dictate the requirements that standardization efforts must address to enhance the resilience of AI.
CEN-CENELEC JTC 21, tasked with developing harmonized standards, has assigned four work items to address these robustness specifications:
- International Standards:
- ISO/IEC 24029-2, -3, -5 – AI – Assessment of the Robustness of Neural Networks (partially preliminary with no forecasted voting date)
- ISO/IEC/TR 24029-1 – AI – Assessment of the Robustness of Neural Networks (Published)
- “Home-grown” Standards:
- AI – Concepts, Measures and Requirements for Managing Bias in AI Systems (WI: JT021036) (under drafting; forecasted voting: June 3, 2024)
- Artificial Intelligence Trustworthiness Framework (WI: JT021008)
To fully align with regulatory demands, guidance is needed to complement the ISO/IEC 24029 series. The goal is to set practical metrics, thresholds, and methods tailored to specific use cases. Therefore, the additional standards are extending robustness considerations beyond testing and measurement to include design principles, particularly for systems that evolve post-deployment.
Here are the core insights behind those specifications:
- Beyond Testing: The standards must evolve beyond mere testing and measurement to embed robustness considerations directly into design principles.
- Design Principles and Evolving Systems: The standards should account for systems that continue to evolve after deployment.
- Practical Metrics and Thresholds: It is important to set practical metrics, thresholds, and methods tailored to specific use cases.
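As one purely illustrative example of a robustness measurement of this kind (not a method specified in the ISO/IEC 24029 series), the sketch below estimates how often a toy model's prediction remains stable under small random input perturbations and compares the result against an assumed threshold.

```python
# Minimal robustness-testing sketch: check how often a model's prediction stays
# stable under small random input perturbations, against an assumed threshold.
import random

def model(features):
    """Stand-in model: a fixed linear rule used only for illustration."""
    return 1 if 0.8 * features[0] - 0.5 * features[1] > 0.1 else 0

def stability_rate(features, noise=0.05, trials=200):
    baseline = model(features)
    stable = 0
    for _ in range(trials):
        perturbed = [x + random.uniform(-noise, noise) for x in features]
        stable += model(perturbed) == baseline
    return stable / trials

ROBUSTNESS_THRESHOLD = 0.95  # assumed; would be set per use case and identified risks
rate = stability_rate([0.6, 0.3])
print(rate, rate >= ROBUSTNESS_THRESHOLD)
```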
What is the purpose of cybersecurity specifications for AI systems?
AI systems, especially those classified as high-risk by the EU AI Act, are increasingly vulnerable to sophisticated cyberattacks that can compromise their integrity, reliability, and safety. Recognizing this growing threat, the EU AI Act mandates cybersecurity specifications to safeguard these systems from malicious interference.
The purpose of these specifications, according to ongoing standardization efforts, is multifaceted:
Key Objectives
- Defining security requirements: Establish clear, objective standards for implementing a robust security risk assessment and mitigation plan specifically tailored for high-risk AI systems.
- Addressing AI-specific vulnerabilities: The standards aim to proactively capture aspects related to AI-specific threats like data poisoning, model poisoning, model evasion, and confidentiality attacks – areas often overlooked by traditional cybersecurity frameworks.
- Defining technical & organizational approaches: The specifications will encompass both technical measures and organizational procedures necessary to establish a resilient security posture for AI systems.
- Establishing verification methods: Defining specific security objectives to achieve and verify through testing is crucial, especially at the system level, when mitigation measures for component-level vulnerabilities may not be fully effective.
As the standardization landscape evolves, ongoing work is beginning to address AI-specific threats, mostly in the form of guidance. However, as new threats and countermeasures emerge constantly, a key goal of new standardization on AI cybersecurity will be to define essential requirements for implementing a security risk assessment and mitigation plan for high-risk AI systems.
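To illustrate one of the AI-specific threats named above, the sketch below implements a simple, hypothetical screen against data poisoning: training records whose label disagrees with all of their nearest neighbours are flagged for review. The heuristic, distance measure, and parameters are assumptions for illustration, not requirements from any standard.

```python
# Illustrative data-poisoning screen: flag training records whose label differs
# from the majority label of their nearest neighbours (assumed heuristic).
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def suspicious_records(data, k=3):
    """Return indices whose label differs from all k nearest neighbours."""
    flagged = []
    for i, (xi, yi) in enumerate(data):
        neighbours = sorted(
            (euclidean(xi, xj), yj) for j, (xj, yj) in enumerate(data) if j != i
        )[:k]
        if all(label != yi for _, label in neighbours):
            flagged.append(i)
    return flagged

training_data = [
    ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.15, 0.15], 0),
    ([0.9, 0.8], 1), ([0.85, 0.9], 1),
    ([0.12, 0.18], 1),   # label inconsistent with its neighbourhood
]
print(suspicious_records(training_data))  # [5]
```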
Complying with these cybersecurity specifications is not merely about ticking a box; it’s about building trust and ensuring the responsible deployment of AI systems that can have a profound impact on individuals and society. Companies failing to meet these requirements risk significant fines (up to €35 million or 7% of global turnover) and restricted access to the EU market.
What is the main intent of quality management system standards?
Quality management system standards, particularly in the context of the EU AI Act, aim to ensure providers of AI systems adhere to specific quality benchmarks. These standards aren’t just about general quality; they’re specifically designed to address the risks associated with AI, ensuring that high-risk systems are reliable, robust, and safe for users.
Here’s what the intent boils down to:
- Regulatory Compliance: The standards are designed to operationalize the high-risk legal requirements of the AI Act. Meeting these standards offers a presumption of conformity, simplifying compliance and ideally reducing the administrative load for AI providers.
- Risk Mitigation: The standards emphasize a product-centric approach to risk management with the goal of promoting individual rights protection.
- Market Access: Compliance streamlines the CE (conformité européenne) marking process, facilitating access to the European market.
- Setting a Level Playing Field: The standards support the establishment of equal conditions of competition and a level playing field for the technical design and development of AI systems.
The most relevant standard in this area is ISO/IEC 42001, complemented by a “home-grown” standard, AI – Quality Management System for Regulatory Purposes. This latter standard builds on multiple ISO/IEC standards and focuses on regulatory compliance and the specific risks addressed by the AI Act from a product-centric point of view.
How do conformity assessment standards support the AI Act?
Conformity assessment standards are crucial for navigating the AI Act’s complex requirements. These standards, primarily under development by CEN-CENELEC JTC 21, aim to specify how AI systems can be evaluated to ensure they meet the Act’s obligations. This includes defining requirements for bodies performing audits and certifications of AI management systems.
The role of ISO/IEC 42006 and 29119-11
The existing ISO/IEC 42006 (Requirements on Bodies Performing Audit and Certification of AI Management Systems) and ISO/IEC 29119-11 (Testing of AI Systems) serve as starting points. However, new standards are needed to address AI-specific vulnerabilities and conformity assessment.
Areas with standards in development
Key ongoing efforts include:
- Competence Requirements: Developing standards for the competence requirements of AI system auditors and professionals.
- AI Conformity Assessment Framework: Creating a framework specifically for assessing AI conformity. (Work Item: JT021038)
- Addressing AI-Specific Vulnerabilities: Assessing the technical solutions to address AI-specific vulnerabilities.
The importance of harmonized standards
Once these standards are harmonized (published in the Official Journal of the European Union), they create a “presumption of conformity.” This means that AI systems adhering to these standards are automatically assumed to comply with the relevant AI Act requirements, simplifying compliance and reducing the administrative burden for AI providers.
Challenges in standards implementation and assessment
However, several challenges remain:
- Alignment: These standards leverage existing work, such as the ISO CASCO toolbox, which provides a basis for generic principles and guidance on conformity assessment. However, they will also need to define how these conformity assessment frameworks should be adapted and applied specifically to the unique characteristics of high-risk AI systems.
- Implementation Lag: A major concern is the short timeframe companies will have to implement these standards after they are finalized and published (likely early 2026). This could leave as little as 6-8 months before the AI Act’s high-risk requirements become applicable.
- Coordination: There needs to be close coordination between parallel standardization work items to ensure the resulting standards are complementary and fit for purpose in supporting the implementation of the regulatory framework.
Ultimately, the success of the AI Act hinges on the development and effective implementation of these conformity assessment standards. Stakeholders must actively engage to ensure that the standards are practical, comprehensive, and aligned with regulatory objectives.
What are the common cross-sectoral implementation challenges of AI standards?
Across various industries, complying with emerging AI standards presents several shared difficulties. Growth-stage AI startups in high-risk sectors have already started aligning with expected standards, while younger ventures are struggling due to unclear timelines and a lack of specific guidance.
A primary hurdle is the interpretative ambiguity in these standards. Defining compliance can be murky, especially when systems integrate various components or rely on third-party models. Divergent secrecy laws between EU member states add another layer of complexity, creating operational conflicts for sectors like legal tech. Plus, even veterans of highly regulated sectors like healthcare struggle to reconcile AI Act requirements with existing regulations, particularly when blending AI techniques like image processing and language analysis.
Despite the expectation that the technical standards themselves will be free, AI providers are wary of substantial indirect costs. Compliance may demand investments of approximately €100,000 to €300,000 annually, plus dedicated time from key management figures. Even providers not deemed “high risk” might feel pressured to comply voluntarily to limit their liability, potentially incurring excessive expenses.
Analysis from the interviewed AI providers paints a nuanced picture of how the AI Act impacts innovation, distinguishing between well-established and emerging companies. Companies in already regulated sectors, such as healthcare, find adaptation easier thanks to prior experience with similar requirements, whereas those without regulatory experience find implementing compliance far more difficult.
Regarding the integration of legal frameworks, many report that discrepancies between frameworks within the EU already delay market entry, making the US a more attractive market to launch in. Overlaps with established frameworks such as the GDPR create further friction, compounded by conflicting national and local laws that vary across member states.
Most companies consider the August 2026 deadline impractical, estimating that meeting even a single technical standard (e.g., ISO/IEC 27001) takes about a year, even with expert support. A phased introduction, especially for SMEs, would allow more realistic adaptation periods.
Asymmetric Participation in the Standards-Setting Process
Participation in JTC 21 meetings, working groups, and member states’ mirror committees was very low relative to the number of providers. Most small and medium-sized providers admitted being “rarely” or “irregularly” involved in standardization activities, citing limited knowledge and resources. Multiple providers say standardization favors large corporations and that this imbalance creates disproportionate barriers for smaller companies.
How does the complexity of AI compliance affect the market?
The complexity of AI compliance, particularly under the EU AI Act, is set to significantly reshape the market landscape. Emerging challenges, arising from ambiguities in compliance boundaries, divergent secrecy requirements, and complex classification rules, necessitate a more granular understanding of how these factors impact different players.
Core Compliance Challenges
Here’s a breakdown of the primary compliance challenges identified by AI providers:
- Interpretative Ambiguities: Defining compliance boundaries is complex, especially when AI systems integrate multiple components or third-party models.
- Sectoral Divergences: Divergent secrecy requirements across EU member states create operational conflicts. Reconciling professional secrecy laws with regulatory logging requirements is proving difficult.
- Classification Uncertainty: Uncertainty about how risk classification applies in different cases, highlighting concerns regarding dual-use technologies.
- Integration Complexities: Aligning the AI Act’s requirements with existing regulations can be difficult when systems combine multiple AI modalities, such as image processing and language models.
- Enforcement Uncertainty: Ambiguity regarding the specific evidence required to demonstrate compliance, particularly concerning bias testing and model robustness, is causing disquiet.
Impact on Companies
Interview data suggests a concerning perspective on how the AI Act is affecting innovation, particularly highlighting a divide between established and emerging companies.
- Cost Burdens: Substantial costs related to AI Act requirements are anticipated, leading companies to worry about indirect costs.
- Personnel Requirements: AI Act compliance workflows can demand significant dedicated personnel resources. This translates into ongoing operational costs beyond initial compliance investments.
- Innovation Barriers: SMEs fear compliance requirements disproportionately affect their scaling abilities.
- Competitive Pressures: Companies operating under the EU’s regulatory burdens may lose ground to jurisdictions like the US, where lower regulatory burdens enable faster innovation cycles and more flexibility.
On the other hand, healthcare and legal technology providers seem better positioned to adapt due to their experience with existing frameworks, and see regulation as potentially beneficial for market trust.
It is also worth noting that fragmentation across jurisdictions leads to delays in EU market entries.
Impact on Standardization
There is a pattern of asymmetric participation in the standards development process. The standardization process may be favoring larger corporations.
Overall, these factors have the potential to make standardization very difficult for the average player.
What resources are required for achieving AI compliance?
Achieving AI compliance, particularly under the EU AI Act, demands a multifaceted approach involving significant resources and strategic planning. The challenge isn’t merely about understanding the regulation; it’s about operationalizing abstract requirements into tangible practices.
Personnel and Expertise
A key area is the allocation of skilled personnel. Compliance officers are pivotal, but increasingly, organizations require AI-specific legal-tech expertise and a deep understanding of AI model behaviour. Technical staff need to adapt AI risk management tools, quality systems, and robust post-market monitoring. Organizations interviewed report allocating staff specifically for AI Act compliance. The level of in-house expertise, or the resources spent on external support, needs to be factored into the overall cost of compliance.
The cost of expertise is further compounded by the current state of AI standardization. This is evident when organizations grapple with defining compliance boundaries if integrating multiple components or relying on third-party models.
Financial Considerations
Implementing and maintaining AI systems compliant with the AI Act will create a financial burden on organizations. Costs include dedicated compliance personnel, executive time devoted to compliance matters, and certification costs. Startups and SMEs with fewer than 20 staff will be affected most drastically: providers report anticipated annual compliance costs of around €100,000 for dedicated compliance personnel, plus 10-20% of founders’ or management’s time spent on standards-related matters.
Even companies prepared for implementing and maintaining costs, such as medical-tech companies, estimate certification costs may exceed €200,000, and legal tech companies report an estimated annual cost between €200,000-300,000.
Navigating Ambiguity and Uncertainty
A final key resource is the ability to invest in regulatory and legal support to proactively navigate ambiguity. Divergent interpretations of professional secrecy versus logging requirements across national and EU laws are among the areas to factor into planning.
Companies should also expect costs for developing and implementing clear verification protocols to prove compliance. Demonstrating compliance requires evidence documenting work on bias testing and model robustness.
What is the impact of the AI Act on market reputation and innovation?
The AI Act’s impact on market reputation and innovation is a complex issue, particularly for small and medium-sized enterprises (SMEs). While established companies in sectors like healthcare and legal tech see regulation as potentially boosting market trust, emerging AI companies express concerns about innovation barriers tied to these new standards.
Innovation Barriers for SMEs
- Scaling Ability: The majority of SMEs classified as high-risk under the AI Act worry that compliance disproportionately affects their ability to scale.
- Risk Classification Uncertainty: Companies face uncertainty about risk classification when transitioning from design support to operational systems.
- Competitive Disadvantage: There’s concern about losing ground to more flexible jurisdictions like the US, where lower regulatory burdens enable faster innovation cycles.
Market Trust and Sector Preparedness
Companies in already regulated sectors, such as healthcare, seem better prepared to adapt due to their experience with existing frameworks like the Medical Device Regulation (MDR). In contrast, companies in previously unregulated sectors face steeper adaptation challenges, struggling to interpret and implement AI standards without prior regulatory experience.
Experimentation Limitations
Early-stage companies are particularly frustrated by opaque and bureaucratic conditions that limit experimentation.
Asymmetric Participation in Standards-Setting
The interview data reveal a pattern of limited participation in the standards development process of JTC 21, its working groups, and mirror committees in member states. Among the interviewed providers, only a small fraction reports active engagement in AI standardization efforts or formal consultations, with participation levels remarkably low among smaller companies.
- Resource Constraints: Most small and medium-sized providers acknowledge being ‘rarely’ or ‘irregularly’ involved in standardization activities, citing resource constraints and knowledge gaps as primary barriers.
- Big Player Favoritism: Several interviewees characterize the standardization process as favoring larger corporations, describing discussions as “one-sided”. Multiple providers express concern that this imbalance could lead to standards that create disproportionate barriers for smaller market participants.
- Lack of Support Mechanisms: While some companies indicate interest in future participation, they emphasize the need for structured support mechanisms.
How does participation in standards-setting influence competition?
Participation in standards-setting can be a double-edged sword for AI firms, profoundly impacting the competitive landscape, according to a recent report. While seemingly technical, these standards have far-reaching implications, especially for startups and SMEs navigating the EU’s AI Act.
Strategic Advantages of Participation
Active involvement in standardization offers distinct strategic advantages:
- Knowledge Transfer: Being part of the standards development process allows firms to intimately understand the technical nuances of compliance.
- Relationship Building: Participation fosters crucial relationships, extending beyond simple lobbying, and facilitates smoother technical compliance down the line.
However, the playing field isn’t level.
The Asymmetry of Influence
Standardization committees are often dominated by larger enterprises, including major US tech and consulting firms. This creates a significant disparity, leaving SMEs, startups, civil society, and academia underrepresented. The report highlights that this imbalance leads to:
- Competitive Advantages for Large Companies: Larger firms have the resources to shape standards to their advantage, gaining both knowledge and implementation advantages.
- Concerns About EU Values: The substantial influence of US companies raises concerns about the adequate representation of EU values, especially regarding fundamental rights protection. Standards necessitate a value-oriented balancing of these rights, potentially reviewed by the European Court of Justice.
- Exclusion of Crucial Knowledge: Limited participation from smaller entities means essential knowledge is excluded from the standards that will define market access, potentially compromising comprehensive safety.
The issue boils down to resources. Effective participation requires substantial investment, making it difficult for smaller organizations to prioritize it alongside core operational activities. Industry associations often step in, but their ability to fully represent the diverse interests of all stakeholders is limited.
The Road Ahead
For fair competition, a more inclusive standardization process is crucial. This means incorporating diverse perspectives and expertise to ensure the resulting standards are both robust and equitable. Policy interventions are needed to level the playing field and prevent standards from becoming a barrier to entry for smaller AI innovators.
In what ways does regulatory fragmentation create compliance challenges?
Regulatory fragmentation poses significant compliance challenges for AI providers, particularly those operating across multiple jurisdictions. This stems from several key issues:
Divergence in Secrecy Requirements: Different EU member states have varying secrecy laws, creating operational conflicts for sectors like legal technology. This makes it difficult to establish consistent compliance practices across borders.
Classification Ambiguity: The AI Act’s scope can be unclear, especially for companies operating across multiple sectors. Dual-use technologies, serving both regulated and non-regulated purposes, create uncertainty about risk classification. The same applies to General Purpose AI (GPAI) models.
Overlapping Regulatory Frameworks: Companies face operational conflicts due to overlapping EU and national-level laws. Varying interpretations of similar requirements across member states complicate implementation, similar to previous experiences with PSD2.
Readiness of EU Regulatory Bodies: Concerns exist about the readiness of EU regulatory bodies to manage certifications consistently. Delays and inconsistencies in the certification process could disrupt market access, even for compliant AI systems. Fragmented interpretations and certification processes pose notable problems for startups lacking the resources to navigate them.
How do implementation timelines impact businesses?
The upcoming EU AI Act introduces harmonized AI standards, but concerns are rising about the feasibility of meeting the compliance deadlines, particularly for startups and SMEs. As standards development lags, with key deliverables expected by early 2026, a mere 6-8 months remain before compliance becomes mandatory in August 2026. This compressed timeframe, especially considering the potential volume of around 35 technical standards, raises serious questions about businesses’ ability to adapt.
Industry research suggests companies typically require at least 12 months for a single standard’s compliance, pointing to significant market access delays for newcomers venturing into the AI space. Moreover, larger organizations with prior experience in regulated sectors are better positioned, increasing the divide between large corporations and innovative startups. Startups and SMEs face disproportionate disadvantages and may lose competitive ground if they cannot respond proactively.
Short implementation timelines carry serious risks:
- Financial Penalties: Non-compliance could result in fines of up to €35 million or 7% of global turnover, posing a severe threat to smaller businesses.
- Market Access Restrictions: Compliance delays could limit access to the EU market, giving compliant firms an advantage.
- Reputational Damage: Negative media coverage can lead to a loss of customer trust and harm business relationships.
To address these timeline crunches, several measures are recommended:
- Legislative Postponement: The EU legislator needs to postpone the AI Act’s implementation deadlines to allow organizations sufficient time for standards-based compliance.
- Standards Publication: Rapidly publishing near-final standards could enable businesses to begin adapting well in advance of mandatory deadlines.
- Transparent Access: Creating a central online portal where businesses could monitor standards development and requirements would create transparency and encourage feedback.
- Service-Oriented Approach: The AI Office and national authorities need to engage in continuous, service-oriented dialogue with affected businesses.
What are the sector-specific implications of implementing horizontal standards?
Implementing the EU AI Act’s horizontal standards will have diverse implications depending upon a sector’s existing regulatory maturity and the nature of its AI applications.
Healthcare and MedTech
While the tension between privacy, accuracy, and care quality is often discussed, organizations with existing regulatory experience in the healthcare sector are finding practical solutions by leveraging their Medical Device Regulation (MDR) compliance. The sector stands to benefit from AI Act standards, which could enhance interoperability and ease the integration of AI tools into existing systems while keeping the focus on clinical accuracy and public trust.
Manufacturing
The manufacturing sector anticipates close alignment between the technical standards and established frameworks (ISO 9001, ISO 31000, and Industry 4.0 protocols). This integration offers opportunities to improve quality control and standardize data processing. However, challenges arise in maintaining comprehensive documentation for AI-driven decisions, especially in high-speed production contexts. Furthermore, extensive pre-deployment testing could slow the adoption of real-time automation solutions, particularly affecting smaller manufacturers that may struggle with compliance costs.
Legal Tech
Legal tech firms are concerned about the resource intensity of maintaining audit trails for AI outputs, particularly when handling sensitive client data. Aligning with adjacent regulations such as the GDPR necessitates technical updates and careful attention to data governance. At the same time, these firms see compliance as an opportunity to establish themselves as leaders in ethical AI practices and to boost client trust in regulated markets.
FinTech
The FinTech sector worries that overly prescriptive requirements may favor established institutions over startups; interviewees from this sector draw parallels to their experience with PSD2. While standardization may catalyze trust and clarity in areas like customer authentication, companies worry that complex compliance requirements could disproportionately burden smaller firms, as happened with previous financial-sector regulations.
Mobility (Automotive) and Defense
While these sectors may partially fall outside the AI Act’s scope, they will still face implications from the high-risk AI requirements derived from the harmonized standards. AI providers in the mobility sector see these standards as enhancing transparency and safety while imposing operational burdens, particularly for complex systems needing explainability and cybersecurity measures. The defense sector, while explicitly excluded for national security reasons, will experience indirect pressure through ecosystem impacts and dual-use considerations, especially regarding autonomous systems operating in high-stakes environments.
How do AI standards have spillover effects on different sectors?
The implementation of AI standards, driven largely by the EU AI Act, is causing ripples across diverse sectors, even those not directly within the regulation’s scope. While sectors like healthcare and finance grapple with direct compliance, others are feeling indirect pressure and anticipating long-range effects. Let’s break down how this is playing out.
Mobility & Automotive
The mobility sector presents a fascinating case. Interview data suggests companies view AI standards as a double-edged sword. On one hand, enhanced transparency and safety are appealing. On the other, substantial operational burdens are anticipated, especially concerning complex systems requiring advanced explainability and robust cybersecurity. A key finding is how widely the “high-risk” AI label applies. Mobility providers are realizing that seemingly routine processes, such as route planning, may fall under this classification due to their dynamic nature and reliance on multiple data points. This creates significant operational and compliance hurdles.
Defense
The defense sector, largely excluded from the AI Act due to national security concerns, is experiencing indirect pressure through ecosystem effects and “dual-use” considerations — technologies with both civilian and military applications. Although not directly regulated, defense companies are closely monitoring the AI Act because it affects open-source AI model availability and general AI standards. A surprising insight? The sector often adheres to strict safety standards comparable to civilian applications, a factor that may drive voluntary alignment with AI Act requirements.
Integrating high-risk AI standards, such as explainability, risk management, and transparency frameworks, could enhance safety and interoperability for defense AI systems, particularly autonomous systems operating in high-stakes environments like urban combat zones or disaster response. The interview data suggest that some companies are considering voluntarily adopting some of the high-risk AI requirements because they believe doing so will enable greater civilian-military collaboration and foster trust in AI-human collaboration systems.
Financial and Real-World Challenges
These cross-sector implications are leading to varying responses. Some mobility companies are considering alternative markets with lower regulatory burdens, citing financial and operational challenges. In contrast, defense companies see potential competitive advantages in adopting high-risk standards. The common thread? Both sectors acknowledge that aligning with these guidelines for transparency and interoperability is ultimately beneficial, despite the initial implementation hurdles.
Essentially, even sectors beyond the direct reach of the EU AI Act are facing the challenge of navigating stringent AI standards. As AI becomes more pervasive, these spillover effects are likely to expand, reshaping operations and potentially influencing market competition. This makes understanding and adapting to AI standards crucial for companies regardless of where they operate.
How can the implementation timeline for the AI Act be optimized?
The European Union’s ambitious goal of regulating AI through the EU AI Act faces challenges due to tight implementation timelines, complex stakeholder dynamics, and implementation costs. Here’s how the timeline can be optimized, according to industry insights:
Adjusting Implementation Deadlines
The current pace of AI standards development raises concerns. The gap between the expected standards publication (early 2026) and the compliance deadline (August 2026) provides a mere 6-8 months for implementation. Organizations report needing at least 12 months per standard. Options include:
- Legislative Action: The EU legislator should consider postponing implementation deadlines to align with realistic adoption timelines.
- Reduce Standard Scope: Reduce the number and complexity of the roughly 35 technical standards being developed.
- Early Publication: Publish near-final standards early, but acknowledge these may change.
- Transparency Portal: Create an online platform for free access to draft standards and provide a feedback system, especially for SMEs.
- Ongoing Dialogue: The AI Office and national authorities should engage in continuous dialogue with businesses during implementation, similar to financial supervisory authorities.
Lowering Participation Barriers
Stakeholder representation in standardization committees is uneven, with large enterprises often dominating. According to the Court of Justice of the European Union (CJEU) Malamud ruling, harmonized standards must be accessible to EU citizens free of charge. Additional strategies for improved engagement are:
- Expert Networks: The European Commission or CEN-CENELEC should build industry-specific expert networks to guide sector-specific compliance.
- Financial Support: Establish substantial funding mechanisms at EU and national levels to subsidize SME participation, excluding large corporations that already have sufficient resources. Base funding on the actual personnel costs of committee engagement.
- Mentorship Programs: Implement mentorship programs coupling experts with startup and SME representatives.
- Committee Accessibility: Transform AI standardization committees into transparent bodies that provide information and streamline entry processes. Experienced members can then guide newcomers.
Practical Aid for Implementation
Provide pragmatic guidance tools for AI Act compliance, with the EU AI Office and national authorities focusing on SMEs:
- Pragmatic Guidance: Establish regular interpretative guidance, concrete implementation tips, and direct support through dedicated contact persons, specifically designed for SMEs.
- Specific Guidance Documentation: Supply sector-specific guidance documents, real-world examples, and step-by-step implementation guides, along with consistent industry updates that keep pace with AI developments.
- Operational Communication: Implement a practical communication channel between regulators and key industries so that regulators understand real challenges and needs and support addresses actual market demand.
- Evaluation Frameworks: Create evaluation frameworks to measure progress, ensure accountability, and track improvements through quantifiable metrics.
Additional Considerations
Beyond the above, regulatory sandboxes as stipulated in Art. 57 AI Act could facilitate communication and collaboration between the high-risk AI developing community and the EU regulators.
Addressing the time constraints, standardization imbalances, and high AI compliance costs outlined above would optimize the policy adjustments needed for EU AI Act implementation.
How can participation in developing AI standards be improved?
The EU’s ambitious AI Act hinges on technical standards, making their effective development crucial. However, participation in standardization committees is skewed towards larger enterprises, often with significant representation from US tech giants. This imbalance leaves SMEs, startups, civil society, and academia underrepresented, potentially leading to standards that don’t address their specific needs and challenges.
To level the playing field and foster broader participation, several actionable steps can be taken:
Financial Support
Establish robust EU and national funding mechanisms to subsidize SME and startup participation in these committees. This support should cover the actual costs of dedicating personnel to standardization work, ensuring they can afford to contribute meaningfully.
Mentorship Programs
Implement mentorship programs connecting experienced standardization experts with SME and startup representatives. This would provide invaluable guidance and support, helping them navigate the complex process.
Streamline Committee Access
Overhaul the accessibility of standardization committees by creating a centralized, user-friendly platform with transparent information and simplified entry processes. A “standardization guide” system, where experienced members assist newcomers, would further ease the onboarding process and promote more active collaboration between different stakeholders.
It is crucial that industry, and especially companies from outside the Safe AI community, actively participate in standards development. Standardization bodies should foster active collaboration between large and smaller companies to promote many-faceted, collaborative standardization.
By addressing these practical barriers, we can ensure that the development of AI standards is more inclusive, balanced, and ultimately more effective in fostering responsible AI innovation across the European landscape.
How can practical support for AI implementation be enhanced?
For AI providers aiming to navigate the EU AI Act’s standardization demands, a multi-pronged strategy focusing on accessible guidance, financial assistance, and collaborative approaches is essential. The following recommendations address key challenges identified through interviews with organizations across various sectors.
Adjusting Implementation Deadlines and Scope
The present timeline for AI Act implementation poses a concrete challenge, and technical expertise still needs to be integrated into standards committees. The Act’s compliance deadlines should be delayed by at least 12 months so that more companies can achieve compliance. Further measures include:
- Reduce the sheer volume of technical standards envisioned as deliverables.
- Provide early access to near-final drafts of standards, recognizing the risk of subsequent changes.
- Establish a centralized online portal offering free access to draft standards and a low-threshold feedback system.
- Foster ongoing dialogue between the AI Office, national authorities, and affected businesses, emulating service-oriented regulatory practices.
Lowering Barriers to Participation in Standardization
Given SMEs’ and startups’ lack of resources, their participation in standardization needs to be financially supported, for instance by the European Commission. It should also be made clear to startups what they stand to gain from participating in such committees: direct influence in shaping the standards and regulations governing their technologies.
- Establish funding mechanisms at both EU and national levels to subsidize SME participation in standardization committees.
- Implement mentorship programs pairing experienced standardization experts with startup and SME representatives to provide guidance and support.
- Overhaul the accessibility of standardization committees through a centralized, user-friendly platform, transparent processes, and clear priorities.
Providing Practical Aid for Implementation
The following pragmatic guidance tools and action points would aid AI Act compliance, with a focus on SMEs. This includes regular interpretative guidance, concrete implementation tips, and direct support through dedicated contact persons who maintain ongoing relationships with the AI provider community.
- The European Commission and EU member states should create support programs that enable pre-revenue startups to pursue AI Act compliance.
- The European Commission and the AI Office could provide practical, industry-specific guidelines to help parties determine if they fall under high-risk AI Act categories.
- For efficiency, threshold-based requirements should be given stronger consideration in standardization. Easier digital access to standards is also crucial.
Structured Integration of SMEs in Implementation
The advisory forum and scientific panel, provided for in Art. 67 and 68 AI Act, must include startup and SME representation to ensure their challenges are considered in implementation guidance. There should also be direct consultation channels between small businesses and regulatory bodies, beyond the formal advisory structures.
Standards Alignment
Standardization bodies should align existing industry standards with the AI Act to streamline compliance. European and international AI standards should likewise be aligned to reduce duplicate compliance efforts for organizations.
- Industries should act early to ensure they are prepared for future requirements.
- The international, European, and national standardization bodies must increase their levels of cooperation with one another.
How can the structured integration of SMEs in AI implementation be improved?
Small and medium-sized enterprises (SMEs) developing and deploying AI systems face unique hurdles in complying with the EU AI Act. The challenges stem from tight implementation timelines, complex stakeholder dynamics, and significant compliance costs. Therefore, a structured approach to integrating SMEs is crucial for fostering innovation and ensuring a level playing field.
Key Challenges for SMEs:
- Temporal: The gap between the expected publication of harmonized standards (early 2026) and the compliance deadline (August 2026) leaves only a narrow window for implementation, insufficient for most SMEs.
- Structural: Limited participation in the standards development process (CEN-CENELEC JTC 21) hinders SMEs from shaping regulations that directly impact their operations.
- Operational: Significant compliance costs (estimated at €100,000-300,000 annually) and regulatory complexity pose a disproportionate burden, particularly for smaller entities.
Policy Recommendations for Enhanced SME Integration:
To address these challenges, several policy recommendations can be implemented:
Adjust Implementation Deadlines
The EU legislator should consider extending the AI Act implementation timeline to mitigate the bottleneck caused by delays in harmonized standards development. This would restore the balance and allow companies to choose their optimal compliance approach, whether through harmonized standards, common specifications, or expert opinions.
Lower Barriers to Participation
Access to standardization committees must be better supported financially for SMEs and startups, enabling specialized support for industry requirements and effective handling of sector-specific compliance challenges.
The European Commission or CEN-CENELEC should build industry-specific expert networks at the EU level that can provide targeted guidance on sector-specific compliance challenges.
Overhaul the accessibility of standardization committees via a centralized, user-friendly platform, transparent processes, and clear priorities.
Practical Aid for Implementation
Establish pragmatic guidance tools for AI Act compliance, with particular focus on SMEs. This would align with the AI Office’s obligation from Art. 62 AI Act, specifically covering standardized templates, a single information platform, and communication campaigns.
Create financial support programs designed for pre-revenue startups pursuing AI Act compliance. These programs would provide direct funding to cover compliance-related costs before startups have established revenue streams.
Regulators, including the AI Office and national authorities, should act as service providers, systematically monitoring and analyzing how organizations implement technical standards.
Structured Integration of SMEs
Rapidly establish and staff the advisory forum and scientific panel as outlined in Art. 67 and 68 AI Act. These bodies must include startup and SME representation as well as sectoral industry knowledge.
Build direct consultation channels between AI startups/SMEs and regulatory bodies, supported by clear EU-level contact points and extending beyond formal advisory structures.
Standards Alignment
Standardization bodies (especially ISO/IEC and CEN-CENELEC) should align industry-specific vertical standards with Article 40 of the AI Act for high-risk AI systems. This approach aligns with findings that some sectors, like healthcare and manufacturing, are already leveraging existing regulatory experience to address AI challenges.
Compliance burdens can be reduced by systematically leveraging existing standards for consistency and interoperability, which also facilitates entry into international markets.
When developing and implementing harmonized standards, it is crucial to avoid creating a negative conformity presumption, which can significantly increase the compliance burden on providers.
By implementing these recommendations, policymakers can ensure a structured integration of SMEs in AI implementation, fostering innovation and promoting ethical AI development within the EU.
How should standards be aligned to facilitate compliance?
To streamline compliance and reduce the burden on AI providers, especially SMEs and startups, it’s crucial for standardization bodies to align industry-specific vertical standards with the horizontal requirements of the EU AI Act (Art. 40). As Article 103 et seq. likely mandates this alignment, early action can prepare industries for future obligations.
This approach is supported by the observation that sectors like healthcare and manufacturing are already leveraging their existing regulatory expertise to navigate AI-related challenges. Aligning European and international AI standards as closely as possible will further streamline compliance efforts.
Key Aspects of Standards Alignment:
European and international standardization bodies must cooperate more closely, while ensuring that such cooperation respects European values. When developing and implementing harmonized standards, a negative conformity presumption – where failure to fully comply with technical standards automatically implies non-compliance with the AI Act – must be carefully avoided. While adherence to standards can simplify the demonstration of compliance with AI Act requirements, a degree of flexibility must remain to account for technological realities and to avoid standards-based barriers to entry.
Here are some best practices:
- Leverage Existing Standards: Systematically use existing standards to promote consistency and interoperability, facilitating access to broader international markets.
- Harmonization is Key: Harmonized approaches matter even for industries operating on the periphery of the AI Act’s direct scope, such as defense and automotive.
- Adapt Existing Frameworks: Instead of creating entirely new regulations or standards, modify existing frameworks at both national and EU levels to incorporate AI compliance requirements.
By promoting strategic, targeted standardization, policymakers can avoid redundant efforts, maintain consistency across sectors, and promote a more streamlined and accessible compliance landscape for all stakeholders.
Ultimately, success hinges on navigating a complex interplay of technical specifications, stakeholder dynamics, and financial realities. The implementation timeline demands careful recalibration to ensure sufficient time for all actors to adapt. Wider participation is essential, particularly for SMEs and startups, to make the development of standards more inclusive and balanced. Through targeted guidance, practical support, and a more harmonized standardization effort, the EU AI Act can achieve its goals of fostering innovation while safeguarding fundamental rights and building a responsible AI ecosystem.