EU Commission’s Contingency Plans for AI Standards Delays

The European Commission is prepared to step in if the technical standards companies need to demonstrate compliance with the EU AI Act are delayed. A Commission spokesperson indicated that alternative solutions could be provided should the standards not be finalized on time.

Current Status of AI Standards

According to recent reports, CEN-CENELEC, the standardization bodies responsible for drafting the standards under the AI Act, are currently behind schedule. The standards were originally expected to be ready by August 2025, but delivery has now been pushed back to 2026.

This delay may pose challenges for companies working to meet their compliance obligations. Commission spokesperson Thomas Regnier said the Commission is ready to consider temporary measures to help providers remain compliant despite the setback.

Implications of Delayed Standards

The AI Act aims to regulate high-risk applications of artificial intelligence, ensuring that products and services are safe and trustworthy. The standards themselves, however, are not mandatory: providers can still develop high-risk AI systems in their absence.

Nonetheless, having the standards in place is considered crucial. Regnier emphasized that they will make compliance significantly easier for providers to achieve.

Next Steps in the Standards Development Process

The first drafts of the standards are expected to be released later this year. These drafts will be edited, assessed by the Commission, and put through consultations and votes before they can be finalized.

The AI Act entered into force in August 2024, with full implementation anticipated by 2027. The Commission aims to ensure that both providers and conformity assessment bodies are adequately prepared before the legal requirements take effect, particularly those applying from August 2026.

Conclusion

In summary, the European Commission’s readiness to provide alternative solutions underscores the importance of timely standardization for AI technologies. As the landscape of AI regulation continues to evolve, the emphasis remains on ensuring safety, trustworthiness, and compliance in high-risk AI applications.
