Navigating the Complexities of the EU AI Act

The EU AI Act: Current Status and Future Challenges

The EU AI Act is the world’s first major regulation dedicated solely to artificial intelligence. Its primary objective is to ensure that AI systems used in Europe are safe, fair, and uphold individual rights. As implementation progresses, many organizations, especially startups and small businesses, are grappling with where the rules stand today and what they must do to prepare for compliance.

Implementation Timeline

According to the European Commission, the AI Act officially came into force on 1 August 2024, with a phased rollout. Key dates include:

  • 2 February 2025: Initial provisions come into effect, including bans on certain unacceptable AI practices and obligations to promote AI literacy.
  • 2 August 2025: Governance framework and rules for general-purpose AI models are implemented.
  • 2 August 2026: Major requirements for high-risk AI systems take effect.
  • 2 August 2027: Extended deadline for high-risk AI systems embedded in regulated products, such as medical devices or vehicles.

A recent report sheds light on how these regulations are being implemented, with a particular focus on the formulation of technical standards.

Understanding Technical Standards

Technical standards serve as essential guidelines that spell out how companies can comply with the AI Act. Adhering to harmonized standards gives organizations a presumption of conformity with the Act, which is crucial for product development, legal certainty, and market access within the EU.

Current Developments

The EU has tasked a panel of experts under the CEN-CENELEC JTC 21 committee with drafting approximately 35 technical standards supporting the AI Act. These standards will address various aspects, such as:

  • Risk management
  • Data quality and bias (illustrated in the sketch after this list)
  • Transparency for users
  • Human oversight
  • Cybersecurity
  • Accuracy

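To make the “data quality and bias” item concrete, the sketch below shows the kind of check such a standard might formalize: measuring whether a model’s positive predictions are distributed evenly across demographic groups. It is a minimal Python illustration; the metric, the threshold, and the group labels are assumptions for this example, not requirements taken from any published standard.

    # Minimal "data quality and bias" check: compare positive-prediction
    # rates across demographic groups. Metric, threshold, and labels are
    # illustrative assumptions, not drawn from any published standard.

    def selection_rate(predictions, groups, group):
        """Share of positive predictions for one demographic group."""
        members = [p for p, g in zip(predictions, groups) if g == group]
        return sum(members) / len(members) if members else 0.0

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rates between groups."""
        rates = [selection_rate(predictions, groups, g) for g in set(groups)]
        return max(rates) - min(rates)

    # Toy validation data: model outputs (1 = approve) and a protected attribute.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    gap = demographic_parity_gap(preds, groups)
    print(f"Demographic parity gap: {gap:.2f}")  # 0.50

    # Hypothetical acceptance threshold; a real standard may define
    # different metrics and limits.
    assert gap <= 0.5, "Bias check failed: review training data and features"

In practice, a harmonized standard would specify which metrics apply, how groups are defined, and what evidence must be retained.
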
Currently, many of these standards are still under development. While they were initially expected to be ready by April 2025, delays have shifted this deadline to August 2025. After finalization, the standards will need to be reviewed and published, likely by early 2026, leaving companies limited time to implement them before the AI Act applies to high-risk systems in August 2026.

Challenges Ahead

The report identifies several significant challenges that companies must navigate to ensure compliance:

  • Tight timelines: Companies have only six to eight months to understand, implement, and validate numerous new technical standards, a daunting task for smaller teams lacking dedicated legal or compliance resources.
  • Cost implications: Acquiring access to essential standards can cost thousands of euros, a substantial burden for startups and small and medium-sized enterprises (SMEs).
  • Access to standards: Historically, many harmonized standards were not publicly accessible, even though compliance was expected. A recent ruling from the EU Court of Justice addressed this by mandating that such standards must be freely available, though it has faced resistance from international standardization bodies.
  • Representation in the standardization process: The process has largely been dominated by larger tech and consulting firms, often based outside the EU, leaving smaller European entities and civil society groups at a disadvantage.

Recommendations for Improvement

The report suggests several strategies to enhance the standardization process:

  • Allow companies more time to comply with the regulations.
  • Ensure standards are freely available and comprehensible.
  • Provide financial and technical assistance, particularly to startups and SMEs.
  • Encourage diverse participation in the drafting of standards.
  • Develop digital tools, such as “smart standards”, to facilitate compliance (see the sketch after this list).
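
To illustrate the “smart standards” idea, the sketch below encodes a few requirements as machine-readable rules that a tool can evaluate automatically against a system profile. The rule format and field names are assumptions for illustration, and the clause IDs merely echo AI Act article numbers rather than quoting any actual standard.

    # Illustrative "smart standard": requirements as machine-readable rules
    # a tool can evaluate automatically. Rule format, field names, and
    # clause IDs are assumptions for illustration only.

    from typing import Callable

    # Each rule: (clause id, description, predicate over a system profile dict).
    RULES: list[tuple[str, str, Callable[[dict], bool]]] = [
        ("9.2", "Risk management process documented",
         lambda s: bool(s.get("risk_management_doc"))),
        ("10.3", "Training data provenance recorded",
         lambda s: bool(s.get("data_provenance"))),
        ("15.1", "Accuracy reported on a held-out test set",
         lambda s: s.get("test_accuracy") is not None),
    ]

    def evaluate(system: dict) -> list[str]:
        """Return the clause IDs the system profile does not yet satisfy."""
        return [clause for clause, _desc, check in RULES if not check(system)]

    profile = {
        "risk_management_doc": "docs/risk.md",
        "data_provenance": None,  # gap: provenance not yet recorded
        "test_accuracy": 0.94,
    }
    print(evaluate(profile))  # ['10.3']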

Conclusion

For organizations developing or deploying AI systems within the EU, it is essential to ascertain whether their use cases fall under the high-risk category as delineated in the AI Act. Staying informed about the evolving standardization process is crucial, as these technical standards will form the foundation for future compliance.

Now is an opportune moment for companies to reflect on their internal processes: How transparent are your models? What kind of data governance do you have in place? Are your systems tested for bias, accuracy, and cybersecurity?
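
The sketch below turns those questions into a simple, reviewable checklist. It is an illustrative Python example; the fields and checks are assumptions, not obligations copied from the Act or from any harmonized standard.

    # Illustrative self-assessment record for the questions above. Fields
    # and checks are assumptions, not text from the Act or any standard.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AISystemRecord:
        name: str
        model_documentation_url: Optional[str]  # transparency: is the model documented?
        data_governance_policy: Optional[str]   # e.g. link to a data-lineage policy
        bias_tested: bool
        accuracy_tested: bool
        cybersecurity_tested: bool

    def compliance_gaps(record: AISystemRecord) -> list[str]:
        """List open items an internal review should resolve."""
        gaps = []
        if not record.model_documentation_url:
            gaps.append("no model documentation")
        if not record.data_governance_policy:
            gaps.append("no data governance policy")
        for flag in ("bias_tested", "accuracy_tested", "cybersecurity_tested"):
            if not getattr(record, flag):
                gaps.append(flag.replace("_", " ") + " missing")
        return gaps

    record = AISystemRecord(
        name="loan-scoring-v2",
        model_documentation_url=None,
        data_governance_policy="https://example.com/data-policy",
        bias_tested=True,
        accuracy_tested=True,
        cybersecurity_tested=False,
    )
    print(compliance_gaps(record))
    # ['no model documentation', 'cybersecurity tested missing']

Recording the answers as structured data makes gaps visible early and gives compliance teams something concrete to track against the August 2026 deadline.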
