Essential Compliance Steps for High-Risk AI Systems under the EU AI Act

EU AI Act High-Risk Requirements: What Companies Need to Know

As the EU AI Act enters implementation, organizations that develop, deploy, import, or distribute high-risk AI systems will face new obligations outlined in Chapter III, Sections 2 and 3 of the Act.

Among these obligations, providers and deployers will encounter the most comprehensive set of requirements, particularly those found in Articles 9 to 15. These requirements are specifically designed to ensure that identified high-risk AI systems do not undermine the fundamental rights, safety, and health of European citizens.

Understanding High-Risk AI Systems

The AI Act categorizes AI systems into four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. Prohibited systems are banned outright, while limited-risk systems face light transparency duties, such as chatbot disclosures. In contrast, high-risk systems come with the most detailed compliance burdens, impacting organizational processes, procurement, and oversight.

High-risk AI systems are defined further by specific use cases in Annex III, which fall within various domains, including:

  • Biometrics
  • Critical infrastructure
  • Education and vocational training
  • Employment and worker management
  • Access to essential services
  • Law enforcement and migration
  • Administration of justice

Organizations must assess whether their AI systems fall into the high-risk category, as this will dictate compliance requirements.

Key Deadlines

  • August 2, 2026 – All high-risk AI systems must comply with core requirements (Articles 9–49).
  • August 2, 2027 – Compliance deadline for high-risk AI systems embedded in regulated products under EU product safety laws.

Core Obligations (Articles 9–15)

1. Article 9 – Risk Management System

Organizations must implement a documented, ongoing risk management process covering the entire AI lifecycle. This involves identifying and evaluating known and foreseeable risks to health, safety, and fundamental rights.
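
In practice, such a process is often backed by a living risk register. The following Python fragment is a hypothetical illustration of one register entry and an escalation check; the fields, the severity × likelihood scoring matrix, and the threshold are assumptions for the sketch, not anything prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One entry in a living risk register: an identified hazard, the
    affected dimension, scored severity/likelihood, and its mitigation."""
    hazard: str
    affected_dimension: str   # e.g. "health", "safety", "fundamental rights"
    severity: int             # 1 (negligible) .. 5 (critical)
    likelihood: int           # 1 (rare) .. 5 (frequent)
    mitigation: str
    reviewed_on: date = field(default_factory=date.today)

    @property
    def risk_score(self) -> int:
        # Simple severity x likelihood matrix; the scale is illustrative
        return self.severity * self.likelihood

def needs_escalation(entry: RiskEntry, threshold: int = 12) -> bool:
    """Flag entries whose residual score exceeds the acceptance threshold."""
    return entry.risk_score >= threshold
```

Keeping entries as structured records rather than free-text documents makes periodic review and reporting straightforward to automate.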

2. Article 10 – Data and Data Governance

AI systems must be trained, validated, and tested on datasets that are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. However, operational definitions of "representative" and "free of errors" remain ambiguous.

3. Article 11 – Technical Documentation

Organizations must maintain detailed technical documentation to prove compliance, including system design and intended purpose.

4. Article 12 – Record-Keeping

High-risk systems must log events to support traceability and post-market monitoring, ensuring logs are tamper-resistant.
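
One common way to make logs tamper-evident is hash chaining: each record embeds the hash of its predecessor, so any retroactive edit breaks the chain and is detectable. A minimal Python sketch, with illustrative record fields (the Act does not mandate any specific mechanism):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only event log; each record embeds the hash of the previous
    record, so any retroactive edit breaks the chain and is detectable."""

    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # genesis value for the first record

    def append(self, event: dict) -> str:
        record = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.records.append(record)
        self._prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; returns False if any record was altered."""
        prev = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

Production systems would additionally write the chain to append-only or write-once storage, but the chaining idea is the core of tamper evidence.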

5. Article 13 – Transparency and Provision of Information to Deployers

Deployers must be clearly informed, via instructions for use, about the system's capabilities, intended purpose, and limitations.

6. Article 14 – Human Oversight

Systems must be designed to ensure effective human oversight, with documented oversight mechanisms and adequately trained personnel.
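
A frequently used oversight pattern is a human-in-the-loop gate that routes low-confidence outputs to a reviewer who can confirm or override them. The Python sketch below is a rough illustration; the confidence threshold and queue semantics are assumptions, not requirements of the Act:

```python
class OversightGate:
    """Human-in-the-loop gate: high-confidence outputs pass through
    automatically, the rest queue for a reviewer who decides the case."""

    def __init__(self, auto_threshold: float = 0.95):
        self.auto_threshold = auto_threshold
        self.review_queue = []

    def submit(self, case_id: str, prediction: str, confidence: float):
        """Return a decision record, or None if the case awaits review."""
        if confidence >= self.auto_threshold:
            return {"case": case_id, "decision": prediction, "by": "system"}
        self.review_queue.append((case_id, prediction))
        return None  # pending human review

    def review(self, decision: str):
        """Reviewer resolves the oldest pending case, possibly overriding
        the system's suggested prediction."""
        case_id, _suggested = self.review_queue.pop(0)
        return {"case": case_id, "decision": decision, "by": "human"}
```

The point of recording `"by"` on every decision is that oversight must be demonstrable, not just designed in: the log shows which outcomes a human actually reviewed.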

7. Article 15 – Accuracy, Robustness, and Cybersecurity

High-risk AI systems must maintain appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.

Post-Market Monitoring

Obligations outlined in Articles 9–15 closely interact with post-deployment monitoring requirements. If a system's accuracy degrades over time, that degradation must be detected and corrected.
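
A simple way to detect such degradation is a sliding-window accuracy monitor that raises a flag once post-deployment accuracy drops below a declared threshold. This is a minimal sketch; the window size and threshold here are assumptions chosen for illustration:

```python
from collections import deque

class AccuracyMonitor:
    """Sliding-window accuracy tracker: flags degradation when accuracy
    over the most recent outcomes falls below a declared threshold."""

    def __init__(self, threshold: float, window: int = 100):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    @property
    def accuracy(self) -> float:
        if not self.outcomes:
            return 1.0  # no evidence yet
        return sum(self.outcomes) / len(self.outcomes)

    def degraded(self) -> bool:
        # Only judge once the window holds enough evidence
        full = len(self.outcomes) == self.outcomes.maxlen
        return full and self.accuracy < self.threshold
```

Labeled post-deployment outcomes often arrive with delay, so real monitoring pipelines typically combine this kind of check with input-distribution drift detection.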

Preparation Steps for Organizations

To meet the new obligations, organizations should:

  • Understand the high-risk requirements as outlined in the AI Act.
  • Map current AI use against Annex III and Annex I to identify high-risk systems.
  • Assess current practices against Articles 9–15.
  • Identify key gaps in logging practices and data governance policies.
  • Begin developing a compliance policy supported by documentation.
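
The mapping step above can be sketched as a first-pass inventory screen that flags systems whose declared use case matches an Annex III domain. This is a rough illustration only: the keyword list is not an official taxonomy, and actual classification requires legal analysis of the specific use cases in the Annex.

```python
# Illustrative domain keywords loosely based on Annex III headings;
# not an official or exhaustive taxonomy.
ANNEX_III_DOMAINS = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "administration of justice",
}

def screen_inventory(systems: list[dict]) -> list[str]:
    """Return names of systems whose declared domain matches an
    Annex III heading and therefore needs a full Articles 9-15 review."""
    flagged = []
    for system in systems:
        if system["domain"].lower() in ANNEX_III_DOMAINS:
            flagged.append(system["name"])
    return flagged
```

Even a crude screen like this gives compliance teams a prioritized worklist before the detailed gap assessment begins.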

What’s Next?

The European Commission is expected to release implementation guidelines in the second half of 2025. Early preparation, guided by Articles 9–15, is the best way for organizations to stay ahead of the deadlines and demonstrate responsible AI leadership.
