EU AI Act High-Risk Requirements: What Companies Need to Know
As the EU AI Act enters implementation, organizations that develop, deploy, import, or distribute high-risk AI systems will face new obligations set out in Chapter III, Sections 2 and 3, of the Act.
Among these obligations, providers and deployers face the most comprehensive set of requirements, particularly those in Articles 9 to 15. These requirements are designed to ensure that high-risk AI systems do not undermine the health, safety, and fundamental rights of people in the EU.
Understanding High-Risk AI Systems
The AI Act categorizes AI systems into four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. Prohibited systems are banned outright, while limited-risk systems face light transparency duties, such as chatbot disclosures. In contrast, high-risk systems come with the most detailed compliance burdens, impacting organizational processes, procurement, and oversight.
High-risk AI systems are identified in two ways: as safety components of products covered by the EU product legislation listed in Annex I, or through the specific use cases listed in Annex III, which span domains including:
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment and worker management
- Access to essential services
- Law enforcement and migration
- Administration of justice
Organizations must assess whether their AI systems fall into the high-risk category, as this will dictate compliance requirements.
Key Deadlines
- August 2, 2026 – High-risk AI systems listed in Annex III must comply with the core requirements, including Articles 9–15.
- August 2, 2027 – Compliance deadline for high-risk AI systems that are safety components of products regulated under the EU product safety legislation listed in Annex I.
Core Obligations (Articles 9–15)
1. Article 9 – Risk Management System
Organizations must implement a documented, ongoing risk management process covering the entire AI lifecycle. This involves identifying and evaluating known and foreseeable risks to health, safety, and fundamental rights.
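As a rough illustration of what a documented, ongoing process can look like in practice, the sketch below keeps a simple risk register with a likelihood-times-severity score. The entries, scales, and acceptance threshold are illustrative assumptions; the Act does not prescribe any particular scoring method.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskEntry:
    """One entry in a high-risk AI system's risk register (illustrative fields)."""
    description: str   # known or foreseeable risk to health, safety, or fundamental rights
    likelihood: int    # 1 (rare) to 5 (almost certain), assumed scale
    severity: int      # 1 (negligible) to 5 (critical), assumed scale
    mitigation: str    # planned or implemented risk-reduction measure
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Simple likelihood x severity scoring; the choice of method is the organization's.
        return self.likelihood * self.severity


def risks_needing_attention(register: list[RiskEntry], threshold: int = 12) -> list[RiskEntry]:
    """Return risks whose score meets or exceeds an internally chosen acceptance threshold."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)


register = [
    RiskEntry("Discriminatory scoring of job applicants", 3, 5,
              "Bias testing on representative data before each release"),
    RiskEntry("Model outage blocks access to an essential service", 2, 3,
              "Documented manual fallback process"),
]
for risk in risks_needing_attention(register):
    print(f"score={risk.score:2d}  {risk.description}")
```

The point is less the scoring formula than the habit it enforces: every identified risk has an owner-reviewable record, a mitigation, and a review date across the system's lifecycle.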
2. Article 10 – Data and Data Governance
Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Operational definitions of “representative” and “free of errors” remain ambiguous.
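One practical starting point, sketched below, is to compare subgroup shares in the training data against the shares expected in the deployment population. The subgroups, reference shares, and tolerance here are assumptions for illustration; they are not a legal test of representativeness.

```python
from collections import Counter


def subgroup_shares(labels: list[str]) -> dict[str, float]:
    """Share of each subgroup (e.g. age band, region) in a dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


def representativeness_gaps(train_labels: list[str],
                            reference_shares: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Subgroups whose share in the training data deviates from the reference
    population by more than `tolerance` (absolute difference in share)."""
    train_shares = subgroup_shares(train_labels)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = train_shares.get(group, 0.0)
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps


# Reference shares would come from the intended deployment population.
reference = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
training = ["18-34"] * 600 + ["35-54"] * 350 + ["55+"] * 50
print(representativeness_gaps(training, reference))
# Flags 18-34 as over-represented and 55+ as strongly under-represented.
```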
3. Article 11 – Technical Documentation
Organizations must maintain detailed technical documentation to prove compliance, including system design and intended purpose.
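A minimal sketch of keeping that documentation in a machine-readable form is shown below. The field names are illustrative and only loosely follow the kind of information Annex IV asks for; a real technical file would need the full Annex IV content and would normally be maintained alongside versioned model releases.

```python
import json
from datetime import date

# Illustrative structure only; not the official Annex IV layout.
tech_doc = {
    "system_name": "CandidateRanker",              # hypothetical system
    "version": "2.3.1",
    "provider": "Example Corp",
    "intended_purpose": "Ranking job applications for human review",
    "general_description": {
        "architecture": "Gradient-boosted ranking model",
        "hardware_requirements": "Standard x86 server, no GPU required",
    },
    "development_process": {
        "training_data": "Internal applications 2019-2023, see data sheet DS-42",
        "evaluation_metrics": ["NDCG@10", "subgroup selection-rate ratio"],
    },
    "human_oversight_measures": "All rankings reviewed by a recruiter before use",
    "last_updated": date.today().isoformat(),
}

with open("technical_documentation.json", "w", encoding="utf-8") as f:
    json.dump(tech_doc, f, indent=2)
```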
4. Article 12 – Record-Keeping
High-risk systems must automatically record events (logs) over their lifetime to support traceability and post-market monitoring, and those logs should be protected against tampering.
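One common way to make logs tamper-evident is hash chaining, where each record carries a hash of the previous one, so any later edit breaks the chain. The sketch below shows the idea in plain Python; the event fields are illustrative assumptions, and a production system would also need durable, access-controlled storage and retention controls.

```python
import hashlib
import json
from datetime import datetime, timezone


class ChainedLog:
    """Append-only event log; each record hashes the previous one so that any
    later modification can be detected by re-verifying the chain."""

    def __init__(self) -> None:
        self.records = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute every hash; returns False if any record was altered."""
        prev = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True


log = ChainedLog()
log.append({"input_id": "app-1042", "decision": "shortlist", "model_version": "2.3.1"})
log.append({"input_id": "app-1043", "decision": "reject", "model_version": "2.3.1"})
print(log.verify())                                 # True
log.records[0]["event"]["decision"] = "reject"      # simulate tampering
print(log.verify())                                 # False
```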
5. Article 13 – Transparency and Information for Deployers
High-risk systems must be accompanied by clear instructions for use so that deployers understand the system’s intended purpose, capabilities, and limitations.
6. Article 14 – Human Oversight
Systems must be designed to ensure effective human oversight, with documented oversight mechanisms and adequately trained personnel.
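A simple oversight mechanism is to route low-confidence outputs to a trained reviewer instead of acting on them automatically. The sketch below illustrates that pattern; the confidence threshold, stub model, and reviewer function are assumptions, and confidence gating is only one of many possible oversight designs.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    outcome: str
    confidence: float
    decided_by: str   # "model" or "human"


def decide_with_oversight(features: dict,
                          model: Callable[[dict], tuple],
                          ask_human: Callable[[dict, str, float], str],
                          min_confidence: float = 0.85) -> Decision:
    """Route low-confidence model outputs to a trained reviewer rather than
    acting on them automatically. The threshold is an organizational choice."""
    outcome, confidence = model(features)
    if confidence < min_confidence:
        reviewed = ask_human(features, outcome, confidence)
        return Decision(reviewed, confidence, decided_by="human")
    return Decision(outcome, confidence, decided_by="model")


# Stub model and reviewer for demonstration only.
def toy_model(features: dict) -> tuple:
    return ("approve", 0.62) if features.get("edge_case") else ("approve", 0.97)


def toy_reviewer(features: dict, suggestion: str, confidence: float) -> str:
    print(f"Review requested: model suggested {suggestion!r} at {confidence:.0%}")
    return "escalate"


print(decide_with_oversight({"edge_case": True}, toy_model, toy_reviewer))
print(decide_with_oversight({"edge_case": False}, toy_model, toy_reviewer))
```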
7. Article 15 – Accuracy, Robustness, and Cybersecurity
High-risk AI systems must maintain appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle.
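Accuracy and robustness targets are largely left for providers to define and test internally. As one simple smoke test, the sketch below checks whether predictions stay stable under small input perturbations; the stand-in model, noise level, and trial count are illustrative assumptions, not a standardized robustness test.

```python
import random


def predict(features: list) -> str:
    """Stand-in for a real model: classifies by a simple weighted sum."""
    return "high" if sum(features) > 1.0 else "low"


def robustness_rate(inputs: list, noise: float = 0.01,
                    trials: int = 20, seed: int = 0) -> float:
    """Fraction of inputs whose prediction is unchanged under small random
    perturbations, used here as a simple internal robustness indicator."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        baseline = predict(x)
        if all(predict([v + rng.uniform(-noise, noise) for v in x]) == baseline
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)


samples = [[0.2, 0.3, 0.4], [0.5, 0.6, 0.1], [0.33, 0.33, 0.35]]
print(f"Stable under perturbation: {robustness_rate(samples):.0%}")
```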
Post-Market Monitoring
The obligations in Articles 9–15 interact closely with the post-market monitoring requirements (Article 72). If a system’s accuracy degrades over time, the provider must be able to detect and correct it.
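A lightweight way to catch degradation is to track accuracy over a rolling window of labelled production outcomes and alert when it falls below an agreed threshold. The sketch below illustrates this; the window size and threshold are illustrative assumptions, and in practice ground-truth labels often arrive with a delay.

```python
from collections import deque


class AccuracyMonitor:
    """Track accuracy over a rolling window of labelled production outcomes and
    flag when it drops below an internally defined alert threshold."""

    def __init__(self, window: int = 500, alert_threshold: float = 0.90) -> None:
        self.outcomes = deque(maxlen=window)
        self.alert_threshold = alert_threshold

    def record(self, prediction: str, actual: str) -> None:
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_attention(self) -> bool:
        window_full = len(self.outcomes) == self.outcomes.maxlen
        return window_full and self.accuracy < self.alert_threshold


monitor = AccuracyMonitor(window=100, alert_threshold=0.90)
for i in range(100):
    # Simulated degradation: the second half of the window is mostly wrong.
    correct = i < 50 or i % 3 == 0
    monitor.record("approve" if correct else "reject", "approve")
print(f"accuracy={monitor.accuracy:.2f}, alert={monitor.needs_attention()}")
```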
Preparation Steps for Organizations
To meet the new obligations, organizations should:
- Understand the high-risk requirements as outlined in the AI Act.
- Map current AI use against Annex III and Annex I to identify potentially high-risk systems (a first-pass triage sketch follows this list).
- Assess current practices against Articles 9–15.
- Identify key gaps in logging practices and data governance policies.
- Begin developing a compliance policy supported by documentation.
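For the mapping step, even a very simple triage of an AI inventory against the Annex III domains listed earlier can flag which systems need closer legal assessment. The inventory entries and domain tags below are hypothetical, and keyword matching is no substitute for a proper classification under Article 6.

```python
# Domain list mirrors the Annex III areas summarized earlier in this article.
ANNEX_III_DOMAINS = {
    "biometrics", "critical infrastructure", "education",
    "employment", "essential services", "law enforcement",
    "migration", "administration of justice",
}

# Hypothetical inventory; in practice this would come from an internal AI register.
ai_inventory = [
    {"name": "CV screening model", "domain": "employment", "deployed": True},
    {"name": "Marketing copy generator", "domain": "marketing", "deployed": True},
    {"name": "Exam proctoring classifier", "domain": "education", "deployed": False},
]


def potentially_high_risk(inventory: list) -> list:
    """First-pass triage: flag systems in an Annex III domain for closer
    legal assessment against Article 6 and the Articles 9-15 requirements."""
    return [s for s in inventory if s["domain"] in ANNEX_III_DOMAINS]


for system in potentially_high_risk(ai_inventory):
    status = "deployed" if system["deployed"] else "in development"
    print(f"Assess further: {system['name']} ({status})")
```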
What’s Next?
The European Commission is expected to release implementation guidelines in the second half of 2025. Early preparation, guided by Articles 9–15, is the best way for organizations to stay ahead of these obligations and demonstrate responsible AI leadership.