Understanding the EU AI Act Risk Pyramid

The EU AI Act introduces a groundbreaking, risk-based approach to regulating artificial intelligence: systems are sorted into four tiers according to the risk they pose to safety, fundamental rights, and societal values, with each tier carrying progressively stricter regulatory requirements.

1. Unacceptable Risk

At the pinnacle of the pyramid, unacceptable risk AI systems are deemed so hazardous to fundamental rights—such as dignity, freedom, and privacy—that they are banned outright. Notable examples of such systems include:

  • Social scoring systems, which evaluate citizens’ behaviors for potential rewards or punishments.
  • Predictive policing tools that assess the likelihood of an individual committing a crime based solely on profiling or personality traits.
  • Real-time remote biometric identification in publicly accessible spaces, which is prohibited by default and permitted only under narrow, strictly regulated law-enforcement exceptions, such as searching for victims of serious crimes.

2. High-Risk AI Systems

The second tier comprises high-risk AI systems, which are permitted but subject to stringent conditions because they operate in areas where errors or bias can seriously harm individuals. Examples include:

  • AI used in medical devices.
  • AI applications in critical infrastructure, such as energy and water supply.
  • AI employed in employment and recruitment processes, like CV filtering or automated interviews.
  • Educational tools, including exam scoring or grade predictions.
  • Financial services, particularly credit scoring systems.

Providers of high-risk AI systems must meet seven essential requirements (a minimal tracking sketch follows the list):

  • A comprehensive risk management system throughout the lifecycle of the AI system.
  • Utilization of high-quality datasets to minimize bias and ensure optimal performance.
  • Implementation of human oversight mechanisms to mitigate potential harm stemming from automation.
  • Robust technical documentation and thorough record-keeping.
  • Transparency to ensure users understand the workings of the AI.
  • Commitment to accuracy, robustness, and cybersecurity.
  • A full quality management system aligned with EU product safety regulations.
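
In practice, compliance teams often track obligations like these as structured internal checklists. The following minimal Python sketch is purely illustrative: the class, field names, and `outstanding` helper are hypothetical conveniences, not terms from the Act, and a real compliance record would be far richer.

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Hypothetical internal record of the seven high-risk obligations."""
    risk_management: bool = False      # lifecycle risk management system
    data_quality: bool = False         # high-quality, bias-minimising datasets
    human_oversight: bool = False      # oversight mechanisms against automation harm
    documentation: bool = False        # technical docs and record-keeping
    transparency: bool = False         # users understand how the system works
    accuracy_security: bool = False    # accuracy, robustness, cybersecurity
    quality_management: bool = False   # QMS aligned with EU product safety rules

    def outstanding(self) -> list[str]:
        # List the obligations not yet marked complete.
        return [name for name, done in vars(self).items() if not done]

checklist = HighRiskChecklist(risk_management=True, documentation=True)
print(checklist.outstanding())
# ['data_quality', 'human_oversight', 'transparency',
#  'accuracy_security', 'quality_management']
```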

This regulatory landscape is new territory for many technology firms, which are typically accustomed to agile development rather than regulated-product compliance. Meeting these obligations will require significant investment, new engineering practices, and closer collaboration across cross-functional teams.

3. Limited Risk

AI systems categorized as limited risk do not pose serious threats, but they carry transparency obligations so that users know when AI is involved. Examples include the following (a minimal disclosure sketch follows the list):

  • Chatbots that must disclose to users that they are interacting with a machine.
  • Generative AI tools that must label synthetic content, such as AI-generated images or videos.
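
As an illustration only, a chatbot's disclosure duty can be as simple as wrapping every reply with a machine-interaction notice. The function name and wording below are hypothetical; the Act mandates disclosure but prescribes no particular API or phrasing.

```python
AI_NOTICE = "Note: you are interacting with an AI system, not a human."

def with_disclosure(reply: str) -> str:
    # Prepend the machine-interaction notice to every chatbot reply.
    return f"{AI_NOTICE}\n\n{reply}"

print(with_disclosure("Your order shipped yesterday and should arrive Friday."))
```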

4. Minimal Risk

At the base of the pyramid are minimal risk AI systems, which pose little to no risk to safety or fundamental rights. These are everyday, low-stakes applications such as:

  • Spam filters.
  • AI utilized in video games.
  • Recommendation algorithms for movies or music.

Despite their minimal risk, these systems must still comply with existing laws, such as consumer protection and anti-discrimination regulations, and providers are encouraged to adhere to voluntary codes of conduct.
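
Taken together, the pyramid maps each system to exactly one tier with its own obligations. The sketch below is a hypothetical illustration only: the enum, descriptions, and example mapping are not drawn from the Act's text, and real classification turns on a system's intended purpose and legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted with strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "existing law plus voluntary codes"

# Hypothetical examples drawn from the tiers described above.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-filtering tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```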

The EU AI Act signifies a pivotal shift in how AI systems are governed, emphasizing a balanced approach between innovation and the safeguarding of fundamental rights. As the landscape evolves, stakeholders must remain vigilant in navigating these regulations to foster responsible AI development.
