Assessing Responsibility Allocation in High-Risk AI Systems

Does the AI Act Adequately Allocate Responsibilities along the Value Chain for High-Risk Systems?

The European Union’s Artificial Intelligence (AI) Act regulates high-risk AI systems by allocating responsibilities to designated actors along the systems’ value chain. This study examines how those responsibilities are allocated and argues that while the Act’s linear approach promotes compliance and accountability at each stage of a system’s design, development, and deployment, it also has notable limitations that could leave individuals exposed to risk.

Introduction

In 2024, the European Union adopted the AI Act to promote the uptake of human-centric and trustworthy AI while safeguarding people’s health, safety, and fundamental rights. The Act takes a risk-based approach, classifying AI systems into unacceptable-risk, high-risk, limited-risk, and minimal-risk categories, alongside specific provisions for general-purpose AI models.

This study focuses on high-risk AI systems and examines whether the AI Act adequately allocates responsibilities throughout the systems’ life cycle. We begin by unpacking the definition of high-risk AI systems and identifying the key actors at each stage of the value chain.

Decoding High-Risk AI Systems and Their Key Actors

According to Article 6 of the AI Act, an AI system is classified as high-risk in two instances:

  1. The AI system is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation listed in Annex I of the Act, and is required to undergo a third-party conformity assessment under that legislation (e.g., in vitro diagnostic medical devices, lifts, toys);
  2. The system falls within one of the use cases listed in Annex III, which mainly concern fundamental rights (e.g., biometrics, education, employment, law enforcement).

However, paragraph 3 of Article 6 provides a derogation from this classification. It clarifies that an AI system referred to in Annex III is not considered high-risk when it is intended to:

  • (a) perform a narrow procedural task;
  • (b) improve the result of a previously completed human activity;
  • (c) detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence a previously completed human assessment without proper human review;
  • (d) perform a preparatory task to an assessment relevant to the use cases listed in Annex III.

The common thread is that an Annex III system is exempted only where it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. In that case, the system’s provider must document this assessment before the system is placed on the market or put into service, and register both itself and the system in the EU database established under the Act. The resulting decision structure is sketched below.
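To make that decision structure concrete, the following minimal Python sketch models the Article 6 classification as a single function. The field names and the two-route structure are simplifications introduced here for illustration (the sketch also omits the carve-out in Article 6(3) under which Annex III systems that profile natural persons remain high-risk); it illustrates the logic of the provision, not its legal text.

```python
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    """Hypothetical inputs: simplified yes/no answers to the Article 6 tests."""
    # Article 6(1): safety component of (or itself) a product covered by the
    # EU harmonisation legislation listed in Annex I, and subject to a
    # third-party conformity assessment under that legislation.
    annex_i_safety_component: bool
    third_party_assessment_required: bool
    # Article 6(2): the use case is listed in Annex III.
    annex_iii_use_case: bool
    # Article 6(3)(a)-(d): conditions that can lift the classification.
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing_review: bool
    preparatory_task_only: bool

def is_high_risk(f: AISystemFacts) -> bool:
    """Simplified reading of Article 6 (ignores the profiling carve-out)."""
    # First route: Annex I products requiring third-party assessment.
    if f.annex_i_safety_component and f.third_party_assessment_required:
        return True
    # Second route: Annex III use cases, unless an Article 6(3) condition
    # applies; in that case the provider must document the assessment and
    # register the system before placing it on the market.
    if f.annex_iii_use_case:
        exempt = (f.narrow_procedural_task
                  or f.improves_completed_human_activity
                  or f.detects_patterns_without_replacing_review
                  or f.preparatory_task_only)
        return not exempt
    return False

# Illustrative example: a CV-screening tool (an Annex III employment use
# case) that only pre-sorts documents as a preparatory task would fall
# under the Article 6(3)(d) derogation.
screening_tool = AISystemFacts(
    annex_i_safety_component=False,
    third_party_assessment_required=False,
    annex_iii_use_case=True,
    narrow_procedural_task=False,
    improves_completed_human_activity=False,
    detects_patterns_without_replacing_review=False,
    preparatory_task_only=True,
)
assert is_high_risk(screening_tool) is False
```

Reading the rule this way makes an asymmetry visible: the Annex I route admits no exemptions, while the Annex III route can be lifted by any single condition in Article 6(3), which is precisely where the documentation and registration duties bite.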

Evaluating the Responsibility Allocation

The AI Act sets out the actors’ roles and obligations in a linear fashion within a flexible regulatory environment, with the aim of promoting transparency, compliance, and accountability. This study argues, however, that further refinement is needed to address the distinctive complexity, opacity, and autonomy of AI systems, which raise liability issues the Act does not fully resolve.

For instance, the linear approach may not adequately capture the intricate interactions among the actors in the AI life cycle, such as providers, deployers, importers, and distributors. This could leave gaps in accountability, especially where an AI system operates autonomously or makes decisions that significantly affect individuals.

Conclusion

While the AI Act represents a significant step towards regulating high-risk AI systems and promoting accountability, tightening the flexibility built into the Act is essential to better protect individuals’ safety, health, and fundamental rights. As AI technology advances, ongoing evaluation and adaptation of the regulatory framework will be critical to addressing the challenges and risks that high-risk AI systems pose.
