AI Decision-Making: Balancing Fairness and Accountability

Understanding Right to Explanation and Automated Decision-Making in Europe’s GDPR and AI Act

Automated decision-making (ADM) systems are designed to either fully replace or support human decision-making, aiming to enhance the accuracy, efficiency, consistency, and objectivity of decisions traditionally made by humans. Examples of ADM include automated recruitment systems, healthcare triage systems, online content moderation, and predictive policing.

The Importance of Fairness in Decision-Making

In liberal democracies, it is customary for significant decisions in fields such as education, welfare entitlement, employment, healthcare, and the judiciary to follow standardized procedures and appeals processes that are open to public scrutiny. This reflects a fundamental understanding that human decision-makers are neither infallible nor always fair, but that standards can be set against which the fairness of consequential decisions is judged.

To ensure substantive and procedural fairness in automated decisions, relevant provisions in Europe’s General Data Protection Regulation (GDPR) and the AI Act are in place. Substantive fairness encompasses considerations like distributive justice, non-discrimination, proportionality, accuracy, and reliability. Procedural fairness includes transparency, due process, consistency, human oversight, and the right to an explanation.

Recent Failures of AI Systems

Several recent cases illustrate AI systems failing to meet these fairness requirements, including:

  • Welfare fraud detection systems in Amsterdam and the UK.
  • Families wrongfully flagged for child abuse investigations in Japan.
  • Low-income residents denied food subsidies in Telangana, India.
  • Instances of racial bias in generative AI tools assisting hiring processes.

Rights Established by GDPR and AI Act

In response to concerns regarding the accuracy, reliability, and fairness of ADM, the GDPR and the AI Act grant individuals the right to:

  • Not be subject to a decision based solely on automated processing without their explicit consent (GDPR art 22).
  • Be informed of the use of ADM (GDPR arts 13, 14 & 15 and AIA art 26).
  • Request human intervention or oversight (GDPR art 22 and AIA art 86).
  • Receive an explanation for decisions made (GDPR arts 13, 14 & 15 and AIA art 86).

The right to explanation applies to decisions based solely on automated processing that produce legal effects concerning an individual or similarly significantly affect them (GDPR art 22). The GDPR requires the provision of “meaningful information about the logic involved, as well as the significance and the envisaged consequences” of such processing.

Similarly, the AI Act mandates a “right to obtain clear and meaningful explanations” regarding the role of AI systems in decision-making procedures and the main elements of the decisions made (AI Act art 86).

Debate Over Explanation Requirements

There is ongoing debate among legal and policy experts about the interpretation of the GDPR’s requirement for “meaningful information.” This includes questions about the type of explanation required under the AI Act and the intended outcomes of these explanations. A critical aspect often overlooked is the technical difficulty of providing explanations for model outputs used in ADM.

Explainable AI

The field of Explainable AI (XAI) focuses on ensuring that the outputs of AI systems can be explained to, and understood by, those affected by their decisions. There are two primary approaches: intrinsic interpretability and post-hoc explanation.

Intrinsic Methods

Intrinsic methods are applicable when the AI model is simple enough to trace the relationship between inputs and outputs. For instance, a decision tree model for credit scoring allows for a clear path from inputs (e.g., income, employment history) to the output (loan eligibility).
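
To make this concrete, here is a minimal sketch of an intrinsically interpretable model in Python, using scikit-learn. The features (annual income, years employed), thresholds, and toy data are illustrative assumptions rather than a real scoring policy; the point is that the learned rules can be printed in full and the path for any applicant traced by hand.

    # A shallow decision tree for a hypothetical credit-scoring task.
    # All data and feature names below are illustrative assumptions.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy applicant records: [annual_income_kEUR, years_employed]
    X = [[20, 1], [35, 3], [50, 8], [75, 10], [28, 2], [60, 6]]
    y = [0, 0, 1, 1, 0, 1]  # 0 = loan denied, 1 = loan approved

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The entire decision logic is a handful of human-readable if/else
    # rules, so an "explanation" is simply the path an applicant follows.
    print(export_text(tree, feature_names=["annual_income_kEUR", "years_employed"]))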

Post-Hoc Methods

For more complex models, post-hoc methods such as Shapley Values and LIME estimate a model’s reasoning without accessing its internal structure. Because they are approximations, however, they can be unreliable and may give an inaccurate picture of how the model actually reached its output.
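
By way of contrast, the sketch below applies a model-agnostic Shapley-value approximation (via the shap package’s KernelExplainer) to a random-forest classifier trained on synthetic data. The model, features, and decision rule are all illustrative assumptions; the resulting attributions are statistical estimates of feature influence, not a faithful trace of the model’s internal computation.

    # Post-hoc explanation of a black-box model with approximate Shapley
    # values. The model, data, and decision rule are synthetic assumptions.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # three anonymous features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic decision rule

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # KernelExplainer treats the model as a black box: it perturbs inputs
    # and fits a weighted linear surrogate to estimate each feature's
    # contribution. The result is an approximation, not the model itself.
    explainer = shap.KernelExplainer(model.predict_proba, X[:50])
    print(explainer.shap_values(X[:1]))

Because the surrogate is fit from sampled perturbations, sparse sampling can yield noticeably different attributions across runs, which is precisely the reliability concern noted above.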

Challenges of Predictive ADM

Critics such as Narayanan and Kapoor argue that predictions of “life outcomes” are often unreliable and ethically questionable. Such predictions can also influence the very outcomes they aim to forecast, potentially undermining individual agency.

Instead, the focus should be on understanding the social factors leading to various outcomes, enabling individuals and relevant agencies to leverage this knowledge to shape better futures.

Recommendations for ADM Regulations

To protect individuals effectively, regulations like the GDPR and AI Act should limit fully automated decisions to interpretable models. Outputs should include clear explanations of decisions, especially in contexts where human agency is crucial. Failure to do so may result in unfair targeting or denial of access to public goods, undermining the fairness expected in liberal democracies.

As AI systems continue to be deployed in public administration, it is vital that ADM processes uphold the substantive and procedural fairness that decisions with significant impact on citizens demand.
