AI Decision-Making: Balancing Fairness and Accountability

Understanding Right to Explanation and Automated Decision-Making in Europe’s GDPR and AI Act

Automated decision-making (ADM) systems are designed to either fully replace or support human decision-making, aiming to enhance the accuracy, efficiency, consistency, and objectivity of decisions traditionally made by humans. Examples of ADM include automated recruitment systems, healthcare triage systems, online content moderation, and predictive policing.

The Importance of Fairness in Decision-Making

In liberal democracies, it is customary for significant decisions in fields such as education, welfare entitlement, employment, healthcare, and the judiciary to follow standardized procedures and appeals processes that are open to public scrutiny. This reflects a fundamental understanding that human decision-makers are neither infallible nor always fair, but that standards can be set against which the fairness of consequential decisions is evaluated.

Europe’s General Data Protection Regulation (GDPR) and AI Act contain provisions intended to ensure both substantive and procedural fairness in automated decisions. Substantive fairness encompasses considerations such as distributive justice, non-discrimination, proportionality, accuracy, and reliability. Procedural fairness includes transparency, due process, consistency, human oversight, and the right to an explanation.

Recent Failures of AI Systems

Several instances highlight AI systems that have failed to meet these fairness requirements, including:

  • Welfare fraud detection systems in Amsterdam and the UK.
  • Families wrongfully flagged for child abuse investigations in Japan.
  • Low-income residents denied food subsidies in Telangana, India.
  • Instances of racial bias in generative AI tools assisting hiring processes.

Rights Established by GDPR and AI Act

In response to concerns regarding the accuracy, reliability, and fairness of ADM, the GDPR and the AI Act (AIA) grant individuals the right to:

  • Provide explicit consent for automated decisions (GDPR art 22).
  • Be informed of the use of ADM (GDPR art 13, 14 & 15 and AIA art 26).
  • Request human intervention or oversight (GDPR art 22 and AIA art 86).
  • Receive an explanation for decisions made (GDPR art 13, 14 & 15 and AIA art 86).

The right to explanation applies to decisions based solely on automated processing that have significant legal effects concerning an individual (GDPR art 22). The GDPR requires the provision of “meaningful information about the logic involved, as well as the significance and the envisaged consequences” of such processing.

Similarly, the AI Act mandates a “right to obtain clear and meaningful explanations” regarding the role of AI systems in decision-making procedures and the main elements of the decisions made (AI Act art 86).

Debate Over Explanation Requirements

There is ongoing debate among legal and policy experts about the interpretation of the GDPR’s requirement for “meaningful information.” This includes questions about the type of explanation required under the AI Act and the intended outcomes of these explanations. A critical aspect often overlooked is the technical difficulty of providing explanations for model outputs used in ADM.

Explainable AI

The field of Explainable AI (XAI) focuses on ensuring that the outputs of AI systems can be explained and understood by those affected by their decisions. There are two primary families of methods: intrinsic and post-hoc.

Intrinsic Methods

Intrinsic methods are applicable when the AI model is simple enough to trace the relationship between inputs and outputs. For instance, a decision tree model for credit scoring allows for a clear path from inputs (e.g., income, employment history) to the output (loan eligibility).
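
To make this concrete, the sketch below trains a shallow decision tree on a small, entirely hypothetical credit-scoring dataset; the feature names, values, and labels are invented for illustration, not drawn from any real system. Because the model is a shallow tree, its complete decision logic can be printed as nested if/else rules, so the path from an applicant’s inputs to the decision is fully traceable.

```python
# A minimal sketch of an intrinsically interpretable model: a shallow
# decision tree for a hypothetical credit-scoring task. The features,
# data, and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant features: [annual_income_k, years_employed]
X = [
    [20, 1], [35, 3], [50, 5], [80, 10],
    [25, 0], [60, 8], [40, 2], [90, 12],
]
y = [0, 0, 1, 1, 0, 1, 0, 1]  # 1 = loan approved, 0 = declined

# Keeping the tree shallow keeps every decision path human-readable.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The full decision logic prints as nested if/else rules, giving a
# complete and faithful explanation of any individual decision.
print(export_text(model, feature_names=["annual_income_k", "years_employed"]))
```

An applicant can then be told exactly which thresholds their case crossed and why.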

Post-Hoc Methods

For more complex models, post-hoc methods such as Shapley values and LIME approximate a model’s reasoning by probing its inputs and outputs rather than inspecting its internal structure. Because they are approximations, however, they can be unreliable and may misrepresent how the model actually arrived at its output.
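
To illustrate why post-hoc explanations are approximations, here is a from-scratch Monte Carlo estimate of Shapley values for an arbitrary black-box predictor. This is a sketch of the general sampling technique, not the API of the SHAP or LIME libraries; the function name, parameters, and sampling scheme are assumptions chosen for clarity.

```python
# A from-scratch Monte Carlo approximation of Shapley values. This is
# an illustrative sketch of the sampling technique, not the SHAP or
# LIME library API; names and parameters here are hypothetical.
import random

def shapley_estimate(predict, x, background, n_samples=1000, seed=0):
    """Estimate each feature's Shapley value for instance `x`.

    predict    : black-box function mapping a feature list to a score
    x          : the instance whose decision is being explained
    background : baseline instances to draw "absent" feature values from
    """
    rng = random.Random(seed)
    n_features = len(x)
    contributions = [0.0] * n_features
    for _ in range(n_samples):
        order = list(range(n_features))
        rng.shuffle(order)                # random order of revealing features
        z = list(rng.choice(background))  # start from a random baseline
        prev = predict(z)
        # Reveal x's features one at a time; each feature's marginal
        # effect on the score is credited to that feature.
        for i in order:
            z[i] = x[i]
            cur = predict(z)
            contributions[i] += cur - prev
            prev = cur
    return [c / n_samples for c in contributions]

# Example, reusing `model` and `X` from the decision-tree sketch above:
# predict = lambda row: model.predict_proba([row])[0][1]
# print(shapley_estimate(predict, [55, 4], X))
```

Because the estimate depends on the background sample, the number of permutations, and the random seed, two analysts probing the same model can produce different attributions for the same decision, which is precisely the reliability concern noted above.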

Challenges of Predictive ADM

Critics like Narayanan and Kapoor argue that predictions of “life outcomes” are often unreliable and ethically questionable. Such predictions can influence the systems they aim to forecast, potentially undermining individual agency.

Instead, the focus should be on understanding the social factors leading to various outcomes, enabling individuals and relevant agencies to leverage this knowledge to shape better futures.

Recommendations for ADM Regulations

To protect individuals effectively, regulations like the GDPR and AI Act should limit fully automated decisions to interpretable models. Outputs should include clear explanations of decisions, especially in contexts where human agency is crucial. Failure to do so may result in unfair targeting or denial of access to public goods, undermining the fairness expected in liberal democracies.

As AI systems continue to be deployed in public administration, it is vital that ADM processes uphold the substantive and procedural fairness that decisions with significant impact on citizens require.
