Bridging the Regulatory Gap in AI Compliance

Gap in the EU’s Rules for AI Requires a Well-Documented Approach

The European Union’s regulations for artificial intelligence create complex challenges for organizations that use special category, or sensitive, personal data in AI systems to detect and correct bias.

With the EU AI Act regulating the technology, alongside the established General Data Protection Regulation (GDPR) regulating personal data, legal practitioners and business leaders face the task of ensuring dual compliance. This is particularly crucial as AI systems increasingly process sensitive personal data.

Regulatory challenges often hinge on the fundamental tension between algorithmic fairness—ensuring AI systems treat everyone equally, free from discrimination—and protecting sensitive personal data.

Regulatory Challenges

To address this, Article 10(5) of the AI Act permits processing special categories of personal data when “strictly necessary” for detecting and correcting bias in high-risk AI systems. However, this seems to contradict the GDPR’s general prohibition in Article 9 on processing such data without explicit consent or another specific legal basis, such as a substantial public interest.

Article 10(5) appears to create what some interpret as a new legal basis for processing sensitive data, by extending the GDPR’s Article 5 fair processing principles to cover bias detection and correction.

Yet Article 9 of the GDPR contains no explicit exception for bias detection or correction, and it remains unclear whether bias detection or correction qualifies as a recognized substantial public interest.

This regulatory gap creates uncertainty for organizations striving to ensure their AI systems are both fair and compliant with EU data protection and AI regulatory requirements.

Processing Pathways

While the AI Act acknowledges the GDPR’s supremacy in cases of conflict, organizations must identify the appropriate legal bases under Article 9 when processing sensitive data for bias detection and correction.

A recent article by the European Parliamentary Research Service noted the interplay between these regulations, suggesting that legislators may need to intervene. In the meantime, however, organizations must contend with this balancing act themselves.

One practical solution may involve a more nuanced approach to interpreting “substantial public interest” under Article 9(2)(g) of the GDPR. The European Data Protection Supervisor elaborated on how this article might allow processing under “substantial public interest,” with the AI Act potentially serving as the legal basis.

The suggestion by the European Data Protection Supervisor is a positive move, but this interpretation requires supervisory authorities to converge on a common position before it can provide reliable legal certainty.

A second option follows a Belgian supervisory authority’s perspective, noting that correcting bias is consistent with the GDPR’s fair processing principle. However, fair processing isn’t a recognized legal basis under Article 9. Thus, broader consensus is required among supervisory authorities to help organizations achieve regulatory certainty.

Dual Compliance

In the absence of definitive regulatory guidance, organizations should consider a comprehensive approach addressing both regulatory frameworks.

Those implementing high-risk AI systems must continue to conduct thorough risk assessments that consider both AI Act and GDPR requirements. This includes:

  • Identifying high-risk classification
  • Determining whether special category data processing is needed for bias detection
  • Conducting data protection impact assessments
  • Documenting decision-making processes and risk mitigation strategies
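The documentation step in the checklist above could be captured in a simple machine-readable record. The following is a minimal Python sketch under stated assumptions: the class and field names are illustrative inventions, not terms prescribed by the AI Act or the GDPR.

```python
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class BiasProcessingAssessment:
    """Hypothetical record of a dual AI Act / GDPR risk assessment.

    Field names are illustrative assumptions for this sketch.
    """
    system_name: str
    high_risk_classification: bool     # outcome of AI Act high-risk screening
    special_category_data_needed: bool # necessity test for bias detection
    dpia_completed: bool               # GDPR data protection impact assessment
    legal_basis: str                   # e.g. "Art. 9(2)(g) substantial public interest"
    mitigations: list[str] = field(default_factory=list)
    assessed_on: date = field(default_factory=date.today)

    def to_record(self) -> dict:
        """Serialize the assessment for a processing register or audit trail."""
        rec = asdict(self)
        rec["assessed_on"] = rec["assessed_on"].isoformat()
        return rec
```

Keeping such records in a structured form makes it easier to show a supervisory authority, on request, when and on what basis each necessity decision was made.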

Technical and organizational measures, required under Article 32 of the GDPR, are vital when processing sensitive personal data through an AI platform. It is fair to conclude that supervisory authorities (the national data protection regulators in each EU country that interpret and apply EU law) and the European Parliament will continue to emphasize the importance of strong cybersecurity safeguards.

Organizations should apply state-of-the-art security measures, ensure robust access controls for processing sensitive data, implement strong data minimization principles, delete special category data promptly after bias correction, and explore anonymization techniques where feasible.
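The data minimization and prompt-deletion steps above can be illustrated in code. This is a minimal sketch, assuming a simple list-of-dicts dataset; the function name, the `approved` outcome field, and the `ethnicity` attribute are hypothetical, and the metric is a basic per-group selection rate rather than any mandated fairness test.

```python
from collections import defaultdict


def selection_rate_by_group(records, outcome_key="approved", group_key="ethnicity"):
    """Compute per-group positive-outcome rates (a simple demographic-parity
    style check), then delete the sensitive attribute from each record so the
    special category data is not retained after the bias check.

    All key names are illustrative assumptions for this sketch.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for rec in records:
        g = rec[group_key]
        totals[g] += 1
        positives[g] += int(rec[outcome_key])
    rates = {g: positives[g] / totals[g] for g in totals}

    # Data minimization: remove the special category field as soon as the
    # bias metric has been computed.
    for rec in records:
        del rec[group_key]
    return rates
```

Deleting the sensitive field in the same routine that consumes it keeps the retention window as short as technically possible, which supports the "strictly necessary" framing of Article 10(5).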

When considering appropriate legal bases for processing special category personal data within an AI platform, organizations may need a hybrid approach: obtaining explicit consent where feasible and, where consent is impractical, relying on the “substantial public interest” exception while documenting how bias detection serves important societal interests.

In parallel, it may be wise to also document that correcting bias is consistent with the GDPR’s fair processing principle following the Belgian supervisory authority perspective.

None of this analysis is helpful if documentation and transparency fail to demonstrate compliance. Organizations must maintain detailed records of processing activities involving special category data, document necessity assessments and legal basis analyses, communicate clearly and transparently about how they use data, and develop governance structures overseeing AI systems that process sensitive data.

The lack of regulatory clarity for using sensitive personal data for bias mitigation in AI poses a challenge for organizations striving to comply with both the EU AI Act and GDPR. While organizations await much-needed guidance from lawmakers and supervisory authorities, they must adopt a proactive and well-documented approach to risk assessment, data minimization, and transparency.
