Balancing Data Protection and AI Regulation

Navigating the Cross-Over Between Data Protection and AI Regulation

Data is the fuel that powers artificial intelligence (AI). Without data, there’s not much for AI systems to be intelligent about. The UK and EU versions of the General Data Protection Regulation (GDPR) apply across sectors and potentially extend far beyond the UK and EU. Organizations that operate any AI system to process personal data will likely need to comply with the GDPR or other applicable data protection laws. Violations can attract significant fines: in December 2024, Italy’s data protection watchdog fined OpenAI €15 million for breaching GDPR obligations around its use of personal data.

Data Protection Rules: On the Regulatory Watchlist

To assist companies in compliance, the UK’s Information Commissioner’s Office (ICO) and the European Data Protection Board have released guidance materials. These resources, which are constantly evolving, provide support for interpreting existing laws and preparing for incoming legislation.

The ICO guidance addresses Data Protection Impact Assessments (DPIAs), which allow organizations to analyze data flows and system logic to identify “allocative” harms (the unfair distribution of opportunities or resources) and “representational” harms (the stereotyping or demeaning of groups). One example is an AI recruitment tool that discriminates on the basis of gender; a minimal sketch of such a check follows.
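To make the recruitment example concrete, the sketch below shows one check a DPIA might include: comparing shortlisting rates between groups and flagging a large gap. The data, names, and 0.8 threshold (the “four-fifths rule” heuristic used in some fairness audits) are illustrative assumptions, not a prescribed ICO methodology.

```python
# Minimal illustration of one DPIA-style check: comparing selection rates
# between groups in recruitment outcomes. Data and threshold are hypothetical.

# Hypothetical screening outcomes: (applicant_gender, shortlisted?)
outcomes = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", False),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in `group` that the AI tool shortlisted."""
    decisions = [shortlisted for g, shortlisted in outcomes if g == group]
    return sum(decisions) / len(decisions)

female_rate = selection_rate("female")
male_rate = selection_rate("male")

# "Four-fifths rule" heuristic: a ratio below 0.8 is often treated as a
# red flag worth investigating in a DPIA; it is evidence, not a verdict.
ratio = female_rate / male_rate
print(f"female: {female_rate:.2f}, male: {male_rate:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential allocative harm: investigate features and training data.")
```

On the hypothetical data above, the ratio is 0.50, well below the 0.8 heuristic, which is the kind of finding a DPIA would record and escalate for investigation rather than treat as conclusive proof of discrimination.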

As noted by a legal expert, organizations cannot simply point to the AI black box and claim ignorance. They must be able to explain transparently how AI decisions and outputs are generated, what inferences the AI system makes, how new personal data is created, and what rights individuals have over that derived data. This transparency can be challenging to achieve.
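One practical way to approach this transparency duty is to record, at decision time, what the system received, what it produced, and what new personal data it inferred. The sketch below is a minimal, hypothetical audit record; the field names are assumptions for illustration, not a prescribed GDPR format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InferenceRecord:
    """Hypothetical audit record for one AI decision about an individual.

    Fields mirror the transparency questions in the text: what the system
    received, what it produced, and what new personal data it inferred.
    """
    subject_id: str               # pseudonymous identifier, not raw identity
    model_version: str            # which model produced the decision
    inputs: dict                  # personal data supplied to the system
    output: str                   # the decision or score returned
    inferred_data: dict           # new personal data created by the system
    human_reviewed: bool = False  # whether a person checked the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: a recruitment screening decision, with the inference made explicit
record = InferenceRecord(
    subject_id="applicant-4821",
    model_version="screening-model-v3",
    inputs={"years_experience": 4, "qualification": "BSc"},
    output="shortlist",
    inferred_data={"estimated_seniority": "mid-level"},
)
print(record)
```

Keeping the inferred data in its own field makes it harder to overlook that the system has created new personal data, which is exactly the point the guidance presses on.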

Understanding AI data flows and ensuring transparency are crucial for compliance and innovation. Businesses must proactively address these challenges to build trust and meet regulatory requirements.

New concepts are also emerging to support compliance. The GDPR identifies “legitimate interests” as a lawful basis for businesses to use personal data. Companies relying on this basis must assess and balance the benefits of using personal data against the individual’s rights and freedoms, a complex and often uncertain exercise.

The concept of “recognized legitimate interests” is proposed in the UK’s Data (Use and Access) Bill, currently progressing through Parliament. The bill could make it easier to identify situations where businesses can rely on legitimate interests for data processing, such as using AI to safeguard vulnerable individuals.

However, the further removed the data controller is from the original data source, the less likely it will be able to rely on the legitimate interests ground. It is most likely applicable to personal data collected directly from the individual and least likely when data is obtained from public sources.

EU AI Act: A Global Gamechanger

AI, much like the Internet, the cloud, and other technological innovations, is governed by technology-neutral rules and regulations that apply to all sectors of society. These include data privacy laws, discrimination legislation, intellectual property rules, and liability obligations.

However, the influence of AI is so profound that it now has its own technology-specific legislation: the EU AI Act, which entered into force in August 2024 and has the potential to impact regions outside the EU. It represents the world’s first comprehensive body of AI-specific law, featuring cross-sector application, a risk-based approach, and hefty fines of up to €35 million or 7% of global turnover for breaches.

In drafting the EU AI Act, the EU aimed to balance the significant benefits of AI with its risks. This legislation categorizes AI according to risk and prohibits applications such as social scoring and subliminal manipulation.

High-risk AI systems, such as those used in biometrics or law enforcement, face stringent obligations. A late addition to the legislation, prompted by the rise of ChatGPT after its November 2022 launch, introduced rules for general-purpose AI (GPAI) models.

Legal experts acknowledge the difficulty of legislating for rapidly evolving technologies. The complexity of defining prohibited AI systems is reflected in a 140-page guidance note published after those prohibitions began to apply, underlining the intricacy of the environment that must be navigated.

Currently, the AI contracting environment favors suppliers, who often limit their liabilities to the narrowest contractual obligations. They attempt to shift legal and financial risks onto buyers, providing minimal commitments regarding compliance with the EU AI Act or GDPR. This “at your own risk” approach is typical early in the lifecycle of new technologies; risks and responsibilities tend to become more evenly shared as the market matures.

Compliance Priorities for AI Users

Businesses and procurement functions are adapting to accommodate GDPR obligations where their AI uses or interacts with personal data. Now, they must also comply with the EU AI Act.

To prepare for compliance, organizations should:

  • Plan for appropriate due diligence on vendors that use AI to process personal data, understanding data flows, logic, inputs, outputs, and inferences made (a due-diligence sketch follows this list).
  • Seek appropriate contractual protections in negotiations with AI system providers regarding how the AI operates and compliance with legislative obligations.
  • Demand human involvement and oversight in AI operations and decision-making.
  • Practice privacy by design, incorporating data protection considerations at the earliest stages of AI system development.
  • Conduct regular DPIAs to identify and mitigate risks as AI systems evolve.
  • Stay informed on evolving data protection rules and AI laws, engaging with regulators and industry experts to prepare for adaptation.
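As a starting point for the due-diligence item above, the sketch below frames the list’s themes as a vendor questionnaire and reports unresolved points. The questions, vendor name, and answers are illustrative assumptions, not a regulatory checklist.

```python
# Hypothetical vendor due-diligence checklist, mirroring the list above.
# Questions and vendor answers are illustrative assumptions.

DUE_DILIGENCE_QUESTIONS = [
    "Are the system's data flows, inputs, outputs and inferences documented?",
    "Does the contract commit the provider to EU AI Act and GDPR compliance?",
    "Can a human review and override automated decisions?",
    "Was personal data protection designed in from the start?",
    "Is there a schedule for repeating the DPIA as the system changes?",
]

def review_vendor(name: str, answers: list[bool]) -> None:
    """Print unresolved questions for a vendor; a gap means follow-up needed."""
    gaps = [q for q, ok in zip(DUE_DILIGENCE_QUESTIONS, answers) if not ok]
    if gaps:
        print(f"{name}: {len(gaps)} open point(s)")
        for q in gaps:
            print(f"  - {q}")
    else:
        print(f"{name}: no open points")

review_vendor("ExampleAI Ltd", [True, False, True, True, False])
```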

As businesses begin to experiment with AI, they face a complex convergence of legislation, rules, responsibilities, and guidance frequently drafted at speed. This challenging compliance environment should become clearer as the EU AI Act is rolled out over the next 18 months. Adhering to GDPR principles will be crucial: they are directly relevant to AI regulation and, applied well, should facilitate innovation rather than obstruct it.

As AI technologies progress, businesses must remain agile and proactive in their compliance strategies. Collaboration with stakeholders, including legal experts, technology providers, and regulators, is essential for navigating AI compliance complexities. By working together, businesses can ensure compliance while leading the way in ethical AI development.

This document is provided for informational purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from action based on the contents of this document.
