Balancing Data Protection and AI Regulation

Navigating the Cross-Over Between Data Protection and AI Regulation

Data is the fuel that powers artificial intelligence (AI). Without data, there’s not much for AI systems to be intelligent about. The UK and EU versions of the General Data Protection Regulation (GDPR) apply across sectors and can reach well beyond the UK and EU. Organizations that use AI systems to process personal data will likely need to comply with the GDPR or other applicable data protection laws. Violations can attract significant fines: in December 2024, Italy’s data protection watchdog fined OpenAI €15 million for breaching GDPR obligations around its use of personal data.

Data Protection Rules: On the Regulatory Watchlist

To assist companies with compliance, the UK’s Information Commissioner’s Office (ICO) and the European Data Protection Board (EDPB) have released guidance materials. These resources, which continue to evolve, help organizations interpret existing laws and prepare for incoming legislation.

The ICO guidance addresses Data Protection Impact Assessments (DPIAs), which allow organizations to analyze data flows and system logic to identify “allocative” and “representational” harms. One example is an AI recruitment tool that discriminates on the basis of gender.
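To make the idea of an “allocative” harm concrete, here is a minimal, purely illustrative Python sketch of the kind of disparate-impact check a DPIA might draw on. The sample data and the 0.8 “four-fifths” screening threshold are assumptions made for illustration; they are not ICO requirements or a legal test.

```python
from collections import defaultdict

# Hypothetical audit log of (gender, shortlisted) outcomes from an AI
# recruitment tool. Entirely invented data for illustration.
decisions = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", False),
]

counts = defaultdict(lambda: {"total": 0, "selected": 0})
for gender, shortlisted in decisions:
    counts[gender]["total"] += 1
    counts[gender]["selected"] += int(shortlisted)

# Selection rate per group, then the disparate-impact ratio between groups.
rates = {g: c["selected"] / c["total"] for g, c in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
if ratio < 0.8:  # the "four-fifths" screening heuristic, not a legal standard
    print(f"Potential allocative harm: impact ratio {ratio:.2f} is below 0.8")
```

A real DPIA would go much further, but even a simple check like this can surface a disparity early enough to investigate before the system is deployed.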

As noted by a legal expert, organizations cannot simply point to the AI black box and claim ignorance. They must be able to explain transparently how AI decisions and outputs are generated, what inferences the AI system makes, how new personal data is created, and what rights individuals have over that derived data. This transparency can be challenging to achieve.
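As a rough illustration of how that transparency might be supported in practice, the sketch below records the inputs, inferences, and output of each AI decision so they can later be explained to the individual. It is a hypothetical design, not a prescribed GDPR mechanism; all field names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Illustrative record of one AI decision, kept to support later explanation."""
    subject_id: str       # pseudonymous identifier for the individual
    inputs: dict          # personal data fed into the system
    inferences: dict      # new personal data the AI created
    output: str           # the decision or recommendation produced
    model_version: str    # which model produced it
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def explanation(self) -> str:
        """Human-readable account of how the output was generated."""
        return (
            f"Decision '{self.output}' by model {self.model_version} "
            f"used inputs {sorted(self.inputs)} and inferred "
            f"{sorted(self.inferences)} on {self.timestamp:%Y-%m-%d}."
        )

record = AIDecisionRecord(
    subject_id="s-1042",
    inputs={"years_experience": 7, "role_applied": "engineer"},
    inferences={"estimated_seniority": "senior"},  # derived personal data
    output="shortlist",
    model_version="screening-v2",
)
print(record.explanation())
```

Keeping the inferences alongside the inputs matters because the derived data is itself personal data, and individuals may exercise rights over it.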

Understanding AI data flows and ensuring transparency are therefore prerequisites both for compliance and for innovating with confidence. Businesses that address these challenges early are better placed to build trust and meet regulatory requirements.

New supportive concepts are emerging. The GDPR recognizes “legitimate interests” as a lawful basis for businesses to use personal data. Companies relying on this basis must assess and balance the benefits of using personal data against the individual’s rights and freedoms, a complex and often uncertain exercise.

The concept of “recognized legitimate interests” is currently proposed in the UK’s Data (Use and Access) Bill, which is progressing through Parliament. The bill could make it easier to identify situations in which businesses can rely on legitimate interests for data processing, such as using AI to safeguard vulnerable individuals.

However, the further removed the data controller is from the original data source, the less likely it is to be able to rely on the legitimate interests ground: the basis is most readily available for personal data collected directly from the individual and least available where data is obtained from public sources.

EU AI Act: A Global Gamechanger

AI, much like the Internet, the cloud, and other technological innovations, is governed by technology-neutral rules and regulations that apply to all sectors of society. These include data privacy laws, discrimination legislation, intellectual property rules, and liability obligations.

However, AI’s influence is so profound that it now has its own technology-specific legislation: the EU AI Act, which entered into force in August 2024 and can reach organizations outside the EU. It is the world’s first comprehensive body of AI-specific law, featuring cross-sector application, a risk-based approach, and hefty fines of up to €35 million or 7% of global annual turnover for breaches.

In drafting the EU AI Act, the EU aimed to balance the significant benefits of AI with its risks. This legislation categorizes AI according to risk and prohibits applications such as social scoring and subliminal manipulation.

High-risk AI systems, such as those used in biometrics or law enforcement, face stringent obligations. A late addition to the legislation responds to the rise of ChatGPT following its November 2022 launch, introducing rules for general-purpose AI (GPAI) models.

Legal experts acknowledge the difficulty of legislating for rapidly evolving technologies. The complexity of defining prohibited AI systems is reflected in the roughly 140-page guidance note published after the prohibitions began to apply, highlighting the intricate environment that must be navigated.

Currently, the AI contracting environment favors suppliers, who often limit their liability to the narrowest contractual obligations. They attempt to shift legal and financial risk onto buyers, offering minimal commitments on compliance with the EU AI Act or the GDPR. This “at your own risk” approach is typical early in the lifecycle of new technologies, with risks and responsibilities becoming more nuanced as the market matures.

Compliance Priorities for AI Users

Businesses and procurement functions have adapted to accommodate GDPR obligations where their AI uses or interacts with personal data. Now they must also accommodate the EU AI Act.

To prepare for compliance, organizations should:

  • Plan for appropriate due diligence on vendors that use AI to process personal data, understanding data flows, logic, inputs, outputs, and inferences made.
  • Seek appropriate contractual protections in negotiations with AI system providers regarding how the AI operates and compliance with legislative obligations.
  • Demand human involvement and oversight in AI operations and decision-making (see the sketch after this list).
  • Practice privacy by design, incorporating data protection considerations at the earliest stages of AI system development.
  • Conduct regular DPIAs to identify and mitigate risks as AI systems evolve.
  • Stay informed on evolving data protection rules and AI laws, engaging with regulators and industry experts to prepare for adaptation.
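
For the human-oversight point above, one common pattern is sketched below: route any low-confidence or high-impact output to a person before it takes effect. The 0.85 confidence threshold and the “high-impact” flag are illustrative assumptions; neither the GDPR nor the EU AI Act prescribes particular values.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    decision: str
    confidence: float   # model's own score in [0, 1]; assumed available
    high_impact: bool   # e.g. affects hiring, credit, or legal status

def route(output: ModelOutput) -> str:
    """Return 'auto' only when neither oversight trigger fires."""
    if output.high_impact or output.confidence < 0.85:
        return "human_review"   # queue for a person to confirm or override
    return "auto"

# The oversight triggers catch both risk dimensions independently:
assert route(ModelOutput("reject", 0.99, high_impact=True)) == "human_review"
assert route(ModelOutput("approve", 0.60, high_impact=False)) == "human_review"
assert route(ModelOutput("approve", 0.95, high_impact=False)) == "auto"
```

The design choice worth noting is that impact and confidence are independent triggers: a highly confident model can still be wrong about a decision that matters, so confidence alone should not be allowed to bypass review.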

As businesses experiment with AI, they face a complex convergence of legislation, rules, responsibilities, and guidance, much of it drafted at speed. This challenging compliance environment should become clearer as the EU AI Act is rolled out over the next 18 months. Adhering to GDPR principles will be crucial: they are directly relevant to AI regulation and should facilitate innovation rather than obstruct it.

As AI technologies progress, businesses must remain agile and proactive in their compliance strategies. Collaboration with stakeholders, including legal experts, technology providers, and regulators, is essential for navigating AI compliance complexities. By working together, businesses can ensure compliance while leading the way in ethical AI development.

This document is provided for informational purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from action based on the contents of this document.
