GDPR Enforcement: Shaping AI Governance in the EU
The rapid deployment of artificial intelligence (AI) systems across the European Union (EU) has brought data protection law to the forefront of AI governance. Because many AI systems depend on large-scale processing of personal data, the General Data Protection Regulation (GDPR) has emerged as the EU’s first effective enforcement framework for AI, predating the EU Artificial Intelligence Act (AI Act).
Current Enforcement Landscape
More than seven years after the GDPR became applicable, national data protection authorities (DPAs) are actively applying data protection rules to AI-related practices. Across the EU, DPAs have initiated investigations, adopted corrective measures, and imposed significant fines in relation to a range of AI technologies and practices, including:
- Biometric identification
- Facial recognition
- Automated decision-making
- Profiling
- Training and deployment of AI models
This activity reflects the growing role of DPAs in safeguarding fundamental rights.
Challenges in Enforcement
Despite the increased activity, enforcement remains uneven across the EU. Variations in resources, technical expertise, and national priorities have led to a fragmented application of the GDPR in AI contexts. Additionally, the technical complexity of AI systems has tested the limits of a technology-neutral framework designed to apply to all forms of personal data processing.
Alongside enforcement actions, EU and national authorities use guidance, opinions, and soft-law instruments to clarify how core GDPR principles—such as lawfulness, transparency, data minimization, and accountability—apply to AI. These efforts shape compliance practices and provide legal certainty for organizations.
DPAs’ Role in AI Enforcement
DPAs are gaining valuable experience in tackling AI-related issues through guidelines and concrete enforcement actions. The European Data Protection Board (EDPB) has recognized that DPAs should have a prominent role in AI enforcement, encouraging member states to designate them as market surveillance authorities for high-risk AI systems.
Biometric Identification and Facial Recognition
AI-enabled biometric technologies have faced stringent enforcement. For instance, facial recognition systems that scrape images from public websites have been found to violate GDPR principles. In a notable case, the Dutch DPA imposed a €30.5 million fine on Clearview AI for illegally collecting facial images to provide recognition services.
Automated Decision-Making and Profiling
Another central enforcement area is automated decision-making (ADM) and profiling. DPAs have sanctioned organizations that target individuals for commercial purposes without a valid legal basis. In 2022, the Spanish DPA fined a financial institution €3 million for unlawful processing related to commercial profiling. Moreover, in 2025, the Hamburg DPA fined a provider nearly €500,000 for rejecting credit card applications solely on the basis of automated processing.
Regulation of AI Systems Affecting Minors
Enforcement has also focused on AI systems affecting minors. The Italian DPA imposed a €5 million fine on an AI developer for GDPR infringements relating to a virtual companion application, citing inadequate age-verification mechanisms.
Guidance on Developing AI Systems
At the EU level, the EDPB has clarified that AI models and automated decision-making systems are subject to the full set of GDPR principles. Controllers must assess risks throughout the AI lifecycle, from data collection to deployment. The EDPB emphasizes that complexity does not diminish the duty to explain processing in a meaningful way.
Identifying an appropriate legal basis for AI-related processing is crucial. While reliance on legitimate interest is permitted, it is subject to a strict necessity test and documented safeguards.
From GDPR to the AI Act
The AI Act does not displace the role of DPAs in supervising AI systems involving personal data. Instead, it assigns oversight obligations to DPAs for certain AI applications, maintaining a decentralized model. This ensures continuity between GDPR enforcement and future AI Act oversight.
Competitiveness and Regulatory Simplification
GDPR enforcement is embedded in a broader policy debate on EU competitiveness and regulatory burden. The Draghi Report highlights regulatory complexity as a factor in Europe’s innovation gap, advocating for a more predictable implementation of EU rules. In response, the European Commission (EC) has launched the Digital Omnibus initiative to recalibrate the implementation of EU digital legislation, including the AI Act.
Targeted adjustments to core GDPR concepts have been proposed so that the enforcement framework remains operational and proportionate in light of AI developments.