AI Compliance Under GDPR: Lessons from the DPC Inquiry

DPC Inquiry and AI GDPR Obligations

The recent inquiry by the Irish Data Protection Commission (DPC) serves as a crucial reminder for companies using artificial intelligence (AI) tools to remain vigilant about their obligations under the General Data Protection Regulation (GDPR).

Background of the Inquiry

The DPC has launched an inquiry into X, formerly known as Twitter, focusing on the processing of personal data from EU/EEA users on the social media platform. This investigation particularly examines the use of publicly accessible posts to train generative AI models, specifically the Grok Large Language Models (LLMs) developed by xAI, a company owned by Elon Musk.

The inquiry aims to scrutinize the compliance of X with GDPR provisions, particularly regarding the lawfulness and transparency of data processing.

Compliance and Data Processing

The DPC’s investigation will determine whether the personal data used to train Grok was processed lawfully and whether the company adhered to mandatory transparency requirements. Expert commentary on the inquiry emphasizes the need for robust regulatory frameworks to ensure that AI development aligns with legal and ethical standards.

Using personal data to train AI models poses challenges from a data protection perspective. For instance, it can be difficult to honour data subject rights, such as rectification or erasure, once personal data has been incorporated into a trained model. There is also a risk that personal data may inadvertently be revealed to third parties in unexpected ways if the AI model lacks appropriate safeguards.
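To illustrate the kind of output-side safeguard such concerns point to, the sketch below shows a minimal, hypothetical filter that screens generated text for obvious personal identifiers (e-mail addresses and phone numbers) before it is returned to a user. The function name and patterns are assumptions made purely for illustration; they do not describe X’s or xAI’s actual safeguards, and a filter of this kind would not by itself satisfy GDPR obligations.

```python
import re

# Hypothetical, illustrative patterns only; real safeguards would need far
# broader coverage (names, addresses, identifiers) and are not limited to regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_personal_identifiers(text: str) -> str:
    """Replace obvious e-mail addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[REDACTED EMAIL]", text)
    text = PHONE_RE.sub("[REDACTED PHONE]", text)
    return text

# Example: screening a generated response before it reaches a third party.
generated = "You can reach Jane at jane.doe@example.com or +353 1 234 5678."
print(redact_personal_identifiers(generated))
# -> "You can reach Jane at [REDACTED EMAIL] or [REDACTED PHONE]."
```

In practice, screening of this sort would only be one layer among broader measures such as training-data curation, access controls, and processes for handling data subject requests.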

Previous Investigations and Commitments

In the summer of 2024, the DPC initiated and swiftly concluded an investigation into X regarding the alleged unlawful processing of user data to train Grok. Consequently, X committed to permanently refrain from processing EU users’ data for training Grok and deleted all previously processed data used for this purpose. Despite these measures, the ongoing inquiry seeks to ensure compliance and address any remaining issues.

Regulatory Focus and Scope

The inquiry reflects the DPC’s increasing focus on AI matters over the past year. Its scope is broad, addressing various GDPR provisions, particularly those concerning the lawfulness and transparency of data processing. This includes evaluating whether X had a lawful basis to process personal data in this context and whether users were adequately informed that their personal data would be used to train AI models.

Of particular concern is the potential for special category personal data to be used in training the AI model if it is not adequately filtered out. Under the GDPR, processing of special category data is prohibited unless one of the conditions laid down in Article 9(2) applies.
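To make the filtering concern concrete, the following sketch shows one naive way a pre-training pipeline might exclude posts that appear to contain special category data before they enter a training corpus. The keyword list and function are hypothetical and purely illustrative; a simple keyword screen would not, on its own, be a reliable control or establish a lawful basis under Article 9.

```python
# Hypothetical keyword screen for special category data (Article 9 GDPR).
# Purely illustrative: real pipelines would need classifiers, human review,
# and a legal assessment, not a hard-coded keyword list.
SPECIAL_CATEGORY_TERMS = {
    "diagnosis", "medication", "hiv",          # health data
    "trade union", "religion", "ethnicity",    # other Article 9 categories
}

def drop_special_category_posts(posts: list[str]) -> list[str]:
    """Exclude posts that appear to contain special category data."""
    kept = []
    for post in posts:
        lowered = post.lower()
        if any(term in lowered for term in SPECIAL_CATEGORY_TERMS):
            continue  # excluded from the training corpus
        kept.append(post)
    return kept

corpus = drop_special_category_posts([
    "Loving the weather in Dublin today!",
    "Just got my diagnosis back from the clinic.",
])
print(corpus)  # -> ['Loving the weather in Dublin today!']
```

A real pipeline would combine automated classification with documented review, but the example shows where such a control would sit in the data flow.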

Broader Implications for AI Development

The DPC’s inquiry is part of a broader effort to ensure that AI technologies are developed and deployed in compliance with data protection regulations. The Irish DPC plays a central role in supervising the EU data protection compliance of international tech companies, a role that is particularly significant given the interplay between data and AI.

The Irish government has identified AI as a primary focus, and with many leading international tech companies headquartered in Ireland, the country is well-positioned to become a hub for AI innovation. However, with innovation comes the necessity for regulation, and the DPC, alongside other regulators, will likely play a significant role in enforcing the upcoming EU AI Act.

Conclusion and Future Considerations

The investigation will be closely monitored in light of the EU AI Act’s implementation deadline of 2 August 2025, from which obligations covering General Purpose AI (GPAI) models such as the Grok LLMs apply. The EU AI Act mandates detailed documentation and transparency requirements for these models.
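As a rough illustration of what such documentation might capture in practice, the sketch below defines a hypothetical record of a GPAI model’s training data provenance and transparency information. The field names are assumptions chosen for illustration and do not reproduce the EU AI Act’s actual documentation templates.

```python
from dataclasses import dataclass, field

@dataclass
class GPAIModelDocumentation:
    """Hypothetical documentation record for a general purpose AI model.

    Field names are illustrative only; they do not reflect the EU AI Act's
    prescribed documentation content.
    """
    model_name: str
    provider: str
    training_data_sources: list[str]
    lawful_basis: str                 # GDPR basis relied on for any personal data
    personal_data_filtering: str      # description of filtering/safeguard measures
    known_limitations: list[str] = field(default_factory=list)

record = GPAIModelDocumentation(
    model_name="example-llm",
    provider="Example Provider",
    training_data_sources=["publicly accessible posts (EU/EEA users excluded)"],
    lawful_basis="to be determined by legal assessment",
    personal_data_filtering="pre-training redaction and category screening",
)
print(record.model_name, record.training_data_sources)
```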

The outcome of this inquiry could influence future regulatory approaches to AI and data protection, shaping how data protection authorities conduct investigations involving AI systems and GPAI models.
