AI Compliance Under GDPR: Lessons from the DPC Inquiry

DPC Inquiry and AI GDPR Obligations

The recent inquiry by the Irish Data Protection Commission (DPC) serves as a crucial reminder for companies using artificial intelligence (AI) tools to remain vigilant about their obligations under the General Data Protection Regulation (GDPR).

Background of the Inquiry

The DPC has launched an inquiry into X, formerly known as Twitter, focusing on the processing of personal data from EU/EEA users on the social media platform. This investigation particularly examines the use of publicly accessible posts to train generative AI models, specifically the Grok Large Language Models (LLMs) developed by xAI, a company owned by Elon Musk.

The inquiry aims to scrutinize the compliance of X with GDPR provisions, particularly regarding the lawfulness and transparency of data processing.

Compliance and Data Processing

The DPC’s investigation will determine whether the personal data used to train Grok was processed lawfully and whether the company adhered to mandatory transparency requirements. Commentators have emphasized the need for robust regulatory frameworks to ensure that AI development aligns with legal and ethical standards.

Using personal data to train AI models poses challenges from a data protection perspective. For instance, it can be difficult to ensure that data subject rights are protected. There is a risk that personal data may inadvertently be revealed to third parties in unexpected ways if the AI model lacks appropriate safeguards.

Previous Investigations and Commitments

In the summer of 2024, the DPC initiated and swiftly concluded an investigation into X regarding the alleged unlawful processing of user data to train Grok. Consequently, X committed to permanently refrain from processing EU users’ data for training Grok and deleted all previously processed data used for this purpose. Despite these measures, the ongoing inquiry seeks to ensure compliance and address any remaining issues.

Regulatory Focus and Scope

The DPC’s inquiry reflects its increasing focus on AI matters over the past year. The inquiry’s scope is broad, addressing various GDPR provisions, particularly those concerning the lawfulness and transparency of data processing. This includes evaluating whether X had a lawful basis to process personal data in this context and whether users were adequately informed that their personal data would be used to train AI models.

Of particular concern is the potential for special category personal data to be used in training the AI model if not adequately filtered out. The GDPR requires that processing of special category data satisfy one of the conditions laid down in Article 9 before it is permitted.

Broader Implications for AI Development

The DPC’s inquiry is part of a broader effort to ensure that AI technologies are developed and deployed in compliance with data protection regulations. It also underscores the central role the Irish DPC plays in overseeing the EU data protection compliance of international tech companies, especially at the intersection of data and AI.

The Irish government has identified AI as a primary focus, and with many leading international tech companies headquartered in Ireland, the country is well-positioned to become a hub for AI innovation. Innovation, however, brings the necessity for regulation, and the DPC, alongside other regulators, will likely play a significant role in enforcing the upcoming EU AI Act.

Conclusion and Future Considerations

The investigation will be closely monitored in light of the EU AI Act’s implementation deadline of 2 August 2025, when obligations covering General Purpose AI (GPAI) models such as the Grok LLMs take effect. The EU AI Act mandates detailed documentation and transparency requirements for these models.

The outcome of this inquiry could influence future regulatory approaches to AI and data protection, shaping how data protection authorities conduct investigations involving AI systems and GPAI models.
