AI Compliance Under GDPR: Lessons from the DPC Inquiry

DPC Inquiry and AI GDPR Obligations

The recent inquiry by the Irish Data Protection Commission (DPC) serves as a crucial reminder for companies using artificial intelligence (AI) tools to remain vigilant about their obligations under the General Data Protection Regulation (GDPR).

Background of the Inquiry

The DPC has launched an inquiry into X, formerly known as Twitter, focusing on the processing of personal data from EU/EEA users on the social media platform. This investigation particularly examines the use of publicly accessible posts to train generative AI models, specifically the Grok Large Language Models (LLMs) developed by xAI, a company owned by Elon Musk.

The inquiry aims to scrutinize the compliance of X with GDPR provisions, particularly regarding the lawfulness and transparency of data processing.

Compliance and Data Processing

The DPC’s investigation will determine whether the personal data used to train Grok was processed lawfully and whether the company adhered to mandatory transparency requirements. The inquiry underscores the need for robust regulatory frameworks to ensure that AI development aligns with legal and ethical standards.

Using personal data to train AI models poses challenges from a data protection perspective. For instance, it can be difficult to ensure that data subject rights are protected. There is a risk that personal data may inadvertently be revealed to third parties in unexpected ways if the AI model lacks appropriate safeguards.

Previous Investigations and Commitments

In the summer of 2024, the DPC initiated and swiftly concluded an investigation into X regarding the alleged unlawful processing of user data to train Grok. Consequently, X committed to permanently refrain from processing EU users’ data for training Grok and deleted all previously processed data used for this purpose. Despite these measures, the ongoing inquiry seeks to ensure compliance and address any remaining issues.

Regulatory Focus and Scope

The DPC’s inquiry reflects the regulator’s increasing focus on AI matters over the past year. Its scope is broad, addressing various GDPR provisions, particularly concerning the lawfulness and transparency of data processing. This includes evaluating whether X had a lawful basis to process personal data in this context and whether users were adequately informed that their personal data would be used to train AI models.

Of particular concern is the potential for special category personal data to be used in training the AI model if not adequately filtered out. Under the GDPR, processing of special category data is permitted only where one of the conditions laid down in Article 9(2) is met.
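As a purely illustrative sketch (not X’s or xAI’s actual pipeline), a training-data pipeline might screen posts for likely special category signals before ingestion. The pattern list and function names below are hypothetical; a real system would need far more sophisticated classifiers than keyword matching:

```python
import re

# Hypothetical patterns hinting at Article 9 special categories
# (health, religious beliefs, political opinions, trade union
# membership). Illustrative only -- not a compliance mechanism.
SPECIAL_CATEGORY_PATTERNS = [
    r"\bdiagnos(?:is|ed)\b",   # health data
    r"\breligio(?:n|us)\b",    # religious beliefs
    r"\bvote[sd]? for\b",      # political opinions
    r"\btrade union\b",        # trade union membership
]

def is_potentially_special_category(text: str) -> bool:
    """Return True if the text matches any special-category pattern."""
    return any(re.search(p, text, re.IGNORECASE)
               for p in SPECIAL_CATEGORY_PATTERNS)

def filter_training_corpus(posts: list[str]) -> list[str]:
    """Keep only posts with no obvious special-category signals."""
    return [p for p in posts if not is_potentially_special_category(p)]

posts = [
    "Great weather in Dublin today",
    "I was diagnosed with asthma last year",
    "Our trade union meets on Friday",
]
print(filter_training_corpus(posts))  # only the first post survives
```

A filter of this kind addresses only one narrow risk; even a perfectly filtered corpus still requires a lawful basis and transparency towards data subjects under the GDPR.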

Broader Implications for AI Development

The DPC’s inquiry is part of a broader effort to ensure that AI technologies are developed and deployed in compliance with data protection regulations. It also underscores the central role the Irish DPC plays in regulating the EU data protection compliance of international tech companies, especially regarding the interplay between data and AI.

The Irish government has identified AI as a primary focus, and with many leading international tech companies headquartered in Ireland, the country is well-positioned to become a hub for AI innovation. However, with innovation comes the necessity for regulation, and the DPC, alongside other regulators, will likely play a significant role in the regulation of the upcoming EU AI Act.

Conclusion and Future Considerations

The investigation will be closely monitored in light of the upcoming EU AI Act implementation deadline of 2 August 2025, which includes obligations covering General Purpose AI (GPAI) models like Grok LLM. The EU AI Act mandates detailed documentation and transparency requirements for these models.

The outcome of this inquiry could influence future regulatory approaches to AI and data protection, shaping how data protection authorities conduct investigations involving AI systems and GPAI models.
