AI Compliance Under GDPR: Lessons from the DPC Inquiry

DPC Inquiry and AI GDPR Obligations

The recent inquiry by the Irish Data Protection Commission (DPC) serves as a crucial reminder for companies using artificial intelligence (AI) tools to remain vigilant about their obligations under the General Data Protection Regulation (GDPR).

Background of the Inquiry

The DPC has launched an inquiry into X, formerly known as Twitter, focusing on the processing of personal data from EU/EEA users on the social media platform. This investigation particularly examines the use of publicly accessible posts to train generative AI models, specifically the Grok Large Language Models (LLMs) developed by xAI, the company founded by Elon Musk.

The inquiry aims to scrutinize the compliance of X with GDPR provisions, particularly regarding the lawfulness and transparency of data processing.

Compliance and Data Processing

The DPC’s investigation will determine whether the personal data used to train Grok was processed lawfully and whether the company adhered to mandatory transparency requirements. Commentators have emphasized the need for robust regulatory frameworks to ensure that AI development aligns with legal and ethical standards.

Using personal data to train AI models poses challenges from a data protection perspective. For instance, it can be difficult to ensure that data subject rights are protected. There is a risk that personal data may inadvertently be revealed to third parties in unexpected ways if the AI model lacks appropriate safeguards.

Previous Investigations and Commitments

In the summer of 2024, the DPC initiated and swiftly concluded an investigation into X regarding the alleged unlawful processing of user data to train Grok. Consequently, X committed to permanently refrain from processing EU users’ data for training Grok and deleted all previously processed data used for this purpose. Despite these measures, the ongoing inquiry seeks to ensure compliance and address any remaining issues.

Regulatory Focus and Scope

The DPC’s inquiry reflects its increasing focus on AI matters over the past year. The inquiry’s scope is broad, addressing various GDPR provisions, particularly those concerning the lawfulness and transparency of data processing. This includes evaluating whether X had a lawful basis to process personal data in this context and whether users were adequately informed that their personal data would be used to train AI models.

Of particular concern is the potential for special category personal data to enter the AI model’s training data if not adequately filtered out. Under Article 9 GDPR, processing special category data is prohibited unless one of the conditions laid down in Article 9(2) applies.
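To make the filtering challenge concrete, the sketch below shows one naive pre-filtering approach: screening posts for apparent special category data before they enter a training set. The keyword lists and function names are purely illustrative assumptions, not a legally sufficient or production-grade control; real systems would need far more sophisticated classification and human review.

```python
# Illustrative sketch only: keyword-based pre-filtering of training data for
# apparent special category data (GDPR Article 9). The categories, keywords,
# and function names below are hypothetical examples.
SPECIAL_CATEGORY_KEYWORDS = {
    "health": ["diagnosis", "medication", "disability"],
    "political opinions": ["voted for", "party member"],
    "religious beliefs": ["my faith", "religious belief"],
}


def contains_special_category_data(text: str) -> bool:
    """Return True if the text matches any illustrative keyword."""
    lowered = text.lower()
    return any(
        keyword in lowered
        for keywords in SPECIAL_CATEGORY_KEYWORDS.values()
        for keyword in keywords
    )


def filter_training_posts(posts: list[str]) -> list[str]:
    """Exclude posts that appear to contain special category data."""
    return [post for post in posts if not contains_special_category_data(post)]


posts = [
    "Great weather at the conference today.",
    "My diagnosis came back yesterday, starting medication soon.",
]
print(filter_training_posts(posts))  # only the first post survives the filter
```

A filter of this kind illustrates why regulators scrutinize training pipelines: simple keyword matching will both miss special category data expressed in other words and wrongly exclude innocuous posts, so its presence alone does not establish GDPR compliance.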

Broader Implications for AI Development

The DPC’s inquiry is part of a broader effort to ensure that AI technologies are developed and deployed in compliance with data protection regulations. Given the interplay between data and AI, the Irish DPC plays a central role in regulating the EU data protection compliance of international tech companies.

The Irish government has identified AI as a primary focus, and with many leading international tech companies headquartered in Ireland, the country is well-positioned to become a hub for AI innovation. However, with innovation comes the necessity for regulation, and the DPC, alongside other regulators, will likely play a significant role in enforcing the upcoming EU AI Act.

Conclusion and Future Considerations

The investigation will be closely monitored in light of the upcoming EU AI Act implementation deadline of 2 August 2025, which includes obligations covering General Purpose AI (GPAI) models like Grok LLM. The EU AI Act mandates detailed documentation and transparency requirements for these models.

The outcome of this inquiry could influence future regulatory approaches to AI and data protection, shaping how data protection authorities conduct investigations involving AI systems and GPAI models.
