DPC Inquiry and AI GDPR Obligations
The recent inquiry by the Irish Data Protection Commission (DPC) serves as a crucial reminder for companies using artificial intelligence (AI) tools to remain vigilant about their obligations under the General Data Protection Regulation (GDPR).
Background of the Inquiry
The DPC has launched an inquiry into X, formerly known as Twitter, focusing on the processing of personal data from EU/EEA users on the social media platform. This investigation particularly examines the use of publicly accessible posts to train generative AI models, specifically the Grok Large Language Models (LLMs) developed by xAI, a company owned by Elon Musk.
The inquiry aims to scrutinize the compliance of X with GDPR provisions, particularly regarding the lawfulness and transparency of data processing.
Compliance and Data Processing
The DPC’s investigation will determine whether the personal data used to train Grok was processed lawfully and whether the company adhered to mandatory transparency requirements. The inquiry underscores the need for robust regulatory frameworks to ensure that AI development aligns with legal and ethical standards.
Using personal data to train AI models poses challenges from a data protection perspective. For instance, it can be difficult to ensure that data subject rights, such as the rights to erasure and rectification, can still be exercised once data has been absorbed into a trained model. There is also a risk that personal data may inadvertently be revealed to third parties in unexpected ways if the AI model lacks appropriate safeguards.
Previous Investigations and Commitments
In the summer of 2024, the DPC initiated and swiftly concluded an investigation into X regarding the alleged unlawful processing of user data to train Grok. Consequently, X committed to permanently refrain from processing EU users’ data for training Grok and deleted all previously processed data used for this purpose. Despite these measures, the ongoing inquiry seeks to ensure compliance and address any remaining issues.
Regulatory Focus and Scope
The DPC’s inquiry reflects its increasing focus on AI matters over the past year. The inquiry’s scope is broad, addressing various GDPR provisions, particularly those concerning the lawfulness and transparency of data processing. This includes evaluating whether X had a lawful basis to process personal data in this context and whether users were adequately informed that their personal data would be used to train AI models.
Of particular concern is the potential for special category personal data (such as data revealing health, religious beliefs, or political opinions) to be used in training the AI model if it is not adequately filtered out. The GDPR requires that the processing of special category data meet one of the conditions laid down in Article 9(2) before it is permitted.
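As a purely illustrative sketch (not drawn from the inquiry or from any disclosed xAI practice), a training pipeline might attempt a crude keyword-based pre-filter to exclude posts that appear to contain special category data before they enter a training corpus. The pattern list and the `filter_special_category` function below are hypothetical, and simple keyword matching alone would fall well short of what Article 9 compliance demands:

```python
import re

# Hypothetical, illustrative patterns hinting at Article 9 special
# category data (health, political opinions, religious beliefs).
# A real system would need far more sophisticated classification.
SPECIAL_CATEGORY_PATTERNS = [
    r"\bmy diagnosis\b",
    r"\bI voted for\b",
    r"\bmy religion\b",
]

def filter_special_category(posts):
    """Drop posts matching any special-category pattern, keeping the rest
    as candidates for a training corpus."""
    compiled = [re.compile(p, re.IGNORECASE) for p in SPECIAL_CATEGORY_PATTERNS]
    return [post for post in posts
            if not any(rx.search(post) for rx in compiled)]
```

Even a filter like this only reduces, rather than eliminates, the risk: special category data can be implied by context without any obvious keyword, which is partly why regulators scrutinise filtering claims so closely.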
Broader Implications for AI Development
The DPC’s inquiry is part of a broader effort to ensure that AI technologies are developed and deployed in compliance with data protection law. It also underscores the central role the Irish DPC plays in regulating the EU data protection compliance of international tech companies, especially where data processing and AI intersect.
The Irish government has identified AI as a primary focus, and with many leading international tech companies headquartered in Ireland, the country is well-positioned to become a hub for AI innovation. However, with innovation comes the necessity for regulation, and the DPC, alongside other regulators, is likely to play a significant role in enforcing the upcoming EU AI Act.
Conclusion and Future Considerations
The investigation will be closely monitored in light of the EU AI Act's implementation deadline of 2 August 2025, from which obligations covering General Purpose AI (GPAI) models such as the Grok LLMs begin to apply. The EU AI Act mandates detailed documentation and transparency requirements for these models.
The outcome of this inquiry could influence future regulatory approaches to AI and data protection, shaping how data protection authorities conduct investigations involving AI systems and GPAI models.