“Moving Forward Responsibly”: The Dutch DPA’s Vision on Generative AI
On February 4, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) published its vision document Verantwoord Vooruit: AP-visie op generatieve AI (“Moving Forward Responsibly: AP’s Vision on Generative AI”), outlining how organizations can develop and deploy generative artificial intelligence in compliance with the GDPR (known in Dutch as the AVG).
This document is not merely theoretical; it indicates the AP’s regulatory priorities for AI chatbots, image generators, and other general-purpose AI models. The message is clear: while innovation is encouraged, it must not come at the expense of fundamental rights.
Key Safeguards Expected by the AP
To guide responsible AI deployment, the AP proposes a set of technical and organizational safeguards closely aligned with GDPR principles and emerging best practices:
- Transparent System Design and Operation
Transparency is a key theme in the AP’s vision. AI developers and providers should ensure that generative AI applications are easily identifiable as such and open to scrutiny. This may involve sharing documentation, such as model cards, that explains the AI’s capabilities, limitations, and potential biases. Transparency fosters trust, underpins accountability, and helps satisfy the GDPR’s obligations on information provision and fairness.
- Risk Assessments and Mitigation
Prior to and during AI deployment, organizations should conduct thorough risk assessments, such as Data Protection Impact Assessments (DPIAs) and Fundamental Rights Impact Assessments under the AI Act. The AP expects organizations to proactively address privacy, bias, and safety risks posed by generative AI. This approach echoes the GDPR’s mandate for data protection by design and by default, as well as the incoming risk-management requirements of the EU AI Act.
- Clear Purpose Limitation and Legal Basis
Organizations must define a specific, explicit purpose for processing personal data through AI and identify a valid legal basis under the GDPR for that processing. The AP expects organizations to clearly articulate why data is being processed, adhering to the principle of purpose limitation. Collecting personal data for vaguely defined AI projects will not be acceptable.
- Controlled Environments and Robust Data Governance
The AP urges organizations to maintain control over the environments in which AI systems operate and the data they process. This may involve hosting models on secure, EU-based infrastructure, enforcing strict access controls, and applying comprehensive data governance policies to monitor AI usage. By containing generative AI within well-governed IT environments, businesses can reduce the risk of unauthorized access and data breaches and ensure compliance with data residency and security requirements.
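In practice, such controls often start with simple gatekeeping before any call reaches a model. The sketch below is purely illustrative, not an AP requirement: it assumes a hypothetical internal allowlist of approved models, their expected EU-hosted endpoints, and role-based access (all names and values are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical governance policy: which models may be used and where
# they must be hosted. All values are illustrative, not AP mandates.
APPROVED_MODELS = {"internal-gpt": "https://ai.example.eu/v1"}
EU_HOSTED_SUFFIXES = (".eu",)  # naive stand-in for a data-residency check


@dataclass
class AIRequest:
    model: str
    endpoint: str
    user_role: str


def is_request_allowed(
    req: AIRequest,
    allowed_roles: frozenset = frozenset({"analyst", "engineer"}),
) -> bool:
    """Check a generative-AI call against the governance policy."""
    if req.model not in APPROVED_MODELS:
        return False  # model is not on the organization's allowlist
    if req.endpoint != APPROVED_MODELS[req.model]:
        return False  # request targets an unexpected endpoint
    host = req.endpoint.split("/")[2]
    if not host.endswith(EU_HOSTED_SUFFIXES):
        return False  # endpoint is not on EU-hosted infrastructure
    return req.user_role in allowed_roles  # strict access control
```

A real deployment would back this with network policy and identity management rather than an in-process check, but the principle is the same: every AI call passes through a policy layer the organization controls.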
- Lawfulness from Development Through Deployment
The AP’s report emphasizes that both the development and deployment of AI models must comply with the GDPR. AI developers need a legitimate basis to collect and use personal data for training, while AI service providers must ensure any personal data processed by their generative AI is handled lawfully.
Expected Responsibilities: Purpose Limitation, Risk Assessment, and AI Governance
The vision document underscores that organizations bear responsibility for ensuring their use of generative AI is purpose-specific, risk-managed, and well-governed. The AP expects businesses to establish robust internal governance throughout the entire AI lifecycle:
- Think first, deploy second: Clearly define the purpose of the AI and process personal data only as necessary.
- Conduct DPIAs: Identify how AI might affect individual rights or present ethical concerns, addressing risks proactively.
- Maintain continuous governance: Ensure oversight and documentation across the AI’s lifecycle, from design to monitoring and updates.
- Demonstrate accountability: Keep records of AI systems and audit logs of AI outputs, ensuring compliance with GDPR requirements.
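The accountability point above can be made concrete with structured logging. A minimal sketch, assuming a JSON-lines audit log with invented field names: rather than storing raw prompts and outputs (which may themselves contain personal data), it records only hashes plus metadata, keeping the log data-minimised while still supporting audits:

```python
import hashlib
import json
from datetime import datetime, timezone


def audit_log_entry(model: str, purpose: str, prompt: str, output: str) -> str:
    """Record one generative-AI interaction as a JSON audit-log line.

    Storing SHA-256 digests instead of raw text means the log can
    prove *that* and *when* an interaction happened, and match it to a
    documented processing purpose, without duplicating personal data.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "purpose": purpose,  # the documented GDPR processing purpose
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    return json.dumps(entry)
```

Appending each line to a tamper-evident store gives the kind of lifecycle documentation the AP describes: a record per system, per purpose, per output.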
In summary, the AP expects a holistic governance approach: clear purpose definition, rigorous risk assessment, transparent operations, and ongoing control. Organizations that implement these practices will be better positioned to satisfy both the AP’s expectations and the forthcoming obligations of the EU AI Act.
Upcoming AP Guidance and Coordination Under the AI Act
To help steer generative AI in the right direction, the AP is planning to roll out further guidance and tools in 2026:
- Final guidance on generative AI and data protection: The AP will provide clarity on complex legal questions regarding training data and GDPR principles.
- AI Helpdesk: A support desk for generative AI will allow developers and users to pose questions and share concerns.
- AI Regulatory Sandbox: A proposed Dutch AI sandbox will facilitate compliant innovation, allowing developers to experiment under regulatory guidance.
- Coordination Role Under the EU AI Act: The AP is expected to take a leading role in national AI oversight once the AI Act is in effect.
What Can We Expect?
The AP’s vision on generative AI points to a future of more structured oversight of AI technologies. Businesses can anticipate concrete guidelines in 2026, clarifying how to develop and deploy generative AI in compliance with the GDPR. Increased engagement from the AP, through its AI Helpdesk and regulatory sandbox, will provide support and ensure adherence to AI best practices.
Moreover, the AP has identified AI as a top priority moving forward. There may be heightened scrutiny of generative AI deployments, particularly regarding compliance with GDPR for personal data used in training AI models. The AP’s strategy outlines a commitment to intervene against non-compliant AI uses, especially those with significant societal impact, while also offering guidance for responsible AI use.