Spanish Supervisory Authority Issues Detailed Guidance on Agentic AI and GDPR Compliance
In February 2026, the Spanish data protection authority, Agencia Española de Protección de Datos (AEPD), published guidance addressing data protection issues related to the use of AI agents. This guidance follows a similar analysis by the UK Information Commissioner’s Office.
Understanding AI Agents
The AEPD defines an AI agent as a system that “acts appropriately according to their circumstances and objectives, is flexible in the face of changing environments and goals, learns from experience, and makes appropriate decisions given their perceptual and computational limitations.” The key characteristic of an AI agent is its operational autonomy, allowing it to plan and adapt actions independently in pursuit of a goal.
For example, an AI agent can automatically organize a business trip by accessing an employee’s calendar, booking transport and accommodation, and gathering relevant information like weather updates.
Controllers and Processors
From a data protection perspective, AI agents can perform operations on personal data. However, the AEPD clarifies that the AI agent itself is not responsible for this processing. Instead, AI agents are viewed as technical means through which processing occurs, not as autonomous legal actors. The distinction between execution and responsibility is crucial: while an AI agent may perform data-handling operations autonomously, legal responsibility lies with the controller or processor deploying the system.
Automated Decision-Making
The guidance discusses when actions taken by AI agents could be classified as automated decision-making under Article 22 of the GDPR. This classification depends on the effects of the decision and the level of meaningful human intervention, rather than merely the use of autonomous technology.
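The test the guidance points to can be illustrated in code. The sketch below is a hypothetical routing gate, not anything the AEPD prescribes: it assumes a deployer has classified each proposed agent action by whether it produces legal or similarly significant effects for a person, and routes those actions to a human reviewer who can change the outcome, so that the decision is not based solely on automated processing.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentDecision:
    """Hypothetical record of an action an AI agent proposes to take."""
    description: str
    # True for decisions with legal or similarly significant effects,
    # e.g. refusing a loan or rejecting a job application.
    has_significant_effect: bool

def route_decision(decision: AgentDecision,
                   human_review: Callable[[AgentDecision], str]) -> str:
    """Route potentially Article 22-relevant decisions to human review.

    `human_review` stands in for a real review workflow; to count as
    meaningful intervention, the reviewer must be able to change the
    outcome, not merely rubber-stamp it.
    """
    if decision.has_significant_effect:
        return human_review(decision)  # human makes the final call
    return "executed_automatically"    # routine action, no Article 22 trigger
```

Whether a given action truly has "significant effects" remains a legal judgment that this kind of flag can record but not make.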
Use of External Services
A significant aspect of AI agents is their ability to connect to third-party tools, APIs, databases, or online platforms. This capability enhances their power but also complicates the processing chain. The AEPD advises controllers to evaluate:
- whether personal data is sent to third parties;
- whether external sources are reliable and traceable;
- whether contracts, governance arrangements, and technical controls comply with the GDPR.
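The first of these checks, whether personal data leaves the controller's environment, can be enforced technically at the point where the agent calls an external tool. The sketch below is a minimal, assumed design: the PII patterns and the allow-list of vetted processors are illustrative placeholders, and a real deployment would rely on a proper PII-detection service and a maintained register of processing agreements.

```python
import re

# Hypothetical detection patterns; real systems need far more robust PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

# Assumed allow-list of destinations covered by a data processing agreement.
APPROVED_PROCESSORS = {"calendar-api.example.com"}

def gate_tool_call(destination: str, payload: str) -> str:
    """Block outbound tool calls that would send personal data to an
    unvetted third party; allow everything else."""
    contains_pii = any(p.search(payload) for p in PII_PATTERNS.values())
    if contains_pii and destination not in APPROVED_PROCESSORS:
        raise PermissionError(
            f"personal data blocked for unvetted destination: {destination}"
        )
    return "allowed"
```

A gate like this also yields a natural audit log of which personal data went where, supporting the traceability point above.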
Memory as a Compliance Risk
AI agents can maintain data in various memory layers, including short-term context and long-term stores. Each layer presents unique data protection issues. The AEPD emphasizes the need for clear rules regarding:
- what the agent may store;
- the purpose of storage;
- the duration of storage.
Excessive data retention can conflict with the GDPR principles of purpose limitation and data minimization. Organizations must also respect data-subject rights with regard to agent memory and logs, balancing the logging needed for traceability against the risks of retaining too much data.
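The three rules the AEPD calls for (what may be stored, for which purpose, and for how long) map naturally onto a memory policy enforced in code. The sketch below is a simplified illustration under assumed names, not an implementation the guidance mandates: each long-term memory entry is tagged with its purpose and retention limit, storage for unapproved purposes is refused, and expired entries are purged.

```python
import time
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    """Hypothetical long-term memory record with purpose and retention tags."""
    content: str
    purpose: str            # why the agent stored it (purpose limitation)
    stored_at: float        # timestamp of storage
    max_age_seconds: float  # retention period (storage limitation)

class AgentMemory:
    def __init__(self, allowed_purposes: set):
        self.allowed_purposes = allowed_purposes
        self.entries = []

    def remember(self, content: str, purpose: str, max_age_seconds: float) -> None:
        # Refuse to store data for purposes outside the agent's remit.
        if purpose not in self.allowed_purposes:
            raise ValueError(f"storage not permitted for purpose: {purpose}")
        self.entries.append(MemoryEntry(content, purpose, time.time(), max_age_seconds))

    def purge_expired(self) -> int:
        """Delete entries past their retention period; return how many were removed."""
        now = time.time()
        before = len(self.entries)
        self.entries = [e for e in self.entries
                        if now - e.stored_at < e.max_age_seconds]
        return before - len(self.entries)
```

Tagging entries this way also makes erasure requests tractable: entries tied to a data subject or purpose can be located and deleted rather than lingering in an opaque store.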
Actionable Recommendations for Organizations
The AEPD guidance, spanning 81 pages, offers a comprehensive assessment of the data protection implications of agentic AI. Organizations should prioritize:
- clear accountability for AI-enabled processing;
- a solid understanding of data flows, including external tools;
- well-defined rules for agent memory and retention;
- early application of data protection by design and by default concepts.
As EU supervisory authorities increasingly engage with autonomous AI systems, the guidance underscores that greater technical autonomy does not diminish legal responsibility. Organizations are expected to demonstrate effective governance and accountability over agentic AI processing, a point also made in a recent warning from the Dutch Data Protection Authority about the security and data protection risks of AI agents.
Continuous monitoring of regulatory developments relating to AI is essential for compliance, and organizations should seek expert advice on navigating complex regulatory landscapes.