Guidelines for Compliance with the EU AI Act

EU AI Act Compliance: Guidance from the Spanish AI Regulator

On December 10, 2025, the Spanish supervisory authority for the EU AI Act (Agencia Española de Supervisión de Inteligencia Artificial, or AESIA) published a comprehensive set of 16 guidelines and non-binding checklists designed to assist companies in navigating their obligations under the AI Act, which came into force in August 2024.

Overview of the Guidelines

The guidelines address the core topics companies must manage to comply with the AI Act.

1. Introduction to the AI Act

Guidelines No. 1 (26 pages) provide an overview of the main principles of the AI Act, including its risk-based approach, along with prohibitions and obligations that vary according to the risk level of an AI system. The guidelines also clarify the roles of economic operators and key obligations such as AI literacy and transparency requirements.

2. Practical Examples for Understanding the AI Act

Guidelines No. 2 (21 pages) offer real-world examples of AI systems, illustrating how the law’s obligations apply. Scenarios include biometric identification systems in the workplace and AI tools for HR management.

3. Conformity Assessments

Guidelines No. 3 (47 pages) detail the conformity assessment requirement, explaining the assessment process, recommended formats, and the steps for meeting compliance obligations.

4. Quality Management Systems

Guidelines No. 4 (44 pages) outline the necessary elements for establishing a quality management system specifically for high-risk AI systems.

5. Risk Management Systems

Guidelines No. 5 (63 pages) describe the essential components required for implementing a risk management system for high-risk AI systems.

6. Human Oversight

Guidelines No. 6 (36 pages) explain how to incorporate human oversight obligations in the design and development of AI systems.

7. Data and Data Governance

Guidelines No. 7 (79 pages) provide instructions on managing data for AI systems, including training, validation, and testing datasets.

8. Transparency

Guidelines No. 8 (56 pages) clarify how to implement the AI Act’s transparency requirements in practice.

9. Accuracy

Guidelines No. 9 (62 pages) focus on compliance with the AI Act’s accuracy requirements, offering specific measures to implement throughout an AI system’s lifecycle.

10. Robustness

Guidelines No. 10 (73 pages) outline necessary measures to ensure the robustness of high-risk AI systems.

11. Cybersecurity

Guidelines No. 11 (79 pages) list cybersecurity measures and provide practical guidance on their implementation.

12. Record Keeping

Guidelines No. 12 (34 pages) assist AI system providers in fulfilling their record-keeping obligations throughout the system’s lifecycle.

13. Post-Market Monitoring

Guidelines No. 13 (38 pages) set out procedures for monitoring AI systems once they are on the market.

14. Incident Reporting

Guidelines No. 14 (25 pages) outline the steps to report serious incidents involving high-risk AI systems.

15. Technical Documentation

Guidelines No. 15 (62 pages) detail the content required for the technical documentation of high-risk AI systems.

16. Checklist Manual and Checklists

Guidelines No. 16 (16 pages) include a Checklist Manual with 13 Excel files to help companies document their compliance measures.

These guidelines are a valuable resource for companies that develop, market, or deploy regulated AI systems, offering a practical framework for documenting and reviewing compliance measures. AESIA has confirmed that the guidelines remain under ongoing review and may be amended in light of future regulatory changes.
