Draft Guidance on Reporting Serious Incidents Under the EU AI Act

On September 26, 2025, the European Commission (EC) published draft guidance on the reporting requirements for serious incidents involving high-risk AI systems under the EU AI Act. Organizations that develop or deploy AI systems potentially classified as high-risk should review this guidance closely, as understanding the new reporting obligations is essential for compliance planning.

Key Takeaways

  • The Commission released a draft incident reporting template and guidance document.
  • Providers of high-risk AI systems must report “serious incidents” to national authorities.
  • Reporting timelines range from two to fifteen days, depending on the incident’s severity and type: two days for a widespread infringement or a serious and irreversible disruption of critical infrastructure, ten days for an incident involving a person’s death, and fifteen days for other serious incidents.
  • A public consultation is open until November 7, 2025.

Understanding the Incident Reporting Framework

Article 73 of the EU AI Act establishes a tiered reporting system for serious incidents involving high-risk AI systems. Although these requirements will not take effect until August 2026, the newly released draft guidance offers early insight into the Commission’s expectations.

The reporting framework serves multiple purposes: it aims to create an early warning system for harmful patterns, establish clear accountability for providers and users, enable timely corrective measures, and foster transparency to build public trust in AI technologies.

What Qualifies as a “Serious Incident”?

Under Article 3(49) of the Act, a serious incident is an incident or malfunctioning of an AI system that directly or indirectly leads to:

  1. The death of a person or serious harm to an individual’s health;
  2. Serious and irreversible disruption of critical infrastructure;
  3. Infringement of fundamental rights obligations under EU law;
  4. Serious harm to property or the environment.

Notably, the draft guidance covers both direct and indirect causation. For instance, if an AI system produces an incorrect medical analysis that leads to patient harm through a physician’s subsequent decisions, the harm would qualify as an indirect serious incident. Organizations should therefore account for such downstream effects within their risk management frameworks.

Intersection with Existing Reporting Regimes

For organizations managing multiple compliance frameworks, the guidance clarifies overlapping reporting obligations. High-risk AI systems already subject to equivalent reporting requirements under other EU laws, such as NIS2, DORA, or CER, generally only need to report fundamental rights violations under the AI Act.

This approach reflects the Commission’s effort to minimize duplicative reporting burdens; however, practical implementation necessitates careful coordination between AI governance, legal, and compliance teams.

Practical Implications for Organizations

Organizations are encouraged to begin mapping their AI systems against the high-risk criteria and to develop internal processes for incident detection, investigation, and reporting. Key considerations include:

  • Establishing clear incident response protocols;
  • Implementing monitoring systems to detect potential serious incidents;
  • Developing investigation procedures that preserve evidence;
  • Creating cross-functional teams to manage reporting obligations;
  • Updating risk assessments to incorporate serious incident scenarios.

Next Steps

Organizations should actively participate in the public consultation, which remains open until November 7, 2025. The Commission is particularly interested in feedback and examples regarding the interplay with other reporting regimes.

Moreover, organizations should review their AI governance frameworks to ensure they can effectively implement these reporting requirements once they become applicable in August 2026.
