Draft Guidance on Reporting Serious Incidents Under the EU AI Act

On September 26, 2025, the European Commission (EC) published draft guidance on the reporting requirements for serious incidents involving high-risk AI systems under the EU AI Act. The guidance matters for any organization developing or deploying AI systems that may be classified as high-risk: understanding the new reporting obligations now is essential for compliance planning.

Key Takeaways

  • The Commission released a draft incident reporting template and guidance document.
  • Providers of high-risk AI systems must report “serious incidents” to national authorities.
  • Reporting timelines range from two (2) to fifteen (15) days, depending on the incident’s severity and type.
  • A public consultation is open until November 7, 2025.

Understanding the Incident Reporting Framework

Article 73 of the EU AI Act establishes a tiered reporting system for serious incidents involving high-risk AI systems. Although these requirements will not take effect until August 2026, the newly released draft guidance provides insight into the Commission’s expectations.
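The tiered deadlines can be pictured as a simple lookup. The sketch below is illustrative only: the category names and day counts are assumptions drawn from common summaries of Article 73, not from the draft guidance itself, and the Act remains the authoritative source.

```python
# Illustrative sketch of Article 73's tiered reporting deadlines.
# Category labels and day counts are assumptions based on common
# summaries of Article 73; consult the Act and the draft guidance
# for the authoritative mapping.

REPORTING_DEADLINES_DAYS = {
    "widespread_infringement": 2,              # most urgent tier
    "critical_infrastructure_disruption": 2,   # serious, irreversible disruption
    "death": 10,
    "other_serious_incident": 15,              # general rule
}

def reporting_deadline(incident_type: str) -> int:
    """Return the maximum number of days to report an incident,
    falling back to the general 15-day rule for other types."""
    return REPORTING_DEADLINES_DAYS.get(incident_type, 15)
```

A compliance team might use a table like this to drive internal escalation timers, though the real deadlines and any "immediately upon awareness" qualifiers must come from the final guidance.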

The reporting framework serves multiple purposes: it aims to create an early warning system for harmful patterns, establish clear accountability for providers and users, enable timely corrective measures, and foster transparency to build public trust in AI technologies.

What Qualifies as a “Serious Incident”?

Under Article 3(49) of the Act, a serious incident is an incident or malfunction of an AI system that directly or indirectly leads to:

  1. The death of a person or serious harm to an individual’s health;
  2. Serious and irreversible disruption of critical infrastructure;
  3. Infringement of fundamental rights obligations under EU law;
  4. Serious harm to property or the environment.

Notably, the draft guidance emphasizes both direct and indirect causation. For instance, if an AI system provides incorrect medical analysis that leads to patient harm through subsequent physician decisions, it would qualify as an indirect serious incident. This highlights the importance for organizations to account for downstream effects within their risk management frameworks.
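As a thought experiment, the Article 3(49) definition, including its coverage of indirect causation, can be sketched as a simple check. The `Incident` fields and harm-category labels below are hypothetical illustrations for internal triage tooling, not terms defined by the Act or the guidance.

```python
from dataclasses import dataclass

# Harm categories paraphrasing Article 3(49); labels are illustrative.
SERIOUS_HARMS = {
    "death_or_serious_health_harm",
    "irreversible_infrastructure_disruption",
    "fundamental_rights_infringement",
    "serious_property_or_environmental_harm",
}

@dataclass
class Incident:
    harm: str               # one of the categories above, or something else
    caused_by_ai: bool      # the AI system's incident/malfunction played a causal role
    indirect: bool = False  # e.g. harm mediated by a downstream human decision

def is_serious_incident(incident: Incident) -> bool:
    # Article 3(49) covers both direct and indirect causation, so the
    # `indirect` flag does not exempt an incident from reporting.
    return incident.caused_by_ai and incident.harm in SERIOUS_HARMS
```

In the medical-analysis example from the text, harm mediated by a physician's subsequent decision would still qualify: `Incident("death_or_serious_health_harm", caused_by_ai=True, indirect=True)` evaluates as serious.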

Intersection with Existing Reporting Regimes

For organizations managing multiple compliance frameworks, the guidance clarifies overlapping reporting obligations. High-risk AI systems already subject to equivalent reporting requirements under other EU laws, such as NIS2, DORA, or CER, generally only need to report fundamental rights violations under the AI Act.

This approach reflects the Commission’s effort to minimize duplicative reporting burdens; however, practical implementation necessitates careful coordination between AI governance, legal, and compliance teams.

Practical Implications for Organizations

Organizations are encouraged to begin mapping their AI systems against the high-risk criteria and to develop internal processes for incident detection, investigation, and reporting. Key considerations include:

  • Establishing clear incident response protocols;
  • Implementing monitoring systems to detect potential serious incidents;
  • Developing investigation procedures that preserve evidence;
  • Creating cross-functional teams to manage reporting obligations;
  • Updating risk assessments to incorporate serious incident scenarios.

Next Steps

Organizations should actively participate in the public consultation, which remains open until November 7, 2025. The Commission is particularly interested in feedback and examples regarding the interplay with other reporting regimes.

Moreover, organizations should review their AI governance frameworks to ensure they can effectively implement these reporting requirements once they become applicable in August 2026.
