BaFin’s AI Risk Management Expectations for Financial Institutions

The German Federal Financial Supervisory Authority (BaFin) has issued non-binding guidance clarifying how financial institutions should manage Information and Communication Technology (ICT) risks arising from Artificial Intelligence (AI) systems. The guidance is particularly relevant under Regulation (EU) 2022/2554, the Digital Operational Resilience Act (DORA), and related EU regulations.

The Situation

Financial institutions are increasingly integrating AI systems, especially generative AI and large language models (LLMs), into their operations. As a result, these systems must be thoroughly embedded within existing ICT governance, testing, and third-party risk frameworks, and they are subject to heightened supervisory scrutiny.

The Result

To comply with evolving supervisory expectations, financial institutions that use or plan to deploy AI must reassess their governance, testing, cloud outsourcing, and incident reporting practices.

Key Guidance Elements

The guidance aims to provide additional direction regarding AI systems under DORA, addressing third-party and outsourcing risks as outlined in:

  • Delegated Regulation (EU) 2024/1774 on the ICT Risk Management Framework (RTS RMF)
  • Delegated Regulation (EU) 2025/532 on the subcontracting of ICT services supporting critical or important functions

Among the guidance’s notable features is a case study of an institution operating an LLM-based AI assistant across various infrastructures, analyzing the associated risks and their treatment under Regulation (EU) No 575/2013 (CRR) and Directive 2009/138/EC (Solvency II).

Governance and Risk Management

The guidance sets out the following expectations for financial institutions:

  • AI Strategy: Develop a management-approved strategy that outlines clear responsibilities, fosters AI competencies, and promotes interdisciplinary collaboration, particularly when AI supports critical functions. This strategy should complement a technology roadmap encompassing ICT resources, capacity, and investments.
  • Integration of AI Systems: Integrate AI-based systems into DORA-compliant ICT risk management frameworks, covering aspects such as identification, protection, detection, incident response, recovery, training, and crisis communication.
  • Robust Development Standards: Apply stringent development, change management, and documentation standards to in-house AI developments, especially concerning open-source components and AI-assisted code generation.
  • Testing Obligations: Extend testing requirements to AI-based systems as to any other ICT system, with the depth of testing depending on criticality. Generative AI and LLMs warrant special care due to their complexity.
  • Operational Processes: Establish defined processes for AI systems covering asset identification, classification, capacity monitoring, access control, logging, anomaly detection, and incident response (see the illustrative sketch after this list).
  • Third-Party Risk Management: Manage third-party risks rigorously, especially given the reliance on cloud services for AI systems. This includes thorough risk assessments, due diligence, and clear contractual provisions.
  • Cybersecurity and Data Security: Implement cybersecurity and data security controls throughout the AI lifecycle, ensuring data integrity and quality, especially for training data.
  • Incident Management: Ensure that incidents related to AI systems are identified, assessed, and reported, incorporating AI-specific detection and impact analysis.
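
To make the asset identification and classification expectation above more concrete, here is a minimal, hypothetical Python sketch of what one entry in an AI asset register might look like. The AIAssetRecord class, its field names, the example provider, and the 365-day testing cycle are illustrative assumptions, not terms taken from the guidance or from DORA; the guidance only requires that testing depth and frequency follow criticality.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Criticality(Enum):
    """Classification of the business function the AI system supports."""
    CRITICAL_OR_IMPORTANT = "critical_or_important"
    STANDARD = "standard"


@dataclass
class AIAssetRecord:
    """One entry in a hypothetical ICT asset register for an AI system."""
    name: str                                  # e.g. the LLM assistant from the case study
    model_type: str                            # "LLM", "classifier", ...
    criticality: Criticality
    third_party_provider: str | None = None    # cloud or API provider, if any
    training_data_sources: list[str] = field(default_factory=list)
    last_tested: datetime | None = None

    def testing_overdue(self, max_age_days: int = 365) -> bool:
        """Flag assets whose last test exceeds the institution's testing cycle.

        The 365-day default is an illustrative assumption, not a DORA figure.
        """
        if self.last_tested is None:
            return True
        age = datetime.now(timezone.utc) - self.last_tested
        return age.days > max_age_days


# Usage: register a hypothetical assistant and check its testing status.
assistant = AIAssetRecord(
    name="internal-llm-assistant",
    model_type="LLM",
    criticality=Criticality.CRITICAL_OR_IMPORTANT,
    third_party_provider="example-cloud-provider",  # hypothetical name
    training_data_sources=["internal-knowledge-base"],
    last_tested=datetime(2024, 6, 1, tzinfo=timezone.utc),
)
if assistant.testing_overdue():
    print(f"{assistant.name}: testing overdue; schedule resilience tests")
```

In practice, such a register would feed into the institution’s DORA-compliant ICT asset inventory and testing plan rather than stand alone.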

Three Key Takeaways

  1. The guidance emphasizes that AI-based systems are not subject to a separate regulatory regime but must be integrated into existing DORA-compliant ICT governance.
  2. While non-binding, this guidance is expected to serve as a de facto benchmark, urging financial institutions to fully embed AI systems into their governance frameworks.
  3. Financial institutions should prioritize robust third-party and cloud risk management, end-to-end cybersecurity, and effective incident detection and reporting processes to align with supervisory expectations.

In conclusion, as AI continues to evolve within the financial sector, institutions must remain vigilant in adapting their risk management frameworks to meet BaFin’s expectations and ensure the safe deployment of AI technologies.
