BaFin’s Expectations for ICT Risk Management and the Use of AI
The German Federal Financial Supervisory Authority (BaFin) has recently issued non-binding guidance clarifying how financial institutions should manage Information and Communication Technology (ICT) risks arising from Artificial Intelligence (AI) systems. The guidance is particularly relevant under Regulation (EU) 2022/2554, the Digital Operational Resilience Act (DORA), and related EU regulations.
The Situation
Financial institutions are increasingly integrating AI systems, especially generative AI and large language models (LLMs), into their operations. As a result, these systems must be fully embedded within existing ICT governance, testing, and third-party risk frameworks, and they are subject to heightened supervisory scrutiny.
The Result
To comply with evolving supervisory expectations, financial institutions that employ or plan to deploy AI must reassess their governance, testing, cloud outsourcing, and incident reporting practices.
Key Guidance Elements
The guidance aims to provide additional direction regarding AI systems under DORA, addressing third-party and outsourcing risks as outlined in:
- Delegated Regulation (EU) 2024/1774 on the ICT Risk Management Framework (RTS RMF)
- Delegated Regulation (EU) 2025/532 on the subcontracting of ICT services supporting critical or important functions
Among the notable features of the guidance is a case study of an institution operating an LLM-based AI assistant across various infrastructures, which analyzes the associated risks and their treatment under Regulation (EU) No 575/2013 (CRR) and Directive 2009/138/EC (Solvency II).
Governance and Risk Management
According to the guidance, financial institutions are expected to:
- AI Strategy: Develop a management-approved strategy that outlines clear responsibilities, fosters AI competencies, and promotes interdisciplinary collaboration, particularly when AI supports critical functions. This strategy should complement a technology roadmap encompassing ICT resources, capacity, and investments.
- Integration of AI Systems: Integrate AI-based systems into DORA-compliant ICT risk management frameworks, covering aspects such as identification, protection, detection, incident response, recovery, training, and crisis communication.
- Robust Development Standards: Apply stringent development, change management, and documentation standards to in-house AI developments, especially concerning open-source components and AI-assisted code generation.
- Testing Obligations: Extend the testing requirements that apply to other ICT systems to AI-based systems, with the depth of testing depending on criticality. Generative AI and LLMs warrant special care due to their complexity.
- Operational Processes: Establish defined processes for AI systems covering asset identification, classification, capacity monitoring, access control, logging, anomaly detection, and incident response.
- Third-Party Risk Management: Manage third-party risks rigorously, especially given the reliance on cloud services for AI systems. This includes conducting thorough risk assessments, performing due diligence, and establishing clear contractual provisions.
- Cybersecurity and Data Security: Implement cybersecurity and data security controls throughout the AI lifecycle, ensuring data integrity and quality, especially for training data.
- Incident Management: Ensure that incidents related to AI systems are identified, assessed, and reported, incorporating AI-specific detection and impact analysis.
Three Key Takeaways
- The guidance emphasizes that AI-based systems are not subject to a separate regulatory regime but must be integrated into existing DORA-compliant ICT governance.
- While non-binding, this guidance is expected to serve as a de facto benchmark, urging financial institutions to fully embed AI systems into their governance frameworks.
- Financial institutions should prioritize robust third-party and cloud risk management, end-to-end cybersecurity, and effective incident detection and reporting processes to align with supervisory expectations.
As AI continues to evolve within the financial sector, institutions must remain vigilant in adapting their risk management frameworks to meet BaFin's expectations and ensure the safe deployment of AI technologies.