AI Puts Financial Sector Resilience Under Pressure
As artificial intelligence (AI) becomes more deeply integrated into the financial system, firms must adopt stronger governance, operational safeguards, and enhanced workforce skills to manage emerging risks. These insights stem from the second phase of the Global Risk Institute's (GRI) Financial Industry Forum on Artificial Intelligence (FIFAI II).
Partnership and Findings
FIFAI II, a collaboration between GRI and key Canadian authorities including the Office of the Superintendent of Financial Institutions (OSFI), the Bank of Canada, and the Department of Finance Canada, examined AI-related risks, mitigants, and opportunities. The forum culminated in the introduction of an "AGILE" framework aimed at navigating AI risk.
AI Integration in Core Functions
AI adoption is no longer limited to pilot projects; it is now embedded in crucial functions such as credit decisioning, pricing, trading, fraud detection, and customer interaction. This necessitates an evolution in risk management and governance that keeps pace with deployment rather than lagging behind.
Key Priorities for Financial Institutions
Three main priorities emerged for financial institutions:
- Elevating AI Governance to the boardroom.
- Reinforcing Operational Resilience.
- Building AI Literacy across the workforce.
Board-Level AI Governance
AI governance has become a strategic issue. As advanced systems, including autonomous AI, are rolled out, boards must understand where AI is deployed, how it is monitored, and who is accountable for failures. Key elements include:
- Raising board-level awareness of AI-related risks.
- Clarifying decision-making responsibilities for AI-driven outcomes.
- Embedding flexible oversight mechanisms.
This intersects with Directors' and Officers' (D&O) liability exposure: weak AI oversight can undermine valuation and investor confidence, particularly as regulatory expectations shift.
Operational Resilience Under Strain
The adoption of AI amplifies existing operational risks. As institutions rely more on AI tools and external data providers, they increase their dependence on technology supply chains. Participants highlighted the need for:
- Strong cyber hygiene.
- Rigorous third-party risk management.
- Clear oversight of technology and data dependencies.
The 2026 Risk Barometer from Allianz indicates that cyber incidents are the top global business threat, with AI now ranking second, underscoring the need for robust coverage as institutions digitize.
Workforce Readiness and AI Literacy
With AI tools spreading across all functions, firms must invest in training to ensure that employees—from engineers to executives—understand AI’s capabilities and limitations. Building sector-wide AI literacy is critical for:
- Responsible deployment.
- Detecting new threats like AI-enabled fraud and sophisticated cyber attacks.
Regulatory and Liability Implications
The findings align with a tightening regulatory environment around AI. For example, the EU Artificial Intelligence Act entered into force in August 2024, and most of its obligations, including transparency requirements for AI-generated content, apply from August 2026. Such regulations will likely influence how insurers assess AI risks and draft policy language.
Collaboration to Prevent Systemic Shocks
Discussions from FIFAI II emphasize viewing AI risk as a potential source of sector-wide stress rather than just a set of firm-level technology issues. By convening stakeholders from banking, insurance, and regulatory bodies, GRI aims to establish an early-warning mechanism for emerging vulnerabilities and promote consistent supervisory expectations.