AI’s Impact on Financial Sector Resilience

AI Puts Financial Sector Resilience Under Pressure

As artificial intelligence (AI) becomes more deeply integrated into the financial system, firms must adopt stronger governance, tighter operational safeguards, and enhanced workforce skills to manage emerging risks. These insights stem from the second phase of the Global Risk Institute’s (GRI) Financial Industry Forum on AI (FAFAI II).

Partnership and Findings

FAFAI II, a collaboration between GRI and key Canadian authorities including the Office of the Superintendent of Financial Institutions (OSFI), the Bank of Canada, and the Department of Finance Canada, focused on various AI-related risks, mitigants, and opportunities. The forum culminated in the introduction of an “AGILE” framework aimed at navigating AI risk.

AI Integration in Core Functions

AI adoption is no longer limited to pilot projects; it is now embedded in crucial functions such as credit decisioning, pricing, trading, fraud detection, and customer interaction. This necessitates an evolution in risk management and governance that keeps pace with deployment rather than lagging behind.

Key Priorities for Financial Institutions

Three main priorities emerged for financial institutions:

  • Elevating AI Governance to the boardroom.
  • Reinforcing Operational Resilience.
  • Building AI Literacy across the workforce.

Board-Level AI Governance

AI governance has become a strategic issue. As advanced systems, including autonomous AI, are rolled out, boards must understand where AI is deployed, how it is monitored, and who is accountable for failures. Key elements include:

  • Raising board-level awareness of AI-related risks.
  • Clarifying decision-making responsibilities for AI-driven outcomes.
  • Embedding flexible oversight mechanisms.

Weak AI oversight also intersects with Directors’ and Officers’ (D&O) liability exposure: governance failures can erode valuation and investor confidence at a time when regulatory expectations are shifting.

Operational Resilience Under Strain

The adoption of AI amplifies existing operational risks. As institutions rely more on AI tools and external data providers, they increase their dependence on technology supply chains. Participants highlighted the need for:

  • Strong cyber hygiene.
  • Rigorous third-party risk management.
  • Clear oversight of technology and data dependencies.

The 2026 Risk Barometer from Allianz indicates that cyber incidents are the top global business threat, with AI now ranking second, underscoring the need for robust coverage as institutions digitize.

Workforce Readiness and AI Literacy

With AI tools spreading across all functions, firms must invest in training to ensure that employees—from engineers to executives—understand AI’s capabilities and limitations. Building sector-wide AI literacy is critical for:

  • Responsible deployment.
  • Detecting new threats like AI-enabled fraud and sophisticated cyber attacks.

Regulatory and Liability Implications

The findings align with a tightening regulatory environment around AI. For example, most obligations under the EU Artificial Intelligence Act are expected to apply from August 2026, including transparency requirements for AI-generated content. Such regulations will likely influence how insurers assess AI risks and draft policy language.

Collaboration to Prevent Systemic Shocks

Discussions from FAFAI II emphasize viewing AI risk as a potential source of sector-wide stress rather than just firm-level technology issues. By convening stakeholders from banking, insurance, and regulatory bodies, GRI aims to establish an early-warning mechanism for emerging vulnerabilities and promote consistent supervisory expectations.
