Implementing AI Governance in Banking: A Strategic Framework

Banking’s AI Rulebook: Turning the Treasury Framework Into Action

The AI frontier has been likened to the Wild West: competing priorities, limited oversight, and banks and financial institutions (FIs) racing to stake their claim in a modern gold rush. Now, a new regulatory sheriff has arrived. The U.S. Treasury’s Financial Services AI Risk Management Framework (FS AI RMF) adapts the 2023 NIST AI Risk Management Framework for financial institutions.

This framework provides the industry with a shared vocabulary and a common control architecture for governing AI – from fraud detection to customer engagement to internal productivity tools. The sheriff isn’t out to shut down these digital boomtowns: the goal is to tame the chaos and to bring accountability and consistency to AI governance.

Structured Evaluation of AI Risk

The framework provides FIs of all sizes a structured way to evaluate and manage AI risk. The challenge for most institutions is that the framework assumes that banks already know where AI resides within their organizations and how it is being used. Those without a baseline inventory of AI usage will face immediate sequencing challenges when applying the framework.

Implementing the framework is a heavy lift, demanding cross-functional coordination and executive backing. Without clear ownership and accountability, organizations may struggle to operationalize a program spanning governance, legal, risk, and other functions.

More importantly, the framework is not just a regulatory checkbox. It signals a clear shift in how Chief Risk and Compliance Officers and external counsel will be expected to support accountability, operational execution, and advisory work under frameworks like this one. For law firms, maturity classifications are consequential because they reshape downstream regulatory, supervisory, and litigation exposure.

Framework Structure

The FS AI RMF is structured around four integrated components:

  • AI Adoption Stage Questionnaire: Determines institutional maturity.
  • Risk and Control Matrix (RCM): Maps applicable controls.
  • Guidebook: Provides implementation guidance.
  • Control Objective Reference Guide: Offers detailed technical support.

The application of the framework begins with the Adoption Stage Questionnaire, a self-assessment that categorizes an institution into one of four maturity levels based on usage: Initial, Minimal, Evolving, or Embedded. The resulting stage determines which control objectives in the RCM apply. Once the maturity level is determined, controls are built cumulatively: each stage inherits the controls of the stages below it while introducing more rigorous requirements.
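The cumulative scoping described above can be sketched in a few lines of code. This is a minimal illustration only: the stage names come from the framework, but the control identifiers and groupings below are hypothetical placeholders, not the actual RCM contents.

```python
# Illustrative sketch of cumulative control scoping by adoption stage.
# Stage names come from the FS AI RMF; the control IDs below are
# hypothetical placeholders, not the actual RCM objectives.

STAGES = ["Initial", "Minimal", "Evolving", "Embedded"]

# Controls introduced *at* each stage (hypothetical IDs).
CONTROLS_INTRODUCED = {
    "Initial":  ["GV-1.1", "GV-1.2"],
    "Minimal":  ["GV-1.6", "MP-2.1"],
    "Evolving": ["MS-3.1", "MG-4.1"],
    "Embedded": ["MG-4.5"],
}

def scoped_controls(stage: str) -> list[str]:
    """Each stage inherits every control from the stages below it."""
    idx = STAGES.index(stage)
    controls = []
    for s in STAGES[: idx + 1]:
        controls.extend(CONTROLS_INTRODUCED[s])
    return controls
```

The point of the sketch is the inheritance rule: an institution at any stage owes every objective introduced at or below that stage, which is why a one-stage jump can expand the control environment so sharply.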

Challenges in Implementation

The jump from one stage to the next is material and acts as a governance multiplier: the difference between Initial and Minimal alone expands the control environment by more than 100 objectives. In effect, the Questionnaire scopes the breadth and rigor of an institution's risk management obligations under the framework.

The Questionnaire assumes you know how AI is deployed across your institution. In practice, many organizations do not. Characterizing the organization's use across key dimensions – business impact, governance, deployment model, third-party AI use, organizational goals, and data sensitivity – is challenging without an existing inventory.

Building the AI inventory is itself one of the control objectives (GV-1.6), spanning six sub-objectives from shadow IT to portfolio-level risk analysis. This creates a sequencing problem: the framework treats the inventory as a control to be implemented, yet an organization cannot complete the Questionnaire without a baseline inventory of AI usage.
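A baseline inventory entry might capture the same dimensions the Questionnaire later asks about. The sketch below is a minimal, assumed schema: the field names mirror the dimensions listed above, but the framework does not prescribe this structure.

```python
from dataclasses import dataclass, field

# Minimal sketch of an AI inventory record. Field names mirror the
# Questionnaire's dimensions; they are illustrative, not prescribed
# by the framework.

@dataclass
class AIInventoryEntry:
    name: str                      # e.g. "Customer chatbot"
    owner: str                     # accountable individual or team
    deployment_model: str          # "vendor-hosted", "on-prem", ...
    third_party: bool              # relies on an external AI provider
    business_impact: str           # "customer-facing", "internal", ...
    data_sensitivity: str          # "public", "internal", "regulated"
    use_cases: list[str] = field(default_factory=list)
    documented: bool = False       # has a formal request/review record

inventory = [
    AIInventoryEntry("Customer chatbot", "Service Desk", "vendor-hosted",
                     True, "customer-facing", "internal"),
    AIInventoryEntry("Enterprise ChatGPT", "IT", "vendor-hosted",
                     True, "internal", "regulated",
                     use_cases=["drafting", "summarization"]),
]

# Flag entries whose use never went through a formal review process.
undocumented = [e.name for e in inventory if not e.documented]
```

Even a thin record like this is enough to start the Questionnaire; the GV-1.6 sub-objectives can then deepen it over time.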

Practical Examples

Consider a hypothetical mid-market bank that has moved cautiously into AI. It has two deployments: a customer-facing chatbot that routes individuals to help desk resources, and enterprise-wide access to ChatGPT for employees. These represent distinct risk profiles within the same institution and provide a practical lens for how the framework operates.

Before starting the Questionnaire, the bank must document its AI tools. The chatbot is straightforward: it was procured and is owned by someone within the organization. Enterprise ChatGPT is different: because it is available to all employees, its use may be opaque. Is it summarizing loan applications, supporting compliance reporting, or drafting client communications?

Each use case carries a different data sensitivity and risk profile, and many may lack formal documentation. The range of use cases, particularly those without a formal request and review process, makes ChatGPT the harder deployment to characterize.

Evaluating AI Usage

The Questionnaire evaluates six dimensions – business impact, governance, deployment model, third-party AI use, organizational goals, and data sensitivity – by asking the institution to review statements describing its current practices. It works from the most mature level of adoption to the least, starting with Stage 4 (Embedded) and moving toward Stage 1 (Initial).
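That top-down evaluation can be sketched as a simple fall-through: test the most mature stage first and stop at the first set of statements that describes the institution. The predicate functions and profile keys below are stand-ins for the real Questionnaire statements, chosen to match the hypothetical bank in this article.

```python
# Sketch of the Questionnaire's top-down evaluation: test the most
# mature stage first and fall through toward Stage 1. The predicates
# stand in for the real Questionnaire statements.

def determine_stage(profile: dict) -> str:
    criteria = [  # ordered from most mature (Stage 4) to least (Stage 1)
        ("Embedded", lambda p: p["autonomous_decisions"]
                               and p["transforms_critical_functions"]),
        ("Evolving", lambda p: p["customer_facing_ai"]
                               or p["sensitive_data_exposure"]),
        ("Minimal",  lambda p: p["any_ai_in_use"]),
    ]
    for stage, matches in criteria:
        if matches(profile):
            return stage
    return "Initial"

bank = {
    "autonomous_decisions": False,
    "transforms_critical_functions": False,
    "customer_facing_ai": True,        # the help-desk chatbot
    "sensitive_data_exposure": True,   # enterprise ChatGPT, if misused
    "any_ai_in_use": True,
}
```

The sketch also shows where the subjectivity lives: the answer turns entirely on how honestly the predicates are answered, and a single "maybe" on sensitive-data exposure can pull the institution up a full stage.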

This is where subjectivity comes into play. For the bank, Stage 4 and Stage 3 (Evolving) probably don’t fit. AI isn’t driving autonomous decision-making or transforming critical business functions. However, the chatbot is customer-facing, and employees have access to a third-party LLM that could handle sensitive information, depending on how it’s used.

If even one employee uses ChatGPT with customer data or regulated information, the bank may have greater risk exposure than its governance structure reflects. The conservative move – classifying as Evolving rather than Minimal – means the bank scopes in 193 control objectives, rather than 126, reducing potential blind spots.

Control Objectives and Implementation

Once the bank firms up its adoption stage, the RCM becomes the primary working document. Each control objective maps to a risk statement and aligns with the NIST Govern-Map-Measure-Manage structure, with implementation guidance included.

For the chatbot, many of the relevant controls resemble those used in model risk management or vendor oversight. For enterprise ChatGPT, the controls include acceptable use policy, data lifecycle and retention, third-party AI risk documentation, and human-AI supervision policies.

The gap between current controls and the RCM becomes the roadmap, with adoption-stage scoping guiding prioritization as AI usage matures.
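In practice that roadmap is a gap analysis: the objectives scoped in by the adoption stage, minus those the institution can already evidence. A minimal sketch, assuming control objectives are tracked as identifiers (the IDs below are hypothetical):

```python
# Minimal gap-analysis sketch: the roadmap is the set of scoped
# control objectives the institution cannot yet evidence. The
# objective IDs are hypothetical placeholders, not actual RCM entries.

scoped = {"GV-1.6", "MP-2.1", "MS-3.1", "MG-4.1"}  # from the RCM, per stage
implemented = {"MP-2.1"}                           # evidenced today

roadmap = sorted(scoped - implemented)
# Each remaining objective still needs validation or implementation,
# prioritized as AI usage matures.
```

The same set difference can be rerun each quarter as controls are evidenced, turning the RCM into a living prioritization tool rather than a one-time checklist.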

Landing on an adoption stage and identifying the relevant controls is the straightforward part. For a bank classified as Evolving, what follows is the validation or implementation of 193 specified control objectives – each with its own risk statement, implementation guidance, and evidence expectations.

Executing this work is not a light lift. It must encompass governance, legal, compliance, technology, vendor management, HR, and the board, requiring coordination across functions that may not be used to working together.

Moreover, it assumes a level of organizational readiness – dedicated resources, clear ownership, and executive support – that many institutions are still building.
