Artificial Intelligence and Data: A Critical Examination for Financial Services
The year ahead will test how well financial services (FS) firms can balance ambition with robust guardrails in their use of Artificial Intelligence (AI). Our latest survey shows that appetite for AI remains strong: 94% of firms plan to increase investment in the next 12 months, with 39% expecting a significant rise.
Boards rightly see AI as a potent force for transformation. However, the shift from experimentation to scaling AI use cases into full production, particularly in an outcome-based regulatory environment, remains a challenge. Establishing effective AI governance and staying within risk appetite, especially for more complex systems such as Generative AI, is a particular hurdle. Nearly a third of respondents cite managing AI risks (29%) and meeting regulatory obligations (28%) as the main obstacles to realizing returns.
These pressures will intensify in 2026 as AI moves into more critical processes and complex applications, including Agentic AI. In response, FS supervisors are looking to boards and senior managers to understand the risks and ensure that they are comfortable with the trade-offs between risks and rewards inherent in AI adoption.
Regulating AI – Where Are We?
The international regulatory environment for AI remains a mix of well-established and still-evolving frameworks. International industry standards, distinct from regulation, also play a key role in guiding good practices in governance and risk management.
On AI-specific rules, the UK and the EU are taking different paths. The UK has no dedicated AI legislation for FS and none is expected. In the EU, implementation of the AI Act remains in flux, with proposals under negotiation to delay compliance deadlines for high-risk AI systems.
Assuming the Omnibus proposals are adopted, high-risk AI systems used in FS – including credit scoring, health and life insurance risk assessment and pricing, and employment-related systems – will likely need to comply with the AI Act at some point between Q1 2027 and the end of 2027. This extension is no reason to delay preparations. Over the coming year, we expect a multitude of technical standards, guidance, and supervisory clarifications to be issued, leaving limited time for implementation. Firms that wait for complete clarity may find themselves short of time.
Yet AI-specific regulation is only a small part of the story. In both jurisdictions, supervisors will continue to rely mainly on the existing full suite of technology-neutral FS frameworks and, where personal data is used, data protection rules. This means that a number of AI use cases in FS – including credit risk models for capital calculations, transaction monitoring, trading algorithms, and financial advice – will be assessed primarily through prudential and model risk management standards, conduct requirements, operational resilience, and, if relevant, EU and UK General Data Protection Regulation (GDPR).
AI Governance, Accountability, and Outcomes
Effective AI governance and accountability will determine the pace and scale of AI adoption in FS. Supervisors in both the EU and the UK are consistent on one point: AI is a technological tool, and firms remain responsible for using it safely and in compliance with their regulatory obligations.
As firms increasingly consider the use of AI in higher-impact areas of their businesses, such as credit risk assessment, capital management, and algorithmic trading, we should expect stronger, more rigorous oversight and challenge from their management and boards, particularly given AI’s autonomy, dynamism, and limited explainability.
Supervisors will not conduct a line-by-line review of the source code of AI models. Instead, they will assess whether firms can demonstrate that their AI governance and controls ensure decision-makers understand the risks of their models, can explain and manage uncertainty in their outputs, and can evidence reliable, fair, and consistent outcomes.
While regulators actively support responsible innovation, as evidenced by the UK Financial Conduct Authority (FCA)’s ‘Supercharged Sandbox’ and ‘Live Testing’ programmes, and the EU’s regulatory sandboxes, a tech-positive stance does not mean lighter scrutiny. As AI becomes embedded in core activities and infrastructure, supervisory attention to accountability and effective oversight will intensify.
In the UK, supervisors will use the Senior Managers & Certification Regime to review accountability. In the EU, the Capital Requirements Directive 6 moves banking closer to the UK model, including through stronger fit-and-proper standards, clearer individual responsibilities, and wider supervisory powers over board members and senior managers. Across sectors, the European Supervisory Authorities (ESAs) have reinforced the need for clear, transparent accountability arrangements.
This raises expectations for boards and senior executives. They will need a clear, actionable risk appetite for AI, setting boundaries on where it can be used, acceptable levels of autonomy, and how outcomes are monitored and tested.
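To make the idea of an actionable risk appetite concrete, the sketch below shows one way a firm might encode appetite boundaries as a machine-checkable policy: use-case tiers mapped to permitted autonomy levels, with an automated gate before deployment. All tier names, autonomy levels, and field names here are invented for illustration; they are not drawn from any regulatory text or known firm practice.

```python
from dataclasses import dataclass

# Hypothetical illustration of an AI risk-appetite statement expressed as a
# checkable policy. Every tier, level, and threshold below is an assumption
# made for this sketch, not a regulatory requirement.

APPROVED_AUTONOMY = {
    # use-case tier -> maximum permitted autonomy level
    "customer_facing_advice": "human_in_the_loop",
    "credit_risk_assessment": "human_on_the_loop",
    "internal_drafting": "fully_autonomous",
}

AUTONOMY_RANK = {"human_in_the_loop": 0, "human_on_the_loop": 1, "fully_autonomous": 2}

@dataclass
class AIUseCase:
    name: str
    tier: str
    autonomy: str
    outcome_monitoring: bool  # are outputs monitored and periodically tested?

def within_appetite(uc: AIUseCase) -> tuple[bool, str]:
    """Return (approved, reason) for a proposed AI use case."""
    if uc.tier not in APPROVED_AUTONOMY:
        return False, f"tier '{uc.tier}' not covered by risk appetite: escalate"
    ceiling = APPROVED_AUTONOMY[uc.tier]
    if AUTONOMY_RANK[uc.autonomy] > AUTONOMY_RANK[ceiling]:
        return False, f"autonomy '{uc.autonomy}' exceeds ceiling '{ceiling}'"
    if not uc.outcome_monitoring:
        return False, "outcome monitoring and testing not evidenced"
    return True, "within appetite"

ok, reason = within_appetite(
    AIUseCase("chatbot_advice", "customer_facing_advice", "fully_autonomous", True)
)
print(ok, reason)  # rejected: autonomy exceeds the tier's ceiling
```

The point of the sketch is not the code itself but the discipline it implies: boundaries on where AI can be used, ceilings on autonomy, and evidence of outcome monitoring must be explicit enough to be tested against, whether by a script or a review committee.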
Data Governance: The Foundation That Matters
Data governance is fundamental to effective AI deployment. High-quality, well-managed data underpins transparency, model validation and explainability, fairness, and accountable oversight. It also supports cybersecurity, operational resilience, and privacy protection.
Regulators across the EU and UK converge on this view. The ESAs have positioned data governance as a central pillar of AI risk management. In the UK, both the Prudential Regulation Authority and FCA have similarly elevated data governance as a priority, with the FCA linking ethical concerns over personal data use and algorithmic bias to the delivery of good consumer outcomes under the Consumer Duty.
Yet for many firms, data governance remains a persistent challenge. Legacy systems, past acquisitions, and fragmented architectures have left data inconsistent, low-quality, and siloed. This makes it harder to train and test AI models effectively, monitor AI-amplified risks, or explain behaviour to supervisors, customers, or boards.
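The kinds of defects described above are detectable with even simple profiling run before model training. The sketch below, using wholly invented records and field names, illustrates two such checks: missing-value rates per field and duplicate keys of the sort that arise when the same customer sits in two siloed systems.

```python
# Hypothetical sketch of basic data-quality profiling ahead of AI model
# training. The records, field names, and checks are illustrative only.

customers = [
    {"id": 1, "income": 52000, "postcode": "AB1 2CD"},
    {"id": 2, "income": None,  "postcode": "EF3 4GH"},
    {"id": 2, "income": 61000, "postcode": "EF3 4GH"},  # same customer from a second silo
    {"id": 3, "income": 47000, "postcode": None},
]

def profile(records, fields):
    """Report missing-value rates per field and the count of duplicate ids."""
    report = {}
    for f in fields:
        missing = sum(1 for r in records if r.get(f) is None)
        report[f"missing_{f}"] = missing / len(records)
    ids = [r["id"] for r in records]
    report["duplicate_ids"] = len(ids) - len(set(ids))
    return report

print(profile(customers, ["income", "postcode"]))
# -> missing_income 0.25, missing_postcode 0.25, duplicate_ids 1
```

Real programmes of course need far richer lineage and reconciliation tooling, but the underlying obligation is the same: firms must be able to quantify and explain the state of the data their models learn from.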
The EU AI Act will add further expectations for high-risk systems. Even if compliance deadlines were to slip to 2027, firms should use the time to strengthen data foundations. This includes documenting data provenance, demonstrating that training, validation, and testing data are relevant, representative, and as free of error or distortion as possible, explaining how bias is identified and mitigated, and ensuring personal data use is compliant with EU GDPR.
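As one illustration of the bias-identification work described above, the sketch below computes a demographic parity gap, the difference in approval rates between two groups in training data. The metric choice, the toy records, and the 0.1 tolerance are all assumptions made for this example; the AI Act does not prescribe a specific fairness metric or threshold.

```python
# Hypothetical bias check on training data: compare approval rates across
# groups. Metric, data, and tolerance are illustrative, not regulatory.

def approval_rate(records, group):
    group_records = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in group_records) / len(group_records)

def demographic_parity_gap(records, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(records, group_a) - approval_rate(records, group_b))

records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap = demographic_parity_gap(records, "A", "B")
print(f"parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> gap of 0.50
if gap > 0.1:  # illustrative tolerance
    print("flag for review: potential distortion in training data")
```

A single metric like this is only a starting point; the documentation expectation is that firms can show which checks they ran, why those checks are appropriate for the use case, and what mitigation followed.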
As AI scales, many firms will also need to move beyond pilot-stage oversight to more standardized governance, with centralized arrangements for some material AI use cases. This transition requires visible senior leadership support, with boards, risk committees, and executives setting the right tone and practices for how AI risk is understood and managed.