AI Regulation in Financial Services: Turning Principles into Practice
As 2026 approaches, the UK financial services sector stands at a pivotal moment in its AI journey. What began as cautious pilots has matured into widespread deployment, prompting regulators to sharpen their focus on governance and risk. The FCA’s recent oral evidence to the Treasury Committee inquiry on AI in financial services — and Chief Executive Nikhil Rathi’s remarks at the FT Global Banking Summit — highlight the delicate balance: promoting innovation while protecting consumers and financial stability.
This desire for balance was reinforced in early December 2025 when Rathi reaffirmed that the FCA will not introduce AI-specific rules, citing the technology’s rapid evolution “every three to six months.” Instead, the regulator is doubling down on its principles-based, outcomes-focused approach, encouraging firms to innovate while committing to intervene only in cases of “egregious failures that are not dealt with.” This stance signals a shift toward adaptive oversight and a collaborative regulator–industry relationship, rather than rigid prescription.
The UK’s Incremental, Not Prescriptive, Regulatory Approach
Despite growing political attention to the safe development of advanced AI – led by the rebranded AI Security Institute (AISI) within the Department for Science, Innovation and Technology – the UK financial regulators have resisted calls for an AI-specific rulebook. In its recent oral evidence before the Treasury Committee, the FCA confirmed that its approach remains technology-neutral, principles-based, and outcomes-focused, relying on existing frameworks such as the Consumer Duty, the Senior Managers & Certification Regime (SM&CR), and operational resilience rules.
Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, stated that the regulator does not intend to “introduce prescriptive AI rules” but will embed AI oversight within current conduct and prudential standards, focusing on fairness, transparency, and accountability. David Geale, FCA Executive Director for Payments and Digital Finance, reinforced that explainability and governance for AI models remain non-negotiable, particularly where decisions affect consumers or market integrity.
This approach supports innovation while reserving the option to tighten expectations through guidance rather than statute. It also sets the UK apart from the EU and the US: the EU’s AI Act introduces prescriptive obligations for “high-risk” systems, while the US leans toward sectoral guidance. The UK’s pro-innovation stance aims to keep regulation agile, although firms should expect incremental tightening, particularly around auditability and consumer protection.
AI Moves from Pilot to Performance – with Important Caveats
The pace of AI adoption in the financial sector has accelerated dramatically. The Bank of England and FCA’s third survey of AI and machine learning in UK financial services showed that 75% of firms are already using AI, with a further 10% planning to adopt it within three years. Foundation models, including large language models (LLMs), accounted for 17% of use cases, although most deployments remain of low materiality.
According to Lloyds’ Financial Institutions Sentiment Survey published in September 2025, 59% of institutions now see measurable productivity gains from AI, up from 32% a year earlier. Over half plan to increase investment in 2026, and nearly half have established dedicated AI teams.
The recent Treasury Committee inquiry into AI use in financial services reveals several common use cases:
Fraud Detection and APP Scam Prevention
The British Insurance Brokers’ Association (BIBA) utilizes AI to process large volumes of data quickly, enabling faster and more accurate fraud analysis. Similarly, the Electronic Money Association (EMA) employs AI-driven models to identify unusual transaction patterns, helping differentiate legitimate transactions from suspicious ones in real time. Mastercard applies its AI capabilities to predict scams and identity theft, enhancing consumer protection in digital payment systems.
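To make the pattern-detection idea concrete, the sketch below shows an unsupervised anomaly detector flagging transactions that deviate from typical behaviour. The features (amount, hour of day, a merchant risk score) and thresholds are hypothetical; this is a minimal illustration, not any named firm’s system.

```python
# Minimal, illustrative anomaly detection over transaction features.
# Hypothetical data and thresholds; not any firm's production system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: [amount_gbp, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.gamma(shape=2.0, scale=30.0, size=1000),   # typical amounts
    rng.normal(loc=14, scale=3, size=1000),        # daytime activity
    rng.uniform(0.0, 0.3, size=1000),              # low-risk merchants
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new transactions as they arrive: -1 = anomalous, 1 = normal
new_txns = np.array([
    [45.0, 13.0, 0.1],     # looks routine
    [4800.0, 3.0, 0.9],    # large amount, 3am, high-risk merchant
])
print(model.predict(new_txns))             # e.g. [ 1 -1]
print(model.decision_function(new_txns))   # higher = more normal
```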
AML/KYC Compliance and Transaction Monitoring
The FCA has partnered with the Alan Turing Institute on the AML and Synthetic Data Project, designed to enhance money laundering detection through advanced analytics. This initiative uses real anonymized transaction data from high street banks, augmented with AI capabilities to create a fully synthetic dataset, paving the way for scalable, privacy-preserving solutions in AML and KYC compliance.
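As a rough illustration of the synthetic-data idea, the sketch below fits a simple generative model to stand-in transaction features and samples new rows that share their broad statistical shape without containing any real records. The columns and the Gaussian model are assumptions for illustration only; they are not the FCA/Turing project’s actual methodology.

```python
# Toy sketch of privacy-preserving synthetic data generation:
# fit a simple generative model to real features, then sample new rows.
# Minimal illustration, not the FCA/Turing project's methodology.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Stand-in for anonymised real transactions (hypothetical columns).
real = pd.DataFrame({
    "amount_gbp": rng.lognormal(mean=3.5, sigma=1.0, size=5000),
    "txns_per_day": rng.poisson(lam=4, size=5000).astype(float),
    "cross_border": rng.binomial(1, 0.05, size=5000).astype(float),
})

# Fit a multivariate Gaussian in a transformed space that keeps amounts positive.
X = real.copy()
X["amount_gbp"] = np.log(X["amount_gbp"])
mean, cov = X.mean().to_numpy(), np.cov(X.to_numpy(), rowvar=False)

# Sample synthetic rows: same broad statistical shape, no real records.
samples = rng.multivariate_normal(mean, cov, size=5000)
synthetic = pd.DataFrame(samples, columns=X.columns)
synthetic["amount_gbp"] = np.exp(synthetic["amount_gbp"])
synthetic["cross_border"] = (synthetic["cross_border"] > 0.5).astype(int)
synthetic["txns_per_day"] = synthetic["txns_per_day"].clip(lower=0).round()

# Compare summary statistics of the real and synthetic datasets.
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```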
Cybersecurity Threat Modelling
UK Finance notes that firms are increasingly using AI to detect and respond to cyber threats. For instance, AI tools assist security analysts in classifying suspicious emails and identifying phishing attempts, enabling faster incident responses and reducing the risk of breaches.
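The email-triage use case can be sketched with a basic text classifier: assuming a small set of labelled examples, the model below scores incoming messages so that likely phishing attempts are escalated to an analyst. This is a toy illustration, not a description of any firm’s security tooling.

```python
# Toy sketch: triaging suspicious emails with a simple text classifier.
# Illustrative only; real security tooling is far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = phishing, 0 = legitimate).
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your parcel is held, pay the customs fee via this link",
    "Minutes from yesterday's risk committee meeting attached",
    "Reminder: quarterly expense reports are due Friday",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

incoming = ["Please confirm your password using the secure link below"]
print(clf.predict(incoming))        # e.g. [1] -> escalate to an analyst
print(clf.predict_proba(incoming))  # confidence to support triage decisions
```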
Customer Service Automation and Chatbots
The use of AI-driven customer service automation has surged, with NatWest Group’s AI-powered digital assistant “Cora” handling over 11 million customer interactions in 2024 alone. UK Finance reports that firms are using AI in customer engagement to personalize interactions and improve retention and satisfaction.
Back-Office Optimization
Zurich Insurance UK reports significant AI use in back-office functions, streamlining data extraction and enhancing administrative workflows, thus freeing resources for higher-value tasks.
Early-Stage Robo-Advisory Tools
Lloyd’s Market Association employs an augmented underwriting process where human underwriters remain central to decision-making, with AI tools assisting in risk scoring and insights.
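A minimal sketch of this human-in-the-loop pattern appears below: a toy scoring function stands in for a trained model, and every submission is routed to an underwriter with the score attached as decision support. The features, weights, and referral threshold are all hypothetical.

```python
# Sketch of "augmented underwriting": a model suggests a risk score,
# but a human underwriter always makes the final call on the quote.
# Feature names, weights, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Submission:
    sum_insured: float      # exposure in GBP
    claims_last_5y: int     # prior claims count
    flood_zone: bool        # property-level flag

def model_risk_score(s: Submission) -> float:
    """Toy scoring rule standing in for a trained model (0 = low, 1 = high)."""
    score = 0.1
    score += min(s.sum_insured / 10_000_000, 0.4)
    score += 0.1 * min(s.claims_last_5y, 3)
    score += 0.2 if s.flood_zone else 0.0
    return min(score, 1.0)

def triage(s: Submission) -> str:
    """Route every case to an underwriter, with the score as decision support."""
    score = model_risk_score(s)
    band = "refer-senior" if score > 0.6 else "standard-review"
    return f"score={score:.2f}, route={band} (human underwriter decides)"

print(triage(Submission(sum_insured=2_500_000, claims_last_5y=0, flood_zone=False)))
print(triage(Submission(sum_insured=8_000_000, claims_last_5y=2, flood_zone=True)))
```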
Emerging Themes
While the examples illustrate AI’s growing role in operations, its use is largely confined to non-critical functions rather than core banking or trading decisions. Both the FCA and Bank of England anticipate a shift towards agentic AI in core decision-making, raising questions about consumer confidence and trust in AI-driven financial services.
Consumers are broadly receptive to AI for low-risk applications like fraud detection, but confidence declines for high-stakes decisions such as loan approvals, highlighting concerns over data privacy, algorithmic bias, and loss of human interaction.
The Regulators’ Perspective on Emerging Risks
Alongside these developing use cases, significant AI-related risks remain. The Treasury Committee’s session with the FCA and Bank of England highlighted three supervisory priorities:
- Transparency and explainability – Firms must articulate how AI models reach decisions.
- Accountability – Responsibility for AI-driven outcomes remains within existing accountability regimes.
- Systemic risk monitoring – Assessment of whether widespread AI adoption could amplify market shocks.
Regulators are also monitoring a wider range of emerging risks, including bias, cybersecurity threats, third-party concentration, model complexity, operational resilience, and market integrity risks.
The Regulators as AI Users
The FCA and Bank of England are not just observers but also adopters of AI. Current applications include predictive AI for real-time knowledge retrieval and an AI voice bot to triage consumer queries.