UK Parliamentary Committee Publishes Report on AI in Financial Services
On 20 January 2026, the House of Commons Treasury Select Committee published a report on AI in financial services. This follows an inquiry launched in February 2025, which took evidence throughout the year. The core question posed by the inquiry was whether the financial services regulators are doing enough to manage the risks to consumers and to financial stability presented by AI.
Overall, the report is critical of the regulators’ approach to AI, a stance that sits uneasily with their current pro-innovation posture. Shortly after, on 27 January 2026, the FCA announced a review into the long-term impact of AI on retail financial services, known as “the Mills Review.” The review aims to ensure that the FCA is prepared for the future of AI in financial services and can adapt accordingly.
Key Findings of the Report
The Treasury Select Committee found that the FCA, the Bank of England, and HM Treasury are not doing enough to manage the risks presented by AI. By taking a “wait and see” approach, the regulators expose consumers and the financial system to potentially serious harm. The regulators, by contrast, maintain that the existing regulatory framework offers sufficient protection.
Specific risks associated with AI highlighted in the report include:
- Lack of transparency in AI-driven decision-making.
- AI financial decision-making leading to financial exclusion.
- Unregulated financial advice from AI search engines that could mislead consumers.
- Heightened cybersecurity vulnerabilities.
- Operational resilience issues due to reliance on a small number of US technology firms for AI and cloud services.
Dame Meg Hillier, Chair of the Treasury Select Committee, expressed concern, stating, “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying.” The report admonishes the regulators for their “reactive” approach, which leaves firms with “little practical clarity” on how to apply existing rules to their AI usage.
Recommendations
The report offers three key recommendations:
- The FCA should publish comprehensive, practical guidance for firms on the application of existing consumer protection rules to their use of AI by the end of 2026.
- The Bank of England and the FCA must conduct AI-specific stress testing.
- By the end of 2026, HM Treasury must designate major AI and cloud providers as critical third parties for the purposes of the Critical Third Parties Regime.
FCA Review
The FCA’s review, announced shortly after the Treasury Select Committee’s report, aims to explore the future regulatory approach to AI alongside the evolution of AI technology, its impact on markets and firms, and future consumer trends. The FCA will consider whether existing frameworks remain flexible and sufficiently outcomes-focused.
Input for this review is being sought by 24 February 2026, with recommendations to be shared with the FCA board in the summer, followed by external publication of the findings.
Conclusion
While the Treasury Select Committee has not recommended new AI-specific regulations for financial services, its critique reveals a misalignment between the Committee and the regulators on how best to weigh the potential risks and benefits of AI. The FCA’s proactive stance, signalled by the launch of its review, aims to balance the government’s pro-growth agenda with consumer protection concerns.
Industry stakeholders are likely to welcome additional practical guidance, provided it offers clarity rather than confusion. Firms must continue to ensure that they deploy AI-based solutions responsibly and with appropriate oversight, as emphasized by Dame Meg’s comments on the need for firms to address the associated risks actively.
Work in this area will be further supported by the appointment of two AI Champions in financial services, announced alongside the Treasury Select Committee report.