UK Regulators Urged to Act on AI Risks in Financial Services

UK Parliamentary Committee Publishes Report on AI in Financial Services

On 20 January 2026, the House of Commons Treasury Select Committee published a report on AI in financial services. This follows an inquiry launched in February 2025, which took evidence throughout the year. The core question posed by the inquiry was whether the financial services regulators are doing enough to manage the risks to consumers and to financial stability presented by AI.

Overall, the report is critical of the regulators’ approach to AI, a conclusion that sits uneasily with their current pro-innovation stance. Shortly after, on 27 January 2026, the FCA announced a review into the long-term impact of AI on retail financial services, known as “the Mills Review.” The review aims to ensure that the FCA is prepared for the future of AI in financial services and can adapt accordingly.

Key Findings of the Report

The Treasury Select Committee found that the FCA, the Bank of England, and HM Treasury are not doing enough to manage the risks presented by AI. By taking a “wait and see” approach, the regulators expose consumers and the financial system to potentially serious harm. In contrast, the regulators have stated that the existing regulatory framework offers sufficient protection.

Specific risks associated with AI highlighted in the report include:

  • Lack of transparency in AI-driven decision-making.
  • AI financial decision-making leading to financial exclusion.
  • Unregulated financial advice from AI search engines that could mislead consumers.
  • Heightened cybersecurity vulnerabilities.
  • Operational resilience issues due to reliance on a small number of US technology firms for AI and cloud services.

Dame Meg Hillier, Chair of the Treasury Select Committee, expressed concern, stating, “Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying.” The report admonishes the regulators for their “reactive” approach, which leaves firms with “little practical clarity” on how to apply existing rules to their AI usage.

Recommendations

The report offers three key recommendations:

  1. The FCA should publish comprehensive, practical guidance for firms on the application of existing consumer protection rules to their use of AI by the end of 2026.
  2. The Bank of England and the FCA must conduct AI-specific stress testing.
  3. By the end of 2026, HM Treasury must designate major AI and cloud providers as critical third parties for the purposes of the Critical Third Parties Regime.

FCA Review

The FCA’s review, announced shortly after the Treasury Select Committee’s report, aims to explore the future regulatory approach to AI alongside the evolution of AI technology, its impact on markets and firms, and future consumer trends. The FCA will consider whether existing frameworks remain flexible and sufficiently outcomes-focused.

Input for this review is being sought by 24 February 2026, with recommendations planned to be shared with the FCA board in the summer, followed by an external publication of findings.

Conclusion

While the Treasury Select Committee has not recommended new AI-specific regulations for financial services, its critique indicates a misalignment on how best to tackle the potential risks and benefits of AI. The FCA’s proactive stance, as indicated by the launch of its review, aims to balance the government’s pro-growth agenda with consumer protection concerns.

Industry stakeholders are likely to welcome additional practical guidance, provided it offers clarity rather than confusion. Firms must continue to ensure that they deploy AI-based solutions responsibly and with appropriate oversight, as emphasized by Dame Meg Hillier’s comments on the need for firms to address the associated risks actively.

Work in this area will be further supported by the appointment of two AI Champions in financial services, announced alongside the Treasury Select Committee report.
