The Bank of England and PRA Set Out Plans for Safe AI Innovation
On April 1, 2026, the Bank of England (BoE) and the Prudential Regulation Authority (PRA) wrote to key government officials, outlining their strategic framework for enabling safe AI innovation within the financial services sector. The letter responds to a government request of January 28, 2026, which asked the regulators to publish a comprehensive plan for fostering responsible AI advancement and to commit to annual reporting on progress.
Key Messages from the BoE and PRA
The primary message is clear: existing regulations apply to AI technologies, and firms must be prepared to demonstrate compliance. With AI designated a supervisory priority for 2026, firms are encouraged to reassess their AI governance frameworks and ensure robust oversight of AI-driven decision-making.
Objectives of the Initiative
The BoE and PRA aim to create an environment in which responsible AI adoption enhances innovation, competition, and growth in the financial sector while preserving the integrity of the financial system. Their regulatory approach remains technology-agnostic: it is designed to accommodate a range of technological change, including rapid developments in AI.
Recent Context and Developments
This initiative builds on earlier work, notably the 2022 Discussion Paper (DP5/22) issued jointly with the Financial Conduct Authority (FCA), in which respondents indicated that no regulatory barriers hinder safe AI adoption. In 2023, the PRA published Model Risk Management Principles that are relevant to AI, signalling an intent to keep regulation aligned with technological change.
AI as a Supervisory Priority
AI has been explicitly designated a supervisory priority for the PRA in 2026. In practice, this means firms will face direct questions about their governance, model risk management, and oversight frameworks during supervisory engagements. Firms should be prepared for these discussions and ensure their AI practices meet regulators' stated expectations.
No New AI-Specific Rules for Now
For now, the BoE and PRA do not intend to introduce AI-specific rules or create a dedicated AI sandbox. The existing technology-agnostic framework will continue to apply, as recent industry engagement indicated that most participants see no immediate need for detailed AI-specific guidance. This stance remains under continuous review as AI adoption evolves.
Concentration Risk and Third-Party Providers
The AI Consortium, established in May 2025, is actively investigating concentration risks associated with third-party AI model providers. This focus aligns with concerns raised in the FCA’s February 2026 Mills Review, which highlighted the UK retail financial services market’s dependence on a limited number of dominant AI infrastructure providers. Firms with substantial reliance on third-party AI models must review their outsourcing frameworks to adequately address these risks.
Transparency and Explainability in AI
There is heightened scrutiny on explainability, transparency, and potential contagion risks associated with AI. The AI Consortium is examining generative AI’s implications in regulated activities, stressing the importance for firms to articulate how AI outputs are generated and validated. Firms are encouraged to stay informed about forthcoming reports from the AI Consortium to understand emerging expectations.
International Coordination and Collaboration
The BoE is stepping up its international collaboration, particularly through the Financial Stability Board (FSB), to establish sound practices for AI adoption in financial institutions. This includes work with the G7 on managing AI-related cybersecurity risks and continuing partnerships with domestic bodies such as the AI Security Institute.
Next Steps for Firms
In light of these developments, firms are advised to:
- Review and enhance their AI governance frameworks to ensure clear oversight and validation processes.
- Assess concentration risks from third-party AI model providers and strengthen outsourcing risk management.
- Monitor the AI Consortium’s reports and the PRA’s supervisory engagement program for insights on regulatory expectations.
Conclusion
The BoE and PRA's letter sets a proactive tone for AI regulation in the UK, emphasizing that firms should align their practices with existing regulatory frameworks now while preparing for further developments in the AI landscape.