Evolving Plans for AI Regulation
The landscape of AI regulation is changing rapidly, with significant developments across jurisdictions. As of April 2025, the EU’s prescriptive AI Act is in force, with its rules for general-purpose (including generative) AI taking effect in August 2025. In contrast, the UK is pursuing a more flexible, principles-based approach that does not envisage new legislation in the immediate future. Both the EU and the UK, however, are reassessing their strategies in light of geopolitical developments, balancing competitiveness against necessary regulatory safeguards.
Current Landscape of AI Utilization
Recent findings from the BoE/FCA’s AI survey reveal that 75% of firms are already using some form of AI in their operations, a significant increase from 53% in 2022. This surge is not driven solely by back-office efficiency; AI is also being applied to critical functions such as credit risk assessment, algorithmic trading, and capital management.
To further understand this trend, the Treasury Committee has initiated a Call for Evidence focusing on the impacts of AI in banking, pensions, and other financial sectors. Additionally, the Bank of England’s Financial Policy Committee has published an assessment of AI’s influence on financial stability.
UK’s Regulatory Approach
Despite the rising use of AI, the UK government continues to prioritize a principles-based approach. The BoE/PRA and FCA have determined that their existing toolkits are suitable for managing the risks associated with AI, emphasizing that most of these risks are not unique to the technology. The focus remains on outcomes, allowing flexibility to adapt to unforeseen technological and market changes.
This regulatory stance aligns with a broader agenda to enhance growth and competitiveness, positioning AI as a fundamental engine of growth. Notably, the UK and US declined to endorse a non-binding international declaration on ‘inclusive and sustainable’ AI at the Paris AI Action Summit in February 2025.
AI Safety and Security Initiatives
In a strategic shift, the UK government has rebranded its AI Safety Institute as the AI Security Institute, signaling a sharpened focus on serious security risks and on supporting economic growth, rather than on issues such as bias or freedom of speech. The FCA continues to encourage engagement with its AI Lab initiatives, aiming to foster innovation within the sector.
Furthermore, the government has supported almost all recommendations from the AI Opportunities Action Plan, which includes establishing AI “growth zones” and creating a “sovereign AI unit.” This plan emphasizes transparency, requiring regulators to report annually on their activities to promote AI innovation.
EU’s Regulatory Framework
In the EU, the AI Act classifies AI applications by risk level, imposing stringent requirements on high-risk uses. Member States must designate national competent authorities to oversee the application of the Act’s rules by August 2025. Some Member States, such as Spain and Italy, are already implementing their own national measures, which will require continuous monitoring and updating to stay aligned with the EU framework.
While the AI Act represents a significant regulatory milestone as the first comprehensive AI law from a major jurisdiction, its prescriptive nature may hinder agility in adapting to fast-evolving technology. The Code of Practice (CoP) for General-Purpose AI aims to give companies detailed guidance on meeting the Act’s obligations and related ethical standards, even where their systems are not classified as high-risk.
Looking Ahead
As jurisdictions such as the UK and EU navigate competitive pressures, international standard setters continue to emphasize risk management. The IMF has raised concerns about herding and concentration risk in capital markets, urging regulators to issue guidance on model risk management and stress testing.
Despite movements toward regulatory simplification, firms must ensure their risk and control frameworks adequately address AI use. Companies operating within the EU must begin adapting to the AI Act, while UK firms face the challenge of navigating a less defined regulatory landscape.
In conclusion, as regulatory frameworks continue to evolve, the integration and management of AI technology remain pivotal for firms across sectors. The dynamic interplay between innovation and regulation will shape the future of AI deployment and its implications for businesses globally.