Firms Must Secure Their AI Use, Regardless of Regulation
AI regulation is lagging woefully behind adoption. More than three-quarters of the UK’s financial services firms are already using AI, for everything from customer service to fraud detection.
However, alongside the potential gains come new risks. The recent report from the Treasury Committee, while offering necessary scrutiny, is disappointingly narrow in scope. It fails to address the emerging AI risks already surfacing in the sector, a missed opportunity to raise awareness and improve cyber resilience.
Some Baby Steps Forward
Not all news is negative. The committee correctly identifies that AI can be exploited by adversaries to enhance fraud campaigns and increase the scale of cyberattacks against the financial services sector. It also critiques the Financial Conduct Authority and Bank of England for their “wait-and-see approach” to AI regulation, which exposes consumers and the financial system to serious harm.
The committee calls for clearer guidance from the FCA, AI-specific stress tests, and the rapid designation of major AI and cloud providers as critical third parties, all of which are reasonable suggestions.
Mind the Gap
The most significant issue lies in what remains unaddressed. AI presents a vast corporate attack surface for threat actors. Researchers have repeatedly demonstrated that vulnerabilities exist across the ecosystem, which can be exploited to steal sensitive data, disrupt critical services, and extort companies.
The emergence of agentic systems capable of performing tasks autonomously exacerbates these problems. Because such agents operate with minimal human oversight, attackers could infiltrate and manipulate one without triggering alarms, with potentially irreversible consequences. If an attacker compromises a tool the agent relies on, or injects false information into its memory, the agent might make unsafe decisions, such as approving fraudulent transactions or issuing incorrect risk assessments.
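To make that risk concrete, the sketch below shows one simple mitigation: treating certain agent actions as high-risk and holding them for human review rather than letting the agent execute them autonomously. The action types, threshold, and function names are illustrative assumptions, not any particular framework’s API.

```python
# Hypothetical sketch: a guardrail that forces human review before an AI agent
# can execute high-risk actions such as approving a payment. All names and
# thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str          # e.g. "approve_transaction"
    amount_gbp: float  # value of the transaction the agent wants to approve
    rationale: str     # the agent's stated reason, which may be attacker-influenced

HIGH_RISK_KINDS = {"approve_transaction", "change_payee_details"}
REVIEW_THRESHOLD_GBP = 1_000.0

def requires_human_review(action: ProposedAction) -> bool:
    """Return True if the action must be held for a human analyst."""
    return action.kind in HIGH_RISK_KINDS and action.amount_gbp >= REVIEW_THRESHOLD_GBP

def execute(action: ProposedAction) -> str:
    if requires_human_review(action):
        # Park the action in a review queue instead of letting the agent act on its own.
        return f"HELD for review: {action.kind} of £{action.amount_gbp:,.2f}"
    return f"EXECUTED: {action.kind} of £{action.amount_gbp:,.2f}"

if __name__ == "__main__":
    print(execute(ProposedAction("approve_transaction", 25_000.0, "supplier invoice")))
```

The point is not the specific threshold but the design choice: a compromised agent that can only propose, not execute, high-impact actions has far less capacity to cause irreversible harm.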
Risk also arises from the vast, hidden AI supply chain of third-party components: open-source libraries, APIs, and hosted model platforms. This landscape gives motivated adversaries opportunities to implant back doors that can be activated once a system is live inside a financial services organization. Some components may not even require authentication, making an attacker’s job easier.
Open-source frameworks often undergo multiple updates daily, increasing the likelihood of introducing new vulnerabilities that IT security teams must address. Honest mistakes, such as misconfigurations in third-party components, can also create direct security and compliance risks.
Start with Governance
In the absence of clear regulatory mandates, financial services firms must seek ways to mitigate the growing AI risks. Efforts should begin with governance.
Many companies are adopting AI rapidly, often without a clear picture of how extensively it accesses and uses sensitive data. This unmanaged, “shadow” AI use carries a real cost: breaches involving it cost organizations an additional $670,000 (£498,000) on average, according to a 2025 IBM report, “Cost of a Data Breach: The AI Oversight Gap,” which found that 20% of organizations globally suffered a breach last year due to unmanaged AI use.
To reduce the likelihood of model misuse or data leakage, firms should establish clear policies for AI adoption, enforce strict identity and access controls, and monitor all AI interactions.
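As a minimal illustration of what “enforce access controls and monitor all AI interactions” can mean in practice, the sketch below routes every prompt through an internal gateway that checks the caller’s role and writes an audit record before anything reaches a model. The roles, field names, and stubbed model call are assumptions made for the example, not a description of any specific product.

```python
# Illustrative sketch of an internal AI gateway: role-based access control plus
# an audit trail for every interaction. Roles and the model call are stand-ins.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

ALLOWED_ROLES = {"fraud_analyst", "customer_service"}  # assumed approved roles

def call_model(prompt: str) -> str:
    return "model response"  # stub for the firm's approved model endpoint

def handle_prompt(user_id: str, role: str, prompt: str) -> str:
    """Check the caller's role, record the interaction, then call the model."""
    if role not in ALLOWED_ROLES:
        audit_log.warning(json.dumps({"user": user_id, "role": role, "decision": "denied"}))
        raise PermissionError(f"role '{role}' is not approved for AI use")

    audit_log.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "role": role,
        "prompt_chars": len(prompt),  # log metadata, not raw content, to limit data exposure
        "decision": "allowed",
    }))
    return call_model(prompt)

if __name__ == "__main__":
    print(handle_prompt("u123", "fraud_analyst", "Summarize this suspicious transaction pattern."))
```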
Next, firms should assess their supply chains, scanning AI components, including APIs and open-source libraries, for vulnerabilities and misconfigurations. Implementing automated security checks and continuously monitoring for exposed endpoints will help prevent tampering and unauthorized access.
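One concrete, low-effort form of such scanning is to check pinned open-source dependencies against a public vulnerability database in the build pipeline. The sketch below queries the OSV database (osv.dev) for known advisories; the example package pins are illustrative, and a real pipeline would read them from the project’s lockfile.

```python
# Hedged sketch of a supply-chain check: query the public OSV vulnerability
# database for advisories affecting pinned open-source dependencies.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs affecting the given package version, if any."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(OSV_QUERY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    # Example pins only; a real check would iterate over the environment's lockfile.
    for pkg, ver in [("transformers", "4.30.0"), ("langchain", "0.0.200")]:
        ids = known_vulnerabilities(pkg, ver)
        print(f"{pkg}=={ver}: {'no known advisories' if not ids else ', '.join(ids)}")
```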
Financial services firms developing their own AI services should adopt security by design, ensuring that models, data stores, and deployment workflows are monitored in real time for compromise or drift from their original configurations.
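A basic building block of that kind of monitoring is integrity checking: record a cryptographic hash of each model artifact and configuration file at release time, then periodically compare the deployed files against that baseline. The sketch below assumes a simple JSON baseline file and illustrative artifact paths; these are not prescribed by any standard.

```python
# Minimal sketch of drift/tamper detection: compare current file hashes against
# a baseline recorded at deployment time. Paths and baseline format are assumptions.
import hashlib
import json
import pathlib

BASELINE_FILE = pathlib.Path("model_baseline.json")  # written when the release is approved

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def check_integrity(artifacts: list[pathlib.Path]) -> list[str]:
    """Return the artifacts whose current hash no longer matches the recorded baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    return [str(p) for p in artifacts if sha256_of(p) != baseline.get(str(p))]

if __name__ == "__main__":
    drifted = check_integrity([pathlib.Path("model.onnx"), pathlib.Path("inference_config.yaml")])
    if drifted:
        print("ALERT: unexpected change detected in", ", ".join(drifted))
```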
Given how quickly vulnerabilities emerge in these environments, continuous assessment of security posture and prompt, risk-based remediation are critical.
For financial services firms, these challenges are not theoretical; the risks are real and come from both financially motivated criminals and state-backed actors. If the sector wants to enjoy the benefits of AI, it must prioritize security, regardless of where regulation stands.