Securing AI in Finance: Addressing Risks Beyond Regulation

Firms Must Secure Their AI Use, Regardless of Regulation

AI regulation is lagging woefully behind adoption. More than three-quarters of the UK’s financial services firms already use AI, for applications ranging from customer service to fraud detection.

However, alongside the potential gains come new risks. The Treasury Committee’s recent report, while offering necessary scrutiny, is disappointingly narrow in scope. It fails to address the emerging AI risks already surfacing in the sector, a missed opportunity to raise awareness and improve cyber resilience.

Some Baby Steps Forward

Not all news is negative. The committee correctly identifies that AI can be exploited by adversaries to enhance fraud campaigns and increase the scale of cyberattacks against the financial services sector. It also critiques the Financial Conduct Authority and Bank of England for their “wait-and-see approach” to AI regulation, which exposes consumers and the financial system to serious harm.

The committee calls for clearer guidance from the FCA, AI-specific stress tests, and the rapid designation of major AI and cloud providers as critical third parties, all of which are reasonable suggestions.

Mind the Gap

The most significant issue lies in what remains unaddressed. AI presents a vast corporate attack surface for threat actors. Researchers have repeatedly demonstrated that vulnerabilities exist across the ecosystem, which can be exploited to steal sensitive data, disrupt critical services, and extort companies.

The emergence of agentic systems capable of performing tasks autonomously exacerbates these problems. Because such agents operate with minimal human oversight, attackers could infiltrate and manipulate one without triggering alarms, with potentially irreversible consequences. If an attacker compromises a tool an agent relies on or injects false information into its memory, the agent might make unsafe decisions, such as approving fraudulent transactions or issuing incorrect risk assessments.
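
To make the risk concrete, the sketch below shows one possible mitigation: a policy gate that refuses to let an agent act autonomously on instructions from unverified sources, or on high-value, irreversible actions. It is an illustration only; the action types, trusted-source labels, and threshold are hypothetical stand-ins for a firm's own controls, not part of any real agent framework.

```python
# A minimal policy gate for agent actions. All names here (Action,
# HIGH_RISK_KINDS, the source labels) are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # e.g. "approve_transaction" or "send_report"
    amount: float  # monetary value at stake; 0.0 if not applicable
    source: str    # which tool or memory entry proposed the action

HIGH_RISK_KINDS = {"approve_transaction", "update_risk_assessment"}
TRUSTED_SOURCES = {"core_banking_api", "verified_market_feed"}

def gate(action: Action, approval_threshold: float = 10_000.0) -> bool:
    """Return True only if the agent may execute the action autonomously."""
    # Refuse instructions whose provenance cannot be established, e.g.
    # content an attacker has injected into the agent's memory.
    if action.source not in TRUSTED_SOURCES:
        print(f"BLOCKED: untrusted source {action.source!r}")
        return False
    # Irreversible, high-value actions always require human sign-off.
    if action.kind in HIGH_RISK_KINDS and action.amount >= approval_threshold:
        print(f"ESCALATED: {action.kind} of {action.amount:,.2f} needs approval")
        return False
    return True

# An injected instruction proposing a large payment is stopped on both counts.
print(gate(Action("approve_transaction", 250_000.0, "pasted_email_content")))
```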

Risk also arises from the vast, hidden AI supply chain, which includes third-party components such as open-source libraries, APIs, and hosted model platforms. This landscape gives motivated adversaries opportunities to implant back doors that can be activated once a system is operational within a financial services organization. Some components may not even require authentication for access, making an attacker’s job easier.

Open-source frameworks often undergo multiple updates daily, increasing the likelihood of introducing new vulnerabilities that IT security teams must address. Honest mistakes, such as misconfigurations in third-party components, can also create direct security and compliance risks.
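
One basic supply-chain control is to pin and verify the integrity of third-party artifacts before they are loaded. The sketch below shows the idea in Python; the file path and the pinned digest are placeholders, not real values.

```python
# Verify a third-party model artifact against a pinned SHA-256 digest
# before loading it. The path and digest below are placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0" * 64  # recorded when the artifact was first vetted

def verify_artifact(path: Path, expected: str = PINNED_SHA256) -> None:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError(
            f"{path}: hash mismatch; refusing to load a possibly tampered file"
        )

verify_artifact(Path("models/credit_scorer.onnx"))  # hypothetical artifact
```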

Start with Governance

In the absence of clear regulatory mandates, financial services firms must seek ways to mitigate the growing AI risks. Efforts should begin with governance.

Many companies are adopting AI so rapidly that they struggle to understand how extensively it accesses and uses sensitive data. Shadow AI use carries a significant cost: it adds an average of $670,000 (£498,000) to a data breach, according to a 2025 IBM report. The study, titled “Cost of a Data Breach: The AI Oversight Gap,” found that 20% of organizations worldwide suffered a breach last year because of unmanaged AI use.

To reduce the likelihood of model misuse or data leakage, firms should establish clear policies for AI adoption, enforce strict identity and access controls, and monitor all AI interactions.
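
In practice, these controls often converge on an internal gateway that every AI call must pass through. The Python sketch below illustrates the shape of such a gateway; the approved-model list, the card-number pattern, and the downstream call_model stub are all hypothetical placeholders.

```python
# Shape of an internal AI gateway: every call is tied to an authenticated
# user, screened against policy, and logged for audit. The model list,
# the card-number pattern, and call_model() are hypothetical placeholders.
import json
import re
import time

APPROVED_MODELS = {"internal-summarizer", "fraud-triage"}
PAN_PATTERN = re.compile(r"\b\d{13,19}\b")  # crude card-number screen

def call_model(model: str, prompt: str) -> str:
    return f"[{model}] response"  # stub standing in for the real service

def ai_gateway(user_id: str, model: str, prompt: str) -> str:
    if model not in APPROVED_MODELS:
        raise PermissionError(f"{model!r} is not an approved model")
    if PAN_PATTERN.search(prompt):
        raise ValueError("prompt appears to contain card data; blocked")
    audit = {"ts": time.time(), "user": user_id, "model": model,
             "prompt_chars": len(prompt)}  # log metadata, not raw content
    print(json.dumps(audit))               # stand-in for an audit sink
    return call_model(model, prompt)

print(ai_gateway("analyst-042", "fraud-triage", "Summarise today's alerts"))
```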

Next, firms should assess their supply chains, scanning AI components, including APIs and open-source libraries, for vulnerabilities and misconfigurations. Implementing automated security checks and continuously monitoring for exposed endpoints will help prevent tampering and unauthorized access.
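
A simple automated check along these lines is to probe each inventoried endpoint and flag any that answer without credentials. The sketch below assumes placeholder internal URLs; a real scan would run against the firm's own asset inventory.

```python
# Probe inventoried AI endpoints and flag any that answer without
# credentials. The URLs are placeholders for a firm's own inventory.
from urllib import error, request

ENDPOINTS = [
    "https://ml.example.internal/v1/models/fraud-triage:predict",
    "https://vector-db.example.internal/collections",
]

def check(url: str) -> None:
    try:
        with request.urlopen(url, timeout=5) as resp:
            # A 2xx response without credentials means the endpoint is open.
            print(f"EXPOSED: {url} (HTTP {resp.status})")
    except error.HTTPError as exc:
        verdict = "OK, auth required" if exc.code in (401, 403) else "REVIEW"
        print(f"{verdict}: {url} (HTTP {exc.code})")
    except error.URLError as exc:
        print(f"UNREACHABLE: {url} ({exc.reason})")

for url in ENDPOINTS:
    check(url)
```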

Financial services firms developing their own AI services should adopt the principle of security by design, ensuring that models, data stores, and deployment workflows are monitored in real time for compromise or drift from their approved configurations.

Given the rapid emergence of vulnerabilities in these environments, continuous assessment of security posture and prompt, risk-based remediation is critical.
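
Drift detection can start as simply as fingerprinting approved configuration files and comparing them on a schedule. The sketch below uses placeholder file paths and prints alerts; a production version would feed a monitoring or paging system instead.

```python
# Fingerprint approved deployment configuration and flag drift from the
# baseline. File paths and the baseline store are placeholder examples.
import hashlib
import json
from pathlib import Path

WATCHED = [Path("deploy/model_config.yaml"), Path("deploy/serving_env.json")]
BASELINE = Path("deploy/.baseline_hashes.json")

def fingerprint(paths: list[Path]) -> dict[str, str]:
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest() for p in paths}

def record_baseline() -> None:
    """Run once, after the configuration has been reviewed and approved."""
    BASELINE.write_text(json.dumps(fingerprint(WATCHED), indent=2))

def check_drift() -> None:
    """Run on a schedule; in production this would alert, not print."""
    baseline = json.loads(BASELINE.read_text())
    for path, digest in fingerprint(WATCHED).items():
        if baseline.get(path) != digest:
            print(f"DRIFT: {path} differs from its approved baseline")
```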

For financial services firms, these challenges are not theoretical; the risks are real and stem from both financially motivated and state actors. If the sector wishes to enjoy the benefits of AI, it must prioritize security, irrespective of existing regulations.
