AI in Finance: Balancing Innovation and Compliance

Financial Firms Embrace AI Tools and Face New Compliance Tests

In boardrooms across financial services, the pressure to adopt new technology is no longer abstract. It is urgent. AI can surface patterns humans miss. Cloud tools can cut costs and speed up launches. New computing models promise breakthroughs. But every one of those gains comes with a familiar question that now has sharper edges: if a tool helps you decide faster, who is responsible when the decision goes wrong?

That is the tension running through new guidance from Herbert Smith Freehills Kramer on decision-making in modern financial services. The firm’s core point is simple: technology is expanding the amount and variety of information leaders can use, which can produce better calls. But it can also magnify risk, especially regulatory risk, if governance does not keep up.

Staying Within the Lines

Herbert Smith frames the challenge as an exercise in “staying within the lines”: adopting tools that improve outcomes while meeting supervisory expectations that have not gone away just because the inputs are now digital. The authors focus on three areas where this balance is getting harder: AI agents, cloud-based AI, and the “near yet far” reality of quantum computing.

AI Agents

For AI agents, the warning is not that regulators are anti-AI. It is that regulators expect firms to understand what these systems are doing and to manage the risks that come with speed and scale. The guidance notes that AI can improve tasks like credit assessment by analyzing more data, faster, but a flawed model can also amplify losses across a wider book of business. It also lays out practical pitfalls, from “black box” outputs that are hard to explain, to biased training data, to dependence on third-party providers outside a regulator’s perimeter.

Cloud-Based AI

On cloud-based AI, the guidance argues the upside is real—scalability, efficiency, and reduced in-house costs—but so is the risk profile, especially when sensitive data sits in infrastructure you do not control. The authors point to the pace of adoption: Hong Kong’s monetary authority has said cloud-related projects represent about 80% of reportable technology outsourcing initiatives by banks, with a meaningful share touching critical systems. They emphasize the basics regulators keep returning to: cyber hygiene, third-party AI risk, and who can access critical systems.

Quantum Computing

Finally, the paper looks ahead to quantum computing. The technology may deliver competitive advantages, but Herbert Smith notes policymakers are concerned it could also stress today’s security foundations, pushing firms toward “quantum-safe” cryptography planning.

Looking Ahead

What comes next, the authors suggest, is more scrutiny—not less—as adoption accelerates. Firms will face continued expectations around documentation, post-deployment reviews, and monitoring once systems become business-as-usual. They should be ready for oversight that tests whether governance is keeping pace with technology-driven decision-making, particularly where consumer impact, outsourcing, and explainability intersect.
