Regulating AI: The Role of Quality Assurance in Governance

Part II: How Regulators Are Testing Their Way into AI Governance

This article explores the evolving landscape of AI governance in banking, focusing specifically on how regulators are responding to AI risk and the growing importance of testing in compliance frameworks.

The Expansion of Testing Obligations

AI governance is increasingly landing on the desks of Quality Assurance (QA) teams, yet regulators are not asking banks to stop using AI. Instead, they expect firms to demonstrate that their AI systems are controlled, testable, and accountable in practice. This shift places QA and software testing teams at the forefront of regulatory compliance.

The EU AI Act represents a significant effort to regulate AI globally, introducing a risk-based framework that spans multiple sectors, including banking and insurance. Under this Act, many common financial services applications—such as creditworthiness assessments, fraud detection, and customer risk profiling—are classified as high risk. This classification necessitates stringent requirements around risk management, data governance, human oversight, robustness, and post-market monitoring.
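To see what such requirements can look like as executable checks, consider a minimal sketch of a robustness gate for a credit-scoring model. Everything here is an assumption for illustration: the score_applicant stand-in model, the perturbation size, and the 2% tolerance are hypothetical choices, not figures from the Act.

```python
import random

def score_applicant(income: float, debt: float) -> float:
    """Stand-in credit model (hypothetical): returns a score in [0, 1]."""
    return max(0.0, min(1.0, 0.6 * (income / 100_000) - 0.4 * (debt / 50_000) + 0.5))

def robustness_gate(n_cases: int = 1_000, noise: float = 0.01, tolerance: float = 0.02) -> bool:
    """Small input perturbations should not move scores by more than `tolerance`."""
    rng = random.Random(42)  # fixed seed so the evidence is reproducible
    worst = 0.0
    for _ in range(n_cases):
        income = rng.uniform(20_000, 200_000)
        debt = rng.uniform(0, 50_000)
        base = score_applicant(income, debt)
        jittered = score_applicant(income * (1 + noise), debt * (1 + noise))
        worst = max(worst, abs(jittered - base))
    return worst <= tolerance

if __name__ == "__main__":
    assert robustness_gate(), "Robustness gate failed: scores unstable under small input noise"
    print("Robustness gate passed")
```

The point is not the specific check but that "robustness" stops being a policy word and becomes a pass/fail artifact a regulator can inspect.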

Shifting from Policy to Practice

Governance frameworks alone are insufficient; what truly matters is their practical application in real systems. This realization is prompting a move towards lifecycle controls rather than one-time validations. The World Economic Forum has highlighted that many of the most serious AI risks emerge post-deployment, necessitating continuous testing and monitoring.
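One way such a lifecycle control is commonly implemented is distribution-drift monitoring on live inputs. The sketch below computes a population stability index (PSI) against the validation-time baseline; the 0.2 alert threshold is a widely used rule of thumb rather than a regulatory figure, and the data here is synthetic.

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))

    def bin_fracs(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # Floor at a tiny value so the log term stays defined for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, o = bin_fracs(expected), bin_fracs(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

baseline = [i / 100 for i in range(100)]               # feature values at validation time
live = [min(1.0, i / 100 + 0.15) for i in range(100)]  # shifted live traffic
drift = psi(baseline, live)
print(f"PSI = {drift:.3f} -> {'ALERT: retest required' if drift > 0.2 else 'stable'}")
```

Run continuously, a check like this turns "post-market monitoring" into a scheduled test with an auditable trigger for revalidation.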

Jennifer Gold, Chief Information Security Officer at Risk Aperture, emphasizes that oversight must extend beyond engineering teams: boards need assurance that AI use stays within appropriate guardrails. That assurance increasingly depends on outputs from testing, such as metrics and reports that demonstrate system behavior in practice.

The FCA’s Shift to Live Testing

In the UK, the Financial Conduct Authority (FCA) has adopted an experimental approach, allowing for the testing of AI systems in real-world conditions under regulatory supervision. Ed Towers, head of advanced analytics at the FCA, explains that this initiative provides a structured yet flexible space for firms to test AI-driven services, emphasizing that this occurs with regulatory support.

This marks a pivotal change for QA teams: AI assurance now means demonstrating system behavior under live operating conditions, not merely submitting documentation after development. Notably, the FCA's scope covers the entire AI system, not just the model, reflecting a comprehensive definition that takes in deployment context, core risks, governance, and human oversight.
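A common pattern for exercising a whole system under live conditions without exposing customers to it is shadow mode: real traffic flows to both the incumbent system and the AI candidate, only the incumbent's decision is served, and disagreements are logged for review. The harness below is a hypothetical sketch; the class, the decision rules, and the idea of gating promotion on agreement are assumptions, not anything the FCA prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class ShadowHarness:
    incumbent: callable   # current production decision function
    candidate: callable   # AI system under supervised live test
    disagreements: list = field(default_factory=list)
    total: int = 0

    def decide(self, request: dict):
        served = self.incumbent(request)
        shadow = self.candidate(request)
        self.total += 1
        if shadow != served:
            self.disagreements.append((request, served, shadow))
        return served  # customers only ever see the incumbent's answer

    def disagreement_rate(self) -> float:
        return len(self.disagreements) / self.total if self.total else 0.0

# Usage: gate promotion of the candidate on live-traffic agreement.
harness = ShadowHarness(
    incumbent=lambda r: r["amount"] <= 10_000,  # rule-based approval
    candidate=lambda r: r["amount"] <= 12_000,  # AI model stand-in
)
for amount in (5_000, 11_000, 15_000):
    harness.decide({"amount": amount})
print(f"disagreement rate: {harness.disagreement_rate():.0%}")
```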

Pragmatic Governance in Singapore

Singapore has emerged as a leader in pragmatic AI governance, promoting human-centricity, transparency, and explainability without imposing rigid rules. S. Iswaran, Singapore’s communications minister, has articulated the country’s commitment to contributing to advanced AI governance by inviting global feedback on its framework.

This approach reinforces the idea that governance is inseparable from engineering discipline: principles such as fairness and transparency are assessed in practice through testing. As regulatory scrutiny intensifies, accountability is shifting upwards, requiring boards to engage deeply in AI oversight.

The Role of QA in Regulatory Enforcement

Across different jurisdictions, a clear conclusion is emerging: regulators are not expecting QA teams to function as legal experts but rather to translate governance principles into actionable practices. Testing becomes the means by which regulatory principles—such as robustness, fairness, accountability, and resilience—are operationalized.
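As one concrete example of turning a principle into a test, fairness can be gated on a measurable quantity such as the demographic parity gap, i.e. the spread in approval rates across groups. The sketch below is illustrative only; the 0.10 tolerance and the sample decisions are assumptions, not a standard any regulator has set.

```python
def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Spread in approval rates across groups (0.0 means identical rates)."""
    by_group: dict[str, list[int]] = {}
    for group, approved in decisions:
        by_group.setdefault(group, []).append(int(approved))
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

TOLERANCE = 0.10  # illustrative gate, not a regulatory figure

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)
status = "PASS" if gap <= TOLERANCE else "FAIL: route to review"
print(f"approval-rate gap {gap:.2f} -> {status}")
```

A failing gate like this does not decide the legal question; it produces the evidence that lets risk and compliance functions decide it.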

As banks invest heavily in enhancing their testing capabilities, QA teams are increasingly seen as the enforcement layer of AI governance. Reliable testing is not merely a regulatory requirement; it is essential for informed risk assessments and strategic decision-making regarding AI systems. Without consistent testing evidence, balancing innovation and responsibility becomes mere guesswork.

In the concluding part of this series, we will delve into why AI governance in QA has become a global concern for international banking groups and how organizations like Allianz and AstraZeneca are responding to these challenges. We will also examine the perspective of global bodies and lawmakers who increasingly frame AI risk as a systemic issue requiring proactive testing rather than reactive management.
