Part II: How Regulators Are Testing Their Way into AI Governance
This article examines how regulators in banking are responding to AI risk and why testing is becoming central to compliance frameworks.
The Expansion of Testing Obligations
Regulators are not asking banks to stop using AI. Instead, they expect firms to demonstrate that their AI systems are controlled, testable, and accountable in practice. That expectation is why AI governance increasingly lands on the desks of Quality Assurance (QA) teams, and it places software testing at the forefront of regulatory compliance.
The EU AI Act is the most comprehensive attempt yet to regulate AI, introducing a risk-based framework that spans multiple sectors, including banking and insurance. Under the Act, several common financial services applications are classified as high risk, most notably creditworthiness assessment and credit scoring of individuals, along with risk assessment and pricing in life and health insurance. That classification triggers stringent requirements for risk management, data governance, human oversight, robustness, and post-market monitoring.
Shifting from Policy to Practice
Governance frameworks alone are insufficient; what truly matters is their practical application in real systems. This realization is prompting a move towards lifecycle controls rather than one-time validations. The World Economic Forum has highlighted that many of the most serious AI risks emerge post-deployment, necessitating continuous testing and monitoring.
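To make post-deployment monitoring concrete, the sketch below shows one way a QA team might check for drift between a model's validation-time score distribution and its live behavior, using the population stability index (PSI). The loader function and the 0.2 alert threshold are illustrative assumptions, not requirements set by any regulator.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample ('expected') and a live sample
    ('actual'); larger values indicate more distribution drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions so empty bins do not produce log(0).
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def test_credit_scores_have_not_drifted_since_validation():
    # Hypothetical data-access helpers: replace with your own pipeline.
    baseline = load_scores("validation_window")   # scores signed off at validation
    live = load_scores("last_30_days")            # scores observed in production
    psi = population_stability_index(baseline, live)
    # 0.2 is a widely quoted rule-of-thumb alert level for PSI; the real
    # threshold should come from the firm's model risk policy.
    assert psi < 0.2, f"Score distribution drift detected (PSI={psi:.3f})"
```

A check like this can run on a schedule against production data, turning "continuous monitoring" from a policy statement into a repeatable, auditable test result.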
Jennifer Gold, Chief Information Security Officer at Risk Aperture, emphasizes that oversight cannot stop with engineering teams: AI use must stay within appropriate guardrails and remain visible to boards. That visibility increasingly rests on outputs from testing, such as metrics and reports that demonstrate how systems behave in practice.
The FCA’s Shift to Live Testing
In the UK, the Financial Conduct Authority (FCA) has adopted an experimental approach, allowing firms to test AI systems in real-world conditions under regulatory supervision. Ed Towers, head of advanced analytics at the FCA, describes the initiative as a structured but flexible space in which firms can test AI-driven services with regulatory support.
This marks a pivotal change for QA teams: AI assurance now means demonstrating system behavior under live operating conditions rather than submitting documentation after development. Notably, the FCA's approach covers the entire AI system, not just the model, reflecting a definition that takes in deployment context, core risks, governance, and human oversight.
Pragmatic Governance in Singapore
Singapore has emerged as a leader in pragmatic AI governance, promoting human-centricity, transparency, and explainability without imposing rigid rules. S. Iswaran, Singapore’s communications minister, has articulated the country’s commitment to contributing to advanced AI governance by inviting global feedback on its framework.
This approach reinforces the idea that governance is inseparable from engineering discipline: principles such as fairness and transparency are assessed through testing in practice rather than asserted in policy. As regulatory scrutiny intensifies, accountability is also shifting upwards, requiring boards to engage more deeply in AI oversight.
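As one illustration of how a principle like fairness can be expressed as a test rather than a policy statement, the sketch below checks that approval rates do not diverge too far across groups defined by a protected attribute (a demographic parity check). The data loader and the five-percentage-point threshold are assumptions for illustration; the acceptable gap, and indeed the choice of fairness metric, is a governance decision rather than a QA one.

```python
import numpy as np

def approval_rate(decisions, group_mask):
    """Share of positive (approval) decisions within one group."""
    return decisions[group_mask].mean()

def test_approval_rates_do_not_diverge_across_groups():
    # Hypothetical helper: returns model decisions (1 = approve) and a
    # protected-attribute column for a held-out fairness test set.
    decisions, group = load_fairness_test_set()
    rates = {g: approval_rate(decisions, group == g) for g in np.unique(group)}
    # Demographic parity gap: difference between the best- and
    # worst-treated groups' approval rates.
    parity_gap = max(rates.values()) - min(rates.values())
    # Illustrative threshold; the real tolerance should be set and
    # signed off by the governance function, not the test author.
    assert parity_gap < 0.05, f"Approval-rate gap across groups: {rates}"
```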
The Role of QA in Regulatory Enforcement
Across different jurisdictions, a clear conclusion is emerging: regulators are not expecting QA teams to function as legal experts but rather to translate governance principles into actionable practices. Testing becomes the means by which regulatory principles—such as robustness, fairness, accountability, and resilience—are operationalized.
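Robustness, for example, can be operationalized as a perturbation test: small amounts of noise in the inputs should not flip a meaningful share of decisions. The sketch below assumes hypothetical wrappers around the deployed model and a sample of application records; the 1% noise level and the flip-rate tolerance are illustrative values to be agreed with the risk function.

```python
import numpy as np

def test_small_input_perturbations_do_not_flip_credit_decisions():
    # Hypothetical wrappers around the deployed model and a sample of
    # representative application records (numeric feature matrix).
    model = load_credit_model()
    applications = load_sample_applications(n=500)

    baseline = model.predict(applications)
    # Perturb each numeric feature by a small relative amount (here 1%),
    # simulating ordinary measurement noise rather than adversarial input.
    rng = np.random.default_rng(seed=42)
    noise = 1 + rng.uniform(-0.01, 0.01, size=applications.shape)
    perturbed = model.predict(applications * noise)

    flip_rate = np.mean(baseline != perturbed)
    # Illustrative tolerance; agree the real figure with risk owners.
    assert flip_rate < 0.01, f"{flip_rate:.2%} of decisions flipped under noise"
```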
As banks invest heavily in enhancing their testing capabilities, QA teams are increasingly seen as the enforcement layer of AI governance. Reliable testing is not merely a regulatory requirement; it is essential for informed risk assessments and strategic decision-making regarding AI systems. Without consistent testing evidence, balancing innovation and responsibility becomes mere guesswork.
In the concluding part of this series, we will delve into why AI governance in QA has become a global concern for international banking groups and how organizations like Allianz and AstraZeneca are responding to these challenges. We will also examine the perspective of global bodies and lawmakers who increasingly frame AI risk as a systemic issue requiring proactive testing rather than reactive management.