Financial Regulators Need to Build Ethics into Their AI Systems
As artificial intelligence increasingly plays a role in the regulation of banks and other financial services firms, regulators need to ensure that these new systems do not import old biases into modern oversight.
What’s at Stake
Unchecked supervisory AI threatens market integrity, financial inclusion, and public trust.
Expert Insight
Marlene Amstad of FINMA states that “supervisory decisions must remain explainable and accountable,” requiring a “human in the loop” for all significant interventions.
Supporting Data
Sixty-seven percent of financial authorities use AI in supervision; however, 37% report having no formal governance or ethical framework.
The Disconnect
More than two-thirds of the world’s financial authorities now rely on AI in their supervisory work, yet more than half do so without ethical guidelines. Fewer than 9% currently recognize algorithmic bias as a challenge worth addressing. This raises the question: can a technology-intensive financial landscape remain inclusive and transparent if the oversight tools themselves operate without governance?
Current Applications of AI
Financial authorities, including central banks and market conduct regulators, are using AI for various critical functions. For instance:
- The European Central Bank’s Athena platform employs large-scale textual analytics.
- The U.K.’s Financial Conduct Authority uses agentic AI for market abuse detection.
- In Finland, visual language models interpret financial charts.
- AI is also predicting nonperforming loans in Namibia and assessing board functionality in India.
- Meanwhile, sentiment analysis in Egypt decodes consumer complaints.
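To make the last item concrete, the sketch below shows the kind of signal a sentiment-analysis pipeline extracts from complaint text. This is a toy keyword-based illustration only; real deployments, such as the Egyptian case above, use far more sophisticated NLP, and the word lists here are invented for the example.

```python
# Toy keyword-based sentiment scoring for consumer complaints.
# The word lists are illustrative assumptions, not a real lexicon.
NEGATIVE = {"hidden", "unfair", "delayed", "denied", "fee"}
POSITIVE = {"resolved", "helpful", "refunded"}

def complaint_score(text):
    """Negative score suggests an aggrieved consumer; positive, a resolved case."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

s1 = complaint_score("My refund was delayed and the fee was hidden.")
s2 = complaint_score("The agent was helpful and the charge was refunded.")
```

Aggregated over thousands of complaints, even crude scores like these can point supervisors toward firms or products that merit closer conduct review.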
Governance Gaps
Data from the State of SupTech Report 2025 highlights a significant gap between technological ambition and institutional oversight. Over half of surveyed authorities lack clear governance structures for AI-enabled supervisory technology (suptech). Key statistics include:
- 37% report no formal governance or ethical framework for AI in supervision.
- Only 3% have developed dedicated internal frameworks for suptech applications.
- A mere 4% align with international standards like the OECD AI Principles.
- Only 6% conduct regular ethical audits.
- 5% publish transparency reports regarding AI’s impact on supervisory decisions.
Ethical Risks
Only 8.8% of authorities identify ethical concerns as a barrier to deployment, and even fewer (8.1%) acknowledge risks of algorithmic bias or discrimination. This underreporting may itself stem from the lack of governance: where bias audits are absent, risks remain invisible.
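A bias audit need not be elaborate to surface a problem. The sketch below applies the common "four-fifths" disparate-impact rule of thumb to a supervisory model's flagging decisions; the group labels, data, and 0.8 threshold are illustrative assumptions, not drawn from any authority's actual framework.

```python
# Hypothetical bias-audit sketch: compare flag rates across entity groups.
def flag_rates(flags, groups):
    """Return the share of entities flagged for review, per group."""
    totals, flagged = {}, {}
    for f, g in zip(flags, groups):
        totals[g] = totals.get(g, 0) + 1
        flagged[g] = flagged.get(g, 0) + (1 if f else 0)
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact_ratio(flags, groups):
    """Ratio of lowest to highest group flag rate (1.0 = parity)."""
    rates = flag_rates(flags, groups)
    return min(rates.values()) / max(rates.values())

# Example: a model that flags small lenders far more often than large ones.
flags  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups = ["small", "small", "small", "small", "large",
          "large", "large", "large", "large", "large"]
ratio = disparate_impact_ratio(flags, groups)
# A ratio well below 0.8 would warrant closer scrutiny under the
# four-fifths rule of thumb.
```

Checks like this are cheap to run routinely; the finding that so few authorities run them suggests the gap is institutional rather than technical.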
Data Integrity
Among financial authorities, 64% cite fragmented or inconsistent data as a key challenge, undermining AI-enabled supervisory decisions. Poor-quality data increases the risk of biased outputs, particularly in consumer protection and financial inclusion.
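Fragmented data can at least be measured before it reaches a model. Below is a minimal sketch of an automated data-quality gate for records feeding a supervisory system; the field names, sample records, and checks are illustrative assumptions, not any authority's actual standard.

```python
# Minimal data-quality gate for supervisory records (illustrative fields).
REQUIRED_FIELDS = ("entity_id", "report_date", "loan_balance")

def quality_report(records):
    """Count records missing required fields or with negative balances."""
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in REQUIRED_FIELDS)
    )
    inconsistent = sum(
        1 for r in records
        if isinstance(r.get("loan_balance"), (int, float)) and r["loan_balance"] < 0
    )
    n = len(records)
    return {"total": n, "missing": missing, "inconsistent": inconsistent,
            "usable_share": (n - missing - inconsistent) / n if n else 0.0}

records = [
    {"entity_id": "B1", "report_date": "2025-03-31", "loan_balance": 120.0},
    {"entity_id": "B2", "report_date": "", "loan_balance": 80.0},           # missing date
    {"entity_id": "B3", "report_date": "2025-03-31", "loan_balance": -5.0}, # negative balance
]
report = quality_report(records)
```

Publishing the resulting usable-share figures alongside model outputs is one simple way to make data limitations visible to decision-makers.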
Building Ethical Infrastructure
To create a robust ethical framework, strong data governance is essential. This includes:
- Clear data ownership.
- Documentation of data provenance.
- Ongoing quality controls.
- Consideration of underrepresented groups in supervisory datasets.
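The last point, representation of underrepresented groups, can be checked mechanically. The sketch below compares each group's share of a supervisory dataset with its share of the supervised population; the group names, reference shares, and 50% tolerance are illustrative assumptions.

```python
# Hedged sketch: flag groups underrepresented in a supervisory dataset
# relative to the supervised population.
def underrepresented(dataset_counts, population_shares, tolerance=0.5):
    """Return groups whose dataset share falls below tolerance * population share."""
    total = sum(dataset_counts.values())
    return sorted(
        g for g, pop_share in population_shares.items()
        if dataset_counts.get(g, 0) / total < tolerance * pop_share
    )

# Illustrative figures: microfinance institutions and credit unions make up
# 40% of the supervised population but only 10% of the dataset.
counts = {"commercial_banks": 900, "microfinance": 40, "credit_unions": 60}
shares = {"commercial_banks": 0.60, "microfinance": 0.25, "credit_unions": 0.15}
gaps = underrepresented(counts, shares)
```

A model trained on such a dataset would see the flagged sectors mostly through the lens of larger institutions, which is precisely the kind of skew ongoing quality controls should catch.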
Marlene Amstad emphasizes that “supervisory decisions must remain explainable and accountable,” requiring a human element for significant interventions.
Examples of Ethical Integration
Some authorities are successfully embedding ethics into their AI systems:
- The U.K.’s FCA has established a data and AI risk hub and ethics framework mandating independent evaluations for each use case.
- The Bank of Tanzania has created a dedicated AI innovation hub focused on transparency and accountability.
Moving Forward
To close the accountability gap, authorities must prioritize operational policies ensuring transparency in AI usage and clear liability for model errors. Ethical impact assessments are crucial to evaluate AI’s real effects on supervision.
The tipping point for supervisory transformation depends not on the availability of tools but on the governance and trust that underpin them. If over 60% of authorities rush toward an AI-driven future without accountability frameworks, the risks become systemic: deploying these systems without adequate governance invites discriminatory outcomes and a catastrophic loss of public trust.
For financial authorities to remain trusted guardians of stability, ethical governance must become a core component of supervisory infrastructure.