Data-Driven Governance: Shaping AI Regulation in Singapore

The Ministry of Algorithms: When Data Scientists Drive Policy Making

Recently, at the SAS Innovate On Tour in Singapore, a pivotal conversation took place, one that could significantly reshape the global discourse on AI regulation. The discussion featured two prominent figures: Thomas Roehm, vice president of corporate marketing at SAS, and Frankie Phua, managing director and head of group risk management at United Overseas Bank. The core of their dialogue revolved around a pressing question: how do you govern AI that evolves faster than the regulations intended to contain it?

From Principles to Practice: The Data-Driven Regulatory Framework

This conversation was part of a broader exploration of Singapore’s innovative Project MindForge, driven by the Veritas Initiative of the Monetary Authority of Singapore (MAS). This initiative aims to assess the risks and opportunities posed by AI technology within the financial services sector, building on Singapore’s systematic approach to AI governance, which commenced in 2019 as part of the Singapore National AI Strategy.

Phua expressed pride in Singapore’s proactive involvement in the AI journey, stating, “I must say, I’m very proud to be a Singaporean in Singapore working in the Singapore banking industry.” He oversees various risk management functions, emphasizing the necessity of engaging experienced practitioners from the outset.

The journey began with the establishment of the FEAT principles: Fairness, Ethics, Accountability, and Transparency. Phua articulated the importance of integrating these principles into governance frameworks, stating, “When we look at any governance, it must be measured against certain principles.” However, translating these principles into actionable frameworks proved to be a formidable challenge.

Regulatory Agility in the Age of GenAI

The landscape shifted dramatically with the advent of ChatGPT in 2022, prompting urgent discussion of generative AI (GenAI). Traditional AI governance frameworks began to feel outdated, leading to the creation of the MindForge consortium, which aims to scrutinize the risks and opportunities associated with generative AI in the financial services industry.

MindForge distinguishes itself by adopting a collaborative approach where practitioners from banks, insurers, and technology companies collectively author the governance handbook. Phua noted, “MAS is leaving it to the financial institutions in Singapore to write this handbook for the industry,” allowing for practical governance solutions rather than top-down regulations.

The Governance-as-Code Paradigm

Roehm provided context on the implications of AI across various industries, stating, “Today, we help banks predict and prevent fraud as we analyze billions of transactions across the world.” The regulatory challenge lies not in preparing for AI adoption but in governing AI systems that are already making critical decisions.

In sectors such as public welfare and urban planning, the urgency for effective governance is heightened. When AI systems influence decisions related to child welfare or flood mitigation, the traditional slow-paced regulatory approach becomes untenable.
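To make the "governance as code" idea concrete, here is a minimal sketch of a policy rule expressed as an executable check rather than a written guideline, for example requiring human review of high-impact automated decisions. All names, use cases, and criteria below are invented for illustration and are not from MindForge or SAS.

```python
from dataclasses import dataclass

# Hypothetical illustration of "governance as code": a governance rule
# encoded as an executable check. Field names and the rule itself are
# assumptions made for this sketch.

@dataclass
class Decision:
    use_case: str         # e.g. "fraud_screening", "child_welfare_triage"
    impact: str           # "low", "medium", or "high"
    human_reviewed: bool  # was a human in the loop?

HIGH_IMPACT_USE_CASES = {"child_welfare_triage", "flood_mitigation"}

def complies(decision: Decision) -> bool:
    """A decision passes only if high-impact automated outcomes had human review."""
    if decision.impact == "high" or decision.use_case in HIGH_IMPACT_USE_CASES:
        return decision.human_reviewed
    return True

print(complies(Decision("fraud_screening", "low", False)))        # True: low impact
print(complies(Decision("child_welfare_triage", "high", False)))  # False: no review
```

Encoding the rule this way means compliance can be checked automatically on every decision, which is what lets governance keep pace with systems that act far faster than a periodic audit cycle.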

The Taxonomy Challenge: Defining AI for Regulatory Compliance

Phua identified the biggest challenge in AI governance as the definition of AI itself, highlighting ongoing debates within Project MindForge. “Because you’re trying to govern AI, you need to know what AI is,” he emphasized. The complexity increases with vendor-embedded AI solutions, which can create governance gaps.

Such definitional challenges underline the necessity of Singapore’s collaborative governance model. Regulations crafted in isolation may overlook practical complexities faced by practitioners daily.
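One way to picture the taxonomy problem is as an inventory-scoping function: given a catalogue of systems, which ones fall under AI governance at all? The sketch below is a hypothetical illustration; the field names and criteria are assumptions, not a MindForge definition, but it shows why vendor-embedded models are the easy case to miss.

```python
# Hypothetical sketch of scoping an AI governance inventory.
# Field names and criteria are invented for illustration.

def in_governance_scope(system: dict) -> bool:
    """Treat a system as AI if it learns from data or embeds a model,
    even when a vendor does not market the product as 'AI'."""
    return (
        system.get("learns_from_data", False)
        or system.get("uses_llm", False)
        or system.get("vendor_embedded_model", False)  # the governance gap
    )

inventory = [
    {"name": "rule_based_limits", "learns_from_data": False},
    {"name": "crm_lead_scoring", "vendor_embedded_model": True},
]
print([s["name"] for s in inventory if in_governance_scope(s)])  # ['crm_lead_scoring']
```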

Data Stewardship and Cognitive Governance

Phua addressed concerns regarding the potential erosion of human cognitive capabilities due to AI systems. He noted, “Even without AI, a lot of us will not think because we are lazy.” However, his experience with GenAI suggests a different dynamic, where effective use of AI tools can enhance critical thinking skills.

Phua’s insight reframes governance, suggesting that rather than shielding humans from AI, the focus should be on fostering human-AI collaboration.

Federated Regulatory Architecture: Scaling the Singapore Model

The second phase of MindForge is yielding concrete outputs, including the imminent release of a governance handbook that addresses 44 identified AI risks with specific mitigation strategies. This collaborative framework offers a valuable template for other jurisdictions.

Phua clarified, “We are not validating the GenAI model itself; we are applying the GenAI model to use cases that we want to use.” This distinction emphasizes the importance of validating applications rather than merely models, showcasing a nuanced understanding of governance in the era of foundational AI models.
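The distinction between validating a model and validating its application can be sketched as use-case-level acceptance checks wrapped around a model call. The example below is an assumption-laden illustration, not UOB's actual validation process: the GenAI call is stubbed, and the checks (length limits, no leaked digits) are invented stand-ins for whatever acceptance criteria a given use case would demand.

```python
# Hypothetical sketch: validate the *application* of a GenAI model to a
# specific use case, not the foundation model itself. The model call is a
# stub; in practice it would invoke a hosted model API.

def summarise_complaint(text: str) -> str:
    # Stub standing in for a GenAI call; returns a canned summary.
    return "Customer reports an unauthorised card transaction."

def validate_use_case(output: str) -> list[str]:
    """Use-case checks test the application's behaviour in context,
    not the underlying model's benchmark scores."""
    failures = []
    if len(output) > 200:
        failures.append("summary exceeds length limit")
    if any(ch.isdigit() for ch in output):
        failures.append("summary may leak account or card numbers")
    return failures

print(validate_use_case(summarise_complaint("...")))  # [] means the use case passed
```

The same foundation model could pass these checks for one use case and fail them for another, which is exactly why validation attaches to the application rather than the model.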

As data leaders worldwide face similar challenges, Singapore’s MindForge project serves as more than a policy example. It illustrates the emergence of a new regulatory paradigm in which the rapid pace of technological change necessitates collaborative, iterative governance. This model sits at the intersection of regulatory vision and practitioner expertise, where data and the practitioners who work with it not only inform the rules but actively shape their formulation.

The adaptability of Singapore’s model to other jurisdictions remains an open question, one that could significantly impact regional and national governance frameworks.
