Data-Driven Governance: Shaping AI Regulation in Singapore

The Ministry of Algorithms: When Data Scientists Drive Policy Making

Recently, a pivotal conversation took place in Singapore, one that could significantly reshape the global discourse on AI regulation. This discussion featured two prominent figures: Thomas Roehm, vice president of corporate marketing at SAS, and Frankie Phua, managing director and head of group risk management at United Overseas Bank. Their dialogue centered on a pressing question: how do you govern AI that evolves faster than the regulations intended to contain it?

From Principles to Practice: The Data-Driven Regulatory Framework

This conversation was part of a broader exploration of Singapore’s innovative Project MindForge, driven by the Veritas Initiative of the Monetary Authority of Singapore (MAS). This initiative aims to assess the risks and opportunities posed by AI technology within the financial services sector, building on Singapore’s systematic approach to AI governance, which commenced in 2019 as part of the Singapore National AI Strategy.

Phua expressed pride in Singapore’s proactive involvement in the AI journey, stating, “I must say, I’m very proud to be a Singaporean in Singapore working in the Singapore banking industry.” He oversees various risk management functions, emphasizing the necessity of engaging experienced practitioners from the outset.

The journey began with the establishment of the FEAT principles: Fairness, Ethics, Accountability, and Transparency. Phua articulated the importance of integrating these principles into governance frameworks, stating, “When we look at any governance, it must be measured against certain principles.” However, translating these principles into actionable frameworks proved to be a formidable challenge.

Regulatory Agility in the Age of GenAI

The landscape shifted dramatically with the advent of ChatGPT in 2022, prompting discussions around GenAI. Traditional AI governance frameworks began to feel outdated, leading to the creation of the MindForge consortium, which aims to scrutinize the risks and opportunities associated with generative AI in the financial services industry.

MindForge distinguishes itself by adopting a collaborative approach where practitioners from banks, insurers, and technology companies collectively author the governance handbook. Phua noted, “MAS is leaving it to the financial institutions in Singapore to write this handbook for the industry,” allowing for practical governance solutions rather than top-down regulations.

The Governance-as-Code Paradigm

Roehm provided context on the implications of AI across various industries, stating, “Today, we help banks predict and prevent fraud as we analyze billions of transactions across the world.” The regulatory challenge lies not in preparing for AI adoption but in governing AI systems that are already making critical decisions.

In sectors such as public welfare and urban planning, the urgency for effective governance is heightened. When AI systems influence decisions related to child welfare or flood mitigation, the traditional slow-paced regulatory approach becomes untenable.
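The "governance-as-code" idea above can be made concrete by expressing policy principles as automated checks that gate deployment. The sketch below is purely illustrative: the metrics, thresholds, and rule names are hypothetical stand-ins loosely inspired by the FEAT principles, not an actual MAS or MindForge specification.

```python
from dataclasses import dataclass

@dataclass
class ModelReport:
    """Metrics a model owner might submit before deployment (illustrative)."""
    name: str
    demographic_parity_gap: float  # abs. difference in approval rates between groups
    decisions_logged: bool         # is an audit trail in place?
    human_override_enabled: bool   # can a human reverse the decision?

# A FEAT-inspired policy expressed as executable checks (hypothetical thresholds).
POLICY = [
    ("fairness: parity gap <= 0.05", lambda r: r.demographic_parity_gap <= 0.05),
    ("accountability: decisions logged", lambda r: r.decisions_logged),
    ("transparency: human override", lambda r: r.human_override_enabled),
]

def review(report: ModelReport) -> list[str]:
    """Return the policy rules the model fails; an empty list means it passes."""
    return [rule for rule, check in POLICY if not check(report)]

report = ModelReport("credit-scorer-v2", 0.08, True, False)
failures = review(report)  # fails the fairness and transparency checks
```

The point is not the specific thresholds but the shape of the approach: once a principle is written as a check, it can run on every release rather than waiting for a periodic regulatory review.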

The Taxonomy Challenge: Defining AI for Regulatory Compliance

Phua identified the biggest challenge in AI governance as the definition of AI itself, highlighting ongoing debates within Project MindForge. “Because you’re trying to govern AI, you need to know what AI is,” he emphasized. The complexity increases with vendor-embedded AI solutions, which can create governance gaps.

Such definitional challenges underline the necessity of Singapore’s collaborative governance model. Regulations crafted in isolation may overlook practical complexities faced by practitioners daily.
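One way to see why the definition matters in practice is through a system inventory: whatever taxonomy an institution adopts determines which systems fall in scope for AI review, and vendor products with embedded AI are exactly the entries likely to slip through. The sketch below is hypothetical; the categories and example systems are illustrative, not a MindForge taxonomy.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AICategory(Enum):
    NONE = auto()          # no AI component
    RULE_BASED = auto()    # deterministic rules, often argued to be out of scope
    PREDICTIVE_ML = auto() # trained models producing scored decisions
    GENERATIVE = auto()    # LLM / GenAI components

@dataclass
class System:
    name: str
    vendor_supplied: bool
    category: AICategory
    reviewed_by_ai_governance: bool

def governance_gaps(inventory: list[System]) -> list[str]:
    """Name systems containing in-scope AI that never passed AI review."""
    in_scope = {AICategory.PREDICTIVE_ML, AICategory.GENERATIVE}
    return [s.name for s in inventory
            if s.category in in_scope and not s.reviewed_by_ai_governance]

inventory = [
    System("in-house credit model", False, AICategory.PREDICTIVE_ML, True),
    System("vendor CRM with embedded LLM", True, AICategory.GENERATIVE, False),
    System("limits calculator", False, AICategory.RULE_BASED, False),
]
gaps = governance_gaps(inventory)  # surfaces the vendor-embedded LLM
```

Shift the boundary of `in_scope` (say, to include rule-based systems) and the set of flagged systems changes, which is precisely the definitional debate Phua describes.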

Data Stewardship and Cognitive Governance

Phua addressed concerns regarding the potential erosion of human cognitive capabilities due to AI systems. He noted, “Even without AI, a lot of us will not think because we are lazy.” However, his experience with GenAI suggests a different dynamic, where effective use of AI tools can enhance critical thinking skills.

Phua’s insight reframes governance, suggesting that rather than shielding humans from AI, the focus should be on fostering human-AI collaboration.

Federated Regulatory Architecture: Scaling the Singapore Model

The second phase of MindForge is yielding concrete outputs, including the imminent release of a governance handbook that addresses 44 identified AI risks with specific mitigation strategies. This collaborative framework offers a valuable template for other jurisdictions.

Phua clarified, “We are not validating the GenAI model itself; we are applying the GenAI model to use cases that we want to use.” This distinction emphasizes the importance of validating applications rather than merely models, showcasing a nuanced understanding of governance in the era of foundational AI models.
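Validating the application rather than the model can be pictured as an evaluation harness: the foundation model is treated as a black box, and the specific use case is scored against a golden test set. Everything below is a hypothetical sketch; the stub function stands in for a real GenAI call, and the cases are invented for illustration.

```python
def stub_summarizer(document: str) -> str:
    """Stand-in for a GenAI call; a real system would query a foundation model."""
    return document.split(".")[0] + "."

# Golden cases for this one use case: (input document, phrase the output must contain).
GOLDEN_CASES = [
    ("Loan approved for SGD 50,000. Further details follow.", "Loan approved"),
    ("Application rejected due to income. See appendix.", "rejected"),
]

def validate_use_case(app) -> float:
    """Share of golden cases where the application's output contains the expected phrase."""
    passed = sum(expected in app(doc) for doc, expected in GOLDEN_CASES)
    return passed / len(GOLDEN_CASES)

score = validate_use_case(stub_summarizer)
```

The same underlying model would be validated separately, and differently, for each use case it powers, which is the distinction Phua draws.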

As data leaders worldwide face similar challenges, Singapore’s MindForge project serves as more than a policy example. It illustrates the emergence of a new regulatory paradigm in which the rapid pace of technological change necessitates collaborative, iterative governance. The model sits at the intersection of regulatory vision and practitioner expertise: data, and the practitioners who work with it, not only inform the rules but help write them.

The adaptability of Singapore’s model to other jurisdictions remains an open question, one that could significantly impact regional and national governance frameworks.
