Building Thailand’s AI Governance Clinic: Lessons and Insights

Thailand’s AI Governance Clinic (AIGC) was established in 2022 by the Electronic Transactions Development Agency (ETDA) as part of a national effort to build capacity in the field of artificial intelligence (AI). Operating under the Ministry of Digital Economy and Society, ETDA aims to promote and safeguard Thailand’s transition to a digital economy and society. The AIGC was launched in collaboration with the Digital Asia Hub Thailand, a non-profit think tank associated with Harvard’s Berkman Klein Center for Internet & Society, and the TUM Think Tank at the Technical University of Munich.

Core Activities of AIGC

In its initial year, the AIGC undertook several key activities:

  • Community Building: The AIGC aims to enhance capacity in AI governance by fostering communities of knowledge and practice. This includes a fellowship program that brings together local experts from various fields who serve as liaisons to their organizations; they meet regularly to share use cases and engage in collaborative research. An International Policy Advisory Panel (IPAP) complements this network, providing mentorship and resources in areas such as health, education, and entrepreneurship.
  • AI Ethics Implementation Toolkit: To support the adoption of international best practices, such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence, AIGC members, including fellows and IPAP experts, developed actionable guidelines. One significant output was the release of the “AI Governance Guidelines for Executives”, which aims to equip managers with tools to operationalize principles of good AI governance within their organizations.
  • Lifelong Learning: The AIGC, in partnership with local universities, launched the AiX AI Executive Program, offering a specialized introduction to the Guidelines for healthcare administrators. This program included an AI readiness assessment framework and practical advice for evaluating and mitigating risks associated with AI in healthcare.
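To give a concrete sense of what an AI readiness assessment can look like in practice, the sketch below combines per-dimension ratings into a weighted overall score. Note that the dimension names and weights here are purely illustrative assumptions for the sake of the example; they are not taken from the AIGC framework itself.

```python
# Hypothetical readiness rubric: dimensions and weights are illustrative,
# not drawn from the AIGC's actual assessment framework.
READINESS_DIMENSIONS = {
    "data_governance": 0.30,
    "staff_ai_literacy": 0.25,
    "risk_management_process": 0.25,
    "vendor_oversight": 0.20,
}

def readiness_score(ratings: dict) -> float:
    """Combine per-dimension ratings (0-5) into a weighted score out of 5."""
    return sum(weight * ratings[dim]
               for dim, weight in READINESS_DIMENSIONS.items())

# Example self-assessment for a hypothetical hospital:
example = {
    "data_governance": 4,
    "staff_ai_literacy": 2,
    "risk_management_process": 3,
    "vendor_oversight": 3,
}
print(round(readiness_score(example), 2))  # → 3.05
```

A weighted rubric like this makes gaps visible at a glance (here, staff AI literacy drags the score down), which is the kind of prioritization signal such frameworks aim to provide.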

Interdisciplinary Clinical Component

In its second operational year, the AIGC aims to enhance its interdisciplinary “clinical” component. This initiative provides a unique forum for participants from both the private and public sectors to tackle real-world challenges in implementing AI governance. It facilitates access to knowledge from local and global experts and encourages the sharing of insights into applying high-level policies and guidelines across various sectors, including healthcare, finance, and governance.

Early Insights

Three key lessons have emerged from the AIGC’s first year of operation:

  • Commitment and Trust: Establishing a collaborative platform to support the translation of AI principles into practice requires significant commitment and mutual trust. The AIGC’s successful launch was facilitated by strong support from ETDA’s senior leadership and pre-existing relationships built on trust.
  • Prioritization of Governance Issues: Given the rapidly evolving landscape of AI governance, it is essential to prioritize governance issues according to local context and needs rather than attempting to address everything at once. For instance, the AIGC identified healthcare and finance as initial thematic priorities, allowing for focused resource allocation and the leveraging of existing expertise.
  • Long-term Capacity Building: Transforming international guidelines into practical applications requires substantial time and resources, particularly in developing countries. Initiatives like the AIGC can inspire coordinated efforts locally, supported by strategic commitments from international organizations such as UNESCO.

In conclusion, the establishment of the AI Governance Clinic in Thailand serves as a compelling case study in fostering AI governance capacity. Its multi-stakeholder approach not only addresses immediate governance challenges but also builds a foundation for sustainable development in the rapidly advancing field of artificial intelligence.