Regulating AI in APAC MedTech: Current Trends and Future Directions

Regulatory Landscape for AI-enabled MedTech in APAC

The regulation of artificial intelligence (AI) in Asia Pacific remains largely undeveloped: AI is governed primarily by existing frameworks designed for other technologies and products. This is beginning to change as authorities move to address the unique challenges the technology poses.

AI techniques such as machine learning (ML), deep learning, and natural language processing are transformative and increasingly prevalent. Their deployment, however, raises significant challenges, including bias and discrimination, fake content and misinformation, and privacy, security, and ethical concerns. As a result, regulators are beginning to take action.

Asia Pacific Overview

Currently, there is no comprehensive law governing AI across Asia Pacific, but this is expected to change soon. Here’s an overview of the regulatory approach in various countries:

  • China: The National People’s Congress has urged the State Council to draft a comprehensive AI statute; for now, AI is governed mainly by administrative regulations that target specific applications, such as generative AI services.
  • Japan: AI regulation is sector-specific, particularly in healthcare and life sciences, where laws such as the 2023 Next-Generation Medical Infrastructure Act facilitate the use of AI in research and development of diagnostic tools.
  • Australia: The government intends to adopt a principles-based approach to define “high-risk AI,” similar to the European Union’s framework.
  • Singapore: Established, largely voluntary frameworks guide AI deployment and promote responsible use, including ethical principles (the Model AI Governance Framework) and standardized testing (the AI Verify toolkit). The National Artificial Intelligence Strategy 2.0 demonstrates Singapore’s commitment to a trusted AI ecosystem.
  • South Korea: The Digital Medical Products Act (January 2025) provides a regulatory framework for digital medical devices, while the Basic AI Act (December 2024) will classify AI based on risk starting January 2026.

Europe’s Approach

The European Union’s AI Act entered into force in 2024, with its obligations applying in phases over the following years. It establishes harmonized rules for AI, categorizing systems by risk level from unacceptable to minimal. AI-enabled medical devices will likely fall under the high-risk classification, which entails specific obligations that overlap with existing medical device regulations.

For high-risk systems, the Act’s obligations center on risk management, technical documentation and record-keeping, transparency, human oversight, and cybersecurity, the core components of safe AI deployment in the medical field.
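To make the tiering concrete, the following minimal Python sketch shows how a compliance team might triage a system against the Act’s four tiers. It is an illustration only: the attribute names (prohibited_practice, regulated_product_safety_component, annex_iii_use_case, interacts_with_humans) are hypothetical placeholders rather than terms defined in the Act, and real classification requires legal analysis of the Act’s articles and annexes.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"   # banned practices, e.g. social scoring
        HIGH = "high-risk"            # e.g. safety component of a medical device
        LIMITED = "limited-risk"      # transparency duties, e.g. chatbots
        MINIMAL = "minimal-risk"      # no new obligations

    def classify(system: dict) -> RiskTier:
        """Rough, illustrative triage of an AI system against the Act's tiers."""
        if system.get("prohibited_practice"):
            return RiskTier.UNACCEPTABLE
        # Safety components of products under EU harmonization law (such as
        # medical devices needing notified-body review) and the use cases
        # listed in Annex III are treated as high-risk.
        if system.get("regulated_product_safety_component") or system.get("annex_iii_use_case"):
            return RiskTier.HIGH
        if system.get("interacts_with_humans"):   # disclosure duties apply
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    # An AI-enabled diagnostic device would typically land in the high-risk tier:
    print(classify({"regulated_product_safety_component": True}))  # RiskTier.HIGH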

Regulatory Landscape in the U.S.

In the United States, there is currently no comprehensive AI-specific legislation. The Food and Drug Administration (FDA) regulates AI-enabled medical products under its traditional framework as “Software as a Medical Device” (SaMD). This presents challenges: the existing regulatory structure assumes a product that is essentially fixed at the time of authorization, whereas adaptive AI can continue to change as it learns from new data.

The FDA has adopted a flexible approach to regulation, focusing on good machine learning practices (GMLPs) to facilitate innovation while ensuring safety and effectiveness. Key considerations, sketched in code after this list, include:

  • Documenting changes to AI products
  • Ensuring transparency
  • Monitoring real-world performance data
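These principles are process expectations rather than prescribed code, but a minimal, hypothetical sketch may help make them concrete. Nothing below comes from FDA guidance: the class, file format, and metric names are assumptions chosen for illustration. The idea is simply an append-only audit trail that covers both documented model changes and real-world performance data, so a device’s behavior over time can be reconstructed.

    import datetime
    import json

    class ModelAuditLog:
        """Hypothetical GMLP-style record-keeping: an append-only log of
        model changes and field performance, one JSON record per line."""

        def __init__(self, path: str):
            self.path = path

        def _append(self, record: dict) -> None:
            record["timestamp"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
            with open(self.path, "a") as f:
                f.write(json.dumps(record) + "\n")

        def log_model_change(self, version: str, description: str) -> None:
            # Documenting changes: what changed, when, and why.
            self._append({"event": "model_change", "version": version,
                          "description": description})

        def log_performance(self, version: str, sensitivity: float,
                            specificity: float, n_cases: int) -> None:
            # Monitoring real-world performance: periodic metrics per version.
            self._append({"event": "performance", "version": version,
                          "sensitivity": sensitivity, "specificity": specificity,
                          "n_cases": n_cases})

    # Example: record a retraining event and a month of field metrics.
    log = ModelAuditLog("audit_log.jsonl")
    log.log_model_change("2.1.0", "Retrained on additional chest X-ray data")
    log.log_performance("2.1.0", sensitivity=0.94, specificity=0.91, n_cases=1200)

An append-only plain-text log is used here only because it is easy to inspect; a real quality management system would impose far stricter controls on record integrity and access.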

The FDA collaborates with Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) to develop internationally harmonized GMLPs, reaffirming their commitment to balancing innovation with patient safety.

This ongoing evolution in regulatory frameworks across Asia Pacific, Europe, and North America highlights the pressing need for comprehensive legislation that addresses the unique challenges posed by AI in the MedTech sector.
