Taiwan’s Forward-Thinking AI Regulations and Strategies

Taiwan’s AI Strategy and Regulatory Framework

The government of Taiwan has adopted a proactive approach to support the AI industry, promoting industrial development through policy measures and corresponding legal frameworks. In the latter half of 2024, the National Science and Technology Council (NSTC) introduced the draft AI Basic Act, which was submitted to the Executive Yuan (Taiwan’s cabinet) for review in early 2025.

In parallel, Taiwan has amended laws to address AI-driven fraud, deepfake activities, and election manipulation. The government also plans to enact new legislation on data governance and open data to address the data-driven characteristics of AI.

AI Government Policies

Taiwan’s government actively supports the development of specialized AI chips, AI hardware, and large language models to promote the comprehensive growth of AI research and applications. Meanwhile, sectors such as manufacturing, finance, healthcare, agriculture, and retail are encouraged to integrate AI for digital transformation.

Using the TAIWANIA 2 supercomputer, the National Applied Research Laboratories launched TAIDE, a localized large language model tailored to Taiwanese data. TAIDE draws on public data (including judgments, Constitutional Court interpretations, and other court decisions from Taiwan’s Judicial Yuan) to refine Traditional Chinese language models, and it supports local languages such as Taiwanese and Hakka, with the aim of bringing AI into the agriculture, education, and automation industries.
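
For readers who want a concrete sense of how a localized model such as TAIDE might be used in practice, the sketch below shows one way to load and query a Traditional Chinese checkpoint with the open-source Hugging Face transformers library; the model identifier and prompt are illustrative assumptions rather than official TAIDE artifacts.

```python
# A minimal, illustrative sketch (not an official TAIDE example): loading and
# querying a Traditional Chinese instruction-tuned checkpoint with the
# Hugging Face "transformers" library. The model identifier and prompt are
# assumptions for illustration; consult the TAIDE project for the actual
# published checkpoints and their licence terms.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "taide/TAIDE-LX-7B-Chat"  # assumed identifier, for illustration only

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Ask the model to summarize a (public-domain) court judgment in Traditional Chinese.
prompt = "請用繁體中文摘要以下判決要旨：……"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```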

Taiwan is also establishing a dedicated AI evaluation center, which will develop certification mechanisms and guidelines for AI products, as well as systems to make AI applications safer and more interpretable.

Legal Responses to AI Challenges

Even as AI technology advances rapidly, legal challenges remain. The Legislative Yuan has prioritized legislation addressing cases in which AI or deepfake technology is used for fraud or election manipulation. Meanwhile, the Ministry of Digital Affairs (MODA) is drafting and revising legal frameworks for data governance. The NSTC’s draft AI Basic Act is intended to lay the groundwork for interagency collaboration and unified regulation of AI. These efforts fall into three core areas:

  • Recognizing AI Risks: Relevant amendments have been made to the Criminal Code, the Fraud Crime Hazard Prevention Act, and other laws to establish criminal liability for disseminating false information or committing crimes using deepfake technology. Online advertising platforms are required to disclose instances where such technology is employed.
  • New Legislation on Data Innovation: MODA is drafting the Act for the Promotion of Data Innovation and Utilisation, aimed at increasing the accessibility of open data and establishing cross-industry data-sharing mechanisms.
  • Mitigating Data Use Risks: The use of data is subject to the Copyright Act and the Personal Data Protection Act. The Intellectual Property Office has ruled that using AI technology to generate output from others’ works without the copyright holder’s consent may constitute reproduction of those works.

In parallel, draft amendments to the Personal Data Protection Act are expected to substantially update its provisions, enhancing personal data protection in an era where “data is the new oil” of the digital economy.

Draft Artificial Intelligence Basic Act

To ensure that AI technology aligns with human rights, privacy, industrial competitiveness, and the public interest, the NSTC introduced the draft AI Basic Act in 2024, which is expected to be enacted in 2025. Key elements of the draft include:

  • Definition and Scope of AI: The definition of AI is crucial as it determines the scope of regulation. The draft ensures broad coverage of AI techniques, from basic knowledge-based algorithms to sophisticated neural networks.
  • Guiding Principles of AI: The draft sets out guiding principles for AI R&D, including sustainability, human autonomy, privacy, data governance, security, transparency, explainability, fairness, and accountability.
  • Risk-Based Management: MODA will classify AI risks in line with international standards, promoting AI innovation within safety parameters.
  • Data Privacy and Openness: Data openness and governance must be mandated to ensure the availability of adequate data for AI models while protecting personal data.
  • Adaptive Legislation and Cross-Agency Collaboration: Each ministry must review its regulatory framework to ensure alignment with the rapid technological evolution of AI.

Conclusion

With its advanced ICT and semiconductor industries, Taiwan plays a critical role in the global AI landscape, and its government’s policymaking extends from nurturing AI talent to refining AI laws. By diligently building a comprehensive policy and legal framework around AI, Taiwan demonstrates its commitment as it advances into the AI era.
