Taiwan’s Emerging AI Governance: Building from the Basic Act to the Hard Questions of Data and Copyright
With the AI Basic Act entering into force earlier this year, Taiwan has crossed the threshold from theoretical debate over the regulation of artificial intelligence to the much harder reality of operational governance. The statute provides a polished constitutional logic to guide Taiwan’s AI governance, but it lands in an ecosystem teeming with unresolved conflicts—from the murky legality of using copyrighted materials as training data to the impending enforcement shock of a revamped Personal Data Protection Act (PDPA).
Taiwan’s emerging approach is best understood as institution-building under fire. Unlike jurisdictions that implemented meaningful regulations before the AI boom, Taiwan is retrofitting its governance architecture in real time. This involves a high-stakes experiment in parallel processing: relying on private ordering and contract law to hold the line on copyright while the state rushes to build centralized infrastructure to assert digital sovereignty—its practical capacity to govern data, core digital systems, and cross-border flows without relying on foreign jurisdictions or private gatekeepers.
Consequently, the defining characteristic of this new regime is not the harmony of its principles but the friction of its implementation. With new liability risks looming over the private sector as regulators scramble to define their boundaries, the success of Taiwan’s AI governance strategy will depend less on the text of the Basic Act and more on the institutional plumbing being laid beneath it. The definitive test now lies in the capacity of a fragmented state apparatus to translate these statutory words into binding reality.
A Framework Statute Arrives—Without Ending the Debate
On December 23, 2025, Taiwan’s Legislative Yuan passed the AI Basic Act in its third reading. Promulgated on January 14, 2026, and in force from that date, the act is designed to be a governance spine rather than an exhaustive rulebook. To balance AI innovation with trustworthy deployment, the Basic Act codifies seven key governance principles: sustainability and well-being, human autonomy, privacy protection and data governance, cybersecurity and safety, transparency and explainability, fairness and nondiscrimination, and accountability. These principles aim to ensure that AI development remains compatible with fundamental rights and democratic norms.
Importantly, Taiwan’s Basic Act arrives at a moment when the AI policy system is still incomplete. Taiwan is not done legislating. The purpose of the AI Basic Act is to create a statutory platform that delegates much of the detail to subsequent institutional design, risk frameworks, and sectoral practice.
This is evident in two features of the act’s design and Taiwan’s AI governance landscape. First, the Basic Act consolidates a national strategy to promote “responsible AI” while leaving room for iterative, executive-led specification. Taiwan’s National Science and Technology Council (NSTC) previously framed the draft Basic Act as a vehicle for promoting innovation while addressing human rights and risk, with an explicit commitment to risk management and governance principles. However, the Basic Act’s practical force will depend less on its abstract principles than on the subordinate rules and enforcement routines that follow.
Second, institutional coordination poses a persistent challenge. A particular concern is that, even with formal allocations on paper, the practical division of responsibilities often proves elusive. The NSTC serves as the central competent authority and provides staff support for the National AI Strategy Special Committee, which steers cross-ministerial AI strategy and policy coordination at the cabinet level. Taiwan’s Ministry of Digital Affairs sets the risk-classification framework and supplies evaluation tools, while the Executive Yuan coordinates regulatory adaptation across sectoral authorities. Nevertheless, which agency is empowered to finalize high-risk designations, dictate procurement standards, and police cross-sector boundaries has not yet been definitively established.
This lack of clarity on institutional responsibilities matters because Taiwan’s AI ecosystem is not governed by one regulator; it spans telecoms and digital platforms, health and biomedicine, finance, education, public administration, and defense-adjacent supply chains. The Basic Act can set a common vocabulary, but it cannot, by itself, resolve interagency boundary problems. Instead, the law provides the high-level principles of AI governance while leaving implementation details to future legislation and sectoral frameworks.
Data Protection Is Becoming the Enforcement Backbone
If the AI Basic Act provides a constitutional logic for AI governance, Taiwan’s data-protection regime is becoming its likely enforcement backbone, especially for real-world harms. Taiwan’s Personal Data Protection Act originated in the 1990s and was reworked into an economy-wide privacy statute, in force since 2012, that reaches well beyond the public sector. It applies to both government and private entities that collect, process, or use personal data, and it establishes baseline duties and data-subject rights that can be operationalized through administrative enforcement.
On March 27, 2025, the Executive Yuan submitted an amendment package to establish the Personal Data Protection Commission (PDPC) and amend the PDPA. The package was explicitly presented as necessary to build “data governance in the era of comprehensive AI application.” The Legislative Yuan passed the amendments on October 17, 2025, equipping the future PDPC with an enforcement toolkit (e.g., breach notification, baseline security standards, cross-border transfer controls, and inspection powers). Full implementation now hinges on the separate passage of the PDPC Organic Act and an Executive Yuan order setting the enforcement date after administrative preparations.
These developments have two important implications for the enforcement of Taiwan’s data governance (and in practice its AI governance). First, Taiwan is shifting from fragmented supervision to a more centralized model. The Executive Yuan’s design is explicitly aimed at creating an independent supervisory mechanism with enforcement authority rather than leaving oversight dispersed across sectoral ministries. Fragmented ministry-led oversight produces inconsistent standards and weak, uneven enforcement. A central independent supervisor concentrates authority and capacity, enabling uniform rules and credible penalties across sectors.
The PDPC Preparatory Office has also described a staged transition in which the new authority will prioritize government agencies and sectors lacking a clear competent authority and then gradually assume wider supervisory functions as the system matures.
Second, the “AI governance” challenge is being reframed as a “data governance” challenge in order to make use of the PDPA’s enforcement toolkit. In a concrete dispute, the PDPA offers more operational enforcement levers than the Basic Act. This is most evident in data breaches and model inversion and re-identification risks, as well as in cross-border transfers, public-sector surveillance anxieties, and vendor accountability. Because data is the substrate of most AI development and deployment, harms often materialize through data practices, even when AI and privacy fall under distinct legal regimes. The PDPA is therefore likely to become the primary vehicle for enforcing the principles of the Basic Act. Taiwan’s challenge in moving beyond the Basic Act is to ensure that the PDPA’s coming enforcement ecosystem (e.g., PDPC authority, incident reporting routines, audit capacities, and cross-sector compliance) can handle AI-specific risk patterns at scale.
Sovereign Data as Governance Technology, Not Just Industrial Policy
In addition to enforcement through data-protection frameworks, Taiwan is also building public-interest data infrastructure as a governance lever for AI development. On December 24, 2025, the Ministry of Digital Affairs launched a beta version of the Taiwan Sovereign AI Training Corpus—a repository of high-quality Traditional Chinese datasets intended to support model training aligned with Taiwan’s linguistic and cultural context. It stated that more than 200 government agencies have contributed over 2,000 datasets and more than 600 million tokens, to which both public- and private-sector users may apply for access.
This initiative has at least three governance functions. First, it reduces reliance on opaque training sources. A government-backed corpus can reduce dependence on scraped or uncertainly licensed datasets, lowering legal and reputational risk for domestic model development. Second, it creates an accountability baseline for public-sector AI. If government agencies procure or deploy language models, a “known” corpus can support documentation, traceability, and standardized evaluation. Third, it embeds values through infrastructure. Taiwan is effectively treating data infrastructure as a constitutional layer of governance: if privacy and data governance is a principle of AI development, then corpus design (licensing terms, dataset provenance, curation standards, and access controls) becomes a values-implementation mechanism.
However, this state-led infrastructure invites critical scrutiny. First, there is the challenge of defining “sovereign” data without drifting into data localization for its own sake. In practice, the key question is whether sovereignty is understood as lawful control and accountable use of data or is reduced to restrictive rules that limit cross-border access without clear gains in security or capability. Second, care must be taken to ensure that corpus governance does not inadvertently privilege government-centric narratives. Without deliberate safeguards, the structure of contribution and access could tilt the corpus toward administrative language and official viewpoints, narrowing the representational range of the training data.
Perhaps most critically, tension exists between sovereign intent and market performance. The ultimate test is whether a model trained on this corpus can compete not just abroad but at home. If there is a significant performance gap, even domestic stakeholders may continue to rely on advanced foreign models, leaving the sovereign AI ecosystem as a symbolic project with little practical adoption.
Copyright as the Policy Front Line for Generative AI
Taiwan’s copyright system is becoming a decisive arena for AI governance because generative AI’s economic model depends on large-scale ingestion of expressive works. In 2025, the Taiwan Intellectual Property Office (TIPO) intensified its public-facing analysis and interpretive work on AI-related copyright issues, including the legality of training and the downstream risk of infringing outputs. TIPO has published explanatory materials explicitly framing “AI training data collection” as a live copyright controversy and discussing the possibility and limits of fair use in this context.
More concretely, its public interpretations warn that use of others’ works at the training stage can create infringement risk and that users of generative AI outputs may still face liability if outputs are substantially similar to protected works. The risk is especially acute where the use is commercial and falls outside the statutory fair-use provisions.
TIPO has addressed several specific AI training scenarios directly. Training on protected works (including historical photographs) is not automatically lawful simply because the output is a set of numerical values that make up a portion of the model. TIPO indicates that in practice training often requires copying the material during collection, preprocessing, and optimization, which may constitute reproduction. Unless fair use or another statutory exception applies, permission or a license is needed; otherwise, infringement risks include civil and criminal liability.
A controversy that suggests the direction of Taiwan’s copyright system as it relates to AI surfaced around the “fineweb-zhtw” dataset—a large-scale collection of Traditional Chinese data for training large language models that a doctoral student from National Taiwan University made freely available via a Facebook group. It was later reported that the dataset contained approximately 140,000 CNA news articles spanning 2011–21 without authorization, and CNA pursued a criminal complaint. The dispute ended in a settlement announced on July 11, 2025, after the dataset was taken down and the student acknowledged CNA’s copyright position.
From a governance perspective, Taiwan’s copyright posture reflects two realities. First, under Taiwan’s current copyright laws, there is no bespoke text-and-data-mining exception tailored to generative AI. Taiwan’s law is being stretched through existing doctrines and administrative interpretation rather than rewritten around a dedicated AI training exception. That makes near-term governance feasible but leaves questions about scalable licensing models and cross-border platform compliance.
Second, uncertainty shifts onto firms and institutions. In practice, the absence of clear, definitive training rules compels the industry to rely on mechanisms of private ordering for governance. Stakeholders attempt to manage compliance through a combination of provenance tracking, rights clearance, filtering tools, and contractual risk allocation between providers and deployers. However, these measures operate under a precarious shadow, as rights holders retain the leverage to escalate disputes into civil or even criminal proceedings at their discretion.
From Law to Practice: Three Implementation Tests for Taiwan’s AI Governance
Taiwan’s AI governance will be judged less by the existence of the Basic Act than by whether implementation becomes coherent across ministries and legal domains, including data governance, digital sovereignty, and copyright. Three near-term choices are likely to define this trajectory.
First, Taiwan needs clarity on how high-risk AI is governed in practice. The immediate test lies not in the abstract allocation of roles but in how risk classification translates into responsibility and control. Under the Basic Act, the Ministry of Digital Affairs sets the risk-classification framework and provides evaluation and verification tools, while sectoral authorities apply that framework to develop risk-based management rules. The law further requires the government to define responsibility and attribution for high-risk AI applications and to establish mechanisms for remedy, compensation, or insurance. The unresolved question is how these distributed functions will operate under stress, particularly when risk assessments are contested, sectoral rules diverge, or harms cut across regulatory boundaries. Without a clear escalation path and decision sequence, risk-based governance may remain formally specified but operationally fragmented.
Second, Taiwan needs to translate PDPA reform into AI-capable enforcement. The PDPC transition creates an opportunity to build supervisory capacity suited to AI-related compliance. This includes oversight of data pipelines, governance of cross-border data flows, and investigation of AI-enabled profiling or discriminatory outcomes. The Executive Yuan has signaled its policy intent by framing the initiative as data governance for the era of comprehensive AI application. Yet the practical challenge lies in the institutional capacity and independence required to execute this vision.
Third, Taiwan needs to move from copyright ambiguity toward a workable market settlement. TIPO’s interpretive work shows that copyright risk is being treated as a critical governance issue rather than ignored. The near-term priority is to reduce uncertainty through mechanisms that can scale. This includes standard licensing templates, credible collective licensing options, clearer guidance on fair use at the training stage, and defined transparency expectations for model providers operating in Taiwan.