Vietnam’s First Standalone AI Law: An Overview of Key Provisions and Future Implications
Vietnam is accelerating its push to emerge as a regional technology powerhouse, with strategic investments in artificial intelligence and semiconductors underscoring this ambition. Government officials have highlighted these technologies as twin engines for innovation and global competitiveness, aiming to nurture talent and build intelligent manufacturing centers.
To realize this goal, Vietnam first enacted a principles-based framework under the Law on Digital Technology Industry in mid-2025, then, less than three months later, fast-tracked a draft Law on Artificial Intelligence to replace it.
Enactment and Effective Date
Enacted on December 10, 2025, and effective from March 1, 2026, Vietnam’s first standalone AI law positions the country among early adopters in the region, emphasizing a pro-innovation stance that balances growth with safeguards. As a clear manifestation of the Brussels effect, the law focuses on risk-based management with risk classification and concepts similar to those under the EU’s AI Act.
Fundamental Principles
The AI Law’s foundational principles, outlined in Article 4, prioritize human-centered AI that safeguards human rights, privacy, national interests, and security while ensuring compliance with Vietnam’s Constitution and laws. Key tenets include:
- Maintaining human control over AI decisions
- Promoting fairness, transparency, non-bias, and accountability
- Aligning with ethical standards and the country’s cultural values
- Encouraging green, inclusive, and sustainable AI development, focusing on energy efficiency and environmental protection
Prohibited Acts
Article 7 of the AI Law establishes a list of unlawful AI-related activities, prohibiting:
- Exploiting AI for unlawful purposes
- Infringing on rights
- Simulating real people or events to deceive or manipulate perceptions
- Exploiting vulnerable groups
- Disseminating harmful forged material that threatens national security or public order
- Unlawful data processing
These prohibitions serve as a baseline standard for all activities, regardless of the stage of the AI life cycle or the risk classification of the system.
Risk-Based Classification and Governance
AI systems are classified into:
- High-risk — potential significant harm to life, health, rights, or national security
- Medium-risk — risk of user confusion from undisclosed AI interactions or generated content
- Low-risk — all others
Classification criteria in Article 9 include impact on rights and safety, user scope, influence scale, and application fields such as essential sectors like health care. Providers must self-classify their AI systems before putting them into use and notify the Ministry of Science and Technology for medium- or high-risk systems.
Role-Driven Accountability
Article 3 defines roles across the AI supply chain:
- Developers — design and training
- Providers — market placement
- Deployers — professional use
- Users — direct interaction
- Affected persons — impacted parties
This chain of responsibility parallels the EU AI Act but carves out research and development roles, exempting nonmarket activities in order to incentivize innovation.
AI Incident Response
All stakeholders share responsibility for maintaining system safety, security, and reliability, including proactive detection and remediation of potential harms. In the event of a serious incident, defined as an event causing or risking significant damage, developers and providers must implement fixes, suspend operations, or withdraw the system while notifying state authorities.
Transparency Responsibilities
Both providers and deployers must uphold transparency obligations throughout the life cycle of AI systems. Providers must enable clear recognition of the artificial nature of AI systems designed for direct human interaction, while deployers must provide notifications when content created or edited by AI risks misleading users.
Management of High-Risk AI Systems
High-risk systems require rigorous compliance measures, including risk assessments and human oversight. Providers can opt for self-assessment or hire registered organizations for conformity certification, potentially easing administrative burdens.
Incentive Policies
To spur innovation, the law offers support measures such as the National AI Development Fund for research grants, regulatory sandboxes, and AI clusters in high-tech parks with tax breaks. These incentives underscore Vietnam’s objective of attracting investment amid global AI competition.
Grace Periods for Pre-existing AI Systems
Transitional provisions grant grace periods for AI systems placed on the market before the law’s effective date. Existing systems in health care, education, and finance have an 18-month grace period, while others have 12 months. Systems may continue operating during this time unless deemed to pose serious risks.
Expanded Legal Landscape on the Horizon
In the coming months, local lawmakers will release draft versions of several key implementing documents for public comments, clarifying risk criteria, procedures, and penalties. Stakeholders should closely track the development of these regulations to prepare for upcoming requirements.