Vietnam’s New AI Law: Balancing Innovation and State Control

On March 1, Vietnam became the first country in Southeast Asia to implement a comprehensive AI law. The legislation, known as the Law on Artificial Intelligence, draws inspiration from existing frameworks, particularly the EU’s AI Act, which emphasizes risk-based regulation of AI technologies.

Key Features of the AI Law

According to Tran Van Son, the deputy director of the National Institute of Digital Technology and Digital Transformation, the law aims to ensure a safety level surpassing South Korea’s while fostering development as robust as Japan’s.

The law arrives during a transformative phase in Vietnam, officially described as the “era of national rise,” with ambitions for the country to become a high-income developed nation by 2045. Technology is seen as a critical driver of this transformation, supported by efficient institutions that promote growth while safeguarding digital sovereignty, safety, and security.

Consultation and Implementation Challenges

The AI law was drafted in a remarkably short span of three months, with multiple rounds of consultation involving AI companies, industry groups, and international experts. Critics, however, argue that the expedited timeline left insufficient opportunity for thorough analysis and feedback.

Core Principles and Liability

One of the fundamental principles of the law is that AI serves as a support tool, and that final decisions on significant societal matters must be made by humans. Minister of Science and Technology Nguyen Manh Hung emphasized, “We can’t let AI freely develop outside of a legal framework.”

In contrast to the EU’s harm-based liability approach, Vietnam has adopted fault-based liability rules. These could hold companies accountable for the behavior of autonomous AI systems even when human oversight is in place.

Prohibitions and Enforcement Powers

The Vietnamese law outlines several prohibited acts, such as the exploitation of AI for unlawful purposes, the creation of deepfakes intended to deceive, and the dissemination of materials that threaten national security. This broad approach empowers local authorities with extensive enforcement capabilities, allowing for flexible interpretation and application of the law.

Experts note that while AI companies could be held liable for unintended consequences, assigning fault becomes complex when users exploit products in ways contrary to their intended use.

Operational Requirements for AI Companies

AI companies operating in Vietnam must self-classify their products as high, medium, or low risk and notify the Ministry of Science and Technology before deploying medium- or high-risk AI systems. Routine audits will also be conducted for these higher-risk categories.

Another critical requirement mandates that both providers and deployers label AI-generated content, which aligns with the updated Cybersecurity Law set to take effect in July. This law prohibits the use of AI for creating unlawful deepfakes online.

Uncertainties and Future Considerations

Despite the establishment of the new AI law, uncertainties linger regarding its practical enforcement. The law sets forth general principles, with much reliance on subsequent directives for implementation. Industry groups have expressed concerns about the burdens posed by high-risk classifications, particularly for smaller startups.

However, proponents of the law argue that it provides substantial support for small and medium-sized enterprises (SMEs), including plans for national AI infrastructure and financial incentives through the AI Development Fund.

As Vietnam navigates this new regulatory landscape, experts advise companies to prepare for compliance with an increasingly diverse patchwork of global AI regulations and to weigh the potential impacts on innovation.

In summary, while Vietnam’s AI law marks a significant step towards regulating artificial intelligence, the challenges of implementation, enforcement, and the balancing act between innovation and state control remain key considerations for the future.
