Vietnam’s New AI Law: Balancing Innovation and State Control
On March 1, Vietnam became the first country in Southeast Asia to implement a comprehensive AI law. The legislation, known as the Law on Artificial Intelligence, draws on earlier frameworks, particularly the EU's AI Act and its risk-based approach to managing AI technologies.
Key Features of the AI Law
According to Tran Van Son, deputy director of the National Institute of Digital Technology and Digital Transformation, the law aims to ensure safety standards that surpass South Korea's while fostering development as robust as Japan's.
The law arrives during a transformative phase for Vietnam, billed as the "era of national rise", with ambitions for the country to become a high-income developed nation by 2045. Technology is seen as a critical driver of this transformation, complemented by efficient institutions that support growth while ensuring digital sovereignty, safety, and security.
Consultation and Implementation Challenges
The AI law was drafted in a remarkably short three months, with multiple rounds of consultation involving stakeholders such as AI companies, industry groups, and international experts. Critics, however, argue that the expedited timeline left insufficient opportunity for thorough analysis and feedback.
Core Principles and Liability
One of the fundamental principles of the law is that AI serves as a support tool, and that final decisions on significant societal matters must be made by humans. Minister of Science and Technology Nguyen Manh Hung emphasized, “We can’t let AI freely develop outside of a legal framework.”
In contrast to the EU's harm-based liability approach, Vietnam has adopted fault-based liability rules. These could hold companies accountable for the behavior of autonomous AI systems even where human oversight is in place.
Prohibitions and Enforcement Powers
The Vietnamese law outlines several prohibited acts, such as the exploitation of AI for unlawful purposes, the creation of deepfakes intended to deceive, and the dissemination of materials that threaten national security. This broad approach empowers local authorities with extensive enforcement capabilities, allowing for flexible interpretation and application of the law.
Experts note that while AI companies could be held liable for unintended consequences, this raises complexities, especially when users exploit products in ways contrary to their intended use.
Operational Requirements for AI Companies
AI companies operating in Vietnam must self-classify their products as high-, medium-, or low-risk and notify the Ministry of Science and Technology before deploying medium- or high-risk AI systems. These higher-risk categories will also be subject to routine audits.
The law also requires both providers and deployers to label AI-generated content, aligning with the updated Cybersecurity Law taking effect in July, which prohibits the use of AI to create unlawful deepfakes online.
Uncertainties and Future Considerations
Despite the establishment of the new AI law, uncertainties linger over its practical enforcement. The law sets forth general principles, leaving much of the implementation to subsequent directives. Industry groups have expressed concerns about the burdens posed by high-risk classifications, particularly for smaller startups.
However, proponents of the law argue that it provides substantial support for small and medium-sized enterprises (SMEs), including plans for national AI infrastructure and financial incentives through the AI Development Fund.
As Vietnam navigates this new regulatory landscape, experts suggest that companies must adapt to the evolving global trend of diverse AI regulations, preparing for compliance and the potential impacts on innovation.
In summary, while Vietnam’s AI law marks a significant step towards regulating artificial intelligence, the challenges of implementation, enforcement, and the balancing act between innovation and state control remain key considerations for the future.