National AI Ethics Framework: Ensuring Safe and Responsible Innovation

The Minister of Science and Technology has signed a circular issuing the National Artificial Intelligence Ethics Framework, designed to steer the research, development, and deployment of AI systems toward outcomes that are safe, responsible, and beneficial to individuals, communities, and society at large.

Framework Overview

According to the circular, which takes effect on March 10, the framework imposes specific obligations on entities and individuals involved in AI activities. In particular, AI use must ensure safety and reliability while preventing harm to human life, health, dignity, honour, and mental well-being.

Responsibilities of Developers and Operators

Developers and operators bear responsibility for embedding safety features from the design stage, anticipating potential harmful scenarios, and adopting suitable preventive controls. They must also establish clear quality criteria for data, models, and outputs, alongside internal processes for testing, validation, and verification prior to any deployment.

Human Oversight and Security Measures

The framework mandates human oversight and intervention capabilities for all AI-driven decisions and actions, calibrated to the system’s potential impact level. Entities and individuals must set up mechanisms to gather feedback, detect errors, initiate corrections, and maintain contingency plans in cases of malfunction or misuse.

Robust security protocols must detect and mitigate threats, including unauthorized access, system hijacking, data or model poisoning, adversarial attacks, vulnerability exploitation, data breaches, or other forms of misuse, thereby ensuring the confidentiality, integrity, and availability of data, models, algorithms, and supporting infrastructure.

Human and Civil Rights Considerations

Emphasis is placed on respect for human and civil rights, with commitments to fairness, transparency, and non-discrimination throughout AI development and use. Entities and individuals must apply appropriate review processes to prevent infringements on privacy, personal data protection, freedom of choice, access to information, the right to equal treatment, and other rights enshrined in law.

Bias Detection and Mitigation

Efforts are required to detect and mitigate biases in data, models, and operations, with particular attention to effects on vulnerable groups such as children, the elderly, those with disabilities, and other disadvantaged groups. Entities and individuals must provide clear notifications about AI involvement, delivering reasonable details on system goals, scope, data sources, general operating principles, and known limitations to prevent misconceptions about capabilities.

Encouraging Social Welfare and Sustainability

Moreover, the framework encourages AI use that advances social welfare, inclusivity, and sustainable progress. Entities and individuals should evaluate energy use, computing resources, and environmental footprints across the full AI lifecycle, favoring energy-efficient technologies and low-emission processes. AI system design must align with societal ethical norms and cultural identity, while avoiding discriminatory outputs or adverse impacts on community interests.

Innovation and Corporate Social Responsibility

The framework also encourages innovation and corporate social responsibility. Responsible experimentation is endorsed, along with open research and knowledge dissemination in accordance with legal regulations and with protection of intellectual property rights.

Periodic Review and Implementation

The framework will undergo periodic review and updates every three years, or sooner in response to major changes in technology, legislation, or management practices.

This issuance reinforces the implementation of the Politburo’s Resolution No. 57-NQ/TW on breakthroughs in sci-tech, innovation, and national digital transformation. It also supports the enforcement of the Law on AI, which entered into force on March 1, 2026.
