National AI Ethics Framework Issued to Guide Safe, Responsible Rollout
The Minister of Science and Technology has signed a circular issuing the National Artificial Intelligence Ethics Framework, designed to steer the research, development, and deployment of AI systems toward outcomes that are safe, responsible, and beneficial to individuals, communities, and society at large.
Framework Overview
According to the circular, which takes effect on March 10, the framework imposes specific obligations on entities and individuals involved in AI activities. In particular, AI use must ensure safety and reliability while preventing harm to human life, health, dignity, honour, and mental well-being.
Responsibilities of Developers and Operators
Developers and operators bear responsibility for embedding safety features from the design stage, anticipating potential harmful scenarios, and adopting suitable preventive controls. They must also establish clear quality criteria for data, models, and outputs, alongside internal processes for testing, validation, and verification prior to any deployment.
Human Oversight and Security Measures
The framework mandates human oversight and intervention capabilities for all AI-driven decisions and actions, calibrated to the system’s potential impact level. Entities and individuals must set up mechanisms to gather feedback, detect errors, initiate corrections, and maintain contingency plans in cases of malfunction or misuse.
Robust security protocols must detect and mitigate threats, including unauthorized access, system hijacking, data or model poisoning, adversarial attacks, vulnerability exploitation, data breaches, or other forms of misuse, thereby ensuring the confidentiality, integrity, and availability of data, models, algorithms, and supporting infrastructure.
Human and Civil Rights Considerations
Emphasis is placed on respect for human and civil rights, with commitments to fairness, transparency, and non-discrimination throughout AI development and use. Entities and individuals must apply appropriate review processes to prevent infringements on privacy, personal data protection, freedom of choice, access to information, the right to equal treatment, and other rights enshrined in law.
Bias Detection and Mitigation
Efforts are required to detect and mitigate biases in data, models, and operations, with particular attention to effects on vulnerable groups such as children, the elderly, those with disabilities, and other disadvantaged groups. Entities and individuals must provide clear notifications about AI involvement, delivering reasonable details on system goals, scope, data sources, general operating principles, and known limitations to prevent misconceptions about capabilities.
Encouraging Social Welfare and Sustainability
Moreover, the framework encourages AI use that advances social welfare, inclusivity, and sustainable progress. Entities and individuals should evaluate energy use, computing resources, and environmental footprints across the full AI lifecycle, favoring energy-efficient technologies and low-emission processes. AI system design must align with societal ethical norms and cultural identity, while avoiding discriminatory outputs or adverse impacts on community interests.
Innovation and Corporate Social Responsibility
The framework also encourages innovation and corporate social responsibility. Responsible experimentation is endorsed, along with open research and knowledge dissemination in accordance with legal regulations and the protection of intellectual property rights.
Periodic Review and Implementation
The framework will undergo periodic review and updates every three years, or sooner in response to major changes in technology, legislation, or management practices.
This issuance reinforces the implementation of the Politburo’s Resolution No. 57-NQ/TW on breakthroughs in sci-tech, innovation, and national digital transformation. It also supports the enforcement of the Law on AI, which entered into force on March 1, 2026.