Legal Tools Essential for AI Regulation
Strengthening the rule of law in science and technology is crucial for fostering innovation and is a strategic task in advancing China’s path to modernization, experts say.
Recommendations for China’s 15th Five-Year Plan
The recommendations for formulating China’s 15th Five-Year Plan (2026-30) for economic and social development emphasize technological advancement, particularly in artificial intelligence. Adopted at the fourth plenary session of the 20th Communist Party of China Central Committee in October, these recommendations call for enhanced law-based governance, ethical guidelines, and security measures for science and technology. They also outline the need to strengthen AI governance by improving laws, regulations, policies, standards, and ethical norms.
“The plan positions AI as a key driver for industrial upgrading and new quality productive forces, while highlighting the need for a supportive legal environment,” said a research fellow at the Institute of Law of the Chinese Academy of Social Sciences.
Synergy Between Law and Intelligent Technology
Emphasizing the synergy between the rule of law and intelligent technology, experts describe a sound legal framework as a “safety valve” for AI’s healthy development across sectors. They warn that without such a framework, the technology’s potential benefits could turn into risks.
Government Regulations and AI Management
In late 2025, China’s Cyberspace Administration released a draft regulation on anthropomorphic AI interaction services for public consultation. The draft aims to curb AI misuse by imposing penalties on online accounts that use AI to mislead the public, especially in marketing content.
As early as 2023, the authority issued the country’s first AI management regulation, which mandates the use of legally sourced data and technology models and emphasizes that AI applications must not infringe on legitimate rights and interests. In October, China unveiled its revised Cybersecurity Law, which supports basic AI research and development while improving AI ethics rules and risk monitoring.
Legal Actions Against AI Misuse
In September, the Ministry of Public Security announced the detention of a netizen accused of using AI to fabricate and spread false information online. Authorities said the actions severely disrupted public order, and the individual’s online account was shut down.
Additionally, two individuals in Shanghai were sentenced to prison for developing an AI-powered app that produced obscene content for profit; their case is currently under review by the Shanghai No 1 Intermediate People’s Court.
Parallel Development and Governance of AI
Experts advocate for the parallel development and governance of AI, stating that creating perfect rules during the early stages of a technology is unrealistic and potentially stifling. “We should use legal tools to secure safety while allowing room for innovation,” said a judge at the Beijing Internet Court.
Defining the Regulatory Bottom Line
Experts emphasize the importance of defining the “bottom line” for technology-related regulations, addressing core issues such as national security, social order, and the protection of personal rights and interests. In areas like technical standards, industry practices, and public services, legal frameworks should set benchmarks to guide healthy innovation.
The Need for a Dedicated AI Law
Experts have also called for a dedicated AI law, arguing that current regulations are scattered and lack coordinated oversight. The development of AI involves complex issues such as liability, ethics, and rights protection, requiring a unified legal approach.
Judicial rulings in AI cases are crucial for forming governance rules, helping authorities understand technological principles and risks. Such adjudications can establish guiding precedents and principles, providing a foundation for future legislation and regulation.