Establishing an AI Governance Framework for Shanghai’s Future

The rapid advancement of generative AI has brought with it a range of derivative risks, including misinformation and data-security challenges, creating a pressing need to strengthen the security framework for AI development.

Urgent Need for Comprehensive Governance

During the recent Shanghai Two Sessions, a deputy to the Shanghai Municipal People’s Congress emphasized the need to build an agile, scientific, and systematic governance system to address the negative impacts of AI technology, with the aim of balancing high-quality development with high-level security.

The misuse of AI technology has evolved: it is now industrialized, harder to detect, and cross-sectoral in its impact. Current governance structures fall short in tackling challenges related to technological confrontation, data security, and market order. Problems such as the abuse of deepfake technology and threats to data security and intellectual-property rights underscore the need for urgent action.

Creating an AI Content Security Testing Center

To mitigate these risks effectively, it is suggested that relevant departments collaborate with research institutions and leading enterprises to establish a ‘Municipal AI Content Security Testing Center’ — a technical platform for monitoring, early warning, and traceability that would shift governance from ‘passive blocking’ to ‘active defense’.

Additionally, enforcing national digital-watermarking standards is crucial. AI content-generation services would be required to embed non-removable digital watermarks in their output, so that all generated content remains traceable to its source.
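To make the traceability idea concrete, the toy sketch below hides a provider identifier in generated text using zero-width Unicode characters. This is only an illustration of embedding and extracting provenance data: it is trivially removable, unlike the robust, standards-based watermarks the proposal calls for, and the provider ID shown is hypothetical.

```python
# Toy provenance watermark: hide a provider ID in text as zero-width characters.
# Illustrative only -- NOT a non-removable watermark; real labeling standards
# use far more robust cryptographic or statistical schemes.

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / non-joiner
REVERSE = {v: k for k, v in ZERO_WIDTH.items()}

def embed_watermark(text: str, provider_id: str) -> str:
    """Append the provider ID, encoded bit-by-bit as invisible characters."""
    bits = "".join(f"{ord(c):08b}" for c in provider_id)
    return text + "".join(ZERO_WIDTH[b] for b in bits)

def extract_watermark(text: str) -> str:
    """Recover a hidden provider ID by collecting the zero-width bits."""
    bits = "".join(REVERSE[c] for c in text if c in REVERSE)
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return "".join(chars)

marked = embed_watermark("Generated article text.", "SH-AI-0042")
recovered = extract_watermark(marked)
```

The visible text is unchanged, yet any downstream platform that knows the scheme can recover the originating service’s ID — the same monitoring-and-traceability pattern the proposed testing center would operate at scale.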

Improving Legal Framework

There is a strong push to expedite legislation in emerging fields by formulating local regulations suited to Shanghai’s technological and industrial needs, while also contributing to global benchmarks for AI governance. Such legislation is seen as essential for maintaining international influence in artificial intelligence.

In light of minors’ increasing exposure to AI products, it is recommended to establish a ‘Regulation on the Protection of Minors in Artificial Intelligence Applications’ that explicitly prohibits generating or promoting content involving violence, bias, or harmful inducements.

Interdepartmental Collaboration

To combat new types of risk, such as AI-driven manipulation of financial markets, a shared ‘AI False Information and Anomalous Data Feature Database’ should be established. It would strengthen the ability to detect and respond to emerging financial risks while supporting the use of AI to identify and debunk rumors.

Furthermore, stricter security audits of companies that use third-party AI tools are needed to prevent core business data from leaking through public AI models, thereby building a robust data-security firewall.

Building a Societal Defense Against AI Risks

It is also vital to build a ‘psychological defense’ against AI risks across society. Incorporating AI safety education into citywide digital-literacy programs can raise awareness among vulnerable groups, including the elderly and children. Key messages should focus on risks such as video forgery and on practical techniques for identifying deepfakes.

By focusing on these strategies, Shanghai can establish a comprehensive AI governance system that not only addresses immediate risks but also positions the city as a leader in responsible AI development.
