China’s Evolving AI Governance: A Collaborative and Multifaceted Model

LONDON — Recent research challenges the prevailing notion that Beijing’s artificial intelligence (AI) controls are solely a product of its authoritarian regime. According to new findings, traditional Chinese values and commercial interests also play significant roles in the self-regulatory frameworks shaping AI governance.

Challenging Stereotypes

Xuechen Chen, an associate professor in politics and international relations, argues that viewing China's AI governance as a strictly top-down system fails to capture its complexity. Rather than merely following state directives, stakeholders including the private sector and society at large actively participate in establishing norms and regulatory mechanisms.

The Stakeholders

In the governance debate, three main players emerge:

  • The State: The national government oversees the overall regulatory framework.
  • The Private Sector: Companies like TikTok’s owner ByteDance and DeepSeek are pivotal in shaping AI applications and policies.
  • Society: Public opinion and cultural norms significantly influence how AI technologies are developed and regulated.

Chen emphasizes that these stakeholders collaborate to create a more nuanced approach to governance, which is not simply dictated by the state.

Market Dynamics and Innovation

According to a study by Tech Buzz China and Unique Research, 23 of the world's 100 largest AI products by annual recurring revenue originate from Chinese developers, with many aimed at overseas markets. The four largest Chinese firms — Glority, Plaud, ByteDance, and Zuoyebang — generated a combined $447 million, though this still lags well behind major U.S. players such as OpenAI and Anthropic, which report revenues of around $17 billion and $7 billion, respectively.

Regulatory Landscape

China has no comprehensive AI statute comparable to the European Union's AI Act, relying instead on a more market-led regulatory model. The Cyberspace Administration of China, the nation's internet regulator, leads government oversight of AI. Recently, the agency has intensified its campaign against "negative" content, threatening strict penalties for social media platforms that fail to comply.

Self-Regulation Among AI Developers

Despite the absence of comprehensive AI legislation, Chinese AI developers have proactively sought to self-regulate. This initiative stems from two main motivations:

  • Compliance: Companies wish to avoid conflicts with stringent government censorship laws.
  • Market Forces: Cultural norms, particularly Confucian values emphasizing family hierarchy, drive companies to regulate content proactively to maintain consumer trust.

For instance, DeepSeek, often described as China's counterpart to OpenAI's ChatGPT, declines to respond to prompts critical of the government, reflecting both compliance pressures and market-driven caution.

Protecting Minors and Contemporary Concerns

China has established one of the most rigorous systems for protecting minors in cyberspace. Recent updates to the Minors Protection Law impose restrictions on online activity for young users, limiting their screen time and mandating child-friendly modes on smartphones. These measures reflect societal concerns and cultural values that prioritize the well-being of youth.

Conclusion

While the role of non-state actors in an authoritarian context remains a subject for further research, the findings underscore that various stakeholders contribute actively to shaping the regulations and standards governing AI in China. This collaborative effort highlights a more intricate governance model than the typically assumed top-down approach.
