China’s New Human-like AI Regulations: Key Insights

On December 27, 2025, the Cyberspace Administration of China (CAC) released draft regulations designed to govern the burgeoning field of human-like interactive AI services. The proposed rules aim to strike a balance between fostering innovation and ensuring that these services operate safely and align with national values within mainland China.

Key Requirements for Service Providers

Among other things, providers must:

  • Uphold core socialist values
  • Implement robust user protections
  • Meet government reporting standards
  • Ensure high-quality training data

Upholding National Values and Prohibited Content

The proposed regulations mandate that all AI-generated content must conform to China’s core socialist values. While encouraging positive applications like cultural dissemination and elderly companionship, the rules establish a clear set of prohibitions:

  • Endangering national security
  • Spreading rumors
  • Promoting illegal activities, such as obscenity or gambling
  • Defaming others
  • Harming users’ physical or mental well-being through manipulation or deception

Comprehensive User Protection Measures

For all users, providers must:

  • Identify signs of extreme emotional distress or addiction and intervene with measures such as offering comfort, suggesting professional help, or allowing for a manual takeover of the conversation.
  • Clearly label AI-generated content and remind users they are interacting with AI, not a human.
  • Implement automatic break reminders after 2 hours of continuous use.
  • Provide an easy way for users to exit conversations, particularly in emotional companionship scenarios.
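The break-reminder requirement above amounts to a simple elapsed-time check against a two-hour threshold. The sketch below is illustrative only, assuming a hypothetical `SessionTimer` helper; the draft rules do not prescribe any particular implementation.

```python
from datetime import datetime, timedelta

# Continuous-use threshold stated in the draft regulations
BREAK_INTERVAL = timedelta(hours=2)

class SessionTimer:
    """Tracks continuous use and flags when a break reminder is due."""

    def __init__(self, started_at: datetime):
        self.started_at = started_at
        self.last_reminder = started_at

    def reminder_due(self, now: datetime) -> bool:
        """True once two hours have elapsed since the last reminder."""
        return now - self.last_reminder >= BREAK_INTERVAL

    def mark_reminded(self, now: datetime) -> None:
        """Reset the clock after a reminder has been shown."""
        self.last_reminder = now

# Example: no reminder at one hour, reminder due at two hours
start = datetime(2026, 1, 1, 9, 0)
timer = SessionTimer(start)
print(timer.reminder_due(start + timedelta(hours=1)))  # False
print(timer.reminder_due(start + timedelta(hours=2)))  # True
```

In practice a provider would also need to define what counts as "continuous use" (e.g. whether short idle gaps reset the clock), which the draft leaves open.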

Protections for Minors

A dedicated minor mode is required, which should be activated automatically when a minor is detected. This mode includes:

  • Periodic reality reminders
  • Usage time limits
  • Guardian control functions

Providers must obtain explicit consent from a guardian before offering emotional companionship services to a minor.

Protections for the Elderly

For elderly users, the proposed regulations emphasize safety. If a service detects a potential threat to an elderly user’s life, health, or property, it must:

  • Notify their designated emergency contact
  • Provide channels for assistance

Mandatory Security Assessments and Reporting

Providers must conduct security assessments and submit reports to provincial-level internet information departments under several circumstances, including:

  • Launching a new human-like interactive service
  • Implementing major technological changes
  • Reaching significant user milestones, such as 1 million registered users or 100,000 monthly active users

Assessments are also required if potential risks to national security, public interest, or individual rights are identified.
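The reporting triggers above can be expressed as a single predicate over the numeric thresholds and event flags. This is a minimal sketch under the assumption that the triggers are disjunctive (any one suffices); the function and parameter names are hypothetical, not terms from the draft.

```python
# Thresholds named in the draft regulations
REGISTERED_USER_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_THRESHOLD = 100_000

def assessment_required(registered_users: int,
                        monthly_active_users: int,
                        new_service: bool = False,
                        major_tech_change: bool = False,
                        risk_identified: bool = False) -> bool:
    """Return True if any reporting trigger from the draft rules applies."""
    return (new_service
            or major_tech_change
            or risk_identified
            or registered_users >= REGISTERED_USER_THRESHOLD
            or monthly_active_users >= MONTHLY_ACTIVE_THRESHOLD)

# Example: crossing the MAU milestone alone triggers a report
print(assessment_required(500_000, 100_000))  # True
```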

Ensuring High-Quality Training Data

The proposed regulations place a strong emphasis on the quality and legality of data used to train AI models. Providers must ensure that training datasets:

  • Align with core socialist values
  • Are legally sourced and traceable
  • Consist of clean, labeled data
  • Include negative samples to prevent harmful outputs

Providers must also conduct routine inspections of their datasets to maintain compliance.
