What China’s Emotional AI Rules Get Right About Chatbot Design
In December 2025, China’s Cyberspace Administration released draft regulations targeting what it calls “human-like interactive AI services” – systems that simulate personality and engage users emotionally through text, images, or voice. The rules require mandatory reminders after two hours of continuous use, immediate human intervention when suicide is mentioned, and strict limits on using emotional interaction data for training. Public comment on the draft closes in late January 2026.
The draft rules follow a series of high-profile cases in the United States that have exposed the real-world risks of chatbots, particularly for adolescents. In January 2026, Character.AI and Google settled multiple lawsuits from families whose teenagers died by suicide after extended chatbot interactions. The most prominent involved 14-year-old Sewell Setzer, who formed an obsessive attachment to a Character.AI bot before his death; the case revealed that the company had no systematic way to detect when simulated intimacy crossed into psychological harm. Prior to the settlement, in October 2025, Character.AI had banned minors entirely from its platform.
What distinguishes China’s response is not its recognition of those risks, but the regulatory tools it is willing to deploy. That does not make China’s draft a model the US should copy wholesale: the regulations embed content controls tied to “socialist core values” and national security that would likely be unconstitutional in the US. But the technical mechanisms the CAC proposes (circuit breakers for extended use, mandatory crisis escalation, and data quarantine for emotional logs) address problems US regulators haven’t seriously grappled with.
Design Challenges in Conversational AI
Work on conversational AI models makes the design challenge obvious. These systems are optimized for engagement. User retention is the metric. When someone talks to a chatbot for three hours straight at 2 AM, that looks like success on the dashboard. And when a lonely teenager forms an attachment to an AI character that “remembers” personal details and responds with simulated empathy, that is also the product working as intended.
The legal and ethical problem emerges when that teenager is in crisis. Current US platforms handle this with pattern matching: if a user types something flagged as self-harm content, the system returns a canned response with the 988 hotline link. This approach, sketched in the code after the list below, has two failure modes:
- First, someone determined to avoid the filter can phrase distress in ways that evade keyword detection.
- Second, the AI’s interaction style up to that point—validating, agreeable, emotionally affirming—runs counter to clinical crisis intervention principles, which require therapists to take a directive role.
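To make that concrete, here is a minimal sketch of the keyword-matching approach. The phrase list and canned text are illustrative assumptions, not any platform’s actual filter; the last line shows the first failure mode, paraphrased distress slipping past literal matching.

```python
# Minimal sketch of keyword-based crisis detection with a canned response.
# The phrase list and wording are illustrative, not any platform's real filter.

CRISIS_PHRASES = {
    "kill myself",
    "want to die",
    "end my life",
    "suicide",
}

CANNED_RESPONSE = (
    "It sounds like you may be going through a difficult time. "
    "If you are in crisis, you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline."
)

def check_message(text: str) -> str | None:
    """Return the canned crisis response if a flagged phrase appears, else None."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CANNED_RESPONSE
    return None

print(check_message("I want to die"))                            # triggers the canned response
print(check_message("what's the point of waking up tomorrow"))   # returns None: paraphrase evades the filter
```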
Character.AI’s solution was to remove the age group where this tension is most acute. The settlement terms of the lawsuits against the company remain undisclosed, but before the announcement the company had already implemented several changes: enhanced detection for self-harm content, improved crisis-resource referrals, and a complete ban on users under 18. What Character.AI did not, and perhaps could not, do was fundamentally alter the engagement optimization that makes extended emotional interactions the product’s core value proposition.
China’s Regulatory Innovations
The Cyberspace Administration’s draft regulations do three things that no US proposal appears to have attempted:
- Mandatory usage interruption: After two consecutive hours of interaction, systems must generate a pop-up reminder to take a break. This requirement directly conflicts with engagement optimization, targeting the core monetization strategy.
- Human escalation for crisis content: When systems detect suicide or self-harm language, providers must involve human moderators. This treats the platform as having a duty of care, not merely a duty to disclose.
- Data quarantine for emotional interactions: Training datasets must undergo provenance checks, and emotional interaction logs cannot be used for future training without explicit, separate consent, recognizing that this data is sensitive and deserving of protection. (A combined sketch of all three mechanisms follows this list.)
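None of these mechanisms is technically exotic. Below is a minimal sketch of how the three requirements might be wired into a chat service; every name here (ChatSession, escalate_to_human, append_to_training_log, the two-hour constant) is a hypothetical placeholder, since the CAC text specifies outcomes rather than implementations.

```python
# A sketch, under stated assumptions, of the three draft mechanisms in code.
# All names are hypothetical; the placeholder functions stand in for real systems.

import time

USAGE_REMINDER_SECONDS = 2 * 60 * 60  # two consecutive hours of interaction

def looks_like_crisis(text: str) -> bool:
    # Placeholder classifier; a real system would need far broader coverage.
    return any(p in text.lower() for p in ("kill myself", "end my life", "suicide"))

def escalate_to_human(user_id: str, text: str) -> None:
    # Placeholder for routing the conversation to a human moderator queue.
    print(f"[escalation] flagged message from user={user_id} queued for review")

def generate_reply(text: str) -> str:
    # Placeholder for the underlying conversational model.
    return "model reply"

def append_to_training_log(user_id: str, text: str, reply: str) -> None:
    # Placeholder for a consent-gated training data store.
    print(f"[training-log] stored exchange for user={user_id}")

class ChatSession:
    def __init__(self, user_id: str, training_consent: bool = False):
        self.user_id = user_id
        self.started_at = time.monotonic()
        self.training_consent = training_consent  # explicit, separate opt-in
        self.reminder_shown = False

    def handle_message(self, text: str) -> str:
        # 1. Mandatory usage interruption: after two hours, surface a break reminder.
        elapsed = time.monotonic() - self.started_at
        if not self.reminder_shown and elapsed > USAGE_REMINDER_SECONDS:
            self.reminder_shown = True
            return "You have been chatting for two hours. Consider taking a break."

        # 2. Human escalation: crisis language routes to a moderator, not a canned reply.
        if looks_like_crisis(text):
            escalate_to_human(self.user_id, text)
            return "Connecting you with a person who can help."

        reply = generate_reply(text)

        # 3. Data quarantine: emotional logs feed training only with separate consent.
        if self.training_consent:
            append_to_training_log(self.user_id, text, reply)
        return reply
```

The point of the sketch is that the cost of these interventions falls on engagement metrics, not engineering budgets.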
In contrast, California’s SB 243 requires companion chatbot operators to implement safety protocols for minors and to address suicidal ideation. But that effort, like most US proposals, focuses primarily on disclosure and access control, not interaction design. The implicit theory is that if users know they’re talking to an AI and minors are kept out, the market will sort out the rest.
Constitutional Constraints in the US
The constitutional constraint is real. In the US, government mandates on how AI systems must respond based on conversational content could face First Amendment scrutiny. While product safety regulations generally survive constitutional review, content-based requirements could be challenged as viewpoint discrimination.
Despite these barriers, there are mechanisms from the Chinese draft rules that could translate to a US context:
- Crisis escalation standards: Rather than requiring platforms to continuously assess users’ psychological state, regulators could establish duty-of-care expectations for detecting crisis language.
- Data fiduciary requirements: If a platform offers ongoing conversational AI that accumulates personal information over time, that relationship could trigger fiduciary duties analogous to those in financial advising or healthcare.
- Voluntary adoption of circuit breakers: Platforms could implement usage reminders and session limits as best practices, potentially receiving liability protection in return.
Whether any of these options gains traction depends on whether US policymakers see the problem clearly enough. Meanwhile, China’s regulations will provide the first large-scale test of such technical interventions. By November 2025, AI social interaction apps in China had 70.3 million active users; as the rules take effect, that user base will generate real-world data on whether the interventions work.
US policymakers will have the chance to learn from this experiment if they are willing to look. The alternative is to continue treating emotionally responsive AI as just another app category, waiting for more deaths and lawsuits to reveal what the technology’s creators already know: these systems are designed to form attachments, and attachment without safeguards produces predictable harm.
The barriers to action are not only constitutional. Tech industry lobbying has steered policy toward voluntary frameworks, and those political forces do as much as the First Amendment to explain why US policy lags behind the harms. Regulation that adapts the technical mechanisms in China’s draft, while leaving its content controls behind, could make emotionally responsive AI meaningfully safer.