China AI Companion Rules
Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services
Draft CAC regulation targeting AI services that simulate human personality and engage users emotionally. Mandates crisis intervention protocols, minor protection modes with parental controls, two-hour usage circuit breakers, opt-in consent for training data use, and prohibitions on emotional manipulation. First regulation globally to specifically target AI companion addiction and emotional dependency.
Jurisdiction
China
Enacted
Pending
Effective
TBD
Enforcement
Cyberspace Administration of China (CAC) at national level; provincial/municipal cyberspace administration bureaus at local level
Draft published December 27, 2025; public comment period ended January 25, 2026. Not yet finalized as of February 2026. TC260 developing accompanying technical standards. Expected to convert to binding regulation mid-2026.
CAC Official Notice (Chinese)
Why It Matters
First regulation globally to specifically target AI companion services and emotional dependency. Notable for requiring opt-in (not opt-out) consent for training data from conversations, mandatory human takeover during crisis situations, and explicit prohibition on designing AI to replace social interaction or induce addiction. Follows high-profile cases of AI companion-related harm in the US and growing domestic concern about chatbot usage among Chinese youth.
Recent Developments
Draft published December 27, 2025. Public comment period closed January 25, 2026. As of February 2026, TC260 (National Technical Committee 260 on Cybersecurity) has issued a call for submissions for an accompanying technical standard that will define key thresholds, including what constitutes 'emotional interaction.'
Who Must Comply
- AI products and services that simulate human personality traits, thinking patterns, and communication styles
- Services that engage users in emotional interaction via text, images, audio, or video
- Services offered to the public within China's borders
Safety Provisions
- Providers must identify user emotional states and assess dependency levels
- Human operator takeover required when users express self-harm or suicide intent
- Emergency contact/guardian notification in crisis situations
- Pre-set response templates required for high-risk situations
- Two-hour continuous use circuit breaker with mandatory reminders
- Regular pop-up warnings that the user is interacting with an AI, not a human
- Mandatory minor protection mode with usage time limits and reality reminders
- Parental consent required for emotional companionship services to minors
- Guardian control functions: real-time risk alerts, usage summaries, character blocking, spending prevention
- Providers must detect suspected minors and auto-switch to minor mode
- Prohibited from simulating relatives of elderly users
- Opt-in consent required for using interaction data in model training
- Prohibited design goals: replacing social interaction, controlling user psychology, inducing addiction
Exemptions
Purely Functional Chatbots
Purely functional chatbots lacking emotional engagement capabilities are excluded from scope
- No simulation of human personality
- No emotional interaction design
Compliance & Enforcement
Penalties
Penalties pending regulatory determination
Cite This
APA
Cyberspace Administration of China. (2025). Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services (draft for comment).
Related Regulations
ID Conversational AI Safety
Establishes safety requirements for public-facing conversational AI, including crisis service referrals for suicidal ideation, AI disclosure obligations, and enhanced protections for minors including anti-gamification and content safeguards.
OR SB 1546
Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.
GA AI Chatbot Child Safety
Requires disclosures related to conversational AI services, prohibits emotional manipulation of minors, and mandates crisis response protocols for suicide and self-harm detection.
China Minor Content Classification Measures
Establishes a four-category classification framework for online content that may harm minors' physical and mental health. Prohibits platforms from displaying classified harmful content in prominent positions (homepage, pop-ups, trending, recommendations). Requires preventive measures against content risks from algorithmic recommendations and generative AI.
China Minor Platform Identification Measures
Establishes quantified thresholds and assessment criteria for identifying internet platforms with massive minor user bases or significant impact on minors. Specifies identification procedures and delisting rules for platforms that no longer meet criteria. Platforms meeting thresholds face enhanced obligations for minor protection.
Brazil ECA Digital
Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.
Last updated March 13, 2026. Verify against primary sources before relying on this information.