China Minor Platform Identification Measures
Identification Measures for Internet Platform Service Providers with Massive Minor Users or Significant Impact on Minors
Establishes quantified thresholds and assessment criteria for identifying internet platforms with massive minor user bases or significant impact on minors. Specifies identification procedures and delisting rules for platforms that no longer meet criteria. Platforms meeting thresholds face enhanced obligations for minor protection.
Jurisdiction
China
Enacted
Feb 28, 2026
Effective
Apr 1, 2026
Enforcement
Cyberspace Administration of China (CAC)
Why It Matters
Creates a formal threshold-based system for identifying which platforms face enhanced child protection obligations, moving China from general requirements to targeted obligations based on minor user scale.
Recent Developments
Enacted February 28, 2026. Takes effect April 1, 2026.
Who Must Comply
Obligations fall on:
- Internet platform service providers with large numbers of minor users
- Platforms with significant impact on minors in China
Safety Provisions
- Quantified thresholds for identifying platforms with massive minor user bases
- Assessment criteria for determining significant impact on minors
- Formal identification procedures for covered platforms
- Delisting rules when platforms no longer meet criteria
- Enhanced minor protection obligations for identified platforms
Compliance & Enforcement
Key Dates
Apr 1, 2026
All provisions take effect
Penalties
Subject to existing CAC enforcement framework
Cite This
APA
China. (2026). Identification Measures for Internet Platform Service Providers with Massive Minor Users or Significant Impact on Minors.
Related Regulations
China Minor Content Classification Measures
Establishes a four-category classification framework for online content that may harm minors' physical and mental health. Prohibits platforms from displaying classified harmful content in prominent positions (homepage, pop-ups, trending, recommendations). Requires preventive measures against content risks from algorithmic recommendations and generative AI.
China AI Companion Rules
Draft CAC regulation targeting AI services that simulate human personality and engage users emotionally. Mandates crisis intervention protocols, minor protection modes with parental controls, two-hour usage circuit breakers, opt-in consent for training data use, and prohibitions on emotional manipulation. First regulation globally to specifically target AI companion addiction and emotional dependency.
AU Social Media Age Ban
World's first social media minimum age law. Platforms must prevent under-16s from holding accounts. Implementation depends on age assurance technology.
GUARD Act
Would require age verification, disclosures, and broader child protections for AI chatbots. Part of emerging federal focus on companion AI safety for minors.
SG Online Safety Code
Under Broadcasting Act framework, requires major social media services to implement systems reducing exposure to harmful content. Child safety is key driver.
AU OSA Phase 2 Codes
Phase 2 industry codes under Australia's Online Safety Act extending age-restricted material obligations to AI companion chatbots, generative AI services, search engines, app stores, and gaming platforms. Requires robust age assurance, prohibits AI-generated sexually explicit conversations with minors, and mandates suicide/self-harm content safeguards.
Last updated April 16, 2026. Verify against primary sources before relying on this information.