Ofcom Children's Codes
Protection of Children Codes of Practice for user-to-user and search services under the Online Safety Act 2023
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Jurisdiction
United Kingdom
Enacted
Jul 4, 2025
Effective
Jul 25, 2025
Enforcement
Ofcom (Office of Communications)
Issued July 4, 2025; in force July 25, 2025
Why It Matters
Explicitly requires detection and removal of suicide/self-harm content, with specific guidance for AI chatbots. Any companion chatbot accessible to UK children must implement crisis detection and content filtering. Suicide and self-harm content is designated primary priority content, requiring the most stringent protections.
Recent Developments
Ofcom issued AI chatbot guidance on November 8, 2024 and December 18, 2025, clarifying that chatbots enabling users to share content are regulated. The Codes were issued July 4, 2025 and became enforceable July 25, 2025.
Who Must Comply
- User-to-user services used by significant numbers of UK children
- Search services accessible to UK children
- AI chatbots that enable users to share AI-generated content with other users
- Services with group chat functionality where multiple users interact with a chatbot
Safety Provisions
- Suicide and self-harm content designated as primary priority harmful content
- Detection technology required for suicide/self-harm content
- Recommender systems must exclude suicide/self-harm content from children's feeds
- Content moderation systems must take swift action when suicide/self-harm content is identified
- Real-time reporting of livestreams showing imminent harm
- Human moderators required when livestreaming is active
- AI chatbots enabling user content sharing are regulated as user-to-user services
Compliance & Enforcement
Key Dates
Jul 24, 2025
Risk assessment deadline
Jul 25, 2025
Protection of Children Codes enforceable
Penalties
£18 million or 10% of qualifying worldwide revenue, whichever is greater
Compliance Help
Services must implement highly effective systems to detect and remove suicide and self-harm content, exclude such content from children's recommender system feeds, enable real-time reporting of imminent harm, and provide human moderation for livestreaming. AI chatbots must clearly indicate their artificial nature and implement the same protections as other user-to-user services.
Cite This
APA
Ofcom. (2025). Protection of Children Codes of Practice for user-to-user and search services under the Online Safety Act 2023. United Kingdom.
Related Regulations
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
KOSA
Would establish duty of care for platforms regarding minor safety. Passed full Senate 91-3 in July 2024; passed Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
Children's Digital Wellbeing
UK government consultation on restricting children's access to AI chatbots, banning addictive design features like infinite scrolling and auto-play, and potentially setting age restrictions for social media. Would amend the Crime and Policing Bill to bring AI chatbot providers under Online Safety Act duties.
Ireland OSMR
Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.
DUA Act 2025
Omnibus data legislation covering customer data access, digital verification services, the Information Commission, and AI-related provisions including copyright/training transparency requirements and new criminal offenses for creating AI-generated intimate images (deepfakes).
Last updated February 17, 2026. Verify against primary sources before relying on this information.