UK Children's Digital Wellbeing Consultation
UK government consultation on restricting children's access to AI chatbots, banning addictive design features like infinite scrolling and auto-play, and potentially setting age restrictions for social media. Would amend the Crime and Policing Bill to bring AI chatbot providers under Online Safety Act duties.
Jurisdiction
United Kingdom
Enacted
Pending
Effective
TBD
Enforcement
Ofcom
Announced 16 Feb 2026. Consultation launching March 2026. Enabling powers via Children's Wellbeing and Schools Bill; data preservation via Crime and Policing Bill amendment.
GOV.UK - PM announcement (16 Feb 2026)
Why It Matters
Would explicitly bring AI chatbots under UK Online Safety Act regulation for the first time. Companion AI and character chatbot providers serving UK users would need to implement age verification and comply with illegal content duties. Signals UK intent to regulate AI-specific child safety risks.
Recent Developments
Announced by PM Keir Starmer 16 Feb 2026. Government closing OSA 'loophole' to bring AI chatbots (ChatGPT, Gemini, Copilot) under illegal content duties. Also launching 'You Won't Know until You Ask' campaign for parents.
At a Glance
Who Must Comply
- AI chatbot providers
- Social media platforms
- Online platforms serving children
- Gaming platforms with social features
Safety Provisions
- Age restrictions on AI chatbot access for children
- Ban on infinite scrolling features for children
- Ban on auto-play video features for children
- Restrictions on VPN use to bypass safety systems
- Potential changes to age of digital consent
- Automatic data-preservation orders when a child dies
- Powers to curb stranger pairing on gaming consoles
- Powers to block sending/receiving nude images
Compliance & Enforcement
Penalties
Penalties pending regulatory determination
Compliance Help
AI chatbot providers would need to comply with OSA duties to protect users from illegal content; platforms would need to remove addictive design features for child users
Cite This
APA
United Kingdom. (2026, February 16). UK Children's Digital Wellbeing Consultation.
Related Regulations
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
FL AI Bill of Rights
Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.
DUA Act 2025
Omnibus data legislation covering customer data access, digital verification services, the Information Commission, and AI-related provisions including copyright/training transparency requirements and new criminal offenses for creating AI-generated intimate images (deepfakes).
UK Children's Code
UK's enforceable "privacy-by-design for kids" regime. Applies to online services likely to be accessed by children under 18. Forces high-privacy defaults, limits on profiling/nudges, DPIA-style risk work, safety-by-design.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
Finland AI Act
Finland's EU AI Act implementation using decentralized supervision model. Traficom serves as single point of contact and coordination authority. Ten market surveillance authorities share enforcement across sectors. New Sanctions Board handles fines over EUR 100,000.
Last updated February 17, 2026. Verify against primary sources before relying on this information.