
UK Children's Digital Wellbeing Consultation

UK government consultation on restricting children's access to AI chatbots, banning addictive design features like infinite scrolling and auto-play, and potentially setting age restrictions for social media. Would amend the Crime and Policing Bill to bring AI chatbot providers under Online Safety Act duties.

Jurisdiction

United Kingdom

Enacted

Pending

Effective

TBD

Enforcement

Ofcom

Announced 16 Feb 2026. Consultation launching March 2026. Enabling powers via Children's Wellbeing and Schools Bill; data preservation via Crime and Policing Bill amendment.

GOV.UK - PM announcement (16 Feb 2026)

Why It Matters

Would explicitly bring AI chatbots under UK Online Safety Act regulation for the first time. Companion AI and character chatbot providers serving UK users would need to implement age verification and comply with illegal content duties. Signals UK intent to regulate AI-specific child safety risks.

Recent Developments

Announced by PM Keir Starmer on 16 Feb 2026. The government is closing an OSA 'loophole' to bring AI chatbots (ChatGPT, Gemini, Copilot) under illegal content duties, and is launching a 'You Won't Know until You Ask' campaign for parents.

At a Glance

Applies to

AI Companion · Character Chatbot · General Chatbot · Social Platform · Gaming Platform

Who Must Comply

  • AI chatbot providers
  • Social media platforms
  • Online platforms serving children
  • Gaming platforms with social features

Safety Provisions

  • Age restrictions on AI chatbot access for children
  • Ban on infinite scrolling features for children
  • Ban on auto-play video features for children
  • Restrictions on VPN use to bypass safety systems
  • Potential changes to age of digital consent
  • Automatic data-preservation orders when a child dies
  • Powers to curb stranger pairing on gaming consoles
  • Powers to block sending/receiving nude images

Compliance & Enforcement

Penalties

Penalties pending regulatory determination


Focus Areas

Mental health & crisis
Child safety
Algorithmic accountability
Active safeguards required

Cite This

APA

United Kingdom. (2026, February 16). UK Children's Digital Wellbeing Consultation.

Related Regulations

In Effect GB

Ofcom Children's Codes

Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.

Pending GB

UK AI Chatbot OSA Extension

Amends the Crime and Policing Bill to bring standalone AI chatbot providers (ChatGPT, Grok, Gemini, etc.) within scope of Online Safety Act illegal content duties, closing the loophole where AI-only chatbots were exempt from OSA.

Pending US-FL

FL AI Bill of Rights

Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.

In Effect BR

Brazil ECA Digital

Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.

Pending US-ID

ID Conversational AI Safety

Establishes safety requirements for public-facing conversational AI, including crisis service referrals for suicidal ideation, AI disclosure obligations, and enhanced protections for minors including anti-gamification and content safeguards.

Enacted US-OR

OR SB 1546

Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.

Last updated February 17, 2026. Verify against primary sources before relying on this information.