UK OSA

Online Safety Act 2023

One of the most comprehensive platform content moderation regimes globally. It creates specific duties around suicide, self-harm, and eating disorder content for children, backed by 'highly effective' age assurance requirements.

Jurisdiction

United Kingdom

Enacted

Oct 26, 2023

Effective

Oct 26, 2023

Enforcement

Ofcom

Phased enforcement through 2026

legislation.gov.uk

Why It Matters

Most explicit regulation of mental health content globally. Suicide, self-harm, and eating disorders are "primary priority" content requiring prevention, not just mitigation. AI chatbots that enable content sharing between users are explicitly in scope.

Recent Developments

  • Feb 2025: Ofcom open letter clarified AI chatbot scope: services are in scope if they enable user-to-user content sharing or search multiple websites; pure 1-to-1 AI companions without these features may fall outside the OSA. Government "considering possible changes" to close this gap.
  • Oct 21, 2025: SI 2025/1352 (in force Jan 8, 2026) added cyberflashing and encouraging/assisting serious self-harm (OSA s.184) as priority offenses.
  • Oct 2025: First enforcement action: 4chan fined £20,000 for failing to respond to information requests.
  • Jan 1, 2026: Super-complaints regime entered into force.
  • 76+ active investigations, including suicide forums (first launched Apr 9, 2025).
  • Feb 2026: Ofcom launched a formal investigation into X over Grok AI generating non-consensual sexualized imagery. Key limitation identified: the OSA currently doesn't cover standalone AI chatbots, only platforms hosting user-generated content. Assessment underway on whether Ofcom can gain direct jurisdiction over xAI.
  • Enforcement expected to intensify throughout 2026.

At a Glance

Applies to

  • Online Platform
  • Social Platform
  • Chatbot Service

Who Must Comply

  • User-to-user services
  • Search services
  • Services likely accessed by children

Safety Provisions

  • Primary priority content for children: must PREVENT access to suicide, self-harm, eating disorder content
  • New criminal offense (Section 184): encouraging serious self-harm
  • Risk assessments required for illegal content and children's access
  • Recommender systems must exclude harmful content from children's feeds
  • Services must use "highly effective" age assurance

Exemptions

1-to-1 AI Companion (No User Content Sharing)

Pure 1-to-1 AI chatbot where provider controls output and no content is shared between users.

  • Provider controls AI output (not user-generated)
  • No user content shared with other users
  • No public characters or shared chats
  • No user-to-user features
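As an illustration only (not legal advice), the four exemption criteria above reduce to a simple conjunction: a service may fall outside OSA scope only if all four hold. The sketch below encodes that logic; all names are hypothetical and not drawn from the Act or Ofcom guidance.

```python
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    """Illustrative fields mirroring the four exemption criteria listed above."""
    provider_controls_output: bool    # AI output is provider-controlled, not user-generated
    shares_user_content: bool         # any user content is visible to other users
    has_public_characters: bool       # public characters or shareable chats exist
    has_user_to_user_features: bool   # messaging, forums, comments, etc.

def may_fall_outside_osa(s: ServiceProfile) -> bool:
    """True only if ALL exemption criteria hold; a proper legal review is still required."""
    return (
        s.provider_controls_output
        and not s.shares_user_content
        and not s.has_public_characters
        and not s.has_user_to_user_features
    )

# A pure 1-to-1 AI companion with no sharing features may be out of scope...
companion = ServiceProfile(True, False, False, False)
print(may_fall_outside_osa(companion))  # True

# ...but adding any user-to-user feature brings the service back into scope.
with_sharing = ServiceProfile(True, True, False, False)
print(may_fall_outside_osa(with_sharing))  # False
```

Note the asymmetry: any single user-to-user feature defeats the exemption, which is why the check is a strict conjunction rather than a scoring model.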

Compliance & Enforcement

Key Dates

Mar 17, 2025

Illegal content safety duties enforceable

Jul 25, 2025

Child protection safety duties enforceable

Jan 1, 2026

Categorised services duties (estimated)

Penalties

Up to £18M or 10% of qualifying worldwide revenue, whichever is greater; criminal liability for senior managers in certain cases


Focus Areas

Mental health & crisis
Child safety
Algorithmic accountability
Active safeguards required

Compliance Help

Services in scope (user-to-user or search) must implement proactive detection of suicide, self-harm, and eating disorder content; highly effective age assurance; and recommender systems that filter harmful content from children's feeds. Keyword filters alone are explicitly insufficient; Ofcom expects contextual understanding. Note: pure 1-to-1 AI companions without user sharing features may fall outside OSA scope.

Cite This

APA

United Kingdom. (2023). Online Safety Act 2023.

Last updated February 17, 2026. Verify against primary sources before relying on this information.