UK OSA

Online Safety Act 2023

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

Jurisdiction

United Kingdom

Enacted

Oct 26, 2023

Effective

Phased (see Compliance Timeline)

Enforcement

Ofcom

Phased enforcement through 2026

Who Must Comply

This law applies to:

  • User-to-user services
  • Search services
  • Services likely accessed by children

Capability triggers:

  • User-to-user content (required)
  • Search indexing (increases applicability)
  • Recommender system (increases applicability)
  • Live chat or communities (increases applicability)

Exemptions

1-to-1 AI Companion (No User Content Sharing)

medium confidence

Pure 1-to-1 AI chatbot where provider controls output and no content is shared between users.

Conditions:

  • Provider controls AI output (not user-generated)
  • No user content shared with other users
  • No public characters or shared chats
  • No user-to-user features

Parliamentary debate (Nov 2025) acknowledged this gap, and the Government has "commissioned work" to identify such gaps. The position may change; treat this exemption as an area of regulatory uncertainty.

Safety Provisions

  • Primary priority content for children: must PREVENT access to suicide, self-harm, eating disorder content
  • New criminal offence (Section 184): encouraging or assisting serious self-harm
  • Risk assessments required for illegal content and children's access
  • Recommender systems must exclude harmful content from children's feeds
  • Services must use "highly effective" age assurance

Compliance Timeline

Mar 17, 2025

Illegal content safety duties enforceable

Jul 25, 2025

Child protection safety duties enforceable

Jan 1, 2026

Categorised services duties (estimated)

Enforcement

Enforced by

Ofcom

Penalties

£18M or 10% of global annual turnover (whichever is higher); criminal liability for senior managers

Max fine: £18,000,000
Revenue %: 10%
Criminal liability: Yes
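The penalty ceiling is a simple maximum of the fixed cap and the turnover-based cap. As a minimal sketch (the turnover figure in the example is hypothetical, not taken from the Act):

```python
def max_osa_fine(global_annual_turnover_gbp: float) -> float:
    """Return the OSA fine ceiling: the greater of the fixed £18M cap
    and 10% of global annual turnover."""
    FIXED_CAP_GBP = 18_000_000
    TURNOVER_RATE = 0.10
    return max(FIXED_CAP_GBP, TURNOVER_RATE * global_annual_turnover_gbp)

# A service with £500M global turnover faces a ceiling of £50M;
# a small service with £10M turnover is still exposed to the full £18M cap.
print(max_osa_fine(500_000_000))  # 50000000.0
print(max_osa_fine(10_000_000))   # 18000000
```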

Quick Facts

Binding
Yes
Mental Health Focus
Yes
Child Safety Focus
Yes
Algorithmic Scope
Yes

Why It Matters

Most explicit regulation of mental health content globally. Suicide, self-harm, and eating disorders are "primary priority" content requiring prevention, not just mitigation. AI chatbots that enable content sharing between users are explicitly in scope.

Recent Developments

  • Feb 2025: Ofcom open letter clarified AI chatbot scope: services are in scope if they enable user-to-user content sharing or search multiple websites; pure 1-to-1 AI companions without these features may fall outside the OSA. The Government is "considering possible changes" to close this gap.
  • Oct 21, 2025: SI 2025/1352 (in force Jan 8, 2026) added cyberflashing and encouraging/assisting serious self-harm (OSA s.184) as priority offences.
  • Jan 1, 2026: Super-complaints regime entered into force.
  • Oct 2025: First enforcement action: 4chan fined £20,000 for failing to respond to information requests.
  • 76+ active investigations, including suicide forums (the first launched Apr 9, 2025). Enforcement is expected to intensify throughout 2026.

What You Need to Comply

If in scope (user-to-user or search service): you need proactive detection of suicide, self-harm, and eating disorder content (not just reactive moderation); "highly effective" age assurance; and recommender systems that actively filter harmful content from children's feeds. Keyword filters are explicitly insufficient; Ofcom expects contextual understanding. Note: pure 1-to-1 AI companions without user sharing features may fall outside OSA scope (see Exemptions).
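One way to picture the "prevent, not just mitigate" duty for children's feeds is a gate that drops flagged items from the candidate pool entirely, rather than down-ranking them. The classifier scores, category names, and threshold below are hypothetical illustrations of the architecture, not anything Ofcom prescribes:

```python
from dataclasses import dataclass, field

# Primary priority content categories for children under the OSA
PRIMARY_PRIORITY = {"suicide", "self_harm", "eating_disorder"}


@dataclass
class Item:
    item_id: str
    # Hypothetical per-category scores (0.0-1.0) from a contextual
    # classifier; Ofcom expects contextual understanding of content,
    # not bare keyword matching.
    harm_scores: dict = field(default_factory=dict)


def child_safe_candidates(candidates, is_child: bool, threshold: float = 0.5):
    """Exclude primary priority content from a child's feed before ranking.

    For child users (per age assurance), any item scoring at or above the
    threshold in a primary-priority category is removed outright
    (prevention), never merely demoted.
    """
    if not is_child:
        return list(candidates)
    return [
        item for item in candidates
        if all(item.harm_scores.get(cat, 0.0) < threshold
               for cat in PRIMARY_PRIORITY)
    ]
```

Usage: run this gate on the candidate set before the recommender ranks anything, so excluded items can never surface in a child's feed regardless of ranking signals.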

Cite This

APA

United Kingdom. (2023). Online Safety Act 2023. Retrieved from https://nope.net/regs/uk-osa

BibTeX

@misc{uk_osa,
  title = {Online Safety Act 2023},
  author = {United Kingdom},
  year = {2023},
  url = {https://nope.net/regs/uk-osa}
}

Related Regulations

In Effect AU Online Safety

AU Online Safety Act

Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.

Failed CA Online Safety

C-63

Would have established Digital Safety Commission with platform duties for seven harmful content categories including content inducing children to harm themselves. Required 24-hour CSAM takedown.

In Effect EU Online Safety

DSA

Comprehensive platform regulation with tiered obligations. VLOPs (45M+ EU users) face systemic risk assessments, algorithmic transparency, and independent audits.

In Effect UK Child Protection

UK Children's Code

UK's enforceable "privacy-by-design for kids" regime. Applies to online services likely to be accessed by children under 18. Forces high-privacy defaults, limits on profiling/nudges, DPIA-style risk work, safety-by-design.

Pending US Child Protection

KOSA

Would establish duty of care for platforms regarding minor safety. Passed full Senate 91-3 in July 2024; passed Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.

Proposed US-CA Child Protection

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.