UK OSA
Online Safety Act 2023
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children, backed by "highly effective" age assurance requirements.
Jurisdiction
United Kingdom
Enacted
Oct 26, 2023
Effective
Phased (see Compliance Timeline)
Enforcement
Ofcom
Phased enforcement through 2026
What It Requires
Risk Assessment
Must evaluate and document potential harms before deployment
Active Detection
Must proactively identify harmful content or crisis signals
Age Verification
Must verify user age before providing access
Transparency
Must disclose AI nature, data practices, or algorithmic decisions
Incident Reporting
Must notify authorities of specific incidents
Harms Addressed
Suicide
Suicidal ideation, suicide planning, or suicide attempts
Self-Harm
Non-suicidal self-injury or self-harm behaviors
Eating Disorders
Anorexia, bulimia, or other disordered eating
Child Exploitation
CSAM, grooming, or sexual exploitation of minors
Child Exposure
Age-inappropriate content exposure to minors
Harassment
Bullying, stalking, or targeted harassment
Violence
Incitement to violence or violent content
Terrorism
Terrorist content or radicalization
Who Must Comply
This law applies to:
- User-to-user services
- Search services
- Services likely accessed by children
Exemptions
1-to-1 AI Companion (No User Content Sharing)
Confidence: medium. Pure 1-to-1 AI chatbot where the provider controls output and no content is shared between users.
Conditions:
- Provider controls AI output (not user-generated)
- No user content shared with other users
- No public characters or shared chats
- No user-to-user features
Parliamentary debate (Nov 2025) acknowledged this gap, and the Government has "commissioned work" to identify gaps. This may change; treat it as regulatory uncertainty.
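To make the exemption concrete, here is a minimal self-assessment sketch that encodes the four conditions above as a single boolean check. The field names are illustrative, not drawn from the Act or Ofcom guidance, and given the uncertainty just noted a True result is not legal advice.

```python
# Hypothetical self-assessment helper encoding the four exemption
# conditions above. Field names are illustrative, not drawn from the
# Act or Ofcom guidance; a True result is not legal advice.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    provider_controls_output: bool   # AI output is provider-controlled, not user-generated
    shares_user_content: bool        # user content is shown to other users
    has_public_characters: bool      # public characters or shared chats
    has_user_to_user_features: bool  # DMs, forums, comments, etc.

def may_fall_outside_osa(s: ServiceProfile) -> bool:
    """True only if ALL four exemption conditions hold."""
    return (
        s.provider_controls_output
        and not s.shares_user_content
        and not s.has_public_characters
        and not s.has_user_to_user_features
    )

# A companion bot that publishes user-created shared characters fails
# the third condition and is likely in scope:
bot = ServiceProfile(True, False, True, False)
assert may_fall_outside_osa(bot) is False
```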
Safety Provisions
- Primary priority content for children: must PREVENT access to suicide, self-harm, and eating disorder content
- New criminal offence (Section 184): encouraging or assisting serious self-harm
- Risk assessments required for illegal content and children's access
- Recommender systems must exclude harmful content from children's feeds (a minimal filtering sketch follows this list)
- Services must use "highly effective" age assurance
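As referenced in the recommender bullet above, here is a minimal sketch of what "exclude from children's feeds" can look like as a post-ranking filter, assuming an upstream classifier has already labeled items with harm categories. The label names and item shape are assumptions, not Ofcom-specified.

```python
# Minimal sketch of a post-ranking feed filter, assuming an upstream
# classifier has already labeled items with harm categories. Label
# names and the item shape are assumptions, not Ofcom-specified.
PRIMARY_PRIORITY = {"suicide", "self_harm", "eating_disorder"}

def filter_child_feed(ranked_items: list[dict], is_child_account: bool) -> list[dict]:
    """Exclude primary priority content from a child's feed entirely:
    prevention, not mere down-ranking."""
    if not is_child_account:
        return ranked_items
    return [
        item for item in ranked_items
        if not (set(item.get("harm_labels", [])) & PRIMARY_PRIORITY)
    ]

feed = [{"id": 1, "harm_labels": []}, {"id": 2, "harm_labels": ["self_harm"]}]
print(filter_child_feed(feed, is_child_account=True))  # only item 1 survives
```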
Compliance Timeline
- Mar 17, 2025: Illegal content safety duties enforceable
- Jul 25, 2025: Child protection safety duties enforceable
- Jan 1, 2026: Categorised services duties (estimated)
Enforcement
Enforced by
Ofcom
Penalties
£18M or 10% of global turnover (whichever is higher); criminal liability for senior managers
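For a sense of scale, the cap is simply the greater of the fixed sum and the revenue share; a one-line worked calculation (the turnover figure is illustrative):

```python
# The penalty cap is the greater of a fixed £18M and 10% of global
# turnover. The turnover figure below is illustrative only.
def max_penalty_gbp(global_turnover_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_turnover_gbp)

print(f"£{max_penalty_gbp(2_500_000_000):,.0f}")  # £250,000,000 at £2.5B turnover
```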
Quick Facts
- Binding: Yes
- Mental Health Focus: Yes
- Child Safety Focus: Yes
- Algorithmic Scope: Yes
Why It Matters
Most explicit regulation of mental health content globally. Suicide, self-harm, and eating disorders are "primary priority" content requiring prevention, not just mitigation. AI chatbots that enable content sharing between users are explicitly in scope.
Recent Developments
- Feb 2025: Ofcom open letter clarified AI chatbot scope: services are in scope if they enable user-to-user content sharing or search multiple websites; pure 1-to-1 AI companions without these features may fall outside the OSA. The Government is "considering possible changes" to close this gap.
- SI 2025/1352 (Oct 21, 2025; in force Jan 8, 2026): added cyberflashing and encouraging or assisting serious self-harm (OSA s.184) as priority offences.
- Super-complaints regime entered force Jan 1, 2026.
- First enforcement: 4chan fined £20,000 (Oct 2025) for failing to respond to information requests. 76+ active investigations, including suicide forums (first launched Apr 9, 2025). Enforcement is expected to intensify throughout 2026.
What You Need to Comply
If in scope (user-to-user or search service): you need proactive detection of suicide, self-harm, and eating disorder content (not just reactive moderation); "highly effective" age assurance; and recommender systems that actively filter harmful content from children's feeds. Keyword filters are explicitly insufficient; Ofcom expects contextual understanding (a minimal routing sketch follows). Note: pure 1-to-1 AI companions without user sharing features may fall outside OSA scope (see Exemptions above).
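One way to read "contextual understanding, not keyword filters" in practice: keyword matching only routes content to a contextual classifier, which makes the actual determination. In this sketch the classifier is a trivial stand-in so the example runs, and the route patterns and thresholds are assumptions, not Ofcom guidance; a real deployment would substitute a trained model.

```python
# Sketch of a two-stage pipeline: keywords only ROUTE content to a
# contextual classifier; they never decide outcomes alone. The
# classifier below is a trivial stand-in so the example runs; the
# thresholds and route patterns are assumptions, not Ofcom guidance.
import re

ROUTE_PATTERNS = re.compile(r"\b(suicide|self[- ]harm|purging)\b", re.IGNORECASE)

def contextual_model(text: str) -> float:
    # Stand-in heuristic: a real service would call a trained classifier
    # that scores intent using surrounding conversational context.
    encouraging = ("how to", "best way", "tips for")
    return 0.95 if any(k in text.lower() for k in encouraging) else 0.1

def assess(text: str) -> str:
    if not ROUTE_PATTERNS.search(text):
        return "no_route"          # still sample/audit unrouted content in practice
    score = contextual_model(text)
    if score >= 0.9:
        return "block_and_review"  # prevention duty for children's access
    if score >= 0.5:
        return "human_review"
    return "allow"                 # e.g. recovery-support or news content

print(assess("tips for self harm"))                    # block_and_review
print(assess("I recovered from self-harm; my story"))  # allow
```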
Cite This
APA
United Kingdom. (2023). Online Safety Act 2023. Retrieved from https://nope.net/regs/uk-osa
BibTeX
@misc{uk_osa,
  title = {Online Safety Act 2023},
  author = {United Kingdom},
  year = {2023},
  url = {https://nope.net/regs/uk-osa}
}
Related Regulations
AU Online Safety Act
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
C-63
Would have established Digital Safety Commission with platform duties for seven harmful content categories including content inducing children to harm themselves. Required 24-hour CSAM takedown.
DSA
Comprehensive platform regulation with tiered obligations. VLOPs (45M+ EU users) face systemic risk assessments, algorithmic transparency, and independent audits.
UK Children's Code
UK's enforceable "privacy-by-design for kids" regime. Applies to online services likely to be accessed by children under 18. Forces high-privacy defaults, limits on profiling/nudges, DPIA-style risk work, safety-by-design.
KOSA
Would establish duty of care for platforms regarding minor safety. Passed full Senate 91-3 in July 2024; passed Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.