UK OSA
Online Safety Act 2023
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children, backed by "highly effective" age assurance requirements.
Jurisdiction
United Kingdom
Enacted
Oct 26, 2023
Effective
Oct 26, 2023
Enforcement
Ofcom
Phased enforcement through 2026
legislation.gov.uk
Why It Matters
Most explicit regulation of mental health content globally. Suicide, self-harm, and eating disorders are "primary priority" content requiring prevention, not just mitigation. AI chatbots that enable content sharing between users are explicitly in scope.
Recent Developments
- Feb 2025: Ofcom open letter clarified AI chatbot scope. Services are in scope if they enable user-to-user content sharing or search multiple websites; pure 1-to-1 AI companions without these features may fall outside the OSA. The government is "considering possible changes" to close this gap.
- Oct 21, 2025: SI 2025/1352 (in force Jan 8, 2026) added cyberflashing and encouraging or assisting serious self-harm (OSA s.184) as priority offenses.
- Oct 2025: First enforcement action: 4chan fined £20,000 for failing to respond to information requests. 76+ investigations are active, including into suicide forums (first launched Apr 9, 2025).
- Jan 1, 2026: Super-complaints regime entered into force.
- Feb 2026: Ofcom launched a formal investigation into X over Grok AI generating non-consensual sexualized imagery. Key limitation identified: the OSA currently covers only platforms hosting user-generated content, not standalone AI chatbots. An assessment is underway on whether Ofcom can gain direct jurisdiction over xAI.
- Enforcement is expected to intensify throughout 2026.
At a Glance
Applies to: user-to-user services and search services
Harms addressed: suicide, self-harm, and eating disorder content
Who Must Comply
- User-to-user services
- Search services
- Services likely to be accessed by children
Obligations fall on: providers of the regulated service
Safety Provisions
- Primary priority content for children: services must prevent children from encountering suicide, self-harm, and eating disorder content
- New criminal offense (Section 184): encouraging or assisting serious self-harm
- Risk assessments required for illegal content and children's access
- Recommender systems must exclude harmful content from children's feeds
- Services must use "highly effective" age assurance (a minimal sketch of how this could gate access to primary priority content follows this list)
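For illustration only: a minimal sketch, in Python with hypothetical names, of how the age assurance and "prevent access" duties could combine at serving time. Content categorised as primary priority is served only to users a highly effective age check has positively assured to be adults; everyone else is treated as a child. The age-check mechanism and category labels here are assumptions, not anything specified by the Act or by Ofcom.

```python
from dataclasses import dataclass

# Primary priority content categories for children under the OSA.
PRIMARY_PRIORITY = {"suicide", "self_harm", "eating_disorder"}


@dataclass
class User:
    # Outcome of a "highly effective" age assurance check (e.g. ID
    # verification or facial age estimation); None means never checked.
    assured_adult: bool | None = None


def may_serve(user: User, content_categories: set[str]) -> bool:
    """Serve primary priority content only to positively assured adults;
    unverified users are treated as children and prevented from access."""
    if content_categories & PRIMARY_PRIORITY:
        return user.assured_adult is True
    return True


# Unverified users must be prevented from encountering such content.
assert may_serve(User(assured_adult=True), {"self_harm"}) is True
assert may_serve(User(), {"self_harm"}) is False
assert may_serve(User(), {"travel"}) is True
```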
Exemptions
1-to-1 AI Companion (No User Content Sharing)
A pure 1-to-1 AI chatbot where the provider controls output and no content is shared between users.
- Provider controls AI output (not user-generated)
- No user content shared with other users
- No public characters or shared chats
- No user-to-user features
Compliance & Enforcement
Key Dates
Mar 17, 2025
Illegal content safety duties enforceable
Jul 25, 2025
Child protection safety duties enforceable
Jan 1, 2026
Categorised services duties (estimated)
Penalties
£18M or 10% of qualifying worldwide revenue, whichever is higher; criminal liability for senior managers in certain cases
Focus Areas: suicide and self-harm content, eating disorder content, child protection, age assurance
Compliance Help
Services in scope (user-to-user or search) must implement proactive detection of suicide, self-harm, and eating disorder content; highly effective age assurance; and recommender systems that filter harmful content from children's feeds. Keyword filters alone are explicitly insufficient; Ofcom expects contextual understanding. Note: pure 1-to-1 AI companions without user sharing features may fall outside OSA scope.
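As a sketch only, assuming a hypothetical contextual classifier (the names, thresholds, and categories are illustrative and not drawn from Ofcom guidance): the practical difference from a keyword filter is that exclusion decisions for a child's feed are driven by per-category scores from a model that reads content in context.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

PRIMARY_PRIORITY = ("suicide", "self_harm", "eating_disorder")


@dataclass
class Item:
    item_id: str
    text: str


def filter_child_feed(
    candidates: Iterable[Item],
    classify: Callable[[str], dict[str, float]],
    threshold: float = 0.5,
) -> list[Item]:
    """Exclude any candidate whose score for a primary priority category
    meets the threshold. `classify` stands in for a trained, context-aware
    model rather than a keyword list."""
    safe = []
    for item in candidates:
        scores = classify(item.text)
        if any(scores.get(cat, 0.0) >= threshold for cat in PRIMARY_PRIORITY):
            continue  # never surfaced in a child's recommended feed
        safe.append(item)
    return safe


# Stub classifier for demonstration; a real model must distinguish, e.g.,
# recovery-support discussion from content that promotes self-harm.
def demo_classifier(text: str) -> dict[str, float]:
    return {"self_harm": 0.9 if "ways to hurt yourself" in text.lower() else 0.1}


feed = [Item("a", "Weekend hiking photos"), Item("b", "ways to hurt yourself")]
print([i.item_id for i in filter_child_feed(feed, demo_classifier)])  # ['a']
```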
Cite This
APA
United Kingdom. (2023). Online Safety Act 2023.
Related Regulations
AU Online Safety Act
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
Ireland OSMR
Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
C-63
Would have established Digital Safety Commission with platform duties for seven harmful content categories including content inducing children to harm themselves. Required 24-hour CSAM takedown.
UK Children's Code
UK's enforceable "privacy-by-design for kids" regime. Applies to online services likely to be accessed by children under 18. Forces high-privacy defaults, limits on profiling/nudges, DPIA-style risk work, safety-by-design.
KOSA
Would establish duty of care for platforms regarding minor safety. Passed full Senate 91-3 in July 2024; passed Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.
Last updated February 17, 2026. Verify against primary sources before relying on this information.