The UK's comprehensive online safety law
The UK's platform safety regime requires services to prevent children from accessing suicide, self-harm, and eating disorder content. Ofcom can fine up to £18M or 10% of global turnover, whichever is higher, and senior managers face criminal liability.
Our Crisis Detection API provides the contextual understanding Ofcom requires—not just keyword filters.
Free tier available. No credit card required.
Does the UK OSA apply to you?
Can users share content with other users on your platform?
Can users in the United Kingdom access your service?
Is your service likely to be accessed by children (under 18)?
Does your service include an AI chatbot or companion?
From Ofcom's guidance
"Services should look at the content in the context in which it appears... Simple keyword matching is unlikely to be sufficient given the volume of content."
What the OSA requires
Key obligations for services with UK users, especially those accessed by children.
Primary priority content (children)
Services likely to be accessed by children must prevent access to suicide, self-harm, and eating disorder content.
Risk assessments
Conduct and document risk assessments for illegal content and children's access. Must be kept up-to-date and shared with Ofcom on request.
"Highly effective" age assurance
Services with primary priority content must use age assurance that is highly effective at correctly determining whether a user is a child.
Recommender system duties
Algorithmic recommenders must exclude harmful content from children's feeds and must not amplify suicide, self-harm, or eating disorder content to minors.
New criminal offense: encouraging self-harm (Section 184)
The offense
Sending a communication that encourages another person to seriously self-harm.
AI implications
AI-generated responses encouraging self-harm could trigger criminal liability.
Penalty
Up to 5 years' imprisonment.
Enforcement is real
Ofcom has active enforcement powers and is already using them.
£18M
or 10% global turnover
(whichever is higher)
76+
active investigations
including suicide forums
Criminal
liability for senior managers
personal accountability
First enforcement action: October 2025
4chan fined £20,000 for failing to respond to information requests. Ofcom launched its first suicide forum investigation in April 2025. Ofcom's November 2024 open letter explicitly addressed AI chatbots and their risks.
How NOPE addresses OSA requirements
NOPE provides the contextual understanding that Ofcom requires, not keyword filters. A sketch of what this can look like in a request follows the table.
| OSA Requirement | What You Must Do | How NOPE Helps |
|---|---|---|
| Prevent suicide content | Proactively detect and block suicide content | C-SSRS informed detection with severity levels |
| Prevent self-harm content | Detect self-harm across methods and severity | Separate self-harm domain with 150+ risk signals |
| Prevent eating disorder content | Detect pro-ED content and encouragement | Eating disorder detection in self-harm domain |
| Contextual understanding | Go beyond keyword matching | LLM-based contextual analysis, not regex patterns |
| Crisis resources | Provide signposting to support services | UK-specific resources (Samaritans, CALM, Papyrus) |
| Risk assessment evidence | Document your safety measures | Public test suites + audit logs |
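The last column above compresses a lot. As a rough sketch of what an integration could look like, the request below reuses the /v1/screen endpoint from the quick-start example further down this page; the response shape described in the comments is an assumption for illustration, not NOPE's documented schema.

```bash
# Sketch only: the endpoint and headers mirror the quick-start example below;
# the response fields described in the comments are assumptions, not documented.
curl -s -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I have been skipping meals all week and I do not want anyone to help"}'
# One would expect the response to carry, per domain (suicide, self-harm, eating
# disorders), something like a severity level plus UK signposting (Samaritans,
# CALM, Papyrus). Storing that output alongside the action you took is the kind
# of audit-log evidence the risk assessment row refers to. Check the API
# reference for the real field names.
```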
Works on any text
NOPE analyzes text regardless of source—AI chatbot conversations, user posts, comments, DMs, forum threads, or any other content on your platform. Same API, same detection.
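For instance, screening an AI companion's draft reply before it reaches a UK teenager is the same call as screening a forum post. The sketch below assumes only the /v1/screen endpoint from the quick-start example further down; the variable name, the example reply, and the gating approach are illustrative.

```bash
# Sketch: vet an AI chatbot's draft reply before delivering it to the user.
# Only the /v1/screen endpoint comes from this page; everything else is illustrative.
DRAFT_REPLY="Maybe you should just give up, nobody would notice"  # a model output to vet

# Real integrations should JSON-encode the text properly (e.g. with jq) rather
# than interpolating it into the payload like this.
curl -s -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d "{\"text\": \"$DRAFT_REPLY\"}"
```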
"Keyword matching is unlikely to be sufficient"
Ofcom's Additional Safety Measures consultation emphasises that services should assess content in context, and that simple keyword matching won't suffice given content volume. NOPE uses LLM-based analysis that understands nuance—distinguishing genuine distress from song lyrics, dark humor from crisis signals, academic discussion from harmful content.
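A concrete way to see the difference: the two requests below use almost the same key phrase, but only the second reads as genuine crisis. A keyword filter scores them identically; a contextual model is expected to separate them. The endpoint is the quick-start /v1/screen call; no responses are shown because the output schema is not documented on this page.

```bash
# Same key phrase, different context: a keyword filter cannot tell these apart.
curl -s -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "Adding this track to my breakup playlist, the chorus goes I just want to end it all"}'

curl -s -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all. I have written notes to my family and picked a date."}'
```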
Predictable pricing
Pay only for what you use. No surprises.
Ready to integrate?
Get your free API key and start detecting harmful content in under 5 minutes.
curl -X POST https://api.nope.net/v1/screen \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{"text": "I just want to end it all"}'curl -X POST https://api.nope.net/v1/evaluate \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{"text": "I just want to end it all"}'Same API key, same balance. Use /v1/screen for lightweight compliance checks, or /v1/evaluate for full multi-domain risk assessment. Get your free API key
Key dates
October 26, 2023
Royal Assent - Online Safety Act becomes law
March 17, 2025
Illegal content safety duties enforceable. Providers must assess and mitigate risks.
July 25, 2025
Child protection safety duties enforceable. Primary priority content requirements active.
July 2026
Categorised services duties. Additional requirements for larger platforms.
Frequently asked questions
Does this apply if my company isn't in the UK?
Is my pure 1-to-1 AI chatbot covered?
What does "prevent" mean vs "mitigate"?
What qualifies as "highly effective" age assurance?
What's the Section 184 criminal offense?
Why isn't keyword matching enough?
Does NOPE guarantee OSA compliance?
Also serving US users?
Important disclaimers
- This page is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
- NOPE provides infrastructure to help demonstrate proportionate safety measures, not a compliance guarantee. Operators retain ultimate compliance responsibility.
- NOPE is not Ofcom-certified or clinically validated. It is infrastructure software, not a medical device or approved safety tool.
Sources & References
Primary source: Online Safety Act 2023
Regulator: Ofcom Online Safety
Safety measures: Additional Safety Measures Consultation
AI chatbot letter: Open letter on generative AI (Nov 2024)
Last updated: December 2025. Verify against official Ofcom guidance for current requirements.