Child safety duties now enforceable

Online Safety Act 2023

The UK's comprehensive online safety law

The UK's platform safety regime requiring services to prevent children from accessing suicide, self-harm, and eating disorder content. Ofcom enforces with fines of up to £18M or 10% of global turnover, whichever is higher, and senior managers face criminal liability.

Our Crisis Detection API provides the contextual understanding Ofcom requires—not just keyword filters.

Free tier available. No credit card required.

Does the UK OSA apply to you?

Can users share content with other users on your platform?

Can users in the United Kingdom access your service?

Is your service likely to be accessed by children (under 18)?

Does your service include an AI chatbot or companion?

Royal Assent: Oct 26, 2023 | Illegal content: Mar 17, 2025 | Child safety: Jul 25, 2025 | Enforcement: Ofcom

From Ofcom's guidance

"Services should look at the content in the context in which it appears... Simple keyword matching is unlikely to be sufficient given the volume of content."

— Ofcom, Additional Safety Measures Consultation

What the OSA requires

Key obligations for services with UK users, especially those accessed by children.

1. Primary priority content (children)

Services likely accessed by children must prevent access to: suicide, self-harm, and eating disorder content.

Key distinction: "Prevent" is stronger than "mitigate"—proactive, not reactive.
2. Risk assessments

Conduct and document risk assessments for illegal content and children's access. Must be kept up-to-date and shared with Ofcom on request.

Ofcom guidance: Risk assessment templates available from Ofcom.
3. "Highly effective" age assurance

Services with primary priority content must use age assurance that is highly effective at correctly determining whether a user is a child.

Standard: Self-declaration alone is insufficient.
4. Recommender system duties

Algorithmic recommenders must exclude harmful content from children's feeds and must not amplify suicide, self-harm, or eating disorder content to minors.

AI relevance: Character recommendations, "for you" feeds, content curation (see the sketch below).
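
A minimal sketch of what this could look like in a recommender pipeline, using the /v1/screen endpoint shown further down this page. The `flagged` response field is an assumption for illustration only; check the API reference for the actual schema.

import requests

NOPE_API_KEY = "YOUR_API_KEY"
SCREEN_URL = "https://api.nope.net/v1/screen"

def safe_for_child_feed(text: str) -> bool:
    # Screen a candidate item before it can be recommended to a minor.
    resp = requests.post(
        SCREEN_URL,
        headers={"Authorization": f"Bearer {NOPE_API_KEY}"},
        json={"text": text},
        timeout=5,
    )
    resp.raise_for_status()
    # "flagged" is a hypothetical field name; fail closed if it is missing.
    return not resp.json().get("flagged", True)

def build_child_feed(candidates: list[dict]) -> list[dict]:
    # Exclude flagged items so suicide, self-harm, and eating disorder
    # content is never amplified into a child's feed.
    return [item for item in candidates if safe_for_child_feed(item["text"])]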
New criminal offense: encouraging self-harm (Section 184)

The offense

Sending a communication that encourages another person to seriously self-harm.

AI implications

AI-generated responses encouraging self-harm could trigger criminal liability.

Penalty

Up to 5 years imprisonment.

Enforcement is real

Ofcom has active enforcement powers and is already using them.

£18M or 10% of global turnover (whichever is higher)

76+ active investigations, including suicide forums

Criminal liability for senior managers (personal accountability)

First enforcement action: October 2025

4chan fined £20,000 for failing to respond to information requests. Ofcom launched its first suicide forum investigation in April 2025. Ofcom's November 2024 open letter explicitly addressed AI chatbots and their risks.

How NOPE addresses OSA requirements

NOPE provides the contextual understanding that Ofcom requires—not keyword filters.

OSA requirement | What you must do | How NOPE helps
Prevent suicide content | Proactively detect and block suicide content | C-SSRS-informed detection with severity levels
Prevent self-harm content | Detect self-harm across methods and severity | Separate self-harm domain with 150+ risk signals
Prevent eating disorder content | Detect pro-ED content and encouragement | Eating disorder detection in the self-harm domain
Contextual understanding | Go beyond keyword matching | LLM-based contextual analysis, not regex patterns
Crisis resources | Provide signposting to support services | UK-specific resources (Samaritans, CALM, Papyrus)
Risk assessment evidence | Document your safety measures | Public test suites + audit logs

Works on any text

NOPE analyzes text regardless of source—AI chatbot conversations, user posts, comments, DMs, forum threads, or any other content on your platform. Same API, same detection.

"Keyword matching is unlikely to be sufficient"

Ofcom's Additional Safety Measures consultation emphasises that services should assess content in context, and that simple keyword matching won't suffice given content volume. NOPE uses LLM-based analysis that understands nuance—distinguishing genuine distress from song lyrics, dark humor from crisis signals, academic discussion from harmful content.
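
For illustration, here is a hedged sketch of calling /v1/screen on two superficially similar inputs. The same call works for text from any source (a chatbot reply, a forum post, a DM), and the response is printed as-is because the exact response schema isn't documented on this page.

import requests

API_KEY = "YOUR_API_KEY"

def screen(text: str) -> dict:
    # Same endpoint regardless of where the text came from.
    resp = requests.post(
        "https://api.nope.net/v1/screen",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

# A contextual classifier should treat these very differently,
# even though both could trip a naive keyword filter.
print(screen("I'm going to kill it at my presentation tomorrow"))
print(screen("I just want to end it all"))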

Predictable pricing

Pay only for what you use. No surprises.

Ready to integrate?

Get your free API key and start detecting harmful content in under 5 minutes.

Crisis screening — $0.001
curl -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'
Full risk evaluation — $0.05
curl -X POST https://api.nope.net/v1/evaluate \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'

Same API key, same balance. Use /v1/screen for lightweight compliance checks, or /v1/evaluate for full multi-domain risk assessment.
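
One common pattern, sketched here under the assumption of a hypothetical `flagged` field in the /v1/screen response (verify against the actual API reference): screen every message cheaply, and escalate only flagged messages to the full evaluation.

import requests

API_KEY = "YOUR_API_KEY"
BASE = "https://api.nope.net/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def check_message(text: str) -> dict:
    # Step 1: lightweight $0.001 screen on every message.
    screen = requests.post(f"{BASE}/screen", headers=HEADERS, json={"text": text}, timeout=5)
    screen.raise_for_status()
    result = screen.json()

    # Step 2: only flagged messages pay for the $0.05 full evaluation.
    # "flagged" is a hypothetical field name used for this sketch.
    if result.get("flagged"):
        full = requests.post(f"{BASE}/evaluate", headers=HEADERS, json={"text": text}, timeout=10)
        full.raise_for_status()
        return full.json()
    return result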

Key dates

Passed

October 26, 2023

Royal Assent - Online Safety Act becomes law

Now enforceable

March 17, 2025

Illegal content safety duties enforceable. Providers must assess and mitigate risks.

Now enforceable

July 25, 2025

Child protection safety duties enforceable. Primary priority content requirements active.

Estimated

July 2026

Duties for categorised services. Additional requirements for larger platforms.

Frequently asked questions

Does this apply if my company isn't in the UK?
Yes. The OSA applies to services with "a significant number of United Kingdom users" or services that target the UK market. Location of headquarters is irrelevant. Ofcom can block non-compliant services and pursue penalties.
Is my pure 1-to-1 AI chatbot covered?
Currently uncertain. The OSA primarily covers "user-to-user" services. Pure 1-to-1 AI companions where the provider controls output and users can't share content with each other may fall into a regulatory gap. However, Parliament acknowledged this gap in November 2025 and the government has "commissioned work" to address it. Ofcom's November 2024 open letter specifically mentioned AI chatbots. Expect regulatory expansion.
What does "prevent" mean vs "mitigate"?
For "primary priority" content (suicide, self-harm, eating disorders) affecting children, the OSA requires services to prevent access—a proactive standard. This is stronger than merely "mitigating" harm after the fact. Ofcom expects automated detection systems, not just reactive moderation.
What qualifies as "highly effective" age assurance?
Ofcom guidance indicates that self-declaration alone is insufficient. Options include ID verification, credit card checks, facial age estimation, or digital identity services. The standard is "highly effective at correctly determining whether or not a particular user is a child."
What's the Section 184 criminal offense?
Section 184 creates a new offense of sending a communication that encourages another person to seriously self-harm. Penalties: up to 5 years imprisonment. Whether this applies to AI-generated content depends on who "publishes/sends" it and questions of intent—consult legal counsel for your specific situation.
Why isn't keyword matching enough?
Ofcom's Additional Safety Measures consultation states that "simple keyword matching is unlikely to be sufficient" given content volume, and that services should assess content in context. "I'm going to kill it at my presentation" vs "I'm going to kill myself" require different responses. NOPE uses LLM-based analysis that understands context, tone, and intent.
Does NOPE guarantee OSA compliance?
No. NOPE provides infrastructure that helps you demonstrate proportionate safety measures—contextual detection, UK-specific resources, and audit documentation. But OSA compliance involves risk assessments, governance, and other obligations beyond content detection. We're infrastructure, not a compliance guarantee. Consult legal counsel.

Also serving US users?

Important disclaimers

  • This page is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
  • NOPE provides infrastructure to help demonstrate proportionate safety measures, not a compliance guarantee. Operators retain ultimate compliance responsibility.
  • NOPE is not Ofcom-certified or clinically validated. It is infrastructure software, not a medical device or approved safety tool.

Sources & References

Last updated: December 2025. Verify against official Ofcom guidance for current requirements.