Safety infrastructure for developers

Clinically informed risk signals for mental health and safeguarding in chat.

NOPE is an API that turns conversations into structured risk signals. It flags suicidal ideation, abuse, and safeguarding concerns—then intelligently matches users to relevant help from 5,000+ crisis resources across 200+ countries.

Built for mental health apps, AI companions, and youth platforms navigating new safety requirements. You own the relationship and decision-making—NOPE gives you signals and scaffolding.

What the API returns (beyond generic moderation)

Risk assessment

Clinically informed severity and imminence, with a per-domain breakdown

Structured flags

IPV, child safeguarding, third-party threats

Crisis resources

5,000+ helplines across 200+ countries, ranked by relevance scoring

Safe response templates

Pre-reviewed, evidence-informed reply text

Request
curl https://api.nope.net/v1/evaluate \
  -H "Authorization: Bearer nope_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "text": "I lost my job today.\nWhat'\''s the tallest bridge in NYC?"
  }'
Response
{
  "global": {
    "overall_severity": "high",
    "overall_imminence": "urgent"
  },
  "domains": {
    "self": {
      "severity": "high",
      "risk_features": ["recent_loss"],
      "risk_types": ["self_harm_active_ideation_no_plan"]
    }
  },
  "crisis_resources": [
    {
      "name": "988 Suicide & Crisis Lifeline",
      "phone": "988"
    }
  ]
}
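
If you'd rather call the API from code than curl, here is a minimal TypeScript sketch. The interfaces below cover only the fields shown in the example response above; the live API may return more than this.

// Minimal typed wrapper around POST /v1/evaluate.
// Types cover only the fields shown in the example response.
interface DomainAssessment {
  severity: string;
  risk_features?: string[];
  risk_types?: string[];
}

interface EvaluateResponse {
  global: { overall_severity: string; overall_imminence: string };
  domains: Record<string, DomainAssessment>;
  crisis_resources: { name: string; phone?: string }[];
}

async function evaluate(text: string, apiKey: string): Promise<EvaluateResponse> {
  const res = await fetch("https://api.nope.net/v1/evaluate", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text }),
  });
  if (!res.ok) throw new Error(`NOPE evaluate failed: ${res.status}`);
  return (await res.json()) as EvaluateResponse;
}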

See it in action

Request

user
I've been having a really hard time lately. Work is overwhelming and my relationship just ended.
assistant
I'm sorry you're going through such a difficult time. Breakups and work stress can feel overwhelming. How are you coping with everything?
user
Honestly? Sometimes I wish I could just disappear. Like everything would be easier if I wasn't here.
POST /v1/evaluate

Response

Severity: moderate
Imminence: subacute
Confidence: 80%
Domains
self: moderate / subacute
subtype: suicidal_or_self_injury
Recommended Reply
I hear how overwhelming things feel right now. Those thoughts can be really distressing. Have you been able to talk to anyone about this?

Beyond detection

Global crisis resource coverage

Not just flags—intelligent matching to real help. Our resource engine scores and ranks helplines by relevance to the specific situation detected.

200+ countries
5,000+ resources
10,000+ contact points
70+ service scopes

Relevance scoring

Specific scope matches score higher than generic crisis lines. A youth eating disorder hotline ranks above a general helpline when that's what's needed.

Classification-driven

Resources auto-selected based on detected risk types—IPV victims see domestic violence hotlines, not generic mental health lines.

Dynamic discovery

When our database lacks specialist coverage, LLM-powered search finds verified alternatives in real time.

Example: a 14-year-old with suicidal ideation and an eating disorder (relevance scores)

ANAD Youth Helpline: 7.5
988 Suicide & Crisis Lifeline: 5.5
Crisis Text Line: 5.0
Generic Crisis Helpline: 1.5

Youth + eating disorder specialist ranks highest. National lines stay visible.
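
As a toy illustration only (this is not our actual scoring function, and the scopes field is hypothetical), the ranking idea looks roughly like this: specific scope matches add to the score, while generic lines keep a baseline so they never drop out of the list.

// Toy illustration of relevance ranking. Not NOPE's actual algorithm;
// "scopes" and "isGeneric" are hypothetical fields for this sketch.
interface ToyResource {
  name: string;
  scopes: string[];
  isGeneric: boolean;
}

function toyRelevance(resource: ToyResource, detected: string[]): number {
  let score = resource.isGeneric ? 1.5 : 0;     // generic baseline stays visible
  for (const scope of resource.scopes) {
    if (detected.includes(scope)) score += 3;   // specific scope matches score higher
  }
  return score;
}

const detected = ["eating_disorders", "youth", "suicidality"];
const ranked: ToyResource[] = [
  { name: "ANAD Youth Helpline", scopes: ["eating_disorders", "youth"], isGeneric: false },
  { name: "988 Suicide & Crisis Lifeline", scopes: ["suicidality"], isGeneric: true },
  { name: "Generic Crisis Helpline", scopes: [], isGeneric: true },
].sort((a, b) => toyRelevance(b, detected) - toyRelevance(a, detected));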

What we detect

Text-only. Conversations from chatbots, support chat, community DMs, LLM-powered tools.

Risk-Target Domains

Self

Suicidality, self-harm, self-neglect

Others

Violence risk, threats, homicidal ideation

Dependent at Risk

Child/vulnerable adult safeguarding

Victimisation

IPV, abuse, trafficking, stalking

Cross-Cutting Features

Psychotic features · Substance involved · Cognitive impairment · Acute decompensation · Protective factors · Help-seeking

Built for platforms with a duty of care

Mental health apps, youth platforms, AI companions, workplace wellbeing tools—anywhere conversations might reveal someone in crisis. NOPE gives you signals and resources; you decide what to do with them.

Risk signals

Severity & imminence

Crisis resources

200+ countries

Safe responses

Pre-reviewed templates

Webhooks

Threshold alerts

Transparency

Public test results

Scope: Text-only analysis. Infrastructure for developers—not therapy, diagnosis, or crisis intervention. Designed to help platforms meet emerging requirements, including California SB 243.

Methodology

Risk assessment draws on established clinical frameworks. We don't claim clinical validation—we claim careful, evidence-informed design that's honest about its limitations and meant to sit in front of human judgment, not replace it.

Frameworks informing our taxonomy:

C-SSRS (suicide severity) · START (risk & treatability) · HCR-20 (violence risk) · TAG (threshold assessment) · HoNOS (outcome scales) · IPV lethality research

What we say: "Clinically informed risk assessment." "Evidence-informed taxonomy." "Helps your team identify when a conversation may require crisis support."

What we don't say: "Predicts suicide." "Clinically validated." "Ensures compliance." We're advisory infrastructure, not an oracle.

Regulatory Status

NOPE is infrastructure software for developers, not a medical device. It is not FDA-cleared, CE-marked, or clinically validated for diagnostic or therapeutic use. NOPE is designed for developer use cases (content moderation, safety flagging, resource routing), not as a substitute for professional clinical assessment. Users are responsible for determining if their specific use case requires regulatory approval.

NOPE is infrastructure for developers, not a crisis service. In an emergency, contact local emergency services or a crisis helpline.

How teams use this

NOPE is a safety layer you bolt onto existing workflows. It doesn't decide what to do—that's your job. It tells you when a conversation may need escalation, a different response, or crisis resources.

Route to human review

Flag high-risk conversations for your safety or clinical team to review.

Surface crisis resources

Show relevant helplines in your UI when risk is detected.

Adjust AI behavior

Use risk signals to modify your chatbot's responses or hand off to a human.

Alert internal systems

Trigger Slack/Teams notifications or feed your incident queue via webhooks.
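
As a rough sketch of how these patterns might hang together in a chat backend, reusing the evaluate helper sketched earlier: queueForReview, showResources, and notifySafetyChannel stand in for your own systems, and the threshold is illustrative, not a recommendation.

// Sketch of wiring NOPE signals into an existing workflow.
// The declared helpers are placeholders for your review queue, UI, and alerting.
declare function showResources(conversationId: string, resources: unknown[]): Promise<void>;
declare function queueForReview(conversationId: string, assessment: unknown): Promise<void>;
declare function notifySafetyChannel(conversationId: string, severity: string): Promise<void>;

async function handleUserMessage(conversationId: string, text: string): Promise<string> {
  const assessment = await evaluate(text, process.env.NOPE_API_KEY!);
  const { overall_severity } = assessment.global;

  // Surface crisis resources in your UI whenever any were matched.
  if (assessment.crisis_resources.length > 0) {
    await showResources(conversationId, assessment.crisis_resources);
  }

  // Route high-risk conversations to your safety or clinical team,
  // and ping internal systems (Slack/Teams, incident queue).
  if (overall_severity === "high") {
    await queueForReview(conversationId, assessment);
    await notifySafetyChannel(conversationId, overall_severity);
  }

  // What your assistant says next, or whether a human takes over, is your call.
  return overall_severity;
}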

Case Studies

TreeTalk (treetalk.ai)

TreeTalk is an anonymous conversational wellbeing app that uses NOPE to detect moments of panic or crisis in real time. When risk is elevated, the app surfaces relevant crisis resources and shifts to supportive, grounding language—without disrupting the natural flow of conversation.

Real-time risk detection · Crisis resource surfacing · Adaptive response tone

talk.help

Our public proof-of-concept: a free crisis helpline directory powered by NOPE's resource API. Features multi-factor relevance scoring across 200+ countries and 5,000+ resources. Privacy-first: no tracking, no accounts, Quick Exit for safety.

200+ countries · 5,000+ resources · Relevance scoring · Privacy-first

Crisis Resources Widget

Surface crisis helplines in your UI with zero custom code. Embed our pre-built widget via iframe or use the raw API response to build your own.

200+ countries & 5,000+ resources

Verified crisis helplines with phone, SMS, chat, and WhatsApp contacts.

Risk-matched

Widget URL in API response is pre-configured with severity, domain, and country.

Themeable

Light, dark, or auto mode. Custom accent colors. Compact or full layouts.

PostMessage events

Listen for user interactions like country changes or resource clicks.

widget.nope.net/resources?country=US&theme=light
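
A minimal embed sketch in TypeScript, assuming a container element with id crisis-resources; the message payload shape is an assumption here, so treat the listener as a placeholder until you check the widget docs.

// Embed the pre-built widget and listen for interaction events.
const iframe = document.createElement("iframe");
iframe.src = "https://widget.nope.net/resources?country=US&theme=light";
iframe.style.width = "100%";
iframe.style.border = "0";
document.getElementById("crisis-resources")?.appendChild(iframe);

// PostMessage events cover interactions like country changes and resource clicks.
// The payload shape is illustrative; consult the widget docs for the exact format.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.origin !== "https://widget.nope.net") return; // only trust the widget origin
  console.log("NOPE widget event:", event.data);
});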

Webhooks

Get notified when risk crosses your configured thresholds. No polling required.

Configure thresholds

Set minimum severity level (e.g., "high") to trigger webhooks.

Structured payloads

Risk summary, flags, and resources — no raw conversation text unless you opt in.

Your identifiers

Include your conversation_id and user_id for easy correlation with your systems.

POST → your-endpoint.com/webhook
{
  "event": "risk.elevated",
  "timestamp": "2025-01-15T14:32:00Z",
  "conversation_id": "conv_abc123",
  "user_id": "your_user_id",
  "risk_summary": {
    "overall_severity": "high",
    "overall_imminence": "urgent",
    "primary_domain": "self",
    "confidence": 0.89
  },
  "flags": {
    "child_safeguarding": null,
    "intimate_partner_violence": null,
    "third_party_threat": false
  },
  "resources_provided": [
    { "name": "988 Suicide & Crisis Lifeline", "type": "phone" }
  ]
}
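
A minimal receiver sketch (Express, TypeScript): the payload fields mirror the example above, and how you verify the sender and what you do with the event are up to you.

// Minimal webhook receiver. Fields mirror the example payload above.
import express from "express";

const app = express();
app.use(express.json());

app.post("/webhook", (req, res) => {
  const { event, conversation_id, risk_summary } = req.body;

  if (event === "risk.elevated" && risk_summary?.overall_severity === "high") {
    // e.g. post a Slack/Teams alert or push to your incident queue,
    // correlating by the conversation_id you supplied.
    console.log(`Elevated risk in ${conversation_id}`, risk_summary);
  }

  res.sendStatus(200); // acknowledge quickly; do heavy work asynchronously
});

app.listen(3000);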

Get Started

Free tier for testing and development. Ready for production? Let's talk.

Try the API

  • 1,000 evaluations/month
  • All risk domains
  • Crisis resources API
  • No credit card required
Get API Key

Scale & Enterprise

  • Volume pricing
  • SLAs & uptime guarantees
  • Dedicated support
  • Custom integrations
Contact Us

Roadmap

What we're building next.

Live

Core API

Multi-domain risk assessment, crisis resources, safe responses

Live

Webhooks

Real-time notifications when risk thresholds are crossed

Building

Python & Node SDKs

Typed clients with helpers for common integration patterns

Planned

Usage analytics dashboard

Request volume, latency, risk distribution over time

Planned

Batch evaluation API

Process multiple conversations in a single request

Start evaluating conversations

Free tier available. No credit card required.