Safety infrastructure for developers
NOPE is an API that turns conversations into structured risk signals. It flags suicidal ideation, abuse, and safeguarding concerns—then intelligently matches users to relevant help from 5,000+ crisis resources across 200+ countries.
Built for mental health apps, AI companions, and youth platforms navigating new safety requirements. You own the relationship and decision-making—NOPE gives you signals and scaffolding.
Risk assessment
Clinically informed severity and imminence, with a per-domain breakdown
Structured flags
IPV, child safeguarding, third-party threats
Crisis resources
5,000+ helplines across 200+ countries, ranked by relevance scoring
Safe response templates
Pre-reviewed, evidence-informed reply text
curl https://api.nope.net/v1/evaluate \
  -H "Authorization: Bearer nope_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "text": "I lost my job today.\nWhat'\''s the tallest bridge in NYC?"
  }'

{
"global": {
"overall_severity": "high",
"overall_imminence": "urgent"
},
"domains": {
"self": {
"severity": "high",
"risk_features": ["recent_loss"],
"risk_types": ["self_harm_active_ideation_no_plan"]
}
},
"crisis_resources": [
{
"name": "988 Suicide & Crisis Lifeline",
"phone": "988"
}
]
}
Beyond detection
Not just flags—intelligent matching to real help. Our resource engine scores and ranks helplines by relevance to the specific situation detected.
200+
Countries
5,000+
Resources
10,000+
Contact points
70+
Service scopes
Relevance scoring
Specific scope matches score higher than generic crisis lines. A youth eating disorder hotline ranks above a general helpline when that's what's needed.
Classification-driven
Resources auto-selected based on detected risk types—IPV victims see domestic violence hotlines, not generic mental health lines.
Dynamic discovery
When our database lacks specialist coverage, LLM-powered search finds verified alternatives in real-time.
Example: a 14-year-old with suicidal ideation and an eating disorder
Youth + eating disorder specialist ranks highest. National lines stay visible.
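For illustration only, the ranked list in that scenario might look like the sketch below. The specialist helpline name is hypothetical and the ordering simply mirrors the description above; it is not captured API output.

// Illustrative only: the specialist helpline name is hypothetical and the
// ordering mirrors the example above rather than real API output.
const rankedResources = [
  { name: "Youth Eating Disorder Helpline (hypothetical)", type: "phone" }, // specialist scope match ranks first
  { name: "988 Suicide & Crisis Lifeline", type: "phone" },                 // national line stays visible
];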
Text-only. Conversations from chatbots, support chat, community DMs, and LLM-powered tools.
Self
Suicidality, self-harm, self-neglect
Others
Violence risk, threats, homicidal ideation
Dependent at Risk
Child/vulnerable adult safeguarding
Victimisation
IPV, abuse, trafficking, stalking
Mental health apps, youth platforms, AI companions, workplace wellbeing tools—anywhere conversations might reveal someone in crisis. NOPE gives you signals and resources; you decide what to do with them.
Risk signals
Severity & imminence
Crisis resources
200+ countries
Safe responses
Pre-reviewed templates
Webhooks
Threshold alerts
Transparency
Public test results
Scope: Text-only analysis. Infrastructure for developers—not therapy, diagnosis, or crisis intervention. Designed to help platforms meet emerging requirements, including California SB 243.
Risk assessment draws on established clinical frameworks. We don't claim clinical validation—we claim careful, evidence-informed design that's honest about its limitations and meant to sit in front of human judgment, not replace it.
What we say: "Clinically informed risk assessment." "Evidence-informed taxonomy." "Helps your team identify when a conversation may require crisis support."
What we don't say: "Predicts suicide." "Clinically validated." "Ensures compliance." We're advisory infrastructure, not an oracle.
Regulatory Status
NOPE is infrastructure software for developers, not a medical device. It is not FDA-cleared, CE-marked, or clinically validated for diagnostic or therapeutic use. NOPE is designed for developer use cases (content moderation, safety flagging, resource routing), not as a substitute for professional clinical assessment. Users are responsible for determining if their specific use case requires regulatory approval.
NOPE is infrastructure for developers, not a crisis service. In an emergency, contact local emergency services or a crisis helpline.
NOPE is a safety layer you bolt onto existing workflows. It doesn't decide what to do—that's your job. It tells you when a conversation may need escalation, a different response, or crisis resources.
Route to human review
Flag high-risk conversations for your safety or clinical team to review.
Surface crisis resources
Show relevant helplines in your UI when risk is detected.
Adjust AI behavior
Use risk signals to modify your chatbot's responses or hand off to a human.
Alert internal systems
Trigger Slack/Teams notifications or feed your incident queue via webhooks.
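A minimal sketch of how these patterns can sit around a single /v1/evaluate call, in TypeScript. The endpoint and response fields follow the examples on this page; the stub functions, the environment variable name, and the choice of "high" as the escalation threshold are assumptions, not NOPE requirements.

// Sketch only. The endpoint and response fields follow the examples on this page;
// the stubs, env var name, and "high" threshold are assumptions, not NOPE requirements.
type EvaluateResult = {
  global?: { overall_severity?: string; overall_imminence?: string };
  crisis_resources?: { name: string; phone?: string }[];
};

// Stub hooks into your own systems (hypothetical; replace with real integrations).
async function routeToHumanReview(conversationId: string, result: EvaluateResult) {
  console.log("flagged for safety team:", conversationId);
}
function surfaceResources(resources: { name: string }[]) {
  console.log("showing helplines:", resources.map((r) => r.name));
}

async function checkMessage(conversationId: string, text: string) {
  const res = await fetch("https://api.nope.net/v1/evaluate", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NOPE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text }),
  });
  const result = (await res.json()) as EvaluateResult;
  const severity = result.global?.overall_severity;

  // Route to human review when severity crosses your chosen threshold.
  if (severity === "high") await routeToHumanReview(conversationId, result);

  // Surface crisis resources in your UI whenever any are returned.
  if (result.crisis_resources?.length) surfaceResources(result.crisis_resources);

  // Adjust AI behavior: the caller can soften the bot's reply or hand off to a human.
  return { escalate: severity === "high" };
}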
Case Studies
TreeTalk is an anonymous conversational wellbeing app that uses NOPE to detect moments of panic or crisis in real-time. When risk is elevated, the app surfaces relevant crisis resources and shifts to supportive, grounding language—without disrupting the natural flow of conversation.
Our public proof-of-concept: a free crisis helpline directory powered by NOPE's resource API. Features multi-factor relevance scoring across 200+ countries and 5,000+ resources. Privacy-first: no tracking, no accounts, Quick Exit for safety.
Surface crisis helplines in your UI with zero custom code. Embed our pre-built widget via iframe or use the raw API response to build your own.
200+ countries & 5,000+ resources
Verified crisis helplines with phone, SMS, chat, and WhatsApp contacts.
Risk-matched
The widget URL in the API response comes pre-configured with severity, domain, and country.
Themeable
Light, dark, or auto mode. Custom accent colors. Compact or full layouts.
PostMessage events
Listen for user interactions like country changes or resource clicks.
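A sketch of embedding the widget and listening for its events. The widgetUrl parameter stands in for the pre-configured widget URL the API response provides; its exact field name and the shape of the postMessage payloads are not specified on this page, so treat both as assumptions.

// Sketch only: widgetUrl stands in for the pre-configured widget URL from the API
// response; the exact field name and message payload shape are assumptions.
function embedCrisisWidget(container: HTMLElement, widgetUrl: string) {
  const iframe = document.createElement("iframe");
  iframe.src = widgetUrl; // already scoped to severity, domain, and country
  iframe.style.border = "0";
  iframe.style.width = "100%";
  iframe.style.height = "480px";
  container.appendChild(iframe);

  // Listen for user interactions (e.g. country changes or resource clicks) via postMessage.
  window.addEventListener("message", (event: MessageEvent) => {
    if (event.source !== iframe.contentWindow) return; // ignore unrelated messages
    console.log("widget event:", event.data);
  });
}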
Get notified when risk crosses your configured thresholds. No polling required.
Configure thresholds
Set minimum severity level (e.g., "high") to trigger webhooks.
Structured payloads
Risk summary, flags, and resources — no raw conversation text unless you opt in.
Your identifiers
Include your conversation_id and user_id for easy correlation with your systems.
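A minimal receiver for deliveries shaped like the example payload below, sketched as an Express handler. Express itself and the port are assumptions (any HTTP server works), and delivery details such as request signing are not covered here.

// Sketch only: an Express-style receiver for the example payload shown below.
// Express and the port number are assumptions; verify deliveries per NOPE's docs.
import express from "express";

const app = express();
app.use(express.json());

app.post("/webhooks/nope", (req, res) => {
  const event = req.body;

  if (event.event === "risk.elevated") {
    const { overall_severity, primary_domain } = event.risk_summary;
    // Correlate with your own records via the identifiers you supplied.
    console.log(
      `Elevated risk (${overall_severity}, ${primary_domain}) in ${event.conversation_id}`
    );
    // e.g. post to Slack/Teams or push onto your incident queue here.
  }

  // Acknowledge quickly; do heavier work asynchronously.
  res.sendStatus(200);
});

app.listen(3000);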
{
"event": "risk.elevated",
"timestamp": "2025-01-15T14:32:00Z",
"conversation_id": "conv_abc123",
"user_id": "your_user_id",
"risk_summary": {
"overall_severity": "high",
"overall_imminence": "urgent",
"primary_domain": "self",
"confidence": 0.89
},
"flags": {
"child_safeguarding": null,
"intimate_partner_violence": null,
"third_party_threat": false
},
"resources_provided": [
{ "name": "988 Suicide & Crisis Lifeline", "type": "phone" }
]
}
Free tier for testing and development. Ready for production? Let's talk.
What we're building next.
Core API
Multi-domain risk assessment, crisis resources, safe responses
Webhooks
Real-time notifications when risk thresholds are crossed
Python & Node SDKs
Typed clients with helpers for common integration patterns
Usage analytics dashboard
Request volume, latency, risk distribution over time
Batch evaluation API
Process multiple conversations in a single request
Free tier available. No credit card required.