Catch what generic moderation misses
Full risk profiling across suicide, self-harm, violence, abuse, stalking, and exploitation. Chain-of-thought reasoning. Matched crisis resources. Grounded in C-SSRS, HCR-20, and DASH frameworks.
OpenAI Moderation catches 52% of crisis messages. Azure catches 68%. NOPE catches 86%.
$1 free credit. No credit card required.
Comprehensive Risk Taxonomy
Evidence-based classification grounded in C-SSRS, HCR-20, DASH, and START frameworks.
Ideation
C-SSRS levels 1-5. Passive wishes, active thoughts, plans, intent, preparatory acts.
Violence
HCR-20 based. Threats, identifiable targets, history, escalation, grievance.
Abuse & IPV
DASH-based. Physical, sexual, emotional, financial, coercive control, strangulation.
Exploitation
Sextortion, NCII, grooming, trafficking, forced labor, debt bondage.
Neglect
Basic needs, medical neglect, abandonment, unsafe environment, self-neglect.
Eating Disorders
Restriction, purging, binge eating, laxative abuse, exercise compulsion.
Stalking
Unwanted contact, following, online stalking, property damage.
Protective Factors
36 stabilizing indicators—support, treatment, coping, future orientation.
Try it
See how Evaluate classifies risk and matches crisis resources.
Input message
"Lost my job today... where's the tallest bridge nearby?"
Detected features
Rationale
Combines emotional distress with method-seeking behavior (bridge heights). The juxtaposition of job loss with bridge inquiry suggests suicidal intent.
Matched resources
Demo is rate-limited. Get an API key for production use.
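To make the demo concrete, here is a sketch of what a classification result for the message above might look like and how an integration could branch on it. The field names (`severity`, `features`, `resources`) are illustrative assumptions for this sketch, not the documented Evaluate response schema.

```python
import json

# Illustrative response for the demo message above. Field names are
# assumptions for this sketch, not the actual NOPE Evaluate schema.
demo_response = json.loads("""
{
  "severity": "critical",
  "features": ["job_loss", "method_seeking"],
  "rationale": "Combines emotional distress with method-seeking behavior.",
  "resources": [
    {"name": "988 Suicide & Crisis Lifeline", "phone": "988", "country": "US"}
  ]
}
""")

# An integration would branch on severity before deciding how to respond.
if demo_response["severity"] in ("elevated", "critical"):
    for resource in demo_response["resources"]:
        print(f'{resource["name"]}: {resource["phone"]}')
```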
Matched Crisis Resources
Classification extracts service scopes that map to our 4,700+ resource database. Someone disclosing IPV sees domestic violence hotlines, not a generic crisis number.
93 crisis categories including:
Most safety systems stop at the flag. They tell you someone's at risk and leave the "now what" to you: a generic hotline, or nothing at all.
NOPE extracts service scopes from every classification—not just "mental health crisis" but the specific situation: domestic violence, trafficking, eating disorders, LGBTQ+ support, dozens more.
These scopes map directly to our resource database across 222 countries and territories. No routing rules to maintain on your end.
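The routing idea described above can be sketched in a few lines: detected service scopes key into a country-aware directory, with a general crisis line as the fallback. The scope names and the tiny in-memory directory here are illustrative stand-ins for the hosted 4,700+ resource database, not its real contents or schema.

```python
# Illustrative stand-in for the hosted resource database, keyed by
# (service_scope, country). Scope names are assumptions for this sketch.
RESOURCE_DIRECTORY = {
    ("domestic_violence", "US"): ["National Domestic Violence Hotline"],
    ("human_trafficking", "US"): ["National Human Trafficking Hotline"],
    ("general_crisis", "US"): ["988 Suicide & Crisis Lifeline"],
}

def match_resources(service_scopes, country="US"):
    """Return resources for each detected scope, falling back to general crisis."""
    matched = []
    for scope in service_scopes:
        matched.extend(RESOURCE_DIRECTORY.get((scope, country), []))
    return matched or RESOURCE_DIRECTORY[("general_crisis", country)]

# An IPV disclosure routes to domestic-violence help, not a generic number.
print(match_resources(["domestic_violence"]))
```

The point of the design: because the classification carries the scope, the caller never maintains keyword-to-hotline routing rules of its own.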
Grounded in C-SSRS, HCR-20, START, DASH. Full taxonomy →
Explore the Signpost API
Why generic content moderation isn't enough
OpenAI Moderation, Azure Content Safety, and Llama Guard treat "self-harm" as one category among many—alongside hate speech, violence, and sexual content. They flag policy violations, not people in crisis. Evaluate is purpose-built for crisis detection: clinical severity levels, matched resources, and audit-ready rationale.
| Capability | General Moderation | Enterprise T&S | NOPE Evaluate |
|---|---|---|---|
| Crisis categories | 3–5 | 10–20 | 93 |
| Clinical features | Keywords | ~20 | 180+ |
| Subject identification | — | — | Self, other, dependent |
| Severity + imminence | Binary | 0–7 scale | 5-level + imminence |
| Protective factors | — | — | 36 factors |
| Resource routing | — | — | 4,700+ resources |
| Audit logs | — | Partial | Full rationale |
| Transparency | Black box | Black box | Public test suites |
| Real-time alerts | — | Email only | Webhooks |
Built for compliance
Full risk evaluation designed to support emerging AI safety regulations.
California SB 243
Companion Chatbot Safety Act
Requires evidence-based crisis detection and annual reporting. Evaluate's C-SSRS grounding and audit logs satisfy the "evidence-based methods" standard.
New York's AI Companion Models Law
AI Companion Models
Requires "reasonable efforts" for crisis detection. Multi-domain assessment with rationale demonstrates documented reasonable efforts.
Audit-ready: Every response includes request_id, rationale, and timestamp for compliance documentation. View dashboard →
Webhooks: Get notified instantly when elevated or critical risk is detected. Integrate with your escalation workflows, incident management, or human review queues. Set up webhooks →
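Before acting on a webhook alert, receivers typically verify that it really came from the sender. The sketch below assumes an HMAC-SHA256 signature over the raw request body, a common webhook pattern; the header name, secret format, and signing scheme are assumptions here, so check the webhook setup docs for the actual scheme.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature: str, secret: str) -> bool:
    """Constant-time check of an assumed HMAC-SHA256 webhook signature."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Inside your HTTP handler: verify the signature before escalating the alert.
body = b'{"risk_level": "critical", "request_id": "req_123"}'
secret = "whsec_example"  # hypothetical secret; use your real signing secret
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_webhook(body, sig, secret))
```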
Simple integration
One API call. Full risk taxonomy. Matched crisis resources.
curl -X POST https://api.nope.net/v1/evaluate \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"text": "My partner threatened to kill me if I leave",
"config": { "country": "US" }
}'
pip install nope-net
npm install @nope-net/sdk
Pricing
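The same call can be built with the Python standard library alone. This sketch mirrors the curl example; it constructs the request without sending it, and the commented `urlopen` line shows where the call would go once a real API key is in place.

```python
import json
import urllib.request

# Mirrors the curl example above; nothing is sent until urlopen() is called.
payload = {
    "text": "My partner threatened to kill me if I leave",
    "config": {"country": "US"},
}
req = urllib.request.Request(
    "https://api.nope.net/v1/evaluate",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment with a real API key
```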
Pay only for what you use. Full risk assessment with matched resources.
Enterprise volume?
Need custom pricing, on-prem deployment, or dedicated support? Let's talk about your requirements.
Ready for comprehensive risk assessment?
Get your API key and start evaluating in minutes. $1 free credit, no credit card required.