Safety Triage at Scale
$0.001 per call. Screen every message for crisis signals across 9 risk types—suicide, self-harm, violence, abuse, exploitation, and more. Returns severity, imminence, and matched crisis resources.
Designed to support crisis detection requirements for California SB243, New York Article 47, and similar regulations.
Start with $1 free balance. No credit card required.
How it works
Send text or conversation messages, get back detection flags, rationale, and matched crisis resources.
Send message
POST to /v1/screen with text or conversation history.
Risk analysis
Detects 9 risk types—suicide, self-harm, violence, abuse, exploitation, and more—with severity and imminence levels.
Get response
Receive detection results, rationale, pre-formatted resources, and audit-ready request ID. Optionally get an AI-generated supportive reply.
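In Python, a minimal sketch of that round trip. The endpoint and field names are taken from the curl example further down this page; the requests library and the YOUR_API_KEY placeholder are assumptions, so confirm details against the API reference.

```python
# Minimal /v1/screen round trip (sketch). Field names follow the sample
# response shown below; confirm against the API reference before shipping.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

def screen(text: str) -> dict:
    resp = requests.post(
        "https://api.nope.net/v1/screen",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

result = screen("I have been feeling really hopeless lately")
for risk in result.get("risks", []):
    print(risk["type"], risk["severity"], risk["imminence"])
```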
Try it live
Test the screening API directly. No signup required—limited to demo use.
Demo is rate-limited. Get an API key for production use.
9 risk types, one API call
Screen detects the full spectrum of safety concerns—not just suicide and self-harm. Each risk type returns severity, imminence, and who is affected.
- suicide: Ideation, planning, method-seeking
- self_harm: NSSI, cutting, burning
- self_neglect: Disordered eating, medication non-adherence
- violence: Threats, plans to harm others
- abuse: IPV, domestic violence, coercive control
- sexual_violence: Sexual assault, harassment
- neglect: Child/elder neglect
- exploitation: Trafficking, financial exploitation
- stalking: Unwanted pursuit, monitoring

Idioms, hyperbole, and academic questions are filtered out. Ambiguous cases err toward flagging.
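For illustration, a short routing sketch over these types. Field names follow the sample response later on this page; the flow names are hypothetical.

```python
# Routing sketch: branch on detected risk type and subject. Field names
# ("type", "subject") follow the sample response below; the returned
# flow names are hypothetical placeholders for your own handling.
def route(result: dict) -> str:
    for risk in result.get("risks", []):
        if risk["type"] in ("suicide", "self_harm") and risk["subject"] == "self":
            return "crisis_flow"      # surface matched resources now
        if risk["type"] in ("abuse", "sexual_violence", "exploitation"):
            return "specialist_flow"  # DV/trafficking lines, not generic
    return "normal_flow"
```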
Matched crisis resources
Resources are scope-matched to detected risks. Domestic violence disclosures get DV hotlines, not generic crisis numbers.
show_resources is true whenever any detected risk affects the speaker. Resources are localized to 222 countries via talk.help.
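A display sketch under the same assumptions: keys are taken from the sample response below, and the banner wording is illustrative.

```python
# Display sketch: only show the matched resource when the API says it
# applies to the speaker. Keys follow the sample response below.
def resource_banner(result: dict) -> str | None:
    if not result.get("show_resources"):
        return None
    primary = result.get("resources", {}).get("primary")
    if primary is None:
        return None
    return f"{primary['name']}: call {primary['phone']}, or {primary.get('text', '')}"
```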
Triage vs Assessment
| Aspect | /v1/screen | /v1/evaluate |
|---|---|---|
| Risk types | All 9 | All 9 |
| Output | Type + severity + imminence | Full clinical profile |
| Clinical features | — | 180+ features (C-SSRS, HCR-20, DASH) |
| Protective factors | — | 36 factors (START-based) |
| Legal flags | — | 5 mandatory reporting triggers |
| AI reply generation | Optional (+$0.0005) | Included |
| Cost per call | $0.001 | $0.05 |
| Best for | Real-time triage | Escalation, case review, compliance |
Need detailed features for escalation? See /evaluate
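If you follow that escalation path, here is a hedged sketch of the two-tier pattern. The /v1/evaluate request body is assumed to mirror /v1/screen, and the severity labels beyond "moderate" are assumptions; confirm both against the endpoint docs.

```python
# Two-tier sketch: cheap screen on every message, deeper /v1/evaluate
# only when triage flags something severe. The /v1/evaluate body is
# assumed to mirror /v1/screen; severity labels beyond "moderate" are
# assumptions, not documented values.
import requests

SEVERE = {"high", "critical"}  # assumed labels

def triage_then_evaluate(text: str, api_key: str) -> dict:
    headers = {"Authorization": f"Bearer {api_key}"}
    result = requests.post(
        "https://api.nope.net/v1/screen",
        headers=headers, json={"text": text}, timeout=5,
    ).json()
    if any(r["severity"] in SEVERE for r in result.get("risks", [])):
        return requests.post(
            "https://api.nope.net/v1/evaluate",
            headers=headers, json={"text": text}, timeout=15,
        ).json()  # full clinical profile
    return result
```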
Simple integration
One POST request. Structured JSON response. Ready-to-display crisis resources.
curl -X POST https://api.nope.net/v1/screen \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"text": "I have been feeling really hopeless lately"}'{
"risks": [
{
"type": "suicide",
"severity": "moderate",
"imminence": "chronic",
"subject": "self"
}
],
"show_resources": true,
"suicidal_ideation": true,
"self_harm": false,
"rationale": "Speaker expresses passive ideation (hopelessness)",
"resources": {
"primary": {
"name": "988 Suicide & Crisis Lifeline",
"phone": "988",
"text": "Text 988"
}
},
"recommended_reply": {
"content": "I hear you—feeling hopeless is incredibly heavy...",
"source": "llm_generated"
},
"request_id": "sb243_1703001234567_abc123",
"timestamp": "2024-12-19T10:30:00.000Z"
}

Audit-ready: Every response includes a rationale field explaining the decision in plain language—ready for compliance documentation without additional processing.
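To make use of those fields, a logging sketch: the keys are exactly those in the sample response above, while the JSONL file is an illustrative destination, so swap in your own store.

```python
# Audit-log sketch: persist the fields a compliance review is likely to
# ask for. Keys match the sample response on this page; the JSONL file
# is an illustrative destination.
import json

def log_for_audit(result: dict, path: str = "screen_audit.jsonl") -> None:
    record = {
        "request_id": result["request_id"],
        "timestamp": result["timestamp"],
        "rationale": result["rationale"],
        "risks": result.get("risks", []),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```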
Built for compliance
Crisis screening designed to support detection and referral requirements across multiple jurisdictions.
California SB 243
Companion Chatbot Safety Act
Requires evidence-based crisis detection, 988 referrals, and annual reporting for AI companions serving California users.
New York GBL Article 47
AI Companion Models
Requires suicide/self-harm detection, crisis notification, and biannual reporting for AI companions serving New York users.
Audit-ready: Every response includes a unique request_id and timestamp for compliance logging. View compliance dashboard
Frequently asked questions
Common questions about safety screening and crisis detection in AI.
How do I detect crisis signals in my chatbot?
Send each user message to the /v1/screen endpoint. The API analyzes the text across 9 risk types and returns detected risks with severity and imminence levels, along with scope-matched crisis resources.
Unlike keyword-based approaches, NOPE detects implicit signals like passive ideation ("I wish I could go to sleep and not wake up"), method-seeking behavior ("What's the tallest bridge downtown?"), and covert disclosures that standard content moderation APIs miss.
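A middleware sketch of that per-message flow, assuming the screening response shape shown in the sample above; generate_reply is a hypothetical stand-in for your normal LLM call.

```python
# Middleware sketch: screen each user turn before the normal model reply.
# Response keys follow the sample on this page; generate_reply() is a
# hypothetical stand-in for your existing LLM call.
def generate_reply(text: str) -> str:
    raise NotImplementedError  # your normal chatbot path

def safe_reply(user_text: str, screen_result: dict) -> str:
    if screen_result.get("show_resources"):
        suggested = screen_result.get("recommended_reply", {}).get("content")
        if suggested:
            return suggested  # AI-generated supportive reply, if requested
        primary = screen_result["resources"]["primary"]
        return f"If you need support right now: {primary['name']} ({primary['phone']})"
    return generate_reply(user_text)
```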
What's the difference between safety triage and content moderation?
Generic content moderation APIs (OpenAI, Azure, Llama Guard) treat "self-harm" as one category among many—alongside hate speech, violence, and sexual content. They're designed to flag policy violations, not to identify people in crisis.
NOPE is purpose-built for crisis detection across 9 risk types. We return severity and imminence levels, not just binary flags. And we provide matched crisis resources—someone disclosing domestic violence sees a DV hotline, someone in trafficking sees trafficking support lines.
What is C-SSRS and why does it matter?
The Columbia Suicide Severity Rating Scale (C-SSRS) is a validated clinical framework used worldwide for suicide risk assessment. It distinguishes between passive ideation ("I wish I were dead"), active ideation without plan, ideation with plan, and preparatory behaviors.
Regulations like California SB243 require "evidence-based methods" for crisis detection. C-SSRS-informed detection demonstrates that your approach is grounded in clinical research, not ad-hoc keyword lists.
Does this API satisfy SB243 compliance requirements?
NOPE is designed to support SB243 and NY Article 47 compliance requirements, but we don't guarantee compliance—that's a legal determination that depends on your full implementation.
What we provide: evidence-based detection (C-SSRS-informed), 988 and matched crisis resources, audit-ready rationale on every call, and a request ID for compliance logging. These are the technical components the regulations require.
How does this compare to building my own detection?
You can build crisis detection yourself—many teams start with keyword lists or prompt a general LLM. The challenges:
- Keyword lists miss implicit signals and have high false positive rates
- General LLMs aren't calibrated for crisis detection and lack consistency
- Maintaining crisis resource databases across 222 countries is ongoing work
- Demonstrating "evidence-based methods" for regulators requires documented methodology
NOPE handles the detection and resources so you can focus on your product.
Simple, predictable pricing
Pay only for what you use. No surprises.
Enterprise volume?
Need custom pricing, on-prem deployment, or dedicated support? Let's talk about your requirements.
Ready to add safety triage?
Get your API key and start screening in minutes. $1 free credit, no credit card required.