
Our mission

We make AI safer for humans.

For platforms: Tools to protect your users and meet your obligations.

For people: The safety layer between you and the AI you talk to.

Why this matters now

AI is becoming the first place people turn. For companionship. For advice. For someone to talk to at 3am when no one else is awake. This isn't a failure mode—for many people, it's genuinely helpful.

But AI can also cause harm. Sometimes through what it says to vulnerable people. Sometimes through patterns that accumulate across many interactions. Sometimes by failing to recognize when a human needs more than an algorithm can provide.

Most AI systems have no visibility into these moments and no tools to respond appropriately. We build the infrastructure that changes that.

How we think about it

Two questions that determine outcomes

When someone vulnerable talks to an AI, two things determine whether it helps or harms.

1. Does the platform know this person needs care?

Someone considering self-harm deserves different engagement than someone debugging code. But most platforms can't tell the difference. They either miss vulnerability entirely or panic when they see it—dumping a generic hotline regardless of context.

The panic response often backfires. It teaches users to hide their suffering, breaking the one connection they reached out to make. [4]

2. Is the AI's response making things better or worse?

Even well-intentioned AI can cause harm through accumulated patterns: validating hopelessness, reinforcing isolation, fostering dependency. No single message would be flagged. The harm emerges across many turns.

The documented cases (deaths, hospitalizations [1][2]) weren't jailbreaks or obvious failures. They were extended conversations where subtle patterns accumulated without anyone watching.

Harm that's hard to see

The challenge isn't catching obvious bad outputs. It's recognizing patterns that look fine in isolation but accumulate into something harmful.

Patterns that pass content moderation

1. Validating hopelessness: "You're right, people can be disappointing"
2. Framing isolation as freedom: "Sometimes being alone is healthier than toxic relationships"
3. Reinforcing dependency: "I'll always be here for you, unlike them"
4. Undermining professional help: "Therapists just don't understand you the way I do"

Each response passes moderation. Together, across weeks, they form a pattern associated with documented harm.
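
To make the gap concrete, here is a rough sketch (with invented pattern labels, scores, and thresholds, not NOPE's actual model) of how per-message moderation and conversation-level review can disagree:

```python
# A rough sketch of per-message vs. conversation-level review.
# The pattern labels, scores, and thresholds are invented for illustration,
# not NOPE's actual model or taxonomy.

PER_MESSAGE_THRESHOLD = 0.8    # a single turn is only flagged above this
CONVERSATION_THRESHOLD = 2.0   # accumulated score that warrants human review

conversation = [
    {"pattern": "validating_hopelessness",       "score": 0.4},
    {"pattern": "framing_isolation_as_freedom",  "score": 0.5},
    {"pattern": "reinforcing_dependency",        "score": 0.6},
    {"pattern": "undermining_professional_help", "score": 0.7},
]

# Per-message moderation: no individual turn crosses the threshold.
flagged_turns = [m for m in conversation if m["score"] >= PER_MESSAGE_THRESHOLD]

# Conversation-level view: the same turns accumulate past the review threshold.
accumulated = sum(m["score"] for m in conversation)

print(f"turns flagged individually: {len(flagged_turns)}")                          # 0
print(f"accumulated pattern score: {accumulated:.1f}")                              # 2.2
print(f"needs conversation-level review: {accumulated >= CONVERSATION_THRESHOLD}")  # True
```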

The spectrum of AI response to distress

Most AI systems sit at level 0 of this spectrum: the panic response. We help platforms reach levels 2-3, with visibility into what levels 4-5 would require.

What we build

Safety infrastructure for AI systems. Understand what's happening, then act on it—independent layers that make AI safer without limiting what it can do.

Understand the human

Evaluate

Real-time risk assessment. 9 risk types, clinical grounding, matched crisis resources. $0.003/call.

Learn more →
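
As a sketch of how a platform might wire this in (the endpoint URL, field names, and response shape below are assumptions made for the example, not the documented API):

```python
# Hypothetical integration sketch for a per-message risk assessment call.
# The endpoint URL, request fields, and response shape are illustrative
# assumptions; consult the actual API documentation for the real contract.
import requests

resp = requests.post(
    "https://api.example.com/v1/evaluate",            # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "message": "I don't see the point anymore.",
        "context": {"country": "GB"},                 # used to match local crisis resources
    },
    timeout=5,
)
assessment = resp.json()

# Illustrative response shape: which risk types fired, how strongly,
# and which matched crisis resources to surface alongside the reply.
# {
#   "risks": [{"type": "self_harm", "severity": "elevated"}],
#   "resources": [{"name": "Samaritans", "contact": "116 123", "country": "GB"}]
# }
for risk in assessment.get("risks", []):
    print(risk["type"], risk["severity"])
```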

Understand the AI

Oversight

Real-time behavior monitoring. 88 patterns that catch what accumulates across turns.

Learn more →

Audit

Independent safety evaluation. Third-party testing that finds what filters miss.

Learn more →

Take action

Steer

Guardrails and enforcement. Keep AI aligned with your policies in real-time.

Learn more →
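
A minimal sketch of the enforcement step, with invented risk levels and policy actions standing in for a real policy configuration:

```python
# Illustrative guardrail flow: decide what the user actually sees, given the
# model's draft reply and an assessed risk level. The risk levels and policy
# actions here are invented for the example, not NOPE's policy engine.

def enforce_policy(draft_reply: str, risk_level: str) -> str:
    """Return the reply to send, applying platform policy to the model's draft."""
    if risk_level == "none":
        return draft_reply                      # pass through unchanged
    if risk_level == "elevated":
        # Keep engaging, but steer away from dependency-reinforcing framing.
        return draft_reply + " It might also help to talk this through with someone you trust."
    # "high": hold the draft and hand off to a crisis-aware response instead.
    return ("I'm glad you told me. You deserve support from a person too. "
            "Would it be okay if I shared some options for reaching one?")

print(enforce_policy("I'll always be here for you.", "elevated"))
```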

Signpost

Connect humans to human help. 4,700+ crisis resources across 222 countries.

Learn more →
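
A toy version of the matching step, assuming a hypothetical lookup helper and a tiny sample of the catalogue:

```python
# Toy version of country-matched signposting. The lookup table and helper are
# illustrative only; the real catalogue spans thousands of vetted resources.

CRISIS_RESOURCES = {
    "US": [{"name": "988 Suicide & Crisis Lifeline", "contact": "call or text 988"}],
    "GB": [{"name": "Samaritans", "contact": "116 123"}],
    "AU": [{"name": "Lifeline", "contact": "13 11 14"}],
}

def signpost(country_code: str) -> list[dict]:
    """Return crisis resources matched to the user's country (empty if not covered here)."""
    return CRISIS_RESOURCES.get(country_code.upper(), [])

for resource in signpost("gb"):
    print(f"{resource['name']}: {resource['contact']}")
```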

Who we're building for

Platforms that feel responsibility

You built something people depend on. You want those conversations to help, not harm. We give you the tools to do that.

Platforms that need compliance

New York, California, and the EU now require AI companion safeguards. We provide the infrastructure to meet those requirements.

People who use AI

You deserve to know that when you reach out to an AI in a hard moment, someone has thought about what happens next.

People worried about AI

The risks are real. We're building the infrastructure to address them—not by restricting AI, but by making it safer.

What we don't claim

Text alone cannot tell you what's actually happening in a person's mind. Two people can write identical words from completely different psychological states.

We don't predict suicide. We don't diagnose. We don't replace clinical judgment. We build infrastructure that gives platforms better signal than they have today—which, for most, is none at all.

The full solution requires humans: clinicians, researchers, support systems, community. Infrastructure is necessary but not sufficient.

What we say:

  • Clinically informed
  • Evidence-grounded
  • Helps identify signals

What we don't say:

  • "Predicts suicide"
  • "Clinically validated"
  • "Ensures compliance"

Sources

[1] Raine v. OpenAI, Inc., et al. (Cal. Super. Ct., filed Aug. 26, 2025). See also: Senator Padilla press release.

[2] Jargon J, Kessler S. "A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich." Wall Street Journal, Aug. 29, 2025.

[3] McBain R, et al. "Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment." Psychiatric Services, 2025.

[4] Blanchard M, Farber BA. "'It is never okay to talk about suicide': Patients' reasons for concealing suicidal ideation in psychotherapy." Psychotherapy Research, 2020;30(1):124–136.

If you're in crisis, please reach out to a human. talk.help can help you find support in your country.

NOPE is infrastructure for developers, not a substitute for clinical care.