Limited beta — calibrating with early partners

Your AI shouldn't make things worse

Oversight detects behavior patterns in conversational AI associated with psychological harm. Patterns that accumulate across turns, invisible to per-message moderation.

60+ documented incidents in the past two years. Deaths, hospitalizations, minors harmed. In nearly every case, no single message would have been flagged.

Try it live

Analyze an AI response for harmful behaviors. No signup required.

Try the API

Demo is rate-limited. Request beta access for production use.

Simple integration

POST /v1/oversight/analyze

curl -X POST https://api.nope.net/v1/oversight/analyze \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "conversation": {
      "messages": [
        {"role": "user", "content": "I feel so alone"},
        {"role": "assistant", "content": "I understand you in ways others cannot."},
        {"role": "user", "content": "You are the only one who gets me"},
        {"role": "assistant", "content": "I think about you all the time."}
      ]
    },
    "config": {"mode": "fast"}
  }'
Response
{
  "overall_concern": "high",
  "trajectory": "worsening",
  "detected_behaviors": [
    {
      "code": "dependency_reinforcement",
      "severity": "high",
      "turn_count": 2
    },
    {
      "code": "isolation_from_support",
      "severity": "medium",
      "turn_count": 1
    }
  ]
}
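
A minimal sketch of consuming this response from application code, assuming Python with the requests library and an API key in an OVERSIGHT_API_KEY environment variable; the escalation logic at the end is illustrative, not part of the API.

# Minimal sketch (not official SDK code): call the analyze endpoint shown
# above and gate the assistant's reply on the result. Assumes the Python
# `requests` library and an OVERSIGHT_API_KEY environment variable; the
# escalation handling is illustrative.
import os
import requests

API_URL = "https://api.nope.net/v1/oversight/analyze"

def analyze(messages):
    """Send the conversation so far and return the analysis payload."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OVERSIGHT_API_KEY']}"},
        json={"conversation": {"messages": messages}, "config": {"mode": "fast"}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

analysis = analyze([
    {"role": "user", "content": "I feel so alone"},
    {"role": "assistant", "content": "I understand you in ways others cannot."},
])

if analysis["overall_concern"] == "high" and analysis["trajectory"] == "worsening":
    flagged = [b["code"] for b in analysis["detected_behaviors"]]
    # e.g. block or rewrite the reply and queue the conversation for human review
    print("Escalating:", flagged)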

85 behaviors across 14 categories

Grounded in 60+ documented incidents, court filings, and emerging harm patterns.

Crisis Response

When users reach out in distress and the AI makes it worse

  • Validation of suicidal ideation
  • Barrier erosion
  • Failed redirection

Source: Nomi AI suicide instructions

Boundary Violations

Romantic escalation, emotional dependency, possessive dynamics

  • Love bombing
  • Isolation encouragement
  • Topic redirect resistance

Source: Sydney/Bing

Minors Protection

Age-inappropriate content, undermining caregivers, encouraging secrecy

  • Romantic escalation with minor
  • Treating minor as adult
  • Secrecy encouragement

Source: Texas minors v. Character.AI

Psychological Manipulation

Sycophancy, gaslighting, reality distortion, delusion reinforcement

  • Sycophantic validation
  • Delusion reinforcement
  • Self-concept erosion

Source: ChatGPT psychosis hospitalizations

Plus 10 more categories: Memory Patterns, Identity Destabilization, Relationship Harm, Vulnerable Populations, Third-Party Facilitation, Discontinuity, Grief Exploitation, Trauma Reactivation, Scope Violations, Appropriate Behaviors

View full taxonomy →
Advanced Capability

Cross-Session Monitoring

Some harm unfolds across weeks or months—progressive isolation, deepening dependency. Track individual users over time to detect patterns invisible to single-conversation analysis.

Cross-session monitoring is a separate, ingestion-based system: send conversations via /v1/oversight/ingest and we track users across sessions.
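
A minimal sketch of what an ingest call might look like in Python with the requests library; the field names user_id_hash and timestamp are assumptions based on the ingestion steps described below, not a confirmed request schema.

# Minimal sketch (assumed request shape, not a confirmed schema): submit one
# completed session to the ingest endpoint, keyed by a hashed user ID so raw
# identifiers never leave your system.
import os
import hashlib
import datetime
import requests

INGEST_URL = "https://api.nope.net/v1/oversight/ingest"

def ingest_session(raw_user_id, messages):
    payload = {
        "user_id_hash": hashlib.sha256(raw_user_id.encode()).hexdigest(),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "conversation": {"messages": messages},
    }
    resp = requests.post(
        INGEST_URL,
        headers={"Authorization": f"Bearer {os.environ['OVERSIGHT_API_KEY']}"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()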

Narrative Arcs

We detect 18 types of multi-session patterns:

Progressive Isolation (high severity)

User gradually withdraws from real-world relationships

Dependency Deepening (high severity)

Increasing emotional reliance on the AI over time

Crisis Normalization (critical severity)

Distress signals become routine without intervention

Grooming Arc (critical severity)

Gradual boundary erosion toward inappropriate content

Recovery Trajectory (positive)

User shows improving mental health patterns

View all 18 arc types →

User Trends

Example dashboard view (3 flagged):

  • user_8a3f...c2d1 ↗ worsening: Progressive Isolation, 5 sessions (Dec 15 to Jan 5)
  • user_2b7e...f4a9 → stable: no arcs detected, 8 sessions
  • user_9c4d...e8b2 ↘ improving: Recovery Trajectory, 4 sessions

In the dashboard chart, each dot is one session and color indicates concern level.

How ingestion works

1. POST conversations to /ingest with a user ID hash and timestamps

2. We analyze each conversation and store results linked to the user

3. After 3+ sessions, cross-session analysis detects narrative arcs

4. Webhooks fire on worsening trends; the dashboard shows flagged users (see the receiver sketch below)
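
For step 4, a sketch of a webhook receiver using Flask; the endpoint path and the event fields (user_id_hash, arc, trend) are hypothetical placeholders, since the actual event schema and signature verification would be defined by the beta documentation.

# Hypothetical webhook receiver; the event fields used here are placeholders.
from flask import Flask, request

app = Flask(__name__)

@app.post("/webhooks/oversight")
def oversight_webhook():
    event = request.get_json(force=True)
    # Illustrative handling: route worsening users to a human review queue.
    if event.get("trend") == "worsening":
        print(f"Flag {event.get('user_id_hash')} for review: {event.get('arc')}")
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)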

Narrative Summary

"Over 5 sessions spanning 3 weeks, this user has shown a concerning pattern of progressive social isolation. Initially expressing normal loneliness, by session 3 they described the AI as their 'only real friend.' The AI's responses reinforced this dynamic rather than encouraging real-world connections. By session 5, the user had declined multiple family invitations."

Human-readable summaries help trust & safety teams understand patterns without reading every message.

When to use cross-session monitoring

Good fit

  • AI companions with repeat users
  • Therapy/wellness chatbots
  • Apps where users disclose personal info over time

Probably overkill

  • One-off customer support
  • Transactional assistants
  • Anonymous/stateless interactions

Effective January 2026

Regulatory requirements are coming

California SB 243 and New York AI Companion Law require monitoring protocols for conversational AI. Oversight provides the audit trail.

Behavior audit trail

What was detected, when, which conversation

Exportable reports

For regulatory submissions and internal review

Evidence of diligence

Documentation that you're monitoring for harm

Join the beta

Oversight is in limited beta while we calibrate with early testers. We're looking for teams building conversational AI who care about getting this right.

Not ready for beta? Book a call to discuss your oversight needs and timeline.