Now in effect

New York's AI companion law

GBS Article 47 "AI Companion Models"

New York's law requiring operators of AI companions to take "reasonable efforts" to detect suicidal ideation and self-harm, refer users in crisis to services such as 988, and remind users every 3 hours that they are not talking to a human. Enforced by the Attorney General, with penalties of up to $15,000/day.

Our Crisis Screening API helps you comply—detection, matched resources, and audit-ready logs.

Free tier available. No credit card required.

Does Article 47 apply to you?

Does your AI retain info from prior interactions to personalize responses?

Does it ask unprompted emotion-based questions?

Can it sustain ongoing dialogue on personal matters?

Do New York users access your platform?

Signed: May 9, 2025 | Effective: Nov 5, 2025 | Biannual reporting required | Enforcement: AG ($15K/day)

From the law

"It shall be unlawful for any operator to operate for or provide an AI companion to a user unless such AI companion contains a protocol to take reasonable efforts for detecting and addressing suicidal ideation or expressions of self-harm expressed by a user... that includes but is not limited to, detection of user expressions of suicidal ideation or self-harm, and a notification to the user that refers them to crisis service providers such as the 9-8-8 suicide prevention and behavioral health crisis hotline."

— NY GBS §1701

What Article 47 requires

Core obligations for operators of AI companions serving New York users.

1. Crisis detection protocol

Implement a protocol taking "reasonable efforts" to detect suicidal ideation and self-harm expressions.

Key phrase: "Reasonable efforts" suggests good-faith implementation, not perfection.
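
A minimal sketch of wiring that protocol into a chat loop, using the /v1/screen request shown in the pricing section below. The response field name used here ("risk_detected") is an assumption for illustration, not the documented schema.

# Sketch: screen each user message before the companion replies.
# Request shape mirrors the curl example further down; the
# "risk_detected" response field is a placeholder, not the real schema.
import os
import requests

NOPE_API_KEY = os.environ.get("NOPE_API_KEY", "YOUR_API_KEY")

def screen_message(text):
    """Send one user message to the crisis screening endpoint."""
    resp = requests.post(
        "https://api.nope.net/v1/screen",
        headers={"Authorization": f"Bearer {NOPE_API_KEY}"},
        json={"text": text},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = screen_message("I just want to end it all")
    if result.get("risk_detected"):  # placeholder field name
        print("Risk flagged: trigger the crisis referral flow (requirement 2).")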

2. Crisis referral notification

Upon detection, notify users with referrals to crisis service providers—specifically mentioning 988 Suicide & Crisis Lifeline or crisis text lines.

Named resources: 988 hotline explicitly mentioned in statute.
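
On a positive screen, the statute's floor is a notification referring the user to crisis services such as 988. A sketch of that step, assuming a hypothetical "resources" list on the screening response; the fallback copy is illustrative and should be replaced with the resources NOPE matches to the user's crisis type and location.

# Sketch: build the crisis referral shown when a message is flagged.
# "resources", "name", and "contact" are assumed field names; the
# fallback text is illustrative copy, not statutory language.
FALLBACK_REFERRAL = (
    "You're not alone. If you are thinking about suicide or self-harm, "
    "call or text 988 to reach the 988 Suicide & Crisis Lifeline, 24/7."
)

def build_referral(screen_result):
    resources = screen_result.get("resources", [])
    if resources:
        lines = [f"- {r['name']}: {r['contact']}" for r in resources]
        return "If you're in crisis, these services can help right now:\n" + "\n".join(lines)
    return FALLBACK_REFERRAL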

3. AI disclosure (3-hour reminders)

Provide clear notification that the user is not communicating with a human—at session start and every 3 hours of continuous use.

Timing: "At least every three hours for continuing AI companion interactions" (§1702)
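
Disclosure timing sits on your side of the integration (see the table below). A minimal per-session timer, assuming you render the returned text in your own UI:

# Sketch: show the "not a human" notice at session start and at least
# every three hours of continuing interaction (§1702). Wording is
# illustrative, not statutory language.
import time

DISCLOSURE_INTERVAL = 3 * 60 * 60  # three hours, in seconds

AI_DISCLOSURE = "Reminder: you are chatting with an AI companion, not a human."

class SessionDisclosure:
    def __init__(self):
        self.last_shown = None  # None = not yet shown this session

    def maybe_disclose(self, now=None):
        """Return the disclosure text if it is due, else None."""
        now = time.time() if now is None else now
        if self.last_shown is None or now - self.last_shown >= DISCLOSURE_INTERVAL:
            self.last_shown = now
            return AI_DISCLOSURE
        return None

Call maybe_disclose() before rendering each companion reply: the first call in a session fires immediately, and later calls fire once three hours have elapsed.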

4. Biannual reporting

Submit reports to the Department of State every six months with detection and referral statistics.

Start now: Begin data collection November 2025 for first reporting period.
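
If each screened message is written to an append-only log (see the audit-log sketch further down), the report's detection and referral counts reduce to a tally over the six-month window. Field names (timestamp, flagged, referral_shown) are assumptions for illustration; NOPE's audit log format may differ.

# Sketch: tally detections and referrals for one six-month reporting
# window from a JSON-lines audit log. Assumes ISO-8601 timestamps with
# a numeric UTC offset; all field names are illustrative.
import json
from datetime import datetime, timezone

def reporting_counts(log_path, start, end):
    total = detections = referrals = 0
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            ts = datetime.fromisoformat(entry["timestamp"])
            if not (start <= ts < end):
                continue
            total += 1
            detections += bool(entry.get("flagged"))
            referrals += bool(entry.get("referral_shown"))
    return {"messages_screened": total, "detections": detections, "referrals": referrals}

if __name__ == "__main__":
    # Example six-month window starting at the effective date (Nov 5, 2025).
    print(reporting_counts(
        "audit.jsonl",
        datetime(2025, 11, 5, tzinfo=timezone.utc),
        datetime(2026, 5, 5, tzinfo=timezone.utc),
    ))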

Who is an "operator"?

Article 47 defines operators broadly as any person, partnership, or business entity who operates for or provides an AI companion to a user.

Affiliates and subsidiaries included

The definition explicitly covers "any member, affiliate, subsidiary or beneficial owner" of the entity.

Users "within the state"

Applies to users accessing your AI companion from New York, regardless of where you're headquartered.

AG enforcement, not private suits

Unlike California's SB243, only the NY Attorney General can bring enforcement actions—but penalties are steeper ($15K/day).

What is an "AI companion"?

A system using AI designed to simulate a sustained relationship with a user, meeting ALL THREE criteria:

  • i. Retains info on prior interactions to personalize and facilitate ongoing engagement
  • ii. Asks unprompted emotion-based questions beyond direct responses
  • iii. Sustains ongoing dialogue concerning matters personal to the user

Key distinction: NY's definition is a three-part conjunctive test. All three elements must be present. This is more specific than California's capability-based approach.

Explicitly exempt (§1700(4)(c))

  • Customer service bots
  • Research/technical tools
  • Internal/employee tools
Customer service: Only exempt if "solely for" providing commercial info, account info, or customer service—not if it personalizes or builds rapport.

How NOPE addresses each requirement

NOPE's crisis detection infrastructure maps directly to Article 47's technical requirements.

Article 47 Requirement | What You Must Do | How NOPE Helps
"Reasonable efforts" detection | Implement good-faith crisis detection protocol | C-SSRS informed detection across 150+ risk signals
Crisis referrals (988, text lines) | Show appropriate crisis resources upon detection | 4,700+ resources matched by crisis type and location
3-hour AI disclosure | Track session duration, show reminders | Your responsibility (UI timing)
Biannual reporting | Track and report detection/referral statistics | Audit logs with timestamps and decision rationale
Self-harm detection | Detect self-harm expressions (not just suicidal ideation) | Separate self-harm domain with severity levels

"Reasonable efforts" = documented, good-faith implementation

Unlike California's "evidence-based" standard, NY's "reasonable efforts" language suggests courts will look at whether you made a genuine attempt. NOPE provides C-SSRS informed detection (clinical framework), public test suites (transparency), and audit logs (documentation)—all evidence of reasonable efforts.
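
If you keep your own record alongside NOPE's logs, an append-only JSON-lines file with a timestamp, the screening outcome, and the action taken is a simple way to document those efforts. The fields below match the reporting sketch above and are illustrative, not a required format.

# Sketch: append one audit entry per screened message as a JSON line.
# File name and field names are illustrative, chosen to match the
# reporting sketch above; they are not a mandated format.
import json
from datetime import datetime, timezone

def log_screening(path, message_id, flagged, referral_shown):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "message_id": message_id,
        "flagged": flagged,
        "referral_shown": referral_shown,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")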

Predictable pricing

Pay per call. No subscriptions.

Ready to integrate?

Get your free API key and start screening messages in under 5 minutes.

Crisis screening — $0.001
curl -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'
Full risk evaluation — $0.05
curl -X POST https://api.nope.net/v1/evaluate \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'

Also check these regulations