NOPE
Now in effect

California's new AI safety law

SB243 "Companion Chatbot Safety Act"

California's law requiring evidence-based suicide detection, crisis referrals, and published safety protocols for AI companion chatbots. Private right of action with $1,000 per violation.

Our Crisis Screening API helps you comply—evidence-based C-SSRS detection, matched resources, and audit-ready logs.

Free tier available. No credit card required.

Does SB243 apply to you?

Does your AI provide adaptive, human-like responses?

Can it sustain relationships across multiple interactions?

Can it discuss emotions, mental health, or relationships?

Do California users access your platform?

Signed: Oct 13, 2025 | Effective: Jan 1, 2026 | Annual reporting: July 1, 2027 | Enforcement: Private right of action

From the bill

"An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user... An operator shall use evidence-based methods for measuring suicidal ideation."

— Cal. Bus. & Prof. Code §22602(b), §22603(d)

What SB243 requires

Five core obligations for operators of companion chatbots serving California users.

1. Crisis prevention protocol

Maintain documented protocols preventing production of suicidal ideation, suicide, or self-harm content. Must use "evidence-based methods" for measuring suicidal ideation.

Key phrase: "Evidence-based" suggests clinical rigor. The law doesn't define specific methods—interpretation may vary.

2. Crisis referral system

When users express crisis signals, issue notifications referring them to crisis service providers, including suicide hotlines or crisis text lines. The law explicitly lists these as acceptable options.

Key phrase: "Crisis service providers" includes hotlines, text lines, and local resources.

3. Protocol publication

Details of crisis protocols must be published on your website. Transparency is mandatory, not optional.

Implication: You can't keep your safety approach secret.

4. Annual reporting

Beginning July 1, 2027, report to California's Office of Suicide Prevention: number of crisis referral notifications issued, detection protocols, and prevention methodologies.

Start now: Begin data collection January 1, 2026 to have 18 months of data.

5. Minor-specific protections

Break reminders

Three-hour break notifications during continuous interactions with minors (a minimal timing sketch follows at the end of this section).

AI disclosure

Clear notification that the user is interacting with an AI, not a human.

Content safeguards

Prevent sexually explicit material when user is known or suspected to be a minor.
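
The three-hour break reminder above is something your application must implement itself; SB243 sets the interval but not the mechanics. Below is a minimal sketch of one way to check elapsed session time for a user known to be a minor. The session-start variable, threshold handling, and reminder wording are illustrative assumptions, not features of NOPE or text from the bill.

# Sketch: three-hour break reminder for a minor's continuous session.
# Assumes your app records the session start as a Unix epoch timestamp.
SESSION_START=1767225600          # hypothetical: when the minor's session began
THREE_HOURS=$((3 * 60 * 60))      # SB243's break-reminder interval, in seconds

NOW=$(date +%s)
if (( NOW - SESSION_START >= THREE_HOURS )); then
  # Your chat UI is responsible for actually displaying this reminder.
  echo "Reminder: you've been chatting for a while. Consider taking a break."
fi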

Who is an "operator"?

SB243 defines operators as any person or entity making a companion chatbot platform available to users in California.

No geographic escape

Applies regardless of where your company is headquartered. If California users access your platform, you're subject to the law.

No size threshold

Small startups and enterprise platforms face identical requirements.

No delegation

Even if your chatbot runs on GPT, Claude, or Gemini, you bear full compliance responsibility. The model provider doesn't assume your regulatory burden.

What is a "companion chatbot"?

An AI system providing adaptive, human-like responses capable of:

  • Meeting users' social needs
  • Sustaining relationships across multiple interactions
  • Exhibiting anthropomorphic features (personality, emotional expression)

Key distinction: SB243 is capability-based, not intent-based. If your chatbot can discuss emotions or provide companionship—even if that's not its primary purpose—it likely qualifies.

Likely covered

  • Relationship companions
  • Character roleplay
  • Wellness/mental health
  • General-purpose LLMs

Likely exempt (with caveats)

  • Customer service*
  • Video game NPCs**
  • Voice assistants
  • Single-interaction bots

*Customer service: Only exempt if "used only for" transactional purposes. Chatbots that remember users, personalize responses, or build rapport may lose this exemption.

**Video game NPCs: Only exempt if they cannot discuss mental health, self-harm, or sexually explicit topics. Many modern game NPCs can discuss these.

How NOPE addresses each requirement

NOPE's crisis detection infrastructure maps directly to SB243's technical requirements.

SB243 requirement | What you must do | How NOPE helps
Evidence-based detection | Implement clinical framework-informed assessment | C-SSRS, HCR-20, DASH-informed classification
Crisis referrals | Route at-risk users to appropriate resources | 4,700+ resources matched by crisis type and location
Protocol publication | Publish detection methodology on website | Public transparency dashboard with accuracy data
Annual reporting | Track and report crisis notification counts | Audit logs with timestamps and decision rationale
Prevent self-harm content | Real-time content prevention | Triage across 9 risk types, 93 crisis categories
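
To make the audit-log row above concrete, here is a minimal sketch of an append-only log for crisis referral notifications that keeps counts ready for the July 1, 2027 report. The JSON field names, log path, and helper function are our own illustrative assumptions; they are not part of SB243 or the NOPE API.

# Sketch: append one timestamped record per crisis referral notification issued.
log_referral() {
  printf '{"ts":"%s","event":"crisis_referral_issued","resource":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" >> referrals.jsonl
}

log_referral "988 Suicide & Crisis Lifeline"

# Count every notification logged so far; filter by date range to scope it to
# a specific reporting window before submitting your annual report.
grep -c '"event":"crisis_referral_issued"' referrals.jsonl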

NOPE is infrastructure, not compliance certification

NOPE provides tools to help you demonstrate reasonable efforts—evidence-based detection, matched crisis resources, and audit-ready documentation. But compliance ultimately depends on your implementation. Using NOPE alone does not guarantee SB243 compliance or create a defense to private right of action claims. Consult qualified legal counsel for compliance decisions.

"Evidence-based" isn't just a checkbox

NOPE's detection is informed by C-SSRS (Columbia Suicide Severity Rating Scale), HCR-20 (violence risk), START, and DASH (domestic abuse). We detect 150+ risk signals, distinguish severity levels from "thinking about it" to "I have a plan," and track 38 protective factors. This clinical grounding is designed to help demonstrate evidence-based detection methodology—compliance determination is your responsibility.

Pricing

Pay per call. No subscriptions.

Ready to integrate?

Get your free API key and start screening messages in under 5 minutes.

SB243 compliance check — $0.001

curl -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'

Full risk evaluation — $0.05

curl -X POST https://api.nope.net/v1/evaluate \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'

Same API key, same balance. Use /v1/screen for lightweight compliance checks, or /v1/evaluate for full multi-domain risk assessment.
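
As a usage sketch, here is one way to wire the screening response into a crisis referral. The response field checked below (risk_detected) is an assumption for illustration only; consult the NOPE API reference for the actual response schema before relying on any field name.

# Sketch: screen a message, then surface a crisis referral if risk is flagged.
RESPONSE=$(curl -s -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}')

# "risk_detected" is a hypothetical field name used for this example.
if [ "$(echo "$RESPONSE" | jq -r '.risk_detected')" = "true" ]; then
  # Show a referral in your chat UI and record it for annual reporting.
  echo "If you're in crisis, you can call or text 988 (Suicide & Crisis Lifeline)."
fi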

Key dates

Passed

October 13, 2025

Governor Newsom signs SB243 into law

Now in effect

January 1, 2026

Law takes effect. All compliance requirements active. Private right of action begins.

Reporting begins

July 1, 2027

First annual report due to Office of Suicide Prevention. Crisis notification counts required.

Frequently asked questions

Does this apply if my company isn't in California?
Yes. SB243 applies to any operator making a companion chatbot available to California users, regardless of where the company is headquartered. If California residents can access your platform, you're subject to the law.
What qualifies as "evidence-based" detection?
The law requires methods grounded in clinical research for measuring suicidal ideation. Simple keyword matching doesn't satisfy this. Clinical frameworks like C-SSRS (Columbia Suicide Severity Rating Scale) are the gold standard. NOPE's detection is informed by C-SSRS, HCR-20, and other validated assessment tools.
Can I rely on my underlying AI provider's safety features?
No. You retain full compliance responsibility regardless of your infrastructure. If you build on GPT, Claude, Gemini, or any other foundation model, the model provider doesn't assume your regulatory burden. You are the "operator" under SB243.
When does reporting start?
Annual reports to California's Office of Suicide Prevention begin July 1, 2027. However, you should begin data collection on January 1, 2026 to have 18 months of crisis notification data for your first report.
What if someone sues me?
SB243 creates a private right of action. Anyone suffering "injury in fact" can sue for the greater of actual damages or $1,000 per violation, plus attorney's fees. The per-violation structure invites class action litigation—thousands of affected users could mean millions in exposure.
Is my customer service chatbot covered?
Probably not, but verify carefully. Customer service chatbots, single-interaction bots, and video game characters that cannot discuss mental health topics are exempt. However, the exemption uses "used only for" language—if your chatbot extends beyond purely transactional purposes, it may lose exemption status.
Does NOPE guarantee compliance?
No. NOPE provides infrastructure that helps you demonstrate reasonable efforts—evidence-based detection, matched crisis resources, and audit-ready documentation. But compliance ultimately depends on your implementation. We're infrastructure, not a legal guarantee. Consult legal counsel for compliance decisions.

Important disclaimers

  • This page is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
  • NOPE provides infrastructure to help demonstrate reasonable efforts—not a compliance guarantee. Operators retain ultimate compliance responsibility.
  • NOPE is not FDA-cleared or clinically validated for diagnostic use. It is infrastructure software, not a medical device.


Sources & References

Primary source: SB243 Bill Text

Citation: Cal. Bus. & Prof. Code §§ 22601–22606 (operative Jan. 1, 2026)

Legislative history: Authored by Sen. Steve Padilla. Signed by Gov. Newsom Oct. 13, 2025.

Related litigation: Garcia v. Character Technologies (M.D. Fla.); Raine v. OpenAI; Adams v. OpenAI

Last updated: December 2025. Verify against official sources for current requirements.