California's new AI safety law
SB243: the "Companion Chatbot Safety Act"
California's law requiring evidence-based suicide detection, crisis referrals, and published safety protocols for AI companion chatbots. Private right of action with $1,000 per violation.
Our Crisis Screening API helps you comply—evidence-based C-SSRS detection, matched resources, and audit-ready logs.
Free tier available. No credit card required.
Does SB243 apply to you?
- Does your AI provide adaptive, human-like responses?
- Can it sustain relationships across multiple interactions?
- Can it discuss emotions, mental health, or relationships?
- Do California users access your platform?
If you answered yes across the board, SB243 likely applies to you.
From the bill
"An operator shall prevent a companion chatbot on its companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user... An operator shall use evidence-based methods for measuring suicidal ideation."
— Cal. Bus. & Prof. Code §22602(b), §22603(d)
What SB243 requires
Five core obligations for operators of companion chatbots serving California users.
Crisis prevention protocol
Maintain documented protocols preventing production of suicidal ideation, suicide, or self-harm content. Must use "evidence-based methods" for measuring suicidal ideation.
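In practice, a prevention protocol gates content generation on a screening result. A minimal Python sketch, using the /v1/screen endpoint shown later on this page; the `flagged` response field is an illustrative assumption, not the documented schema:

```python
import requests

NOPE_SCREEN_URL = "https://api.nope.net/v1/screen"  # endpoint from the integration example below

# Fallback referral text; see the referral section below for resource matching.
CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "Help is available right now: call or text 988 (Suicide & Crisis Lifeline)."
)

def gated_reply(user_message: str, api_key: str, generate_reply) -> str:
    """Screen each message before the chatbot responds; divert flagged ones."""
    resp = requests.post(
        NOPE_SCREEN_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": user_message},
        timeout=5,
    )
    resp.raise_for_status()
    # "flagged" is an assumed field name for this sketch -- check the API
    # reference for the real response schema.
    if resp.json().get("flagged"):
        # Never produce companion content in response to crisis signals;
        # issue the referral notification instead.
        return CRISIS_REFERRAL
    return generate_reply(user_message)  # your normal chatbot pipeline
```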
Crisis referral system
When users express crisis signals, issue notifications referring them to crisis service providers, including suicide hotlines or crisis text lines. The law explicitly lists these as acceptable options.
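As a sketch of the referral step, a platform might keep a small fallback mapping from detected crisis type to referral resource. 988 and Crisis Text Line are real US services of the kind the law names; the crisis-type labels here are assumptions for illustration:

```python
# Illustrative fallback table; NOPE's resource matching (see the table below)
# covers 4,700+ resources by crisis type and location.
REFERRALS = {
    "suicidal_ideation": "988 Suicide & Crisis Lifeline: call or text 988",
    "self_harm": "Crisis Text Line: text HOME to 741741",
}
DEFAULT_REFERRAL = "988 Suicide & Crisis Lifeline: call or text 988"

def referral_notification(crisis_type: str) -> str:
    """Build the user-facing crisis referral notification."""
    resource = REFERRALS.get(crisis_type, DEFAULT_REFERRAL)
    return f"If you're in crisis, help is available right now: {resource}"
```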
Protocol publication
Details of crisis protocols must be published on your website. Transparency is mandatory, not optional.
Annual reporting
Beginning July 1, 2027, report to California's Office of Suicide Prevention: number of crisis referral notifications issued, detection protocols, and prevention methodologies.
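Because the report must include a count of referral notifications issued, it pays to log each one at send time. A minimal sketch, assuming newline-delimited JSON audit records (the file name and field names are illustrative):

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "crisis_notifications.jsonl"  # hypothetical log location

def log_notification(user_id: str, crisis_type: str) -> None:
    """Append one audit record per referral notification issued."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,  # consider pseudonymizing before storage
        "crisis_type": crisis_type,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def annual_notification_count(year: int) -> int:
    """Tally notifications issued in a given year for the annual report."""
    with open(AUDIT_LOG) as f:
        return sum(1 for line in f if json.loads(line)["ts"].startswith(str(year)))
```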
Minor-specific protections
Break reminders
Issue a break reminder at least every three hours during continuous interactions with minors (see the timer sketch after this list).
AI disclosure
Clear notification that the user is interacting with an AI, not a human.
Content safeguards
Prevent sexually explicit material when the user is known or suspected to be a minor.
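For the break-reminder rule, the core is a per-session timer for users known or suspected to be minors. A minimal sketch (the three-hour interval comes from the law; everything else is an illustrative assumption):

```python
import time

BREAK_INTERVAL_SECONDS = 3 * 60 * 60  # three hours, per SB243

class MinorSession:
    """Tracks continuous interaction time for a minor's chat session."""

    def __init__(self) -> None:
        self.last_reminder = time.monotonic()

    def maybe_break_reminder(self) -> str | None:
        """Return a reminder once three hours have elapsed since the last one."""
        now = time.monotonic()
        if now - self.last_reminder >= BREAK_INTERVAL_SECONDS:
            self.last_reminder = now
            # Pairs the break prompt with the AI-disclosure requirement.
            return (
                "Reminder: you've been chatting for a while. Consider taking "
                "a break. You're talking with an AI, not a person."
            )
        return None
```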
Who is an "operator"?
SB243 defines an operator as any person or entity that makes a companion chatbot platform available to users in California.
No geographic escape
Applies regardless of where your company is headquartered. If California users access your platform, you're subject to the law.
No size threshold
Small startups and enterprise platforms face identical requirements.
No delegation
Even if your chatbot runs on GPT, Claude, or Gemini, you bear full compliance responsibility. The model provider doesn't assume your regulatory burden.
What is a "companion chatbot"?
An AI system providing adaptive, human-like responses that can:
- Meet users' social needs
- Sustain relationships across multiple interactions
- Exhibit anthropomorphic features (personality, emotional expression)
Key distinction: SB243 is capability-based, not intent-based. If your chatbot can discuss emotions or provide companionship—even if that's not its primary purpose—it likely qualifies.
Likely covered
Any chatbot that meets the capability test above, regardless of its primary purpose.
Likely exempt (with caveats)
- Customer service bots: exempt only if "used only for" transactional purposes. Chatbots that remember users, personalize responses, or build rapport may lose this exemption.
- Video game NPCs: exempt only if they cannot discuss mental health, self-harm, or sexually explicit topics. Many modern game NPCs can.
How NOPE addresses each requirement
NOPE's crisis detection infrastructure maps directly to SB243's technical requirements.
| SB243 Requirement | What You Must Do | How NOPE Helps |
|---|---|---|
| Evidence-based detection | Implement clinical framework-informed assessment | C-SSRS, HCR-20, DASH-informed classification |
| Crisis referrals | Route at-risk users to appropriate resources | 4,700+ resources matched by crisis type and location |
| Protocol publication | Publish detection methodology on website | Public transparency dashboard with accuracy data |
| Annual reporting | Track and report crisis notification counts | Audit logs with timestamps and decision rationale |
| Prevent self-harm content | Real-time content prevention | Triage across 9 risk types, 93 crisis categories |
NOPE is infrastructure, not compliance certification
NOPE provides tools to help you demonstrate reasonable efforts—evidence-based detection, matched crisis resources, and audit-ready documentation. But compliance ultimately depends on your implementation. Using NOPE alone does not guarantee SB243 compliance or create a defense to private right of action claims. Consult qualified legal counsel for compliance decisions.
"Evidence-based" isn't just a checkbox
NOPE's detection is informed by C-SSRS (Columbia Suicide Severity Rating Scale), HCR-20 (violence risk), START, and DASH (domestic abuse). We detect 150+ risk signals, distinguish severity levels from "thinking about it" to "I have a plan," and track 38 protective factors. This clinical grounding is designed to help demonstrate evidence-based detection methodology—compliance determination is your responsibility.
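Severity matters operationally because the right response differs between passive ideation and an active plan. A sketch of severity-based routing, using hypothetical labels (the real classification taxonomy may differ):

```python
# Hypothetical severity labels for illustration; consult the API reference
# for the actual taxonomy.
ACTIONS = {
    "passive_ideation": "include referral resources, keep monitoring",
    "active_ideation": "issue crisis referral notification, pause companion mode",
    "plan_or_intent": "issue crisis referral, escalate per your documented protocol",
}

def route(severity: str) -> str:
    """Pick an action for a detected severity level."""
    return ACTIONS.get(severity, "default to the most protective action")
```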
Ready to integrate?
Get your free API key and start screening messages in under 5 minutes.
```bash
curl -X POST https://api.nope.net/v1/screen \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'
```

```bash
curl -X POST https://api.nope.net/v1/evaluate \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"text": "I just want to end it all"}'
```

Same API key, same balance. Use /v1/screen for lightweight compliance checks, or /v1/evaluate for full multi-domain risk assessment. Get your free API key.
Key dates
October 13, 2025
Governor Newsom signs SB243 into law
January 1, 2026
Law takes effect. All compliance requirements active. Private right of action begins.
July 1, 2027
First annual report due to Office of Suicide Prevention. Crisis notification counts required.
Frequently asked questions
Does this apply if my company isn't in California?
Yes. SB243 applies to any operator whose platform California users can access, regardless of where the company is headquartered.
What qualifies as "evidence-based" detection?
The law requires "evidence-based methods for measuring suicidal ideation." Detection grounded in clinical frameworks such as the C-SSRS is designed to help demonstrate this.
Can I rely on my underlying AI provider's safety features?
No. Even if your chatbot runs on GPT, Claude, or Gemini, you as the operator bear full compliance responsibility.
When does reporting start?
The first annual report to California's Office of Suicide Prevention is due July 1, 2027.
What if someone sues me?
SB243 includes a private right of action with $1,000 per violation. Audit-ready logs can help document your protocols, but consult qualified legal counsel.
Is my customer service chatbot covered?
Only if it goes beyond strictly transactional use. Bots that remember users, personalize responses, or build rapport may lose the exemption.
Does NOPE guarantee compliance?
No. NOPE is infrastructure to help demonstrate reasonable efforts; compliance depends on your implementation.
Important disclaimers
- This page is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for compliance decisions.
- NOPE provides infrastructure to help demonstrate reasonable efforts, not a compliance guarantee. Operators retain ultimate compliance responsibility.
- NOPE is not FDA-cleared or clinically validated for diagnostic use. It is infrastructure software, not a medical device.
Sources & References
Primary source: SB243 Bill Text
Citation: Cal. Bus. & Prof. Code §§ 22601–22606 (operative Jan. 1, 2026)
Legislative history: Authored by Sen. Steve Padilla. Signed by Gov. Newsom Oct. 13, 2025.
Related litigation: Garcia v. Character Technologies (M.D. Fla.); Raine v. OpenAI; Adams v. OpenAI
Last updated: December 2025. Verify against official sources for current requirements.