

California SB 243 (Companion Chatbot Safety Act)

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

Jurisdiction: California (US-CA)

Enacted: Oct 13, 2025

Effective: Jan 1, 2026

Enforcement: No designated agency; enforced through a private right of action (see Enforcement below)

Who Must Comply

This law applies to:

  • AI companion chatbot operators serving California users

Capability triggers:

  • Adaptive responses (increases applicability)
  • Romantic/companion features (increases applicability)
  • Cross-session memory (increases applicability)
  • Persona or character (increases applicability)
  • Therapeutic language (increases applicability)
  • Emotional interaction (required)
  • Unprompted emotion questions (increases applicability)

"Required" means the capability must be present for the law to apply; "increases applicability" means its presence makes coverage more likely. A rough self-check sketch follows below.
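The sketch below is one way such a first-pass self-assessment could be expressed. The field names and the any-of logic are illustrative assumptions, not statutory tests, and a negative result is a triage signal rather than a legal conclusion.

```python
from dataclasses import dataclass

@dataclass
class ChatbotProfile:
    """Hypothetical capability flags for a first-pass self-assessment."""
    emotional_interaction: bool                    # required trigger
    adaptive_responses: bool = False               # the rest increase applicability
    romantic_or_companion_features: bool = False
    cross_session_memory: bool = False
    persona_or_character: bool = False
    therapeutic_language: bool = False
    unprompted_emotion_questions: bool = False

def sb243_may_apply(profile: ChatbotProfile) -> bool:
    """Rough triage only, not a legal determination: coverage is plausible when
    the required trigger (emotional interaction) is present and any of the
    'increases applicability' capabilities is also present."""
    if not profile.emotional_interaction:
        return False
    return any([
        profile.adaptive_responses,
        profile.romantic_or_companion_features,
        profile.cross_session_memory,
        profile.persona_or_character,
        profile.therapeutic_language,
        profile.unprompted_emotion_questions,
    ])
```

Because the definition is capabilities-based, borderline or "False" results on a screen like this still warrant legal review.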

Who bears obligations:

This regulation places direct obligations on deployers (the organizations making the chatbot available to users), who are the "operators" under the statute regardless of which underlying model they use.

Exemptions

Customer Service Exemption

high confidence

Chatbots "used only for customer service" (e.g., answering customer inquiries, assisting with transactions, providing product/service info).

Conditions:

  • Primary purpose is customer service
  • Used ONLY for customer service, business operations, productivity, research, or technical assistance
  • No companion/therapeutic/relationship positioning
  • No emotional support functionality beyond transaction context

"Used only for" language is strict. Hybrid chatbots with both customer service AND companion features likely lose exemption entirely.

Video Game NPC Exemption

high confidence

AI characters within video games, but ONLY if limited to game-related content.

Conditions:

  • Character exists within video game context
  • Limited to replies related to the video game
  • CANNOT discuss mental health topics
  • CANNOT discuss self-harm
  • CANNOT discuss sexually explicit conduct
  • CANNOT maintain dialogue on topics unrelated to the video game

The word "cannot" indicates a CAPABILITIES test—if NPC is technically capable of these discussions, exemption fails regardless of whether such conversations typically occur.

Standalone Physical Device Exemption

high confidence

Standalone physical devices as defined in the statute.

Conditions:

  • Meets statutory definition of standalone physical device

Safety Provisions

  • Evidence-based suicidal ideation detection protocols (all users)
  • Self-harm protocol required (all users)
  • Human/not-human notification when a reasonable person could be misled (all users)
  • Crisis service referrals when detecting suicidal signals
  • Published crisis prevention protocols on operator website
  • Annual reporting to Office of Suicide Prevention (from July 2027)
  • Known minors: explicit AI disclosure, 3-hour break reminders, sexually explicit safeguards
  • Note: "known minor" is a knowledge trigger—no age verification required, but if you know (user disclosed, account settings, etc.), enhanced duties apply

Compliance Timeline

Jan 1, 2026: All core provisions take effect (crisis detection, protocols, disclosures)

Jul 1, 2027: Annual reporting to Office of Suicide Prevention begins

Enforcement

Penalties

Per violation: $1,000 minimum (private right of action), plus attorney's fees. Class action exposure.

Private Right of Action

Individuals can sue directly without waiting for regulatory action. This significantly increases liability exposure.

Quick Facts

Binding: Yes
Mental Health Focus: Yes
Child Safety Focus: Yes
Algorithmic Scope: No
Private Action: Yes

Why It Matters

First US law with a private right of action for AI companion safety. Uses a CAPABILITIES-based test (not an intent/purpose test)—if your AI is capable of meeting social needs, you're likely covered regardless of positioning. The per-violation damages structure invites class action litigation.

Recent Developments

Effective January 1, 2026; annual reporting begins July 1, 2027. The Governor signed the bill after high-profile AI companion-related deaths prompted legislative action; it passed the Senate 33-3 and the Assembly 59-1. Note that §22602(a) prohibits "providing rewards to a user at unpredictable intervals."

What You Need to Comply

You need: evidence-based (not keyword-based) systems that detect suicidal ideation; automated crisis resource referrals; published documentation of your protocols; and audit logs to support annual reporting. You cannot rely on model providers' built-in safety features to satisfy these duties—you are the "operator" regardless of the underlying AI.
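For the audit-log piece, a minimal record-keeping sketch appears below. The field names and file format are assumptions for illustration only; the actual reporting content is defined by the statute and the Office of Suicide Prevention.

```python
import json
from datetime import datetime, timezone

LOG_PATH = "crisis_referrals.jsonl"  # hypothetical append-only audit log

def log_crisis_referral(user_id_hash: str, detection_model: str) -> None:
    """Append a minimal, pseudonymous record each time a crisis referral is issued,
    so aggregate counts can be compiled for annual reporting."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "crisis_referral_issued",
        "user_id_hash": user_id_hash,        # hashed identifier, no raw PII
        "detection_model": detection_model,  # name/version of the evidence-based detector
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def count_referrals() -> int:
    """Aggregate referral events for a reporting period."""
    try:
        with open(LOG_PATH, encoding="utf-8") as f:
            return sum(1 for line in f if '"crisis_referral_issued"' in line)
    except FileNotFoundError:
        return 0
```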


Cite This

APA

California. (2025). California SB 243 (Companion Chatbot Safety Act). Retrieved from https://nope.net/regs/us-ca-sb243

BibTeX

@misc{us_ca_sb243,
  title = {California SB 243 (Companion Chatbot Safety Act)},
  author = {California},
  year = {2025},
  url = {https://nope.net/regs/us-ca-sb243}
}

Related Regulations

Proposed US-CA Child Protection

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

Enacted US-CA AI Safety

CA SB 942

Requires large GenAI providers (1M+ monthly users) to provide free AI detection tools, embed latent disclosures (watermarks/metadata) in AI-generated content, and offer optional manifest (visible) disclosures to users.

In Effect US-NY AI Safety

NY GBL Art. 47

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.

In Effect US-UT AI Safety

Utah AI Mental Health Act

Consumer protection requirements for mental health chatbots including disclosure obligations and safeguards. Specifically targets AI applications marketed for mental health support.

In Effect UK Online Safety

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect AU Online Safety

AU Online Safety Act

Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.