
California SB 243 (Companion Chatbot Safety Act)

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

Jurisdiction

California

Enacted

Oct 13, 2025

Effective

Jan 1, 2026

Enforcement

TBD


Why It Matters

First US law with a private right of action for AI companion safety. Uses a capabilities-based test (not an intent/purpose test): if an AI is capable of meeting a user's social needs, it is likely covered regardless of how it is positioned. The per-violation damages structure enables class action litigation.

Recent Developments

Effective January 1, 2026; annual reporting begins July 1, 2027. The Governor signed the bill after high-profile deaths linked to AI companions prompted legislative action. It passed the Senate 33-3 and the Assembly 59-1. §22602(a) prohibits "providing rewards to a user at unpredictable intervals."

At a Glance

Applies to

AI Companion, Mental Health App, General Chatbot, Character Chatbot

Harms addressed

Mental health & crisis; child safety

Who Must Comply

  • AI companion chatbot operators serving California users

Safety Provisions

  • Evidence-based suicidal ideation detection protocols (all users)
  • Self-harm protocol required (all users)
  • Human/not-human notification when reasonable person could be misled (all users)
  • Crisis service referrals when detecting suicidal signals
  • Published crisis prevention protocols on operator website
  • Annual reporting to Office of Suicide Prevention (from July 2027)
  • Known minors: explicit AI disclosure, 3-hour break reminders, sexually explicit safeguards
  • Note: "known minor" is a knowledge trigger - no age verification required, but knowledge (user disclosure, account settings, etc.) triggers enhanced duties

Exemptions

Customer Service Exemption

Chatbots "used only for customer service" (e.g., answering customer inquiries, assisting with transactions, providing product/service info).

  • Primary purpose is customer service
  • Used ONLY for customer service, business operations, productivity, research, or technical assistance
  • No companion/therapeutic/relationship positioning
  • No emotional support functionality beyond the transaction context

Video Game NPC Exemption

AI characters within video games, but ONLY if limited to game-related content.

  • Character exists within the video game context
  • Limited to replies related to the video game
  • CANNOT discuss mental health topics
  • CANNOT discuss self-harm
  • CANNOT discuss sexually explicit conduct
  • CANNOT maintain dialogue on topics unrelated to the video game

Standalone Physical Device Exemption

Standalone physical devices as defined in the statute.

  • Meets the statutory definition of a standalone physical device

Compliance & Enforcement

Key Dates

Jan 1, 2026

All core provisions take effect (crisis detection, protocols, disclosures)

Jul 1, 2027

Annual reporting to Office of Suicide Prevention begins

Penalties

$1,000 per violation

Private Right of Action

Individuals can sue directly without waiting for regulatory action.


Focus Areas

Mental health & crisis
Child safety
Active safeguards required

Compliance Help

Compliance requires evidence-based (not keyword-based) detection of suicidal ideation, automated crisis resource referrals, published protocol documentation, and audit logs to support annual reporting. Operators cannot rely solely on a model provider's built-in safety features. A minimal pipeline sketch follows below.
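
As one way to picture these obligations, here is a minimal, hypothetical pipeline sketch: a risk score from an evidence-based classifier gates an automated 988 referral, and every intervention is appended to an audit log that could feed the annual report. The function names, threshold, and log format are illustrative assumptions, not statutory requirements.

```python
import json
from datetime import datetime, timezone

CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, you can call or text 988 "
    "(Suicide & Crisis Lifeline) to reach a trained crisis counselor."
)

def score_suicidal_ideation(message: str) -> float:
    """Placeholder for a validated, evidence-based risk model.

    A keyword list or a constant like this is NOT compliant; swap in an
    evaluated classifier whose performance you can document.
    """
    return 0.0

def handle_message(user_id: str, message: str, threshold: float = 0.8,
                   log_path: str = "crisis_audit.jsonl") -> str | None:
    """Score the message, return a crisis referral if flagged, and log the event."""
    risk = score_suicidal_ideation(message)
    if risk < threshold:
        return None
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,              # pseudonymize before any external reporting
        "risk_score": round(risk, 3),
        "action": "crisis_referral_shown",
    }
    with open(log_path, "a", encoding="utf-8") as log:   # append-only audit trail
        log.write(json.dumps(event) + "\n")
    return CRISIS_REFERRAL
```

Keeping the score, the referral, and the log entry in one code path means the audit trail mirrors what users actually saw, which is what an annual report would need to reconstruct.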

See how NOPE helps

Cite This

APA

California. (2025). California SB 243 (Companion Chatbot Safety Act).

Related Regulations

Proposed US-CA

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

Enacted US-CA

CA SB 942

Requires large GenAI providers (1M+ monthly users) to provide free AI detection tools, embed latent disclosures (watermarks/metadata) in AI-generated content, and offer optional manifest (visible) disclosures to users.

Pending US-FL

FL Companion Chatbot Act

Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.

In Effect US-NY

NY GBL Art. 47

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.

In Effect UK

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

Pending US-FL

FL AI Bill of Rights

Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.

Last updated February 17, 2026. Verify against primary sources before relying on this information.