
California SB 243 (Companion Chatbot Safety Act)

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

Jurisdiction

California

Enacted

Oct 13, 2025

Effective

Jan 1, 2026

Enforcement

TBD

Why It Matters

First US law with a private right of action for AI companion safety. Uses a capabilities-based test rather than an intent or purpose test: if an AI system is capable of meeting a user's social needs, it is likely covered regardless of how it is positioned or marketed. The per-violation damages structure enables class action litigation.

Recent Developments

Effective January 1, 2026; annual reporting begins July 1, 2027. The Governor signed the bill after high-profile AI companion-related deaths prompted legislative action. It passed the Senate 33-3 and the Assembly 59-1. Among its provisions, §22602(a) prohibits "providing rewards to a user at unpredictable intervals."

At a Glance

Applies to

AI Companion · Mental Health App · General Chatbot · Character Chatbot

Who Must Comply

  • AI companion chatbot operators serving California users

Safety Provisions

  • Evidence-based suicidal ideation detection protocols (all users)
  • Self-harm protocol required (all users)
  • Notification that the chatbot is not human whenever a reasonable person could be misled (all users)
  • Crisis service referrals when suicidal signals are detected
  • Published crisis prevention protocols on the operator's website
  • Annual reporting to the Office of Suicide Prevention (from July 2027)
  • Known minors: explicit AI disclosure, 3-hour break reminders, sexually explicit content safeguards
  • Note: "known minor" is a knowledge trigger. No age verification is required, but knowledge (user disclosure, account settings, etc.) triggers the enhanced duties
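As a hypothetical illustration only (not legal advice, and not drawn from the statutory text), the two-tier structure above can be sketched as a duty-resolution function. All names and duty labels here are invented for the sketch:

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-tier regime: baseline duties apply to
# every user; enhanced duties attach only once the operator has actual
# knowledge that the user is a minor (the "known minor" knowledge
# trigger; no age verification is required to create it).

BASELINE_DUTIES = [
    "suicidal-ideation detection protocol",
    "self-harm protocol",
    "not-human notification when a reasonable person could be misled",
    "crisis referrals on detected suicidal signals",
    "published crisis prevention protocol",
]

KNOWN_MINOR_DUTIES = [
    "explicit AI disclosure",
    "3-hour break reminders",
    "sexually explicit content safeguards",
]

@dataclass
class User:
    # Knowledge may arise from self-disclosure, account settings, etc.
    known_minor: bool = False

def applicable_duties(user: User) -> list[str]:
    duties = list(BASELINE_DUTIES)
    if user.known_minor:
        duties += KNOWN_MINOR_DUTIES
    return duties
```

The point of the sketch is that the enhanced tier is additive: a known minor receives every baseline protection plus the minor-specific ones, not a substitute set.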

Exemptions

Customer Service Exemption

Chatbots "used only for customer service" (e.g., answering customer inquiries, assisting with transactions, providing product/service info).

  • Primary purpose is customer service
  • Used ONLY for customer service, business operations, productivity, research, or technical assistance
  • No companion/therapeutic/relationship positioning
  • No emotional support functionality beyond the transaction context

Video Game NPC Exemption

AI characters within video games, but ONLY if limited to game-related content.

  • Character exists within video game context
  • Limited to replies related to the video game
  • CANNOT discuss mental health topics
  • CANNOT discuss self-harm
  • CANNOT discuss sexually explicit conduct
  • CANNOT maintain dialogue on topics unrelated to the video game

Standalone Physical Device Exemption

Standalone physical devices as defined in the statute.

  • Meets statutory definition of standalone physical device
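The three exemptions above form a simple decision tree: a chatbot is covered unless it clears one of the carve-outs. A hypothetical sketch follows; this is not the statutory test, and every predicate name is invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical coverage check sketched from the exemption summaries
# above. Field names are invented; the statute's actual definitions
# control in practice.

@dataclass
class Chatbot:
    used_only_for_customer_service: bool = False
    companion_positioning: bool = False
    emotional_support_features: bool = False
    is_game_npc: bool = False
    limited_to_game_topics: bool = False
    is_standalone_physical_device: bool = False

def is_covered(bot: Chatbot) -> bool:
    # Customer service exemption: used ONLY for customer service /
    # business / productivity / research / technical assistance,
    # with no companion positioning or emotional-support features.
    if (bot.used_only_for_customer_service
            and not bot.companion_positioning
            and not bot.emotional_support_features):
        return False
    # Video game NPC exemption: applies only while replies stay
    # limited to game content (no mental health, self-harm, or
    # sexually explicit dialogue).
    if bot.is_game_npc and bot.limited_to_game_topics:
        return False
    # Standalone physical device exemption (as defined in the statute).
    if bot.is_standalone_physical_device:
        return False
    # No exemption applies: the chatbot falls within the Act.
    return True
```

Note how narrow the NPC carve-out is in this sketch: a game character that wanders off game topics loses the exemption entirely.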

Compliance & Enforcement

Key Dates

Jan 1, 2026

All core provisions take effect (crisis detection, protocols, disclosures)

Jul 1, 2027

Annual reporting to Office of Suicide Prevention begins

Penalties

$1,000 per violation

Private Right of Action

Individuals can sue directly without waiting for regulatory action.

Focus Areas

Mental health & crisis
Child safety
Active safeguards required

Cite This

APA

California. (2025). California SB 243 (Companion Chatbot Safety Act).

Related Regulations

Pending US-CA

CA SB 1119

Comprehensive companion chatbot children's safety framework establishing mandatory design features, default settings, prohibited conduct, parental controls, independent audit requirements, and a private right of action.

Proposed US-CA

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

Enacted US-OR

OR SB 1546

Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.

Pending US-MD

MD HB 952

Regulates companion chatbot operators with mandatory disclosures, harm detection, and crisis referral protocols for self-harm and suicidal ideation, backed by product liability and a private right of action.

Enacted US-NH

NH HB 143

Criminalizes use of AI-generated responsive communications to facilitate, encourage, or solicit harmful acts to children, and creates a private right of action for affected children and their parents.

Proposed US

AI LEAD Act

Classifies AI systems as 'products' under federal law and establishes a federal cause of action for product liability claims against AI developers and deployers, including claims for design defects, failure to warn, and strict liability.

Last updated February 17, 2026. Verify against primary sources before relying on this information.