
WA AI Companion Act

AN ACT Relating to regulation of artificial intelligence companion chatbots

Washington bill requiring AI companion chatbots to implement safeguards that detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis. Mandates clear disclosure that the chatbot is an AI, not a human, with additional protections for minors. Sponsored by Senators Wellman and Shewmake at Governor Ferguson's request.

Jurisdiction

Washington State

Enacted

Pending

Effective

TBD

Enforcement

Expected: Washington Attorney General (TBD pending full bill introduction)

Introduced January 12, 2026 by request of Governor Ferguson; referred to Environment, Energy & Technology Committee. Companion bill HB 2225 also filed.

Washington State Legislature

Why It Matters

Explicitly mandates safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis. By following California's SB 243, Washington extends a multi-state compliance framework requiring crisis detection for AI companions. A governor-requested bill signals high priority.

Recent Developments

Introduced January 2026 as part of Gov. Ferguson priority agenda. Modeled on CA SB 243 with similar crisis detection, minor protection, and disclosure requirements.

At a Glance

Applies to

AI Companion

Harms addressed

Self-harm, suicidal ideation, and emotional crisis; minor exposure to sexually explicit content

Who Must Comply

  • AI companion chatbot operators
  • Systems that simulate sustained human-like relationships
  • Chatbots that retain information on prior interactions to personalize engagement
  • Systems that ask unprompted personal or emotion-based questions
  • Chatbots that sustain ongoing dialogue about personal matters

Safety Provisions

  • Safeguards required to detect and respond to self-harm expressions
  • Safeguards required to detect and respond to suicidal ideation
  • Safeguards required to detect and respond to emotional crisis
  • Clear and conspicuous notification that the chatbot is an AI, not a human
  • Additional recurring notifications required when user is a minor
  • Restrictions on sexually explicit content for minors
  • Transparency in suicide prevention efforts

Compliance & Enforcement

Penalties

Penalties pending regulatory determination


Focus Areas

Mental health & crisis
Child safety
Algorithmic accountability
Active safeguards required

Compliance Help

AI companion chatbot operators must establish and implement safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis. Operators must also provide clear disclosure that the chatbot is artificial, with additional protections and recurring reminders for minor users.
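The obligations above can be illustrated with a minimal sketch. Everything here is hypothetical: the bill does not prescribe an implementation, the patterns and intervals are placeholders (not statutory numbers), and a production safeguard would need evidence-based, clinically validated detection rather than keyword matching.

```python
import re

# Illustrative patterns only -- a real safeguard would use evidence-based
# detection, not a keyword list.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
    r"\bend my life\b",
]

AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."


def screen_message(text: str) -> dict:
    """Screen a user message for crisis expressions and return a
    routing decision for the chatbot pipeline."""
    flagged = any(re.search(p, text, re.IGNORECASE) for p in CRISIS_PATTERNS)
    return {
        "crisis_detected": flagged,
        # On detection, suppress the normal companion response and surface
        # crisis resources (e.g., the 988 Suicide & Crisis Lifeline).
        "response_mode": "crisis_referral" if flagged else "normal",
    }


def disclosure_due(turns_since_last: int, user_is_minor: bool) -> bool:
    """Recurring AI-disclosure reminders, on a shorter cycle for minors.
    The interval values are placeholders, not figures from the bill."""
    interval = 10 if user_is_minor else 50
    return turns_since_last >= interval
```

The two-function split mirrors the bill's two distinct duties: per-message crisis screening and periodic AI disclosure, with the minor-specific cadence handled by a single flag.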


Cite This

APA

Washington State Legislature. (2026). AN ACT Relating to regulation of artificial intelligence companion chatbots.

Related Regulations

Pending US-FL

FL Companion Chatbot Act

Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.

In Effect US-CA

CA SB243

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

In Effect US-NY

NY GBL Art. 47

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses a three-part conjunctive definition (all three criteria must be met). No private right of action; AG enforcement only.

Proposed US-CA

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

In Effect UK

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect UK

Ofcom Children's Codes

Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.

Last updated February 17, 2026. Verify against primary sources before relying on this information.