WA AI Companion Act
AN ACT Relating to regulation of artificial intelligence companion chatbots
Washington bill requiring operators of AI companion chatbots to implement safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis. Mandates clear disclosure that the chatbot is an AI (not a human), with additional protections for minors. Sponsored by Senators Wellman and Shewmake at Governor Ferguson's request.
Jurisdiction: Washington State (US-WA)
Enacted: Unknown
Effective: Unknown
Enforcement: Washington Attorney General (expected; TBD pending full bill introduction)
Status: Pre-filed January 5, 2026 by request of Governor Ferguson; not yet formally introduced or assigned to committee
What It Requires
Harms Addressed
Self-harm, suicidal ideation, and emotional crisis; risks to minors, including exposure to sexually explicit content.
Who Must Comply
This law applies to AI companion chatbot operators.

Capability triggers (a hedged scope sketch follows this list):
- Systems that simulate sustained human-like relationships
- Chatbots that retain information on prior interactions to personalize engagement
- Systems that ask unprompted personal or emotion-based questions
- Chatbots that sustain ongoing dialogue about personal matters
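Until the full text is introduced, it is unclear whether these triggers combine conjunctively (as in NY GBL Art. 47, see Related Regulations below) or disjunctively (closer to CA SB 243's capabilities-based approach). The Python sketch below is a hypothetical scope check that exposes both readings rather than guessing; all names (ChatbotCapabilities, in_scope_if_any, in_scope_if_all) are illustrative, not from the bill.

```python
# Hypothetical scope check. The pre-filed summary lists capability triggers but
# does not say whether they combine conjunctively (NY-style) or disjunctively;
# this sketch exposes both readings rather than guessing.
from dataclasses import dataclass


@dataclass
class ChatbotCapabilities:
    simulates_sustained_relationship: bool
    retains_prior_interactions: bool
    asks_unprompted_personal_questions: bool
    sustains_personal_dialogue: bool

    def triggers(self) -> list[bool]:
        return [
            self.simulates_sustained_relationship,
            self.retains_prior_interactions,
            self.asks_unprompted_personal_questions,
            self.sustains_personal_dialogue,
        ]

    def in_scope_if_any(self) -> bool:  # disjunctive reading
        return any(self.triggers())

    def in_scope_if_all(self) -> bool:  # conjunctive reading (NY GBL Art. 47 style)
        return all(self.triggers())


# Example: a memory-less assistant that still initiates personal questions
# would be in scope under the disjunctive reading but not the conjunctive one.
bot = ChatbotCapabilities(False, False, True, False)
print(bot.in_scope_if_any(), bot.in_scope_if_all())  # True False
```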
Safety Provisions
- Safeguards required to detect and respond to self-harm expressions
- Safeguards required to detect and respond to suicidal ideation
- Safeguards required to detect and respond to emotional crisis (a hedged implementation sketch follows this list)
- Clear and conspicuous notification that the chatbot is an AI (not a human)
- Additional recurring notifications required when the user is a minor
- Restrictions on sexually explicit content for minors
- Transparency in suicide prevention efforts
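The full bill text has not been introduced, so the exact technical duties are unknown. Purely as an illustration, the Python sketch below shows one way an operator might wire these obligations into a message-handling loop. Every name here (Session, detect_crisis, handle_message), the keyword patterns, and the reminder cadence are hypothetical; a production system would need an evidence-based classifier (as CA SB 243 requires for its analogous duty), not a keyword screen.

```python
# Hypothetical sketch only: names, patterns, and cadences are illustrative,
# not taken from the bill text (which has not yet been introduced).
import re
import time
from dataclasses import dataclass

# Naive keyword screen; a real deployment would use an evidence-based classifier.
CRISIS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bkill myself\b", r"\bsuicid\w*", r"\bself[- ]harm\b", r"\bwant to die\b")
]
CRISIS_RESPONSE = (
    "It sounds like you may be in crisis. I'm an AI, not a person. "
    "In the US you can call or text 988, the Suicide & Crisis Lifeline."
)
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
MINOR_REMINDER_SECS = 3 * 60 * 60  # placeholder cadence for recurring minor reminders


@dataclass
class Session:
    user_is_minor: bool
    last_disclosure: float = 0.0  # epoch seconds of the last AI disclosure


def detect_crisis(text: str) -> bool:
    """Screen a user message for self-harm / suicidal-ideation / crisis signals."""
    return any(p.search(text) for p in CRISIS_PATTERNS)


def handle_message(session: Session, text: str, generate_reply) -> str:
    """Apply disclosure and crisis safeguards before any model reply goes out."""
    now = time.time()
    prefix = ""
    # Clear and conspicuous AI disclosure at session start; recurring for minors.
    if session.last_disclosure == 0.0 or (
        session.user_is_minor and now - session.last_disclosure > MINOR_REMINDER_SECS
    ):
        prefix = AI_DISCLOSURE + "\n"
        session.last_disclosure = now
    # Safeguard: crisis expressions get a crisis response, not a normal reply.
    if detect_crisis(text):
        return prefix + CRISIS_RESPONSE
    return prefix + generate_reply(text)


# Usage: an ordinary message passes through; a crisis message is intercepted.
session = Session(user_is_minor=True)
print(handle_message(session, "hi there", lambda t: "(model reply)"))
print(handle_message(session, "I want to die", lambda t: "(model reply)"))
```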
Enforcement
Enforced by
Expected: Washington Attorney General (TBD pending full bill introduction)
Penalties
Violations are expected to be treated as unfair or deceptive practices; specific penalties are pending regulatory determination.
Quick Facts
- Binding: No
- Mental Health Focus: Yes
- Child Safety Focus: Yes
- Algorithmic Scope: Yes
Why It Matters
Explicitly mandates 'safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis', which is NOPE's exact service offering. Washington's adoption of the California SB 243 model creates a multi-state compliance framework requiring crisis detection for AI companions, and a governor-requested bill signals high priority.
Recent Developments
Pre-filed January 5, 2026, at the request of Governor Ferguson. Follows the California SB 243 model in requiring crisis detection protocols for AI companion chatbots.
What You Need to Comply
AI companion chatbot operators must establish and implement safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis. Operators must also clearly disclose that the chatbot is artificial, with additional protections and recurring reminders for minor users.
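The cadence for the minors' recurring reminders is not yet specified. California SB 243, whose model this bill follows, uses a three-hour reminder interval for known minors; the sketch below uses that figure purely as a placeholder until Washington's text is available, and the function name (next_disclosure_due) is hypothetical.

```python
# Hypothetical scheduling sketch. The three-hour interval mirrors California
# SB 243's cadence for known minors; Washington's actual interval is TBD.
from datetime import datetime, timedelta
from typing import Optional

REMINDER_INTERVAL = timedelta(hours=3)  # placeholder, not from the WA bill


def next_disclosure_due(
    last_shown: Optional[datetime], user_is_minor: bool
) -> Optional[datetime]:
    """When the 'you are talking to an AI' notice should next be shown."""
    if last_shown is None:
        return datetime.now()  # disclose at session start for every user
    if not user_is_minor:
        return None  # adults: the initial clear disclosure may suffice
    return last_shown + REMINDER_INTERVAL  # minors: recurring reminders


# Example: a minor last saw the notice two hours ago, so it is due in one hour.
due = next_disclosure_due(datetime.now() - timedelta(hours=2), user_is_minor=True)
print(due)
```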
Cite This
APA
Washington State. (n.d.). AN ACT Relating to regulation of artificial intelligence companion chatbots. Retrieved from https://nope.net/regs/us-wa-sb5984
BibTeX
@misc{us_wa_sb5984,
title = {AN ACT Relating to regulation of artificial intelligence companion chatbots},
author = {Washington State},
year = {n.d.},
url = {https://nope.net/regs/us-wa-sb5984}
}

Related Regulations
CA SB243
First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.
NY GBL Art. 47
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.
Utah AI Mental Health Act
Consumer protection requirements for mental health chatbots including disclosure obligations and safeguards. Specifically targets AI applications marketed for mental health support.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
AU Online Safety Act
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.