MD HB 952
Consumer Protection – Companion Chatbots – Regulation (HB 952)
Regulates companion chatbot operators with mandatory disclosures, harm detection, and crisis referral protocols for self-harm and suicidal ideation, backed by product liability and a private right of action.
Jurisdiction
Maryland
Enacted
Pending
Effective
Oct 1, 2026
Enforcement
Maryland Attorney General (MCPA enforcement); private right of action via product liability framework
Passed the House of Delegates 123-4. Pending in the Senate Finance Committee (hearing held 2026-03-26). Primary sponsor: Del. Buckel (R). Effective October 1, 2026 if enacted.
Maryland General Assembly — HB 952
Why It Matters
One of the most comprehensive US state companion-chatbot bills, combining active crisis detection duties with a private right of action grounded in product liability — a stronger liability footing than deceptive-practices frameworks used in other states.
Recent Developments
Passed House 123-4 in March 2026; Senate Finance Committee hearing held 2026-03-26. If enacted, takes effect October 1, 2026 with first reports due March 1, 2027.
Who Must Comply
Obligations fall on:
- Operators of companion chatbots that serve Maryland users
Safety Provisions
- Clear disclosure that chatbot is AI, not human
- Break reminder after every 3 consecutive hours of use by a minor
- Hourly dynamic warnings for minor users
- Evidence-based harm detection methods for self-harm and suicidal ideation
- Crisis referral protocols
- Annual reports to Maryland Office of Suicide Prevention
- Prohibition on content promoting self-harm or suicide
Compliance & Enforcement
Key Dates
Oct 1, 2026
Effective date if enacted
Mar 1, 2027
First annual operator reports to Office of Suicide Prevention
Penalties
Up to $25,000 per violation; potential criminal liability
Private Right of Action
Individuals can sue directly without waiting for regulatory action.
Cite This
APA
Maryland General Assembly. (2026). Consumer Protection – Companion Chatbots – Regulation (HB 952).
Related Regulations
OR SB 1546
Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.
VA AI Chatbots & Minors
Requires AI chatbot operators with 500,000+ monthly users to implement crisis detection safeguards, provide disclosure to users, notify emergency services when imminent harm detected, and report serious incidents to the attorney general.
CA SB 1119
Comprehensive companion chatbot children's safety framework establishing mandatory design features, default settings, prohibited conduct, parental controls, independent audit requirements, and a private right of action.
NH HB 143
Criminalizes use of AI-generated responsive communications to facilitate, encourage, or solicit harmful acts to children, and creates a private right of action for affected children and their parents.
AI LEAD Act
Classifies AI systems as 'products' under federal law and establishes a federal cause of action for product liability claims against AI developers and deployers, including claims for design defects, failure to warn, and strict liability.
Brazil ECA Digital
Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.
Last updated April 16, 2026. Verify against primary sources before relying on this information.