MA AI Healthcare Act

Massachusetts Act Relative to the Use of Artificial Intelligence in Healthcare Decision-Making (SB 2632)

Prohibits AI from making independent therapeutic decisions in mental or behavioral health settings. Requires licensed professional review of all AI treatment plans and patient interactions.

Jurisdiction

Massachusetts

Status

Pending

Effective

TBD

Enforcement

Massachusetts Department of Public Health; relevant professional licensing boards

Reported favorably by the Senate Committee on Advanced Information Technology on October 16, 2025, and referred to the Joint Committee on Health Care Financing. 194th General Court.

Massachusetts General Court

Why It Matters

Sets clear boundaries for AI's role in mental health care. Similar in approach to Pennsylvania HB 1993 and the Illinois WOPR Act, it establishes a human-in-the-loop requirement for therapeutic AI.

Recent Developments

The committee reported the bill favorably in October 2025. It is part of a broader healthcare AI regulation package and remains in the legislative process as of January 2026.

At a Glance

Applies to

AI Companion, Mental Health App

Who Must Comply

  • Healthcare providers using AI in Massachusetts
  • Mental health practitioners
  • Behavioral health service providers
  • AI system developers offering mental health services

Safety Provisions

  • AI cannot make independent therapeutic decisions
  • Licensed professional must review all AI treatment plans
  • Human oversight required for all AI patient interactions in behavioral health
  • Applies specifically to mental and behavioral health settings

Compliance & Enforcement

Penalties

License revocation


Focus Areas

Mental health & crisis
Algorithmic accountability
Active safeguards required

Cite This

APA

Massachusetts General Court. (n.d.). Massachusetts Act Relative to the Use of Artificial Intelligence in Healthcare Decision-Making (SB 2632).

Related Regulations

Pending US-ID

ID Conversational AI Safety

Establishes safety requirements for public-facing conversational AI, including crisis service referrals for suicidal ideation, AI disclosure obligations, and enhanced protections for minors including anti-gamification and content safeguards.

Enacted US-OR

OR SB 1546

Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.

Pending US-MD

MD HB 952

Regulates companion chatbot operators with mandatory disclosures, harm detection, and crisis referral protocols for self-harm and suicidal ideation, backed by product liability and a private right of action.

Enacted US-NH

NH HB 143

Criminalizes use of AI-generated responsive communications to facilitate, encourage, or solicit harmful acts to children, and creates a private right of action for affected children and their parents.

In Effect BR

Brazil ECA Digital

Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.

Proposed US

AI LEAD Act

Classifies AI systems as 'products' under federal law and establishes a federal cause of action for product liability claims against AI developers and deployers, including claims for design defects, failure to warn, and strict liability.

Last updated January 23, 2026. Verify against primary sources before relying on this information.