MA AI Healthcare Act
Massachusetts Act Relative to the Use of Artificial Intelligence in Healthcare Decision-Making (SB 2632)
Prohibits AI from making independent therapeutic decisions in mental or behavioral health settings. Requires licensed professional review of all AI treatment plans and patient interactions.
Jurisdiction
Massachusetts
Enacted
Pending
Effective
TBD
Enforcement
Massachusetts Department of Public Health; relevant professional licensing boards
Reported favorably by Senate Committee on Advanced Information Technology on October 16, 2025. Referred to Joint Committee on Health Care Financing. 194th General Court.
Massachusetts General Court
Why It Matters
Sets clear boundaries for the role of AI in mental health. Similar to Pennsylvania HB 1993 and the Illinois WOPR Act. Establishes a human-in-the-loop requirement for therapeutic AI.
Recent Developments
Committee reported favorably October 2025. Part of broader healthcare AI regulation package. Still in legislative process as of January 2026.
Who Must Comply
- Healthcare providers using AI in Massachusetts
- Mental health practitioners
- Behavioral health service providers
- AI system developers offering mental health services
Safety Provisions
- AI cannot make independent therapeutic decisions
- Licensed professional must review all AI treatment plans
- Human oversight required for all AI patient interactions in behavioral health
- Applies specifically to mental and behavioral health settings
Compliance & Enforcement
Penalties
License revocation
Compliance Help
All AI-assisted mental/behavioral health treatment requires licensed professional review and approval. AI cannot independently make clinical decisions.
Cite This
APA
Massachusetts General Court. (n.d.). Massachusetts Act Relative to the Use of Artificial Intelligence in Healthcare Decision-Making (SB 2632).
Related Regulations
NJ AI Mental Health Provider Ban
Prohibits AI systems from advertising or representing themselves as licensed mental health professionals. Violations constitute unlawful practice under the NJ Consumer Fraud Act, with penalties up to $10,000 for a first offense and $20,000 for subsequent offenses.
PA AI Mental Health Therapy Act
Imposes explicit prohibitions on AI systems making therapeutic judgments, generating treatment plans without human review, or simulating emotional interaction. Violations treated as unprofessional conduct under Commonwealth licensing laws.
IL WOPR Act
Illinois law prohibiting licensed professionals from using AI systems to make independent therapeutic decisions, directly interact with clients in therapeutic communication, or detect emotions or mental states. AI is limited to administrative and supplementary support under licensed professional oversight.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
FL AI Bill of Rights
Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
Last updated January 23, 2026. Verify against primary sources before relying on this information.