Utah AI Mental Health Act
Utah Artificial Intelligence Mental Health Applications Act (HB 452)
Consumer protection requirements for mental health chatbots including disclosure obligations and safeguards. Specifically targets AI applications marketed for mental health support.
Jurisdiction
Utah
US-UT
Enacted
Mar 25, 2025
Effective
May 7, 2025
Enforcement
Not specified
What It Requires
Safety Provisions
- Disclosure requirements for AI mental health applications
- Consumer protection safeguards
- Transparency about AI limitations in a mental health context
Compliance Timeline
Mar 25, 2025
Signed by Governor Spencer Cox
May 7, 2025
Full compliance required: disclosure, data privacy, and advertising restrictions take effect
Quick Facts
- Binding
- Yes
- Mental Health Focus
- Yes
- Child Safety Focus
- No
- Algorithmic Scope
- No
Why It Matters
Specifically targets "mental health chatbots" with disclosure and consumer protection requirements, taking a different approach from California and New York (disclosure-focused rather than crisis-detection-focused).
What You Need to Comply
You need: clear disclosures about AI nature and limitations; consumer protection mechanisms for mental health AI applications.
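As an illustration of the disclosure duty, the sketch below shows one way an operator might guarantee that every chat session opens with a notice of the application's AI nature and limitations. This is a hypothetical implementation pattern, not language from HB 452; the function names, the disclosure wording, and the placeholder echo response are all invented for the example.

```python
# Hypothetical sketch of an AI-nature disclosure at session start.
# Nothing here is mandated wording from HB 452; it only illustrates
# the pattern of making the disclosure unskippable by construction.

DISCLOSURE = (
    "You are talking with an AI chatbot, not a licensed mental health "
    "professional. It cannot diagnose or treat any condition."
)

def start_session() -> list[str]:
    """Open a session transcript whose first entry is always the disclosure."""
    return [DISCLOSURE]

def respond(transcript: list[str], user_msg: str) -> list[str]:
    """Append the user's message and a reply to the transcript."""
    transcript.append(user_msg)
    # A real model call would go here; a placeholder echo keeps the
    # sketch self-contained.
    transcript.append(f"(AI) Acknowledged: {user_msg}")
    return transcript
```

Because `start_session` constructs the transcript with the disclosure as its first entry, no code path can produce a session that omits it, which is easier to audit than a disclosure added by a downstream handler.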
Cite This
APA
Utah. (2025). Utah Artificial Intelligence Mental Health Applications Act (HB 452). Retrieved from https://nope.net/regs/us-ut-hb452
BibTeX
@misc{us_ut_hb452,
  title  = {Utah Artificial Intelligence Mental Health Applications Act (HB 452)},
  author = {Utah},
  year   = {2025},
  url    = {https://nope.net/regs/us-ut-hb452}
}
Related Regulations
CA SB243
First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.
NY GBL Art. 47
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.
UT AI Policy Act
First major US state AI consumer protection law. Requires GenAI disclosure on request (reactive) and at outset for high-risk interactions (proactive). Entity deploying GenAI liable for its consumer protection violations. Creates AI Learning Laboratory sandbox.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
AU Online Safety Act
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.