Utah AI Mental Health Act
Utah Artificial Intelligence Mental Health Applications Act (HB 452)
Consumer protection requirements for mental health chatbots, including disclosure obligations and safeguards. Specifically targets AI applications marketed for mental health support.
Jurisdiction
Utah
Status
Enacted; enforcement pending
Effective
May 7, 2025
Enforcement
TBD
Why It Matters
Specifically targets "mental health chatbots" with disclosure and consumer protection requirements, taking a different approach from California and New York (disclosure-focused rather than crisis-detection-focused).
At a Glance
Who Must Comply
Obligations fall on:
- Operators of AI mental health chatbots serving Utah users
Safety Provisions
- Disclosure requirements for AI mental health applications
- Consumer protection safeguards
- Transparency about AI limitations in mental health context
Compliance & Enforcement
Key Dates
Mar 25, 2025
Signed by Governor Spencer Cox
May 7, 2025
Full compliance required; disclosure, data privacy, and advertising restrictions take effect
Penalties
$3,000 per violation
Compliance Help
Requires clear disclosure that users are interacting with AI and of the technology's limitations, plus consumer protection mechanisms for mental health AI applications.
Cite This
APA
Utah State Legislature. (2025). Utah Artificial Intelligence Mental Health Applications Act (HB 452).
Related Regulations
FL Companion Chatbot Act
Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.
CA SB243
First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.
NY GBL Art. 47
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses a three-part conjunctive definition (all three criteria must be met). No private right of action; AG enforcement only.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Last updated February 17, 2026. Verify against primary sources before relying on this information.