Utah AI Mental Health Act

Utah Artificial Intelligence Mental Health Applications Act (HB 452)

Consumer protection requirements for mental health chatbots, including disclosure obligations and safeguards. The Act specifically targets AI applications marketed for mental health support.

Jurisdiction

Utah

US-UT

Enacted

Mar 25, 2025

Effective

May 7, 2025

Enforcement

Not specified

Who Must Comply

This law applies to:

  • Operators of AI mental health chatbots serving Utah users

Who bears obligations:

This regulation places direct obligations on deployers (organizations using AI systems).

Safety Provisions

  • Disclosure requirements for AI mental health applications
  • Consumer protection safeguards
  • Transparency about AI limitations in mental health context

Compliance Timeline

Mar 25, 2025

Signed by Governor Spencer Cox

May 7, 2025

Full compliance required: disclosure, data privacy, and advertising restrictions take effect

Quick Facts

Binding
Yes
Mental Health Focus
Yes
Child Safety Focus
No
Algorithmic Scope
No

Why It Matters

Specifically targets "mental health chatbots" with disclosure and consumer protection requirements. Takes a different approach from CA and NY: disclosure-focused rather than crisis-detection-focused.

What You Need to Comply

You need: clear disclosures about AI nature and limitations; consumer protection mechanisms for mental health AI applications.
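As a rough illustration of the disclosure requirement, here is a minimal sketch of how an operator might surface an AI-nature disclosure at the start of every chatbot session. The `Session` class, the `DISCLOSURE` wording, and all names are hypothetical examples, not language taken from HB 452; the Act's actual disclosure content and timing requirements should be checked against the statutory text.

```python
# Hypothetical sketch: prepend an AI-nature disclosure to the first
# reply of each session. Names and wording are illustrative only,
# not drawn from the Act.

DISCLOSURE = (
    "You are chatting with an artificial intelligence, not a licensed "
    "mental health professional. This chatbot cannot provide diagnosis "
    "or treatment."
)

class Session:
    def __init__(self) -> None:
        self.disclosed = False
        self.transcript: list[tuple[str, str]] = []

    def respond(self, user_message: str, model_reply: str) -> str:
        # Attach the disclosure once, on the session's first reply.
        if not self.disclosed:
            self.disclosed = True
            reply = f"{DISCLOSURE}\n\n{model_reply}"
        else:
            reply = model_reply
        self.transcript.append((user_message, reply))
        return reply
```

In this sketch the disclosure is delivered proactively at session start; an operator would also need to cover the Act's data privacy and advertising provisions, which this fragment does not address.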


Cite This

APA

Utah. (2025). Utah Artificial Intelligence Mental Health Applications Act (HB 452). Retrieved from https://nope.net/regs/us-ut-hb452

BibTeX

@misc{us_ut_hb452,
  title = {Utah Artificial Intelligence Mental Health Applications Act (HB 452)},
  author = {Utah},
  year = {2025},
  url = {https://nope.net/regs/us-ut-hb452}
}

Related Regulations

In Effect US-CA AI Safety

CA SB243

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

In Effect US-NY AI Safety

NY GBL Art. 47

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses a three-part conjunctive definition (all three criteria must be met). No private right of action; AG enforcement only.

In Effect US-UT AI Safety

UT AI Policy Act

First major US state AI consumer protection law. Requires GenAI disclosure on request (reactive) and at outset for high-risk interactions (proactive). Entity deploying GenAI liable for its consumer protection violations. Creates AI Learning Laboratory sandbox.

Proposed US-CA Child Protection

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

In Effect UK Online Safety

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect AU Online Safety

AU Online Safety Act

Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.