
CA AB 489

Health care professions: deceptive terms or letters: artificial intelligence

Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.

Jurisdiction

California

US-CA

Enacted

Oct 11, 2025

Effective

Jan 1, 2026

Enforcement

Healthcare profession licensing boards

Chapter 615, Statutes of 2025

Who Must Comply

This law applies to:

  • Entities developing or deploying AI technology in healthcare contexts
  • AI chatbots providing health advice or assessments

Capability triggers:

  • healthAdvice (required)
  • Therapeutic language (increases applicability)

Who bears obligations:

This regulation places direct obligations on deployers (organizations using AI systems).

Safety Provisions

  • AI cannot misrepresent itself as a licensed healthcare professional (a guardrail sketch follows this list)
  • Advertising and functionality must not imply that a human professional is providing care
  • Each violation is separately enforceable by licensing boards
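
A minimal sketch of how a deployer might address the first two provisions above by screening every outbound chatbot message for self-descriptions that claim licensure. The term list, regular expressions, and fallback wording are illustrative assumptions, not language from the statute; a real deployment would substitute counsel-reviewed lists from the relevant California licensing boards.

import re

# Hypothetical protected titles and post-nominal letters; the authoritative
# list would come from the California licensing boards and legal review.
PROTECTED_TERMS = [
    r"licensed (?:therapist|psychologist|counselor|physician|nurse)",
    r"\b(?:MD|LMFT|LCSW|PsyD|RN)\b",
    r"board[- ]certified",
]
PATTERN = re.compile("|".join(PROTECTED_TERMS), re.IGNORECASE)

def screen_outbound_message(text: str) -> str:
    """Replace any reply in which the AI appears to claim a license.

    AB 489 treats each use of a prohibited term as a separate violation,
    so the check runs on every message, not once per session.
    """
    # Naive keyword screening also catches negations such as
    # "I am not a licensed therapist"; this is deliberately conservative.
    if PATTERN.search(text):
        return ("I'm an AI system, not a licensed healthcare professional. "
                "Please consult a licensed provider for medical or "
                "mental health care.")
    return text

Replacing the reply wholesale, rather than editing it, avoids leaving a partial claim of licensure in place.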

Compliance Timeline

Jan 1, 2026

Law takes effect

Enforcement

Enforced by

Healthcare profession licensing boards

Penalties

  • Pending regulatory determination
  • License revocation

Each use of a prohibited term constitutes a separate violation and is subject to licensing board jurisdiction.

Quick Facts

  • Binding: Yes
  • Mental Health Focus: Yes
  • Child Safety Focus: Yes
  • Algorithmic Scope: Yes

Why It Matters

Directly relevant to companion AI and mental health chatbots. Prevents AI systems from claiming to be licensed therapists or counselors. Sets a precedent for AI capability transparency requirements.

Recent Developments

Signed October 11, 2025; effective January 1, 2026

What You Need to Comply

AI systems must not use language that falsely indicates the system possesses healthcare professional credentials or licenses
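
One way to operationalize this before release is to lint product copy and UI strings for terms and letters that could imply licensure, since the statute reaches advertising as well as chatbot output. The directory layout, term list, and "Dr." persona check below are illustrative assumptions, not requirements drawn from the bill text.

from pathlib import Path
import re

# Illustrative patterns only; AB 489 covers "terms, letters, or phrases"
# implying licensure, and the operative list belongs to the boards.
PROHIBITED = re.compile(
    r"licensed (?:therapist|physician|nurse|counselor)"
    r"|board[- ]certified"
    r"|\bDr\.\s+[A-Z]",  # e.g. a chatbot persona named "Dr. Ava"
    re.IGNORECASE,
)

def lint_copy(root: str) -> list[tuple[str, int, str]]:
    """Flag copy that could imply the AI holds a professional license."""
    findings = []
    for path in sorted(Path(root).rglob("*.txt")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if PROHIBITED.search(line):
                findings.append((str(path), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path, lineno, text in lint_copy("copy/"):
        print(f"{path}:{lineno}: possible AB 489 issue: {text}")

Running a check like this in CI keeps prohibited phrasing out of new advertising and interface copy, complementing the runtime message screening shown under Safety Provisions.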


Cite This

APA

California. (2025). Health care professions: deceptive terms or letters: artificial intelligence. Retrieved from https://nope.net/regs/us-ca-ab489

BibTeX

@misc{us_ca_ab489,
  title = {Health care professions: deceptive terms or letters: artificial intelligence},
  author = {California},
  year = {2025},
  url = {https://nope.net/regs/us-ca-ab489}
}

Related Regulations

CA AADC (Enjoined, US-CA, Child Protection)

Would require child-focused risk assessments (DPIA-style), safer defaults, and limits on harmful design patterns. Currently blocked on First Amendment grounds.

CA AI Child Safety Ballot (Proposed, US-CA, Child Protection)

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises the age threshold for data-sale consent, prohibits certain AI products for children, and establishes a new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curricula, mandates school device bans during instruction, and creates a children's AI safety fund.

VT AADC (Enacted, US-VT, Child Protection)

Vermont design code structured to be more litigation-resistant: it focuses on data-processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.

NY RAISE Act (Enacted, US-NY, AI Safety)

Requires large developers of frontier AI models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates a dedicated DFS office funded by developer fees.

CT SB 1295 (Enacted, US-CT, AI Safety)

Creates a complete ban on targeted advertising to under-18s regardless of consent. Requires AI impact assessments. Connecticut issued its first CTDPA fine ($85,000) in 2025.

Colorado AI Act (Enacted, US-CO, AI Safety)

First comprehensive US state law regulating high-risk AI systems. Modeled partly on the EU AI Act, with developer and deployer obligations for consequential decisions.