CA AB 489
Health care professions: deceptive terms or letters: artificial intelligence
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.
Jurisdiction
California
Enacted
Oct 11, 2025
Effective
Jan 1, 2026
Enforcement
Healthcare profession licensing boards
Chapter 615, Statutes of 2025
California Legislature
Why It Matters
Directly relevant to companion AI and mental health chatbots: the law prevents AI systems from claiming to be licensed therapists or counselors, and it sets a precedent for AI capability transparency requirements.
Recent Developments
Signed October 11, 2025; effective January 1, 2026
Who Must Comply
Obligations fall on:
- Entities developing or deploying AI technology in healthcare contexts
- AI chatbots providing health advice or assessments
Safety Provisions
- AI systems cannot misrepresent themselves as licensed healthcare professionals
- Advertising and functionality must not imply that a human professional is providing care
- Each violation is separately enforceable by licensing boards
Compliance & Enforcement
Key Dates
Jan 1, 2026
Law takes effect
Penalties
Penalties pending regulatory determination
Compliance Help
AI systems must not use language that falsely indicates the system possesses healthcare professional credentials or licenses.
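As an illustrative sketch only (not legal guidance or the statute's test), a deployer might screen generated text for license-implying titles before display and route matches to human review. The term list, pattern, and function name below are hypothetical assumptions, not anything specified by AB 489:

```python
import re

# Hypothetical examples of license-implying titles and abbreviations.
# A real compliance review would follow licensing-board guidance,
# not a static keyword list.
PROTECTED_TERMS = [
    r"\blicensed (therapist|counselor|psychologist|physician)\b",
    r"\bM\.?D\.?\b",
    r"\bL\.?C\.?S\.?W\.?\b",
    r"\bPsy\.?D\.?\b",
]

PATTERN = re.compile("|".join(PROTECTED_TERMS), re.IGNORECASE)

def flags_license_claim(text: str) -> bool:
    """Return True if the text contains a term that may falsely imply
    a healthcare professional license, so it can be held for review."""
    return PATTERN.search(text) is not None
```

A keyword screen like this can only surface candidates for review; whether a given phrase "falsely indicates or implies" licensure is a legal judgment the code cannot make.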
Cite This
APA
California Legislature. (2025). Health care professions: deceptive terms or letters: artificial intelligence (AB 489, Chapter 615, Statutes of 2025).
Related Regulations
CA AADC
Would require child-focused risk assessments (DPIA-style), safer defaults, and limits on harmful design patterns. Currently blocked on First Amendment grounds.
CA SB 867
Proposes a 4-year moratorium on the sale and manufacturing of toys with AI chatbot capabilities for children under 12. During the moratorium, a task force would develop safety standards with input from technologists, parents, and ethicists.
VT AADC
Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.
FL Companion Chatbot Act
Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.
OH K-12 AI Mandate
First-in-nation mandate requiring all Ohio K-12 public schools to adopt formal AI usage policies by July 1, 2026. Ohio Department of Education and Workforce released model policy on December 30, 2025 covering academic integrity, procurement/privacy, and anti-bullying. Districts can adopt state model or create their own aligned policy.
NY RAISE Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
Last updated January 22, 2026. Verify against primary sources before relying on this information.