In Effect · Regulation · AI Safety

CA CPPA ADMT

CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Regulations

California Privacy Protection Agency regulations establishing consumer rights and business obligations for Automated Decision-Making Technology (ADMT) that makes significant decisions including healthcare. Requires pre-use notice, opt-out rights, access rights, appeal rights, and risk assessments.

Jurisdiction

California

Enacted

Sep 22, 2025

Effective

Jan 1, 2026

Enforcement

California Privacy Protection Agency (CPPA)

Finalized July 24, 2025; approved by OAL September 22, 2025


Why It Matters

Establishes the first comprehensive US consumer-facing AI governance requirements explicitly covering healthcare AI. Mental health AI services using automated decision-making must provide transparency, opt-out rights, and risk assessments. ADMT definition encompasses any technology that replaces or substantially replaces human decision-making using personal information.

Recent Developments

The first comprehensive US consumer-facing AI governance regulations. The CPPA Board adopted them July 24, 2025; OAL approved them September 22, 2025; they take effect January 1, 2026.

At a Glance

Applies to

Mental Health App · AI Companion · Automated Decision System

Who Must Comply

  • Businesses using ADMT to make significant decisions
  • Healthcare AI systems making or substantially influencing decisions
  • Any technology that processes personal information and replaces or substantially replaces human decision-making

Safety Provisions

  • Pre-use notice required before ADMT used for significant decisions
  • Opt-out rights for California consumers (subject to exceptions)
  • Access rights to information about ADMT logic and how outputs are used
  • Appeal rights for consumers to challenge ADMT results
  • Risk assessments required before implementing ADMT for healthcare decisions
  • Healthcare explicitly covered as 'significant decision'

Compliance & Enforcement

Key Dates

Jan 1, 2026

General effective date, risk assessment compliance begins

Jan 1, 2027

ADMT requirements compliance deadline

Apr 1, 2028

Risk assessment attestation and summary due to CPPA
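
The staggered deadlines above can be sketched as a small lookup, assuming the three dates and descriptions shown on this page (illustrative helper only, not legal advice):

```python
from datetime import date

# Key CPPA ADMT dates as listed on this page.
KEY_DATES = [
    (date(2026, 1, 1), "General effective date; risk assessment compliance begins"),
    (date(2027, 1, 1), "ADMT requirements compliance deadline"),
    (date(2028, 4, 1), "Risk assessment attestation and summary due to CPPA"),
]

def upcoming_deadlines(today: date) -> list[str]:
    """Return descriptions of deadlines that have not yet passed."""
    return [desc for d, desc in KEY_DATES if d >= today]
```

For example, a check run in mid-2027 would report only the April 2028 attestation deadline as still upcoming.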

Penalties

$8K/violation


Focus Areas

Mental health & crisis
Child safety
Algorithmic accountability
Active safeguards required

Compliance Help

Businesses must provide transparency through pre-use notices, enable opt-out rights, provide access to ADMT logic, allow appeals of ADMT decisions, and conduct risk assessments before deploying ADMT for significant decisions including healthcare.
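
A compliance team could model the five obligations above as a simple checklist. This is a hypothetical sketch (the class and field names are ours, not drawn from the regulation text) showing one way to flag gaps for an ADMT used in significant decisions:

```python
from dataclasses import dataclass

@dataclass
class ADMTDeployment:
    """Hypothetical record of an ADMT deployment's compliance posture."""
    makes_significant_decision: bool
    pre_use_notice_given: bool
    opt_out_offered: bool
    access_mechanism: bool   # consumers can learn ADMT logic and output use
    appeal_process: bool
    risk_assessment_done: bool

def compliance_gaps(d: ADMTDeployment) -> list[str]:
    """Return the unmet obligations; empty if no significant decision is made."""
    if not d.makes_significant_decision:
        return []  # the listed ADMT obligations attach to significant decisions
    checks = {
        "pre-use notice": d.pre_use_notice_given,
        "opt-out right": d.opt_out_offered,
        "access right": d.access_mechanism,
        "appeal right": d.appeal_process,
        "risk assessment": d.risk_assessment_done,
    }
    return [name for name, ok in checks.items() if not ok]
```

Actual compliance depends on the regulation's exceptions and definitions; a checklist like this is a starting point, not a substitute for the rule text.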


Cite This

APA

California Privacy Protection Agency. (2025). CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Regulations.

Related Regulations

In Effect US-CA

CA SB 53

First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.

In Effect US-CA

CA AB 2013

Requires GenAI developers to publish documentation about training datasets including sources, data types, copyright status, personal information inclusion, and processing methods.

Enacted US-NY

NY RAISE Act

Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.

Enacted US-TX

TX Healthcare AI Law

Requires healthcare practitioners using AI for diagnosis to review all AI-generated records and disclose AI use to patients. Mandates EHR data localization (Texas patient data must be physically stored in US). Applies to covered entities and third-party vendors.

Pending US-LA

LA Healthcare AI Act

Regulates use of artificial intelligence by healthcare providers in Louisiana. Permits AI for administrative tasks but prohibits AI from making treatment/diagnosis decisions without licensed professional review, directly interacting with patients on treatment matters, or generating therapeutic recommendations without professional approval.

Enacted US-VT

VT AADC

Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.

Last updated January 22, 2026. Verify against primary sources before relying on this information.