CA CPPA ADMT

CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Regulations

California Privacy Protection Agency regulations establishing consumer rights and business obligations for Automated Decisionmaking Technology (ADMT) that makes significant decisions, including healthcare decisions. Requires pre-use notice, opt-out rights, access rights, appeal rights, and risk assessments.

Jurisdiction

California

US-CA

Enacted

Sep 22, 2025

Effective

Jan 1, 2026

Enforcement

California Privacy Protection Agency (CPPA)

Finalized July 24, 2025; approved by OAL September 22, 2025

Who Must Comply

These regulations apply to:

  • Businesses using ADMT to make significant decisions
  • Healthcare AI systems making or substantially influencing decisions
  • Any technology that processes personal information and replaces/substantially replaces human decision-making

Capability triggers:

  • automatedDecisionMaking (required)
  • healthcareDecisions (increases applicability)
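The capability triggers distinguish a required trigger (automated decision-making must be present for the regulations to apply at all) from one that only increases applicability (healthcare decisions raise the stakes but do not create scope on their own). A minimal sketch of that logic, using hypothetical function names that are illustrative only and not drawn from the regulation text:

```python
def cppa_admt_applies(capabilities: set[str]) -> bool:
    """In scope only if the system uses automated decision-making.

    Healthcare decision-making alone does not trigger scope; it only
    increases applicability when ADMT is already present.
    """
    return "automatedDecisionMaking" in capabilities


def applicability_weight(capabilities: set[str]) -> int:
    """Illustrative scoring: 0 if out of scope, 1 for the required
    trigger, +1 for each capability that increases applicability."""
    if not cppa_admt_applies(capabilities):
        return 0
    score = 1
    if "healthcareDecisions" in capabilities:
        score += 1
    return score
```

For example, a system with only healthcare decision-making scores 0 (out of scope), while one combining ADMT with healthcare decisions scores 2.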

Safety Provisions

  • Pre-use notice required before ADMT is used for significant decisions
  • Opt-out rights for California consumers (subject to exceptions)
  • Access rights to information about ADMT logic and how outputs are used
  • Appeal rights for consumers to challenge ADMT results
  • Risk assessments required before implementing ADMT for healthcare decisions
  • Healthcare explicitly covered as 'significant decision'

Compliance Timeline

Jan 1, 2026

General effective date, risk assessment compliance begins

Jan 1, 2027

ADMT requirements compliance deadline

Apr 1, 2028

Risk assessment attestation and summary due to CPPA

Enforcement

Enforced by

California Privacy Protection Agency (CPPA)

Penalties

$7,500/violation

Per violation: up to $7,500

CCPA enforcement penalties apply (up to $7,500 per intentional violation)

Quick Facts

Binding
Yes
Mental Health Focus
Yes
Child Safety Focus
Yes
Algorithmic Scope
Yes

Why It Matters

Establishes the first comprehensive US consumer-facing AI governance requirements explicitly covering healthcare AI. Mental health AI services using automated decision-making must provide transparency, opt-out rights, and risk assessments. ADMT definition encompasses any technology that replaces or substantially replaces human decision-making using personal information.

Recent Developments

First comprehensive US consumer-facing AI governance regulations. Board adopted July 24, 2025; OAL approved September 22, 2025; effective January 1, 2026.

What You Need to Comply

Businesses must provide transparency through pre-use notices, enable opt-out rights, provide access to ADMT logic, allow appeals of ADMT decisions, and conduct risk assessments before deploying ADMT for significant decisions including healthcare.

Cite This

APA

California. (2025). CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Regulations. Retrieved from https://nope.net/regs/us-ca-cppa-admt

BibTeX

@misc{us_ca_cppa_admt,
  title = {CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology (ADMT), and Insurance Regulations},
  author = {California},
  year = {2025},
  url = {https://nope.net/regs/us-ca-cppa-admt}
}

Related Regulations

Enacted US-CA AI Safety

CA SB 942

Requires large GenAI providers (1M+ monthly users) to provide free AI detection tools, embed latent disclosures (watermarks/metadata) in AI-generated content, and offer optional manifest (visible) disclosures to users.

In Effect US-CA AI Safety

CA SB 53

First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.

Enacted US-NY AI Safety

NY RAISE Act

Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.

Enacted US-VT Child Protection

VT AADC

Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.

In Effect US-NE Child Protection

NE AADC

Nebraska design code blending privacy-by-design with engagement constraints (feeds, notifications, time limits) aimed at reducing compulsive use.

Enacted NZ Data Protection

NZ Biometric Code

Sets specific legal requirements under Privacy Act for collecting and using biometric data such as facial recognition and fingerprint scans. Prohibits particularly intrusive uses including emotion prediction and inferring protected characteristics like ethnicity or sex.