CA SB 53
Transparency in Frontier Artificial Intelligence Act (TFAIA)
First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.
Jurisdiction
California
US-CA
Enacted
Sep 29, 2025
Effective
Jan 1, 2026
Enforcement
California Attorney General (exclusive authority)
Signed September 29, 2025; core provisions effective January 1, 2026
What It Requires
Who Must Comply
This law applies to:
- Frontier developers (models trained with >10^26 FLOP)
- Large frontier developers ($500M+ annual revenue)
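The two applicability tiers above reduce to a simple threshold check. A minimal sketch, assuming the thresholds stated in this summary; the function names and constants are illustrative, not statutory language:

```python
# Illustrative helpers for SB 53's two applicability tiers.
# Thresholds come from the summary above; names are hypothetical.

FLOP_THRESHOLD = 1e26             # frontier model: training compute (FLOP)
REVENUE_THRESHOLD = 500_000_000   # large frontier developer: annual revenue (USD)

def is_frontier_developer(training_flop: float) -> bool:
    """Develops a model trained with more than 10^26 FLOP."""
    return training_flop > FLOP_THRESHOLD

def is_large_frontier_developer(training_flop: float, annual_revenue: float) -> bool:
    """A frontier developer whose annual revenue exceeds $500M."""
    return is_frontier_developer(training_flop) and annual_revenue > REVENUE_THRESHOLD
```

Note that the second tier is a strict subset of the first: a $500M+ developer with no >10^26 FLOP model is outside the law entirely.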
Exemptions
Federal Government Activities
High confidence: lawful federal government activity is exempt
Conditions:
- Federal government entity
Non-AI Information Sources
High confidence: harm from publicly accessible information not derived from AI is exempt
Conditions:
- Information available from non-AI sources
Safety Provisions
- Frontier AI Framework publication (governance, risk mitigation, cybersecurity, incident response)
- Quarterly catastrophic risk assessment reports to the Office of Emergency Services
- Critical safety incident reporting within 15 days (24 hours if imminent danger)
- Transparency reports before deploying new frontier models
- Whistleblower protections with anonymous reporting channels
- Prohibition on materially false statements about catastrophic risk
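The tiered incident-reporting windows in the list above (15 days normally, 24 hours when there is imminent danger) can be expressed as a small deadline calculation. A hypothetical sketch, not statutory language:

```python
from datetime import datetime, timedelta

def reporting_deadline(incident_time: datetime, imminent_danger: bool) -> datetime:
    """Illustrative deadline under the 15-day / 24-hour rule summarized above.

    incident_time: when the critical safety incident occurred (or was discovered).
    imminent_danger: whether the incident poses imminent risk of death or injury.
    """
    window = timedelta(hours=24) if imminent_danger else timedelta(days=15)
    return incident_time + window
```

For example, an ordinary incident on January 1 must be reported by January 16, while one involving imminent danger must be reported by January 2.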
Compliance Timeline
Jan 1, 2026
Core publication, reporting, truthfulness, and whistleblower obligations take effect
Jan 1, 2027
OES anonymized reporting; CalCompute framework report due to Legislature
Enforcement
Enforced by
California Attorney General (exclusive authority)
Penalties
$1M
Civil penalty of up to $1,000,000 per violation, scaled to the severity of the violation
Quick Facts
- Binding: Yes
- Mental Health Focus: No
- Child Safety Focus: No
- Algorithmic Scope: Yes
Why It Matters
First-in-nation frontier AI transparency law. Applies only to very large models (10^26 FLOP threshold) and large developers ($500M+ annual revenue). Sets a precedent for federal AI legislation. Focuses on catastrophic risk (50+ deaths or serious injuries, or $1B+ in property damage), not consumer protection.
Recent Developments
Signed by Governor Newsom on September 29, 2025. Successor to the vetoed SB 1047; drops that bill's kill-switch requirement, mandatory audits, and cloud-provider liability. Creates the CalCompute public computing cluster (unfunded, contingent on appropriation).
What You Need to Comply
Large frontier developers must: publish a Frontier AI Framework covering governance, risk-mitigation, and cybersecurity practices; submit quarterly risk reports to OES; report critical safety incidents within 15 days; and maintain whistleblower channels.
Cite This
APA
California. (2025). Transparency in Frontier Artificial Intelligence Act (TFAIA). Retrieved from https://nope.net/regs/us-ca-sb53
BibTeX
@misc{us_ca_sb53,
  title  = {Transparency in Frontier Artificial Intelligence Act (TFAIA)},
  author = {{California}},
  year   = {2025},
  url    = {https://nope.net/regs/us-ca-sb53}
}
Related Regulations
CA SB 942
Requires large GenAI providers (1M+ monthly users) to provide free AI detection tools, embed latent disclosures (watermarks/metadata) in AI-generated content, and offer optional manifest (visible) disclosures to users.
CA CPPA ADMT
California Privacy Protection Agency regulations establishing consumer rights and business obligations for Automated Decision-Making Technology (ADMT) that makes significant decisions including healthcare. Requires pre-use notice, opt-out rights, access rights, appeal rights, and risk assessments.
NY RAISE Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
VT AADC
Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.
NE AADC
Nebraska design code blending privacy-by-design with engagement constraints (feeds, notifications, time limits) aimed at reducing compulsive use.
AR HB 1958
Requires all Arkansas public entities to create AI policies with mandatory human-in-the-loop for final decisions. Covers state departments, schools, and political subdivisions.