CA SB 53
Transparency in Frontier Artificial Intelligence Act (TFAIA)
First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.
Jurisdiction
California
Enacted
Sep 29, 2025
Effective
Jan 1, 2026
Enforcement
California Attorney General (exclusive authority)
Why It Matters
First-in-nation frontier AI transparency law. Only applies to very large models (10^26 FLOP threshold) and large developers ($500M+). Sets precedent for federal AI legislation. Focus on catastrophic risk (50+ deaths or $1B+ damage), not consumer protection.
Recent Developments
Signed by Governor Newsom on September 29, 2025. Successor to the vetoed SB 1047; drops that bill's kill-switch requirement, mandatory third-party audits, and cloud-provider liability. Creates the CalCompute public computing cluster (unfunded; contingent on appropriation).
At a Glance
Applies to: frontier models trained with >10^26 FLOP; large frontier developers ($500M+ annual revenue)
Harms addressed: catastrophic risk (50+ deaths or serious injuries, or $1B+ in damage)
Requires: published governance frameworks, pre-deployment transparency reports, quarterly risk reporting, critical safety incident reporting
Who Must Comply
Obligations fall on:
- Frontier developers (models trained with >10^26 FLOP)
- Large frontier developers ($500M+ annual revenue)
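The two coverage tests above can be sketched as a back-of-envelope check. The 6 × parameters × tokens compute estimate is a common scaling heuristic, not anything defined by the statute, and all names and figures below are illustrative assumptions, not legal advice.

```python
# Hedged sketch of the TFAIA coverage tests: the 10^26 FLOP training-compute
# threshold and the $500M annual-revenue test for "large frontier developer".
# The 6 * params * tokens FLOP estimate is a rough heuristic (an assumption),
# not part of the statute.

TFAIA_FLOP_THRESHOLD = 1e26          # frontier-model compute threshold
LARGE_DEVELOPER_REVENUE = 500e6      # large-frontier-developer revenue test (USD)

def estimated_training_flop(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6.0 * params * tokens

def tfaia_status(params: float, tokens: float, annual_revenue: float) -> str:
    """Classify a hypothetical developer under the two TFAIA coverage tests."""
    if estimated_training_flop(params, tokens) <= TFAIA_FLOP_THRESHOLD:
        return "below frontier threshold"
    if annual_revenue > LARGE_DEVELOPER_REVENUE:
        return "large frontier developer"
    return "frontier developer"

# A hypothetical 1T-parameter model trained on 20T tokens:
# 6 * 1e12 * 20e12 = 1.2e26 FLOP, above the 10^26 threshold.
print(tfaia_status(1e12, 20e12, annual_revenue=1e9))
```

Note that the revenue test only adds the heavier "large frontier developer" obligations; a smaller developer whose model crosses the compute threshold is still a frontier developer under the law.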
Safety Provisions
- Frontier AI Framework publication (governance, risk mitigation, cybersecurity, incident response)
- Quarterly catastrophic risk assessment reports to Office of Emergency Services
- Critical safety incident reporting within 15 days (24 hours if imminent danger)
- Transparency reports before deploying new frontier models
- Whistleblower protections with anonymous reporting channels
- Prohibition on materially false statements about catastrophic risk
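The two incident-reporting windows above (15 days, or 24 hours when there is imminent danger) can be illustrated with simple date arithmetic. This is a minimal sketch: how the statute actually counts time (calendar vs. business days, discovery vs. occurrence) is not addressed here.

```python
# Hedged sketch of the TFAIA critical-safety-incident reporting windows:
# 15 days from discovery, shortened to 24 hours if there is imminent danger.
# Calendar-day counting is an assumption for illustration only.

from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, imminent_danger: bool) -> datetime:
    """Return the latest time a critical safety incident may be reported."""
    window = timedelta(hours=24) if imminent_danger else timedelta(days=15)
    return discovered_at + window

discovered = datetime(2026, 3, 1, 9, 0)
print(reporting_deadline(discovered, imminent_danger=False))  # 2026-03-16 09:00:00
print(reporting_deadline(discovered, imminent_danger=True))   # 2026-03-02 09:00:00
```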
Exemptions
Federal Government Activities
- Lawful activity of a federal government entity is exempt
Non-AI Information Sources
- Harm from publicly accessible information also available from non-AI sources is exempt
Compliance & Enforcement
Key Dates
Jan 1, 2026
Core publication, reporting, truthfulness, and whistleblower obligations take effect
Jan 1, 2027
OES begins publishing anonymized, aggregated incident reports; CalCompute framework report due to the Legislature
Penalties
Civil penalties of up to $1M per violation
Compliance Help
Large developers must: publish a Frontier AI Framework covering governance, risk mitigation, and cybersecurity practices; submit quarterly risk reports to OES; report critical safety incidents within 15 days (24 hours if imminent danger); and maintain whistleblower channels.
Cite This
APA
California. (2025). Transparency in Frontier Artificial Intelligence Act (TFAIA).
Related Regulations
CA CPPA ADMT
California Privacy Protection Agency regulations establishing consumer rights and business obligations for Automated Decision-Making Technology (ADMT) that makes significant decisions including healthcare. Requires pre-use notice, opt-out rights, access rights, appeal rights, and risk assessments.
CA AB 2013
Requires GenAI developers to publish documentation about training datasets including sources, data types, copyright status, personal information inclusion, and processing methods.
MI AI Safety Transparency Act
Creates the AI Safety and Security Transparency Act requiring large AI developers to conduct regular risk assessments, third-party audits, and publicly disclose safety protocols. Targets 'critical risk' scenarios (harm to 100+ people or $100M+ damages). Applies to developers spending $100M+ annually on AI or $5M+ on individual models.
TX Healthcare AI Law
Requires healthcare practitioners using AI for diagnosis to review all AI-generated records and disclose AI use to patients. Mandates EHR data localization (Texas patient data must be physically stored in US). Applies to covered entities and third-party vendors.
LA Healthcare AI Act
Regulates use of artificial intelligence by healthcare providers in Louisiana. Permits AI for administrative tasks but prohibits AI from making treatment/diagnosis decisions without licensed professional review, directly interacting with patients on treatment matters, or generating therapeutic recommendations without professional approval.
FL AI Bill of Rights
Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.
Last updated January 23, 2026. Verify against primary sources before relying on this information.