CA AI Child Safety Ballot
Artificial Intelligence (AI) and Child Safety Initiative
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises the age threshold for consent to data sales, prohibits certain AI products for children, and establishes a new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curricula, mandates school device bans during instruction, and creates a children's AI safety fund.
Jurisdiction: California (US-CA)
Enacted: Unknown
Effective: Unknown
Enforcement: California Attorney General; private right of action
Ballot initiative for the November 2026 election; currently gathering signatures to qualify. Competing with an OpenAI counter-proposal.
What It Requires
Who Must Comply
This law applies to:
- AI product developers and operators serving California children
- Companion chatbot providers (expanded definition)
- Schools (device ban requirements)
- Social media platforms
Safety Provisions
- Expands the definition of companion chatbot beyond existing SB 243
- Raises the age threshold for consent to the sale or sharing of personal information
- Prohibits certain AI products from being made available to children
- Establishes a new state regulatory structure for certain AI products
- Allows the state and private individuals to seek monetary awards (private right of action)
- Requires the Instructional Quality Commission to review AI literacy content in curriculum frameworks
- Requires schools to ban internet-enabled devices during instructional time
- Creates a children's AI safety fund to support state oversight and implementation
Enforcement
Enforced by: California Attorney General; private right of action
Penalties
Penalties pending regulatory determination
Monetary awards available to state and private plaintiffs (amounts TBD in final text)
Private Right of Action
Individuals can sue directly without waiting for regulatory action. This significantly increases liability exposure.
Quick Facts
- Binding: No
- Mental Health Focus: Yes
- Child Safety Focus: Yes
- Algorithmic Scope: No
- Private Action: Yes
Why It Matters
If passed, this would establish the strictest child AI safety framework in the US. The private right of action significantly increases compliance risk compared with SB 243, which is enforceable only by the Attorney General. Because it competes with the OpenAI proposal, voters will likely choose between two different approaches. If passed, the measure would layer on top of existing SB 243 requirements, which take effect in January 2026.
Recent Developments
Initiative filed in December 2025 by Common Sense Media founder Jim Steyer. OpenAI filed a competing ballot measure in December 2025 with less strict requirements. Neither measure has yet qualified for the ballot; both must gather signatures. Voters would decide in November 2026. Multiple hurdles remain before ballot qualification.
What You Need to Comply
Would expand beyond SB 243 requirements; specific obligations are TBD in the final ballot text. Likely to include enhanced companion chatbot protocols, stricter age verification, and content restrictions.
Cite This
APA
California. (n.d.). Artificial Intelligence (AI) and Child Safety Initiative. Retrieved from https://nope.net/regs/us-ca-ballot-2025-025
BibTeX
@misc{us_ca_ballot_2025_025,
  title = {Artificial Intelligence (AI) and Child Safety Initiative},
  author = {{California}},
  year = {n.d.},
  url = {https://nope.net/regs/us-ca-ballot-2025-025}
}
Related Regulations
CA AB 489
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.
CA AADC
Would require child-focused risk assessments (DPIA-style), safer defaults, and limits on harmful design patterns. Currently blocked on First Amendment grounds.
KOSA
Would establish duty of care for platforms regarding minor safety. Passed full Senate 91-3 in July 2024; passed Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
NY GBL Art. 47
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses a three-part conjunctive definition (all three criteria must be met). No private right of action; AG enforcement only.
Utah AI Mental Health Act
Consumer protection requirements for mental health chatbots including disclosure obligations and safeguards. Specifically targets AI applications marketed for mental health support.