CA AI Child Safety Ballot
Artificial Intelligence (AI) and Child Safety Initiative
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises the age threshold for consent to the sale of personal data, prohibits certain AI products for children, and establishes a new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum frameworks, mandates school device bans during instruction, and creates a children's AI safety fund.
Jurisdiction
California
Enacted
Pending
Effective
TBD
Enforcement
California Attorney General, private right of action
Ballot initiative for the November 2026 election; currently gathering signatures for qualification. Competing with an OpenAI counter-proposal.
Source: CA Legislative Analyst's Office
Why It Matters
Could establish the strictest child AI safety framework in the US if passed. Its private right of action allows individuals to sue directly, unlike SB 243, which provides for Attorney General enforcement only. Because it competes with an OpenAI-backed proposal, voters would choose between two different approaches. If passed, it would layer on top of existing SB 243 requirements.
Recent Developments
Initiative filed in December 2025 by Common Sense Media founder Jim Steyer. OpenAI filed a competing ballot measure in December 2025 with less strict requirements. Neither has yet qualified for the ballot; both must gather sufficient signatures. Voters would decide in November 2026, and multiple hurdles remain before ballot qualification.
Who Must Comply
Obligations fall on:
- AI product developers and operators serving California children
- Companion chatbot providers (expanded definition)
- Schools (device ban requirements)
- Social media platforms
Safety Provisions
- Expands the definition of companion chatbot beyond existing SB 243
- Raises the age threshold for consent to the sale or sharing of personal information
- Prohibits certain AI products from being made available to children
- Establishes new state regulatory structure for certain AI products
- Allows the state and private individuals to seek monetary awards (private right of action)
- Requires Instructional Quality Commission to review AI literacy content in curriculum frameworks
- Requires schools to ban internet-enabled devices during instructional time
- Creates a children's AI safety fund to support state oversight and implementation
Compliance & Enforcement
Penalties
Penalties pending regulatory determination
Private Right of Action
Individuals can sue directly without waiting for regulatory action.
Related Regulations
CA SB 1119
Comprehensive companion chatbot children's safety framework establishing mandatory design features, default settings, prohibited conduct, parental controls, independent audit requirements, and a private right of action.
CA AB 489
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
KOSA
Would establish a duty of care for platforms regarding minor safety. Passed the full Senate 91-3 in July 2024 and cleared the Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.
OR SB 1546
Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.
CO HB 1263
Imposes obligations on conversational AI service operators including minor-user protections, suicide and self-harm protocols, prohibition on emotional dependence and engagement gamification, and annual safeguard reporting.
Last updated February 17, 2026. Verify against primary sources before relying on this information.