CA AI Child Safety Ballot
Artificial Intelligence (AI) and Child Safety Initiative
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
Jurisdiction
California
Enacted
Pending
Effective
TBD
Enforcement
California Attorney General, private right of action
Ballot initiative targeting the November 2026 election; currently gathering signatures for qualification. Competing with an OpenAI-backed counter-proposal.
Source: CA Legislative Analyst's Office
Why It Matters
Could establish the strictest child AI safety framework in the US if passed. Its private right of action would allow individuals to sue directly, unlike SB 243, which provides for Attorney General enforcement only. Because it competes with an OpenAI-backed proposal, voters may be choosing between two different approaches. If passed, the initiative would layer on top of existing SB 243 requirements.
Recent Developments
Initiative filed December 2025 by Common Sense Media founder Jim Steyer. OpenAI filed a competing ballot measure in December 2025 with less strict requirements. Neither measure has yet qualified for the ballot; both must still gather sufficient signatures. Voters would decide in November 2026. Multiple hurdles remain before ballot qualification.
Who Must Comply
Obligations fall on:
- AI product developers and operators serving California children
- Companion chatbot providers (expanded definition)
- Schools (device ban requirements)
- Social media platforms
Safety Provisions
- Expands the definition of companion chatbot beyond existing SB 243
- Raises age threshold for consent to sale/sharing of personal information
- Prohibits certain AI products from being made available to children
- Establishes new state regulatory structure for certain AI products
- Allows state and private individuals to seek monetary awards (private right of action)
- Requires Instructional Quality Commission to review AI literacy content in curriculum frameworks
- Requires schools to ban internet-enabled devices during instructional time
- Creates children's AI safety fund to support state oversight and implementation
Compliance & Enforcement
Penalties
Penalties pending regulatory determination
Private Right of Action
Individuals can sue directly without waiting for regulatory action.
Compliance Help
Would expand beyond SB 243 requirements; specific obligations are TBD pending final ballot text. Likely to include enhanced companion chatbot protocols, stricter age verification, and content restrictions.
Cite This
APA
California. (n.d.). Artificial Intelligence (AI) and Child Safety Initiative.
Related Regulations
CA AB 489
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.
CA SB 867
Proposes a 4-year moratorium on the sale and manufacturing of toys with AI chatbot capabilities for children under 12. During the moratorium, a task force would develop safety standards with input from technologists, parents, and ethicists.
Ofcom Children's Codes
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
FL Companion Chatbot Act
Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
TX TRAIGA
Comprehensive AI governance with prohibited uses approach. Bans AI that incites self-harm/suicide, exploits children, or intentionally discriminates. Government entities have additional disclosure requirements. First-in-nation AI regulatory sandbox program.
Last updated February 17, 2026. Verify against primary sources before relying on this information.