Colorado HB 1263 (Conversational AI Service Operator Requirements)
Imposes obligations on conversational AI service operators, including minor-user protections, suicide and self-harm protocols, prohibitions on fostering emotional dependence and on engagement gamification, and annual safeguard reporting.
Jurisdiction: Colorado
Enacted: Pending
Effective: TBD
Enforcement: Colorado Attorney General
Introduced in the 2026 regular session; passed its first committee with amendments (version PA1, March 27, 2026) with bipartisan support. It is a separate bill from CO HB 1139 and CO HB 1195.
Colorado General Assembly — HB 26-1263

Why It Matters
Explicitly targets engagement gamification and emotional dependence as regulated harms — broadening state regulatory focus beyond disclosure and crisis response into the engagement-pattern design space.
Recent Developments
Passed first committee with amendments March 2026; bipartisan support noted.
Who Must Comply
Obligations fall on:
- Operators of conversational AI services accessible to Colorado users
Safety Provisions
- Mandatory AI disclosure
- Suicide and self-harm protocols with crisis referrals
- Prohibition on sexually explicit content for minors
- Prohibition on engagement gamification
- Prohibition on fostering emotional dependence
- Ban on false professional/therapeutic claims
- Parental access tools
- Annual reporting on safeguard efficacy
Compliance & Enforcement
Penalties: Pending regulatory determination
Related Regulations
ID Conversational AI Safety
Establishes safety requirements for public-facing conversational AI, including crisis service referrals for suicidal ideation, AI disclosure obligations, and enhanced protections for minors including anti-gamification and content safeguards.
OR SB 1546
Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.
GA AI Chatbot Child Safety
Requires disclosures related to conversational AI services, prohibits emotional manipulation of minors, and mandates crisis response protocols for suicide and self-harm detection.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
Brazil ECA Digital
Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.
KIDS Act
Omnibus children's internet safety legislation incorporating the SAFE BOTs Act (AI chatbot safeguards) and AWARE Act (AI education resources). Requires AI chatbot operators to disclose AI status to minors, provide crisis hotline information, and implement break prompts.
Last updated April 16, 2026. Verify against primary sources before relying on this information.