NC AI Chatbot Licensing
North Carolina AI Chatbots Licensing/Safety/Privacy Act (SB 624)
Requires health information chatbots to obtain a license from the North Carolina Department of Justice before operating. Licensing requirements cover technical architecture documentation, data practices, security measures, and regulatory compliance. Civil penalties of $50,000 per violation.
Jurisdiction
North Carolina
Enacted
Pending
Effective
TBD
Enforcement
North Carolina Department of Justice
Referred to Committee on Rules and Operations of the Senate on March 26, 2025. Proposed effective date: January 1, 2026.
North Carolina General Assembly
Why It Matters
The most comprehensive health chatbot regulation proposed in the US. Creates a licensing barrier for mental health AI: health-related chatbots would require an NC DOJ license, with civil penalties of $50,000 per violation.
Recent Developments
Introduced and referred to committee in March 2025. Proposes a first-in-the-nation licensing requirement specifically for health information chatbots.
Who Must Comply
Obligations fall on:
- Chatbot operators dealing substantially with health information
- AI system developers offering health chatbots
- Distributors of health information chatbots in North Carolina
Safety Provisions
- Mandatory licensing for health information chatbots
- Technical architecture documentation required
- Data practice transparency requirements
- Security measure standards
- Regulatory compliance verification
- NC DOJ oversight and enforcement
Compliance & Enforcement
Penalties
$50K/violation
Compliance Help
Operators must obtain a health information chatbot license from the NC DOJ; document their technical architecture, data handling, and security measures; and demonstrate regulatory compliance.
Cite This
APA
North Carolina. (n.d.). North Carolina AI Chatbots Licensing/Safety/Privacy Act (SB 624).
Related Regulations
CA SB 53
First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.
MI AI Safety Transparency Act
Creates the AI Safety and Security Transparency Act requiring large AI developers to conduct regular risk assessments, third-party audits, and publicly disclose safety protocols. Targets 'critical risk' scenarios (harm to 100+ people or $100M+ damages). Applies to developers spending $100M+ annually on AI or $5M+ on individual models.
PA AI Mental Health Therapy Act
Imposes explicit prohibitions on AI systems making therapeutic judgments, generating treatment plans without human review, or simulating emotional interaction. Violations treated as unprofessional conduct under Commonwealth licensing laws.
FL AI Bill of Rights
Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.
TX Healthcare AI Law
Requires healthcare practitioners using AI for diagnosis to review all AI-generated records and disclose AI use to patients. Mandates EHR data localization (Texas patient data must be physically stored in US). Applies to covered entities and third-party vendors.
CA SB 867
Proposes a 4-year moratorium on the sale and manufacturing of toys with AI chatbot capabilities for children under 12. During the moratorium, a task force would develop safety standards with input from technologists, parents, and ethicists.
Last updated February 17, 2026. Verify against primary sources before relying on this information.