MI LEAD for Kids Act

Michigan Leading Ethical AI Development (LEAD) for Kids Act (SB 760)

Ensures that AI companion chatbots with dangerous capabilities, including those capable of encouraging self-harm, encouraging illegal activities, or engaging in sexually explicit interactions, are inaccessible to children. Implements stronger safety measures to prevent the targeting and exploitation of minors.

Jurisdiction

Michigan

Enacted

Pending

Effective

TBD

Enforcement

Michigan Attorney General; relevant consumer protection authorities

Referred to Senate Finance, Insurance, and Consumer Protection Committee. Part of SB 757-760 package announced January 21, 2026.

Michigan Legislature

Why It Matters

Specifically targets AI companion chatbots and harmful behaviors including self-harm encouragement. Goes beyond disclosure requirements to mandate access restrictions for minors.

Recent Developments

Announced January 21, 2026 as part of Michigan Senate Democrats' digital safety package. Directly responds to documented harms from AI companion apps.

At a Glance

Applies to

AI Companion • Character Chatbot • Minors-focused

Who Must Comply

  • AI companion chatbot operators
  • Social media companies with AI features
  • Platform operators serving Michigan users
  • Developers of conversational AI systems

Safety Provisions

  • Makes dangerous AI companion chatbots inaccessible to children
  • Blocks AI chatbots capable of encouraging self-harm
  • Blocks AI chatbots encouraging illegal activities
  • Blocks AI chatbots enabling sexually explicit interactions with minors
  • Prevents social media/AI platforms from targeting minors

Compliance & Enforcement

Penalties

Civil penalties; enforcement provisions to be determined upon enactment


Focus Areas

Mental health & crisis
Child safety
Active safeguards required

Compliance Help

Covered operators must implement age verification and content filtering to prevent minors from accessing AI companion chatbots with dangerous capabilities (self-harm encouragement, illegal activity promotion, sexually explicit content).
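As a rough illustration only, the gating logic described above might be sketched as follows. This is a hypothetical example, not an implementation of SB 760: the capability categories, the `ChatbotProfile` structure, and the `may_access` check are all assumptions made for illustration, and the bill's final definitions and duties may differ once enacted.

```python
from dataclasses import dataclass, field

# Hypothetical capability tags mirroring the harms named in the bill summary;
# SB 760's enacted definitions may differ.
DANGEROUS_CAPABILITIES = {
    "self_harm_encouragement",
    "illegal_activity_promotion",
    "sexually_explicit_content",
}

@dataclass
class ChatbotProfile:
    """Illustrative record of a companion chatbot and its declared capabilities."""
    name: str
    capabilities: set = field(default_factory=set)

def may_access(user_is_verified_adult: bool, bot: ChatbotProfile) -> bool:
    """Block minors (and unverified users) from any chatbot carrying a
    dangerous capability tag; allow access to other chatbots."""
    if bot.capabilities & DANGEROUS_CAPABILITIES:
        return user_is_verified_adult
    return True
```

In this sketch, access defaults to denied unless age verification has affirmatively confirmed the user is an adult, which is the conservative reading of an access-restriction mandate.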

Cite This

APA

Michigan. (n.d.). Michigan Leading Ethical AI Development (LEAD) for Kids Act (SB 760).

Related Regulations

Pending US-FL

FL Companion Chatbot Act

Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.

In Effect US-CA

CA SB243

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

In Effect US-NY

NY GBL Art. 47

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.

Proposed US-CA

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

In Effect UK

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect GB

Ofcom Children's Codes

Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.

Last updated February 17, 2026. Verify against primary sources before relying on this information.