
Michigan AI Safety and Security Transparency Act (HB 4668)

Creates the AI Safety and Security Transparency Act, requiring large AI developers to conduct regular risk assessments, obtain third-party audits, and publicly disclose safety protocols. Targets 'critical risk' scenarios (harm to 100+ people or $100M+ in damages). Applies to developers spending $100M+ annually on AI or $5M+ on an individual model.

Jurisdiction

Michigan

Enacted

Pending

Effective

TBD

Enforcement

Michigan Department of Attorney General (presumed)

Status

Referred to the House Communications and Technology Committee; motion to discharge filed September 18, 2025

Source: Michigan Legislature

Why It Matters

If enacted, this would be one of the most comprehensive state-level AI safety laws. The spending thresholds ($100M annually or $5M per model) target major AI developers, and the 'critical risk' definition sets concrete harm thresholds.

Recent Developments

Introduced by Rep. Sarah Lightner (R-Springport). Part of a two-bill package with HB 4667. Motion to discharge from committee filed September 18, 2025.

At a Glance

Applies to

Foundation Model

Who Must Comply

  • AI developers spending $100M+ annually on AI development
  • AI developers spending $5M+ on individual foundation models
  • Foundation model developers in Michigan

Applicability thresholds:

  • $100M USD annual spend on AI development: subject to safety and transparency requirements
  • $5M USD spent on an individual model: subject to the same requirements
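Purely as an illustration of how the two thresholds combine (meeting either one triggers applicability), here is a minimal sketch. The function and constant names are hypothetical, and the figures simply restate the summary above; they are not statutory text.

    # Illustrative only: restates the thresholds described in this summary.
    # Names are hypothetical; HB 4668's actual statutory tests may differ.
    ANNUAL_SPEND_THRESHOLD_USD = 100_000_000   # $100M+ annual AI development spend
    PER_MODEL_SPEND_THRESHOLD_USD = 5_000_000  # $5M+ on a single foundation model

    def hb4668_applies(annual_ai_spend_usd: int, max_single_model_spend_usd: int) -> bool:
        """Return True if either applicability threshold is met."""
        return (annual_ai_spend_usd >= ANNUAL_SPEND_THRESHOLD_USD
                or max_single_model_spend_usd >= PER_MODEL_SPEND_THRESHOLD_USD)

    # Example: $20M annual spend overall, but one model cost $6M to develop.
    print(hb4668_applies(20_000_000, 6_000_000))  # True, via the per-model threshold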

Safety Provisions

  • Mandatory risk assessments for foundation models
  • Third-party safety audits required
  • Public disclosure of safety protocols
  • Testing for dangerous capabilities
  • Safeguards to mitigate critical risks

Compliance & Enforcement

Key Dates

Jan 1, 2026

Target implementation date for safety protocols (if enacted)

Penalties

Penalties pending regulatory determination


Focus Areas

Algorithmic accountability
Active safeguards required

Compliance Help

If enacted, large AI developers would have to implement safety protocols to prevent critical risks (serious harm or death to 100+ people, or $100M+ in damages), conduct risk assessments, obtain third-party audits, and publicly disclose safety measures.



Related Regulations

In Effect US-CA

CA SB 53

First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.

Pending US-MI

MI LEAD for Kids Act

Would make dangerous AI companion chatbots inaccessible to children, including chatbots capable of encouraging self-harming behaviors, illegal activities, or sexually explicit interactions. Implements stronger safety measures to prevent the targeting and exploitation of minors.

Pending US-MI

MI Criminal AI Act

Creates new felony offenses and mandatory prison sentences for the criminal use, development, or distribution of AI systems. Possessing, developing, deploying, or modifying an AI system with intent to commit a crime is punishable by 8 years' imprisonment.

Enacted US-TX

TX Healthcare AI Law

Requires healthcare practitioners using AI for diagnosis to review all AI-generated records and disclose AI use to patients. Mandates EHR data localization (Texas patient data must be physically stored in US). Applies to covered entities and third-party vendors.

Pending US-LA

LA Healthcare AI Act

Regulates use of artificial intelligence by healthcare providers in Louisiana. Permits AI for administrative tasks but prohibits AI from making treatment/diagnosis decisions without licensed professional review, directly interacting with patients on treatment matters, or generating therapeutic recommendations without professional approval.

Proposed US-CA

CA SB 867

Proposes a 4-year moratorium on the sale and manufacturing of toys with AI chatbot capabilities for children under 12. During the moratorium, a task force would develop safety standards with input from technologists, parents, and ethicists.

Last updated January 23, 2026. Verify against primary sources before relying on this information.