
NIST AI Risk Management Framework

The dominant voluntary AI governance framework in the US. Its four functions (Govern, Map, Measure, Manage) operationalize what regulators expect. Not legally binding, but heavily referenced.

Jurisdiction

United States

Enacted

Not applicable (voluntary framework, not legislation)

Effective

Jan 26, 2023

Enforcement

None (voluntary framework)

Who Must Comply

This framework applies to:

  • Organizations developing or deploying AI (voluntary)

Who bears obligations:

This framework directs its guidance at deployers (organizations using AI systems); as a voluntary framework, it creates no legally binding obligations.

Safety Provisions

  • Govern: organizational policies and culture
  • Map: context and risk understanding
  • Measure: risk assessment methods
  • Manage: response and mitigation strategies
  • Generative AI Profile (NIST AI 600-1) addresses GAI-specific risks

Enforcement

Enforced by

None (voluntary framework)

Quick Facts

Binding: No
Mental Health Focus: No
Child Safety Focus: No
Algorithmic Scope: Yes

Why It Matters

The Colorado AI Act provides an affirmative defense for organizations that comply with the NIST AI RMF. The framework is referenced by federal agencies and increasingly appears in procurement requirements.

Cite This

APA

United States. (2023). NIST AI Risk Management Framework. Retrieved from https://nope.net/regs/us-nist-ai-rmf

BibTeX

@misc{us_nist_ai_rmf,
  title = {NIST AI Risk Management Framework},
  author = {United States},
  year = {2023},
  url = {https://nope.net/regs/us-nist-ai-rmf}
}

Related Regulations

In Effect · US · AI Safety

State AG AI Warning

Coordinated state AG warnings: letters from 44 AGs (Aug 25, 2025, led by the TN, IL, NC, and SC AGs) and 42 AGs (Dec 2025, led by the PA AG) to OpenAI, Meta, and others, citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."

In Effect · US · AI Safety

Trump AI Preemption EO

Executive order directing federal agencies to preempt conflicting state AI laws while explicitly preserving state child safety protections. Creates a DOJ AI Litigation Task Force to challenge state laws and directs the FTC/FCC to establish federal standards. Highly controversial: legal experts dispute whether executive orders can preempt state legislation (only Congress or the courts have this authority).

Enacted · US-NY · AI Safety

NY RAISE Act

Requires large developers of frontier AI models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates a dedicated DFS office funded by developer fees.

Enacted · US-VT · Child Protection

VT AADC

Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.

In Effect · AE · Child Protection

UAE Child Digital Safety Law

UAE federal law establishing comprehensive child digital safety requirements for digital platforms and internet service providers, with extraterritorial reach to foreign platforms targeting UAE users. Requires age verification, privacy-by-default, content filtering, and proactive AI-powered content detection.

In Effect · US-NE · Child Protection

NE AADC

Nebraska design code blending privacy-by-design with engagement constraints (feeds, notifications, time limits) aimed at reducing compulsive use.