NY RAISE Act
Responsible AI Safety and Education Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates a dedicated DFS office funded by developer fees.
Jurisdiction
New York State
US-NY
Enacted
Dec 19, 2025
Effective
Jan 1, 2027
Enforcement
New York State Department of Financial Services (DFS) - dedicated AI safety office
Signed December 19, 2025; negotiated amendments changed the effective date to January 1, 2027
What It Requires
Harms Addressed
Who Must Comply
This law applies to:
- Large AI developers of frontier models
- Developers operating in whole or in part in New York State
Capability triggers:
Safety Provisions
- Large developers must create and publish safety and security protocols
- 72-hour reporting requirement for critical safety incidents
- Annual review of safety protocols accounting for capability changes
- Annual independent third-party audits
- No deployment if unreasonable risk of critical harm
- Protocols must address severe risks (bioweapons, automated criminal activity)
Compliance Timeline
Jan 1, 2027
Law takes effect
Apr 1, 2027
Initial compliance deadline for existing large developers (90 days after effective date)
Enforcement
Enforced by
New York State Department of Financial Services (DFS) - dedicated AI safety office
Penalties
$3M
Up to $1 million for a first violation; up to $3 million for repeat violations
Quick Facts
- Binding: Yes
- Mental Health Focus: No
- Child Safety Focus: No
- Algorithmic Scope: Yes
Why It Matters
Second state-level frontier AI safety law after California's. Establishes incident reporting requirements and an independent audit mandate, and may set a precedent for other states. Creates the first state-level DFS office dedicated to AI safety enforcement.
Recent Developments
Signed into law December 19, 2025. Reflects negotiated amendments between Governor Hochul and bill sponsors after June 2025 legislative passage. Effective date set for January 1, 2027.
What You Need to Comply
Large frontier AI developers must establish safety and security protocols before deployment, conduct annual independent audits, and report critical incidents within 72 hours. They may not deploy a model if an unreasonable risk of critical harm exists.
Cite This
APA
New York State. (2025). Responsible AI Safety and Education Act. Retrieved from https://nope.net/regs/us-ny-raise
BibTeX
@misc{us_ny_raise,
title = {Responsible AI Safety and Education Act},
author = {New York State},
year = {2025},
url = {https://nope.net/regs/us-ny-raise}
}
Related Regulations
NY S 8420-A
Requires disclosure when advertisements use AI-generated 'synthetic performers.' Penalties of $1,000 for first offense, $5,000 for subsequent violations.
NY GBL Art. 47
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.
CT SB 1295
Creates COMPLETE BAN on targeted advertising to under-18s regardless of consent. Requires AI impact assessments. Connecticut issued first CTDPA fine ($85,000) in 2025.
VT AADC
Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.
CA AB 489
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.
NE AADC
Nebraska design code blending privacy-by-design with engagement constraints (feeds, notifications, time limits) aimed at reducing compulsive use.