NY RAISE Act
Responsible AI Safety and Education Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical safety incidents within 72 hours, conduct annual protocol reviews, and undergo independent audits. Creates a dedicated DFS office funded by developer fees.
Jurisdiction
New York State
Enacted
Dec 19, 2025
Effective
Jan 1, 2027
Enforcement
New York State Department of Financial Services (DFS) - dedicated AI safety office
Signed December 19, 2025. Chapter amendments (replacing compute-cost thresholds with a $500M annual revenue requirement and lowering penalties) have been agreed upon and are to be formally enacted in the January 2026 legislative session. Effective January 1, 2027.
NY Senate S6953B
Why It Matters
Second state-level frontier AI safety law after California. Establishes incident reporting requirements and independent audit mandate. May set precedent for other states. Creates first state-level DFS office dedicated to AI safety enforcement.
Recent Developments
Signed into law December 19, 2025. Chapter amendments agreed with Governor Hochul will replace compute-cost thresholds with a $500M annual revenue requirement and reduce penalties. Formal publication of the chapter amendments was pending as of late January 2026. The amendments align the law with California's TFAIA, creating a unified benchmark among major tech states.
Who Must Comply
Obligations fall on:
- Large AI developers of frontier models
- Developers operating in whole or in part in New York State
Applicability thresholds: $500M annual revenue (per the agreed chapter amendments, replacing the original compute-cost thresholds)
Safety Provisions
- Large developers must create and publish safety and security protocols
- 72-hour reporting requirement for critical safety incidents
- Annual review of safety protocols accounting for capability changes
- Annual independent third-party audits
- No deployment if unreasonable risk of critical harm
- Protocols must address severe risks (bioweapons, automated criminal activity)
Compliance & Enforcement
Key Dates
Jan 1, 2027
Law takes effect
Apr 1, 2027
Initial compliance deadline for existing large developers (90 days after effective date)
Penalties
$3M
Compliance Help
Large frontier AI developers must establish and publish safety and security protocols before deployment, conduct annual independent audits, and report critical safety incidents within 72 hours. Deployment is prohibited where an unreasonable risk of critical harm exists.
Cite This
APA
New York State. (2025). Responsible AI Safety and Education Act.
Related Regulations
NY S 8420-A
Requires disclosure when advertisements use AI-generated 'synthetic performers.' Penalties of $1,000 for first offense, $5,000 for subsequent violations.
NY GBL Art. 47
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.
CT SB 1295
Creates COMPLETE BAN on targeted advertising to under-18s regardless of consent. Requires AI impact assessments. Connecticut issued first CTDPA fine ($85,000) in 2025.
TX Healthcare AI Law
Requires healthcare practitioners using AI for diagnosis to review all AI-generated records and disclose AI use to patients. Mandates EHR data localization (Texas patient data must be physically stored in US). Applies to covered entities and third-party vendors.
LA Healthcare AI Act
Regulates use of artificial intelligence by healthcare providers in Louisiana. Permits AI for administrative tasks but prohibits AI from making treatment/diagnosis decisions without licensed professional review, directly interacting with patients on treatment matters, or generating therapeutic recommendations without professional approval.
VT AADC
Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.
Last updated January 24, 2026. Verify against primary sources before relying on this information.