NY GBL Art. 47
New York General Business Law Article 47 (AI Companion Models)
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses a three-part conjunctive definition: all three criteria must be met. No private right of action; AG enforcement only.
Jurisdiction
New York State
US-NY
Enacted
May 9, 2025
Effective
Nov 5, 2025
Enforcement
New York Attorney General (exclusive)
Enacted via FY2026 budget bill; effective November 5, 2025
What It Requires
Harms Addressed
Suicidal ideation and self-harm; physical harm to others; financial harm.
Who Must Comply
This law applies to:
- AI companion chatbot operators serving New York users
Capability triggers (all three must be present):
- Retains and personalizes based on information from prior interactions
- Asks unprompted emotion-based questions
- Sustains an ongoing dialogue about personal matters
Exemptions
Customer Service / Transactional Exclusion
Systems used by a business entity SOLELY INTENDED to provide users with commercial/product info, customer account info, or customer relationship info.
Conditions:
- Used by a business entity
- SOLELY INTENDED for commercial/product info, customer account info, or customer relationship info
- Does not build a companion relationship
"Solely intended" focuses on intent vs California's "used only for" which focuses on actual use. Both are narrow exemptions.
Safety Provisions
- • "Reasonable efforts" protocol for detecting suicidal ideation and self-harm (ALL users)
- • Protocol must also address physical harm to others and financial harm
- • Beginning-of-conversation notification + every 3 hours for continuing interactions
- • Required disclosure text: "THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION."
- • Disclosure format: verbal OR bold/capitalized text ≥16pt
- • Crisis referrals including 988 Suicide & Crisis Lifeline
- • Biannual reporting to Department of State
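The notification cadence is mechanical enough to sketch. Below is a minimal illustration in Python of one way an operator might decide when to (re)issue the required disclosure: once at the start of a conversation, and again after every 3 hours of continuing interaction. The class and method names are hypothetical; the statute mandates the outcome, not this implementation.

```python
from datetime import datetime, timedelta

# Disclosure text as quoted in this summary.
DISCLOSURE = (
    "THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. "
    "IT IS UNABLE TO FEEL HUMAN EMOTION."
)
REMINDER_INTERVAL = timedelta(hours=3)  # re-notify every 3 hours

class DisclosureTracker:
    """Hypothetical helper: tracks when the disclosure was last shown."""

    def __init__(self) -> None:
        self.last_shown: datetime | None = None  # None = new conversation

    def disclosure_due(self, now: datetime) -> bool:
        # Due at the beginning of the conversation...
        if self.last_shown is None:
            return True
        # ...and again every 3 hours during a continuing interaction.
        return now - self.last_shown >= REMINDER_INTERVAL

    def mark_shown(self, now: datetime) -> None:
        self.last_shown = now

# Usage: check before sending each assistant reply.
tracker = DisclosureTracker()
now = datetime.now()
if tracker.disclosure_due(now):
    print(DISCLOSURE)  # in practice: verbal, or bold/capitalized text >= 16pt
    tracker.mark_shown(now)
```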
Compliance Timeline
Nov 5, 2025
All provisions take effect on this date, 180 days after the May 9, 2025 enactment
Enforcement
Enforced by
New York Attorney General (exclusive)
Penalties
$15K/day
Up to $15,000 per day (AG enforcement only). No private right of action.
Quick Facts
- Binding: Yes
- Mental Health Focus: Yes
- Child Safety Focus: Yes
- Algorithmic Scope: No
Why It Matters
Uses a three-part conjunctive test: all three criteria must be met (memory/personalization, unprompted emotion-based questions, sustained dialogue about personal matters). This is more specific than California's capability-based approach, and unlike California there is no video game exemption.
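Because the test is conjunctive, a system meeting only one or two criteria falls outside the definition. A minimal sketch of that decision rule, with parameter names of our own choosing rather than statutory terms:

```python
def is_ai_companion(
    retains_user_info: bool,             # memory/personalization across interactions
    asks_unprompted_emotion_questions: bool,
    sustains_personal_dialogue: bool,    # ongoing dialogue about personal matters
) -> bool:
    """Conjunctive test: all three criteria must hold. (A disjunctive
    test, by contrast, would trigger on any single criterion.)"""
    return (
        retains_user_info
        and asks_unprompted_emotion_questions
        and sustains_personal_dialogue
    )

# A bot with memory and sustained personal dialogue, but which never asks
# unprompted emotion-based questions, falls outside the definition:
assert is_ai_companion(True, False, True) is False
```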
Recent Developments
Effective November 5, 2025 (180 days after the May 9, 2025 signing). Governor Hochul sent letters to AI companion companies notifying them that the safeguards are now in effect.
What You Need to Comply
You need: a "reasonable efforts" protocol for crisis detection (documentation, testing, and monitoring demonstrate good faith); a notification system covering session start and 3-hour intervals; and mechanisms to surface mental health resources when risk is detected. A minimal sketch of the detection-and-referral piece follows.
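The sketch below shows one hedged way the detection-and-referral mechanism could slot into a message pipeline. The phrase list is a deliberately crude stand-in: the statute requires "reasonable efforts," not a specific detector, and every name here is hypothetical.

```python
import logging

# 988 is the real Suicide & Crisis Lifeline; the wording here is hypothetical.
CRISIS_RESOURCES = (
    "If you are in crisis, help is available: call or text 988 "
    "(Suicide & Crisis Lifeline)."
)

# Toy stand-in for a "reasonable efforts" detector. A real protocol would use
# an evaluated classifier backed by documentation, testing, and monitoring;
# the statute mandates the effort, not a particular technique.
RISK_PHRASES = ("hurt myself", "end my life", "kill myself")

def detect_crisis_risk(message: str) -> bool:
    return any(phrase in message.lower() for phrase in RISK_PHRASES)

def safety_prelude(message: str) -> list[str]:
    """Collect required safety content to send before the model's reply."""
    prelude: list[str] = []
    if detect_crisis_risk(message):
        logging.info("crisis protocol triggered")  # monitoring evidences good faith
        prelude.append(CRISIS_RESOURCES)
    return prelude

print(safety_prelude("some days I want to hurt myself"))
```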
Cite This
APA
New York State. (2025). New York General Business Law Article 47 (AI Companion Models). Retrieved from https://nope.net/regs/us-ny-article47
BibTeX
@misc{us_ny_article47,
  title = {New York General Business Law Article 47 (AI Companion Models)},
  author = {New York State},
  year = {2025},
  url = {https://nope.net/regs/us-ny-article47}
}

Related Regulations
NY RAISE Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
CA SB243
First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.
NY S 8420-A
Requires disclosure when advertisements use AI-generated 'synthetic performers.' Penalties of $1,000 for first offense, $5,000 for subsequent violations.
CA AI Child Safety Ballot
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
UK OSA
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
AU Online Safety Act
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.