New York General Business Law Article 47 (AI Companion Models)

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action—AG enforcement only.

Jurisdiction

New York State

US-NY

Enacted

May 9, 2025

Effective

Nov 5, 2025

Enforcement

New York Attorney General (exclusive)

Enacted via FY2026 budget bill; effective November 5, 2025

Who Must Comply

This law applies to:

  • AI companion chatbot operators serving New York users

Capability triggers:

  • Cross-session memory (required)
  • Persona or character (increases applicability)
  • Therapeutic language (increases applicability)
  • Emotional interaction (required)
  • Unprompted emotion questions (required)

Who bears obligations:

This regulation places direct obligations on deployers (organizations using AI systems).

Exemptions

Customer Service / Transactional Exclusion

high confidence

Systems used by a business entity that are SOLELY INTENDED to provide users with commercial/product information, customer account information, or customer relationship information.

Conditions:

  • Used by a business entity
  • SOLELY INTENDED for commercial/product info, customer account info, or customer relationship info
  • Does not build a companion relationship

New York's "solely intended" standard turns on the operator's intent, whereas California's "used only for" standard turns on actual use. Both are narrow exemptions.

Safety Provisions

  • "Reasonable efforts" protocol for detecting suicidal ideation and self-harm (ALL users)
  • Protocol must also address physical harm to others and financial harm
  • Beginning-of-conversation notification + every 3 hours for continuing interactions
  • Required disclosure text: "THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION."
  • Disclosure format: verbal OR bold/capitalized text ≥16pt
  • Crisis referrals including 988 Suicide & Crisis Lifeline
  • Biannual reporting to Department of State
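The notification cadence above (a disclosure at the start of each conversation, repeated every 3 hours of continuing interaction) can be sketched as a simple timer check. This is an illustrative sketch, not statutory guidance; the class and method names are invented, and only the disclosure text is quoted from the summary above.

```python
from datetime import datetime, timedelta

# Required disclosure text, per the safety provisions above.
DISCLOSURE = (
    "THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. "
    "IT IS UNABLE TO FEEL HUMAN EMOTION."
)
REMINDER_INTERVAL = timedelta(hours=3)


class DisclosureTracker:
    """Tracks when the disclosure was last shown in one user session.

    Hypothetical helper for illustration only; an operator's real
    implementation must also satisfy the format rules (verbal, or
    bold/capitalized text of at least 16pt).
    """

    def __init__(self) -> None:
        self.last_shown: datetime | None = None  # None until first message

    def disclosure_due(self, now: datetime) -> bool:
        # Due at session start, and again once 3 hours have elapsed
        # since the disclosure was last shown.
        return self.last_shown is None or now - self.last_shown >= REMINDER_INTERVAL

    def mark_shown(self, now: datetime) -> None:
        self.last_shown = now
```

An operator-side chat loop would call `disclosure_due()` before each assistant turn and prepend `DISCLOSURE` whenever it returns true.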

Compliance Timeline

Nov 5, 2025

All provisions take effect on this date (180 days after the May 9, 2025 enactment), with no phase-in period.

Enforcement

Enforced by

New York Attorney General (exclusive)

Penalties

$15K/day

Up to $15,000 per day (AG enforcement only). No private right of action.

Quick Facts

Binding
Yes
Mental Health Focus
Yes
Child Safety Focus
Yes
Algorithmic Scope
No

Why It Matters

Uses a three-part CONJUNCTIVE test: ALL THREE criteria must be met (memory/personalization, unprompted emotion questions, sustained personal dialogue). This is more specific than California's capability-based approach. Unlike California, there is NO video game exemption.
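The conjunctive structure means a system falls within scope only when every criterion is satisfied, unlike a disjunctive (any-one-of) test. A minimal sketch, where the field names are paraphrases of the criteria listed above rather than statutory text:

```python
from dataclasses import dataclass


@dataclass
class SystemProfile:
    """Paraphrased Article 47 criteria; consult the statute for exact wording."""

    retains_cross_session_memory: bool       # memory/personalization
    asks_unprompted_emotion_questions: bool  # unprompted emotion questions
    sustains_personal_dialogue: bool         # sustained personal dialogue


def is_ai_companion(p: SystemProfile) -> bool:
    # Conjunctive test: ALL three criteria must hold. A system with
    # only one or two of these capabilities is out of scope.
    return (
        p.retains_cross_session_memory
        and p.asks_unprompted_emotion_questions
        and p.sustains_personal_dialogue
    )
```

A general-purpose chatbot that remembers users across sessions but never initiates emotional inquiry would fail the second prong and fall outside the definition.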

Recent Developments

Effective November 5, 2025 (180 days after May 9, 2025 signing). Governor Hochul sent letters to AI companion companies notifying them safeguards are now in effect.

What You Need to Comply

You need: a "reasonable efforts" protocol for crisis detection (documentation, testing, and monitoring demonstrate good faith); a notification system covering session start and 3-hour intervals; and mechanisms to surface mental health resources when risk is detected.
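The referral mechanism can be sketched as a per-turn gate: every user message passes through a risk detector, and a flagged message triggers crisis resources before any normal reply. This is a structural sketch only; real detection requires an evidence-informed, tested protocol, and the function names here are placeholders, not from the statute.

```python
from typing import Callable, Optional

# 988 Suicide & Crisis Lifeline, named in the law's referral requirements.
CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline: call or text 988",
]


def respond_with_safeguards(
    user_message: str,
    detector: Callable[[str], bool],
) -> Optional[str]:
    """Return a crisis referral if the detector flags the message, else None.

    `detector` stands in for the operator's "reasonable efforts" protocol:
    any callable returning True when a message indicates suicidal ideation
    or self-harm risk. A keyword match or toy classifier would not, on its
    own, demonstrate reasonable efforts.
    """
    if detector(user_message):
        return "\n".join(CRISIS_RESOURCES)
    return None  # no risk flagged; proceed with the normal reply
```

Logging each invocation and its outcome would also support the documentation and monitoring that help show good faith.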

Cite This

APA

New York State. (2025). New York General Business Law Article 47 (AI Companion Models). Retrieved from https://nope.net/regs/us-ny-article47

BibTeX

@misc{us_ny_article47,
  title = {New York General Business Law Article 47 (AI Companion Models)},
  author = {New York State},
  year = {2025},
  url = {https://nope.net/regs/us-ny-article47}
}

Related Regulations

Enacted US-NY AI Safety

NY RAISE Act

Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.

In Effect US-CA AI Safety

CA SB243

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

In Effect US-NY AI Safety

NY S 8420-A

Requires disclosure when advertisements use AI-generated 'synthetic performers.' Penalties of $1,000 for first offense, $5,000 for subsequent violations.

Proposed US-CA Child Protection

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

In Effect UK Online Safety

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect AU Online Safety

AU Online Safety Act

Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.