NY GBL Art. 47

New York General Business Law Article 47 (AI Companion Models)

Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses a three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action; AG enforcement only.

Jurisdiction

New York State

Enacted

May 9, 2025

Effective

Nov 5, 2025

Enforcement

New York Attorney General (exclusive)

Enacted via FY2026 budget bill; effective November 5, 2025

NY Senate

Why It Matters

Uses a three-part CONJUNCTIVE test: ALL THREE criteria must be met (memory/personalization, unprompted emotion questions, sustained personal dialogue). More specific than California's capability-based approach. Unlike California, there is NO video game exemption.

Recent Developments

Effective November 5, 2025 (180 days after May 9, 2025 signing). Governor Hochul sent letters to AI companion companies notifying them safeguards are now in effect.

At a Glance

Applies to

AI Companion, Character Chatbot

Harms addressed

Suicidal ideation & self-harm; physical harm to others; financial harm
Who Must Comply

  • AI companion chatbot operators serving New York users

Safety Provisions

  • "Reasonable efforts" protocol for detecting suicidal ideation and self-harm (ALL users)
  • Protocol must also address physical harm to others and financial harm
  • Beginning-of-conversation notification + every 3 hours for continuing interactions
  • Required disclosure text: "THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. IT IS UNABLE TO FEEL HUMAN EMOTION."
  • Disclosure format: verbal OR bold/capitalized text ≥16pt
  • Crisis referrals including 988 Suicide & Crisis Lifeline
  • Biannual reporting to Department of State
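The disclosure cadence above (once at the start of every conversation, then again every 3 hours of continuing interaction) can be sketched as a small session-state check. This is a hypothetical illustration, not statutory language or a compliance-approved implementation; the class and method names are invented for the example, and only the cadence and disclosure text come from the summary above.

```python
from datetime import datetime, timedelta

# Art. 47 cadence: at conversation start, then every 3 hours of continuing interaction
REMINDER_INTERVAL = timedelta(hours=3)

# Required disclosure text from the provision above
DISCLOSURE = (
    "THE AI COMPANION IS A COMPUTER PROGRAM AND NOT A HUMAN BEING. "
    "IT IS UNABLE TO FEEL HUMAN EMOTION."
)


class DisclosureTracker:
    """Tracks when the AI-status disclosure is due for one user session.

    Hypothetical helper: the statute mandates the cadence, not this design.
    """

    def __init__(self) -> None:
        self.last_disclosed: datetime | None = None  # no disclosure yet this session

    def disclosure_due(self, now: datetime) -> bool:
        # Due at the beginning of every conversation...
        if self.last_disclosed is None:
            return True
        # ...and again once 3 hours of continuing interaction have elapsed.
        return now - self.last_disclosed >= REMINDER_INTERVAL

    def record_disclosure(self, now: datetime) -> None:
        # Call after actually showing DISCLOSURE to the user.
        self.last_disclosed = now
```

In practice an operator would also need to satisfy the format requirement (verbal, or bold/capitalized text of at least 16pt), which this sketch does not address.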

Exemptions

Customer Service / Transactional Exclusion

Systems used by a business entity SOLELY INTENDED to provide users with commercial/product info, customer account info, or customer relationship info.

  • Used by a business entity
  • SOLELY INTENDED for commercial/product info, customer account info, or customer relationship info
  • Does not build a companion relationship

Compliance & Enforcement

Key Dates

Nov 5, 2025

All provisions take effect (180 days after the May 9, 2025 enactment)

Penalties

Civil penalties up to $15,000 per day


Focus Areas

Mental health & crisis
Child safety
Active safeguards required

Compliance Help

Requires a "reasonable efforts" protocol for crisis detection (documentation, testing, and monitoring demonstrate good faith); a notification system covering session start and 3-hour intervals; and mechanisms to provide mental health resources when risk is detected.

Cite This

APA

New York State. (2025). New York General Business Law Article 47 (AI Companion Models).

Related Regulations

Enacted US-NY

NY RAISE Act

Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.

Pending US-FL

FL Companion Chatbot Act

Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.

In Effect US-CA

CA SB243

First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.

In Effect US-NY

NY S 7676-B

Protects performers from exploitative digital replica contracts. Contracts for AI-generated digital replicas are void unless they describe use, performer has legal counsel or union representation, and contract doesn't replace work performer would have done.

Proposed US-CA

CA AI Child Safety Ballot

Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.

In Effect UK

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

Last updated February 17, 2026. Verify against primary sources before relying on this information.