
NJ AI Mental Health Provider Ban

New Jersey Artificial Intelligence Consumer Protection Act (SB 4463 / AB 5603)

Prohibits AI systems from advertising or representing themselves as licensed mental health professionals. Violations constitute unlawful practices under the New Jersey Consumer Fraud Act, with penalties of up to $10,000 for a first offense and $20,000 for each subsequent offense.

Jurisdiction

New Jersey

Enacted

Pending

Effective

TBD

Enforcement

New Jersey Division of Consumer Affairs

Both bills died in committee when the 2024-2025 legislative session ended on January 12, 2026. They may be reintroduced in the new session.

New Jersey Legislature

Why It Matters

Prevents AI chatbots from misrepresenting their therapeutic capabilities. Similar bills have been introduced in Pennsylvania, Massachusetts, and Illinois.

Recent Developments

Introduced in May 2025; the Assembly version progressed to a second reading by June 2025. Both bills died in committee when the 2024-2025 legislative session ended on January 12, 2026.

At a Glance

Applies to

AI Companion, Mental Health App

Who Must Comply

  • Persons who develop AI systems in New Jersey
  • Persons who deploy AI systems in New Jersey
  • AI system operators serving New Jersey users

Safety Provisions

  • Prohibition on AI advertising as licensed mental health provider
  • Prohibition on AI representing ability to act as licensed mental health professional
  • Enforced under Consumer Fraud Act framework

Compliance & Enforcement

Penalties

$10,000 first offense; $20,000 per subsequent offense


Focus Areas

Mental health & crisis

Cite This

APA

New Jersey. (n.d.). New Jersey Artificial Intelligence Consumer Protection Act (SB 4463 / AB 5603).

Related Regulations

Pending US-ID

ID Conversational AI Safety

Establishes safety requirements for public-facing conversational AI, including crisis service referrals for suicidal ideation, AI disclosure obligations, and enhanced protections for minors including anti-gamification and content safeguards.

Enacted US-OR

OR SB 1546

Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.

Pending US-MD

MD HB 952

Regulates companion chatbot operators with mandatory disclosures, harm detection, and crisis referral protocols for self-harm and suicidal ideation, backed by product liability and a private right of action.

Enacted US-NH

NH HB 143

Criminalizes use of AI-generated responsive communications to facilitate, encourage, or solicit harmful acts to children, and creates a private right of action for affected children and their parents.

In Effect BR

Brazil ECA Digital

Comprehensive child digital safety law applying to any IT product or service directed at or likely to be accessed by minors in Brazil, with extraterritorial reach.

Proposed US

AI LEAD Act

Classifies AI systems as 'products' under federal law and establishes a federal cause of action for product liability claims against AI developers and deployers, including claims for design defects, failure to warn, and strict liability.

Last updated February 17, 2026. Verify against primary sources before relying on this information.