TN AI Training Felony Act
Tennessee AI Training Felony Act (SB 1493 / HB 1455)
Creates criminal and civil penalties for knowingly training AI systems to encourage suicide or homicide, pose as licensed mental health professionals, or develop emotional relationships with individuals.
Jurisdiction
Tennessee
Enacted
Pending
Effective
TBD
Enforcement
Tennessee criminal courts and civil courts
Under review in committee as of March 2026. No chamber passage yet.
Tennessee General Assembly
Why It Matters
Most punitive state AI regulation proposed to date, creating felony criminal liability for AI training practices and allowing courts to order cessation of AI operations entirely.
Recent Developments
Under review in Tennessee General Assembly as of March 2026. Most aggressive AI companion regulation proposed in any US state.
Who Must Comply
Obligations fall on:
- AI system developers and trainers
- Entities that knowingly train AI for prohibited purposes
Safety Provisions
- Class A felony for training AI to encourage suicide or criminal homicide
- Prohibits training AI to pose as licensed mental health professional
- Prohibits training AI to offer ongoing emotional support or companionship
- Prohibits training AI to simulate human appearance, voice, or mannerisms
- Prohibits training AI to coax users into isolation or sharing sensitive data
- Prohibits training AI to develop emotional relationships with individuals
Compliance & Enforcement
Penalties
$150,000 per violation; criminal (felony) liability
Private Right of Action
Individuals can sue directly without waiting for regulatory action.
Cite This
APA
Tennessee General Assembly. (n.d.). Tennessee AI Training Felony Act (SB 1493 / HB 1455).
Related Regulations
TN AI Mental Health Prohibition
Prohibits any individual or entity that develops or deploys AI from advertising or representing that the AI is or is able to act as a mental health professional or is capable of providing therapy services.
OR SB 1546
Requires AI chatbot operators to implement evidence-based suicide and self-harm detection protocols, disclose AI nature to users, provide crisis referrals to 988 Suicide and Crisis Lifeline, and apply additional protections for minors including prohibiting deceptive personification.
MD HB 952
Regulates companion chatbot operators with mandatory disclosures, harm detection, and crisis referral protocols for self-harm and suicidal ideation, backed by product liability and a private right of action.
TN ELVIS Act
Protects individuals from unauthorized AI-generated use of their name, photograph, voice, or likeness. Explicitly covers AI-generated voice simulations. Criminal and civil penalties including treble damages for knowing violations.
NH HB 143
Criminalizes use of AI-generated responsive communications to facilitate, encourage, or solicit harmful acts to children, and creates a private right of action for affected children and their parents.
FL AI Bill of Rights
Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.
Last updated March 23, 2026. Verify against primary sources before relying on this information.