TX TRAIGA
Texas Responsible Artificial Intelligence Governance Act (TRAIGA)
Comprehensive AI governance law built on a prohibited-uses approach. Bans AI that incites self-harm or suicide, exploits children, or intentionally discriminates. Government entities have additional disclosure requirements. Establishes a first-in-the-nation AI regulatory sandbox program.
Jurisdiction
Texas
US-TX
Enacted
Jun 22, 2025
Effective
Jan 1, 2026
Enforcement
Texas Attorney General (exclusive authority)
Signed June 22, 2025; effective January 1, 2026
What It Requires
Who Must Comply
This law applies to:
- Any person conducting business in Texas
- Any person producing products or services used by Texas residents
- Texas governmental entities (stricter requirements apply)
Exemptions
Hospital Districts
High confidence: Hospital districts created under the Health and Safety Code are exempt.
Conditions:
- Hospital district created under the Health and Safety Code
Higher Education Institutions
High confidence: Institutions of higher education are exempt.
Conditions:
- Institution of higher education
Federal Financial Institutions
High confidence: Federally insured financial institutions that comply with federal and state banking law are exempt.
Conditions:
- Federally insured
- Compliant with federal and state banking laws
Safety Provisions
- Prohibition on AI inciting physical self-harm, including suicide
- Prohibition on AI inciting harm to others or criminal activity
- Prohibition on AI-generated child exploitation content
- Prohibition on AI impersonating minors for sexual content
- Government-entity disclosure requirements before AI interactions
- Social scoring prohibition (government entities)
- Biometric identification restrictions without consent (government entities)
Compliance Timeline
Jan 1, 2026
Full TRAIGA effective date - all provisions take effect
Sep 1, 2026
AG must post online complaint mechanism
Enforcement
Enforced by
Texas Attorney General (exclusive authority)
Penalties
Up to $200,000 per violation; up to $40,000 per day for continuing violations; possible license suspension or revocation
- Curable violations: $10,000-$12,000 per violation
- Uncurable violations: $80,000-$200,000 per violation
- Continuing violations: $2,000-$40,000 per day
- Licensed professionals: up to $100,000 additional, plus license suspension
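To make the per-violation and per-day tiers concrete, the sketch below estimates maximum exposure for a single hypothetical uncurable violation that continues for a number of days. It is illustrative only, not legal guidance: the dollar figures are the statutory maximums listed above, while the scenario, names, and function are assumptions, not part of the statute.

# Illustrative sketch (assumed scenario, not legal guidance): maximum exposure for
# one hypothetical uncurable TRAIGA violation that continues for `days` days.
UNCURABLE_MAX_PER_VIOLATION = 200_000  # top of the $80,000-$200,000 range
CONTINUING_MAX_PER_DAY = 40_000        # top of the $2,000-$40,000 range

def max_exposure(days: int) -> int:
    """Return the maximum combined penalty for one continuing uncurable violation."""
    return UNCURABLE_MAX_PER_VIOLATION + CONTINUING_MAX_PER_DAY * days

# Example: a violation left unremediated for 30 days
print(max_exposure(30))  # 200,000 + 30 * 40,000 = 1,400,000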
Quick Facts
- Binding: Yes
- Mental Health Focus: Yes
- Child Safety Focus: Yes
- Algorithmic Scope: No
Why It Matters
Prohibits AI that incites self-harm or suicide, which is directly relevant to companion chatbots. Provides a 60-day cure period before AG enforcement and no private right of action. Separate from the TDPSA and SCOPE Act (data privacy): TRAIGA is AI governance.
Recent Developments
Signed June 2025. Creates a first-in-the-nation AI regulatory sandbox (36 months of testing with enforcement protection), establishes the Texas AI Council as an advisory body, and provides safe harbors for NIST AI RMF compliance and red-team testing.
Cite This
APA
Texas. (2025). Texas Responsible Artificial Intelligence Governance Act (TRAIGA). Retrieved from https://nope.net/regs/us-tx-traiga
BibTeX
@misc{us_tx_traiga,
title = {Texas Responsible Artificial Intelligence Governance Act (TRAIGA)},
author = {Texas},
year = {2025},
url = {https://nope.net/regs/us-tx-traiga}
}
Related Regulations
TX TDPSA + SCOPE
Texas AG Paxton is the most aggressive enforcer against AI companion companies; December 2024 investigations were launched against Character.AI, Reddit, Instagram, and Discord.
NY RAISE Act
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
CA SB 942
Requires large GenAI providers (1M+ monthly users) to provide free AI detection tools, embed latent disclosures (watermarks/metadata) in AI-generated content, and offer optional manifest (visible) disclosures to users.
CT SB 1295
Creates a complete ban on targeted advertising to users under 18, regardless of consent. Requires AI impact assessments. Connecticut issued its first CTDPA fine ($85,000) in 2025.
VT AADC
Vermont's design code is structured to be more litigation-resistant: it focuses on data-processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.
CA AB 489
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.