Products
Zambia's comprehensive data protection law with special protections for vulnerable persons and DPIA requirements for high-risk processing.
Botswana's modernized data protection law requiring Data Protection Impact Assessments and setting the age of consent at 16.
Seychelles' modern data protection law requiring DPO for large-scale processing and recognizing Cross-Border Privacy Rules certification.
Algeria's data protection law with mandatory DPO requirement added by 2025 amendment and 5-day breach notification.
Tanzania's data protection law requiring mandatory registration (valid for five years) and ministerial approval for cross-border transfers.
Nigeria's comprehensive data protection law. Section 37 restricts automated decisions. Age of consent 13+ with "where feasible" verification. 72-hour breach notification.
First African country to adopt comprehensive national AI policy. Establishes Responsible AI Office (RAIO) under MINICT. Implements RURA ethical guidelines covering beneficence, non-maleficence, autonomy, justice, explicability, transparency. Non-binding framework.
Uganda's comprehensive data protection law requiring parental consent for children's data and immediate breach notification.
South Africa's comprehensive data protection law. Section 71 restricts automated decisions. Film and Publications Amendment Act adds chatroom safety duties.
Comprehensive data protection law regulating personal data processing in Egypt. Requires parental consent for children under 15, mandatory Data Protection Officer appointment, and 72-hour breach notification.
Kenya's comprehensive law with Section 35 rights against harmful automated decisions. DATA LOCALIZATION requirement: one serving copy on Kenyan servers for certain contexts.
Angola's data protection law establishing right to non-automated individual decisions and requiring express consent for data processing.
GDPR-aligned data protection law requiring Data Protection Officer appointment and breach notification to Commissioner.
Senegal's data protection law requiring prior authorization for health data processing with fines up to CFA 100 million.
Côte d'Ivoire's data protection law prohibiting decisions based solely on automated processing with DPO exemption pathway.
Ghana's comprehensive data protection law establishing data subject rights including right to prevent automated decision-making and requiring breach notification.
Morocco's data protection law establishing rights regarding automated decision-making and requiring parental consent for children's data processing.
Cameroon's cybersecurity and cybercrime law establishing privacy protections in electronic communications and criminalizing child grooming.
Tunisia's data protection law requiring prior authorization for personal data processing and restricting cross-border transfers.
Peru's first comprehensive AI regulatory framework, inspired by the EU AI Act, and the first general AI regulation in Latin America. Establishes a three-tier risk-based approach: prohibited uses, high-risk systems, and low-risk/acceptable AI. Requires human oversight, transparency, and risk assessments for high-risk AI, including healthcare applications.
CARICOM's 2025 regional cybersecurity framework establishing a digital safety culture and coordinated incident response across member states.
First comprehensive AI law in Latin America. Promotes AI development while establishing ethical principles and governance framework. Creates the National Agency for Artificial Intelligence (ANIA) to oversee AI development and regulation.
First cybersecurity framework law in Latin America (Law 21,663 promulgated Mar 26, 2024; published Apr 8, 2024). Creates National Cybersecurity Agency (ANCI), mandatory incident reporting, and encryption rights.
Ontario's first AI-specific legislation regulating public sector use of AI systems. Requires accountability frameworks, risk management, disclosure, and human oversight. Also addresses cybersecurity and digital technology affecting minors under 18.
Non-binding AI governance guidelines establishing principles for responsible AI use. Argentina is positioning itself as an AI innovation hub with limited regulatory barriers. Emphasizes transparency, accountability, and human oversight. Multiple pending legislative proposals, inspired by the EU AI Act, aim to establish a formal regulatory authority.
10-point checklist for AI data processing. Mandatory PIAs for high-risk AI. Note: Separately, Colombia's Consejo Superior de la Judicatura adopted UNESCO AI Guidelines for judiciary (Dec 16, 2024).
Puerto Rico's comprehensive cybersecurity law establishing cybersecurity framework for public and private sectors, complementing Act 111-2005 breach notification.
First AI-specific law in Latin America. Privacy protection throughout AI lifecycle. Accountability for fundamental rights violations.
Ecuador's GDPR-inspired data protection law with 5-day breach notification (vs. GDPR's 72 hours) and DPIA requirements for high-risk processing.
Quebec's major privacy reform modernizing data protection laws with extraterritorial scope similar to GDPR. First Canadian provincial framework to directly address AI implications through automated decision-making provisions requiring disclosure, explanation rights, and human intervention options.
Jamaica's comprehensive data protection law establishing rights regarding automated decision-taking and requiring parental consent for children's data.
Belize's modern data protection law establishing protection from solely automated decisions and parental consent for children under 13.
Panama's comprehensive data protection law establishing ARCO rights plus portability and requiring breach notification to ANTAI.
Barbados' GDPR-aligned data protection law with biometric data definitions, mandatory DPO, and extraterritorial scope.
Paraguay's credit data protection law with limited scope. Defines sensitive data to include psychological data.
Dominican Republic's data protection law establishing Habeas Data remedy but lacking dedicated supervisory authority.
Nicaragua's data protection law enacted but with limited enforcement due to non-operational supervisory authority (DIPRODAP).
Trinidad and Tobago's data protection law with sensitive data protections including health information. Key provisions await proclamation.
CARICOM regional model policy guidelines for harmonizing cybercrime and data protection laws across Caribbean states. Influenced 13 of 15 member states' legislation.
Costa Rica's data protection law requiring database registration with PRODHAB and establishing comprehensive data subject rights.
Uruguay's comprehensive data protection law with EU adequacy status. Establishes automated decision-making rights and requires explicit consent for sensitive data.
Bahamas' data protection law establishing right to prevent direct marketing and data subject access rights. Modernization pending.
Honduras' transparency law with Habeas Data protection provisions. Comprehensive data protection bill pending as of 2024.
Puerto Rico's medical information privacy law with breach notification requirement 'as expeditiously as possible' - stricter than federal standards.
Risk-based framework similar to EU AI Act. Would prohibit excessive-risk AI (social scoring, autonomous weapons), require impact assessments for high-risk AI, with penalties up to BRL 50M or 2% Brazilian turnover.
Would have established a Digital Safety Commission with platform duties for seven categories of harmful content, including content inducing children to harm themselves, and required 24-hour CSAM takedown.
Would have regulated high-impact AI systems with potential penalties up to $25M or 5% of global revenue. Part of Bill C-27, which died when Parliament was prorogued in January 2025.
Comprehensive AI regulation establishing CONAIA (National Commission for Artificial Intelligence) as central regulatory authority under Ministry of Economy. Risk-based framework with authorization, transparency, and accountability requirements for high-risk AI systems. Follows constitutional amendment (February 2025) granting Congress authority to legislate on AI.
Most advanced AI regulatory framework in Latin America. Four-tier EU-inspired risk classification with prohibited AI including social scoring and deepfakes exploiting minors.
First comprehensive AI law in Southeast Asia. Risk-management oriented framework with high-risk AI list updated by Prime Minister. Applies extraterritorially to foreign organizations whose AI systems impact Vietnamese users.
First comprehensive AI legislation in Asia-Pacific and second in the world after EU. Regulates "High-Impact AI" in healthcare, energy, nuclear, transport, government, and education sectors. Requires transparency notifications, content labeling for generative AI, and fundamental rights impact assessments. Notable for lower penalties than EU AI Act and absence of prohibited AI practices.
First comprehensive AI law in Central Asia. Establishes risk-based classification (low/medium/high-risk), mandatory AI content labeling, and explicit prohibitions on manipulation, social scoring, and non-consensual emotion detection. Requires annual risk assessments for high-risk systems.
Requires licensed platforms to implement content moderation systems, child-specific safeguards, and submit Online Safety Plans. Nine categories of harmful content regulated.
Brunei's personal data protection order requiring DPIAs and imposing penalties of up to 10% of Brunei turnover or $1M.
World's first social media minimum age law. Platforms must prevent under-16s from holding accounts. Implementation depends on age assurance technology.
STRICTEST children's provisions in APAC. Children = under 18; verifiable parental consent MANDATORY; PROHIBITION on tracking, behavioral monitoring, targeted advertising to children.
Sets specific legal requirements under Privacy Act for collecting and using biometric data such as facial recognition and fingerprint scans. Prohibits particularly intrusive uses including emotion prediction and inferring protected characteristics like ethnicity or sex.
Mandatory labeling of AI-generated content (implicit for all, explicit where applicable). Released by State Administration for Market Regulation and Standardization Administration of China. Complements existing GenAI interim measures with three national standards for AI security and governance.
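By way of illustration, a minimal sketch of the "implicit label" concept: provenance metadata embedded in an AI-generated image file. The field names below are hypothetical; the accompanying national standards define the actual schema.

```python
# Hedged sketch only: embeds AI-generation provenance fields in a PNG.
# Field names are invented for illustration; the Chinese national standards
# specify the real schema and placement of implicit labels.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_implicit_label(src_path: str, dst_path: str,
                         producer: str, content_id: str) -> None:
    """Write provenance metadata into a PNG so the label travels with the file."""
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC", "true")                # marks content as AI-generated
    meta.add_text("AIGC:Producer", producer)     # generating service identifier
    meta.add_text("AIGC:ContentID", content_id)  # traceable content identifier
    image.save(dst_path, pnginfo=meta)           # dst_path should end in .png
```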
Creates "duty to make reasonable efforts" (not strict requirements) to follow AI principles. Establishes AI Strategy Center. Largely non-binding, consistent with Japan's "soft law" tradition.
Nepal national AI policy establishing governance framework and development priorities. Creates AI Governance Council (chaired by Minister for Communications and IT), AI Regulation Council, National AI Centre, and AI Regulatory Authority. Six pillars including ethics, human resource development, and sectoral application.
Pakistan's national AI roadmap establishing six strategic pillars: AI Innovation Ecosystem, Awareness and Readiness, Research and Development, Infrastructure, Governance, and International Cooperation. Creates National AI Fund (NAIF), Centres of Excellence in 7 cities, and targets training 200,000 individuals annually.
Myanmar's cybersecurity law requiring platforms with 100,000+ users to register and imposing data retention requirements. Enacted post-2021 coup with uncertain enforcement.
Comprehensive facial recognition regulation requiring consent, protecting minors, restricting public space use, mandating data localization, and requiring filing for large-scale processing (100K+ individuals).
Indonesia's comprehensive child online protection regulation establishing age-appropriate design requirements for electronic systems accessible to children. Most granular age classification globally (5 groups). Requires risk assessments, privacy-by-default, parental consent, DPIAs, and prohibits data profiling of children. First of its kind in Asia and Global South.
Establishes experimental legal regimes for digital innovation and AI, broadening liability for damages during testing and creating tracking mechanisms for AI-related incidents.
Requires bloggers with audiences exceeding 10,000 users to register with Roskomnadzor and restricts content reposting and advertising on unregistered pages.
Strengthens Privacy Act requirements for biometric data collection, raising the standard of conduct for collecting biometric information used for automated verification or identification. Such information cannot be collected unless the individual has consented and collection is reasonably necessary.
First mandatory AI governance requirements in Singapore, shifting from voluntary Model AI Governance Framework to binding obligations for financial sector. Establishes three mandatory focus areas: oversight and governance, risk management systems, and development/validation/deployment protocols.
Indonesia's comprehensive data protection law. Health and children's data = "specific personal data" with enhanced protections. Criminal penalties up to 6 years imprisonment.
South Korea's deepfake law is the world's strictest: 7 years for creating/distributing, 3 years for possessing/viewing deepfake sexual content. Even viewing is criminal.
Presidential decree updating Russia's national AI development strategy through 2030, establishing key principles including human rights protection, security, technological sovereignty, non-discrimination, and accountability.
Creates Commonwealth criminal offences for "deepfake sexual material" (AI/synthetic intimate imagery) without consent. Part of Australia's layered approach: criminal law + eSafety platform enforcement.
Comprehensive AI guidance from Hong Kong Privacy Commissioner. Governance, risk assessment, human oversight, data stewardship. Three core values: respect, benefit, fairness.
Singapore's GenAI-specific guidance: risks (hallucinations, harmful outputs, IP/provenance, misuse) and operational controls (evaluation, transparency, policies, incident response).
Regional AI governance framework for 10 ASEAN member states. Non-binding guide promoting transparency, fairness, security, and human-centricity. January 2025 expansion addresses six GenAI-specific risks including deepfakes, misinformation, and vulnerable population harms.
Sri Lanka's comprehensive data protection law - first in South Asia. Establishes human review rights for automated decisions and DPIA requirements for high-risk processing.
Requires generative AI providers to ensure content "upholds Core Socialist Values," implement content controls, and file algorithms with CAC within 10 business days.
Controls on "deep synthesis" (deepfake) technology including labeling requirements for all deep synthesis outputs and privacy consent for biometric editing.
Pacific Islands Forum regional cybersecurity framework (18 member states) with Boe Declaration (2018) and Lagatoi Declaration (2023) establishing coordinated digital safety standards backed by $27M Australian investment.
Thailand's GDPR-style law. Health data requires explicit consent. First major fine (THB 7M) August 2024. Draft Royal Decree on AI proposes EU-style risk classification.
Mongolia's data protection law defining health, genetic, and biometric data as sensitive with cross-border restrictions and Human Rights Commission oversight.
Requires algorithm filing/registration, user notification of recommendations, and opt-out mechanisms. Prohibits price discrimination based on user profiling.
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
Global voluntary certification for cross-border personal data protection. 9 participating economies: US, Canada, Mexico, Japan, South Korea, Singapore, Philippines, Taiwan, Australia.
Widely cited APAC governance framework: internal AI governance, risk management, human involvement, operations management, stakeholder transparency. Functions as "expected practice" in enterprise/procurement.
Fiji's online safety law covering cyberbullying, cyberstalking, and revenge porn with Online Safety Commission oversight and mandatory mediation.
Papua New Guinea's cybercrime law establishing 25+ cyber offenses with penalties up to 15 years for critical infrastructure attacks.
Establishes 10 communication principles and creates both criminal offenses and civil remedies for harmful digital communications. Amended 2022 for intimate image sharing. Note: Post-Christchurch rapid classification powers are in a separate law (Films, Videos, and Publications Classification Amendment Act 2021).
Japan's revenge porn law criminalizing distribution of private sexual images (3 years/¥500,000) but does NOT cover deepfakes - significant legal gap.
Uzbekistan's AI governance framework via amendments to Law on Informatization. Mandates AI content labeling, prohibits AI decisions affecting rights without human oversight, and establishes protections against AI harms to life, health, and dignity. Responds to 3x increase in AI-related violations (1,129 in 2023 to 3,553 in 2024).
Comprehensive AI Basic Act (pending) establishes seven guiding principles and risk-based classification. Note: Taiwan already has ENACTED deepfake/election AI provisions via separate laws (Criminal Code 2023, Election Law 2023, Fraud Prevention Act 2024).
Criminalizes computer-generated and simulated child sexual abuse material, which includes AI-generated imagery. One of few laws globally explicitly addressing synthetic CSAM.
Cambodia's draft data protection law establishing human intervention rights for automated decisions (Article 34) and mandatory DPO requirement.
Under Broadcasting Act framework, requires major social media services to implement systems reducing exposure to harmful content. Child safety is key driver.
Maldives' draft data protection law proposing Data Protection Authority and defining sensitive data including health information.
Major amendment to Taiwan's Personal Data Protection Act establishing independent Personal Data Protection Commission (PDPC) as mandated by Constitutional Court. Significantly strengthens data protection framework for public and private sectors, aligning with EU GDPR standards. Introduces data breach notification obligations, mandatory DPOs for government agencies, and enhanced enforcement powers.
Bangladesh's draft data protection law requiring DPO, imposing data localization requirements, and establishing fines up to BDT 300,000.
Would establish due diligence framework for synthetically generated information including labeling, traceability, and platform processes for handling synthetic media harms.
10 mandatory guardrails proposed for high-risk AI: accountability, risk management, data governance, testing, human oversight, transparency, contestability, supply chain transparency, record keeping, conformity assessment.
Modernized product liability framework explicitly covering AI systems and software as products. Shifts burden of proof in complex AI cases, allows disclosure orders for technical documentation, and addresses liability for AI-caused harm including through software updates.
Finland's EU AI Act implementation using decentralized supervision model. Traficom serves as single point of contact and coordination authority. Ten market surveillance authorities share enforcement across sectors. New Sanctions Board handles fines over EUR 100,000.
Hungary's comprehensive AI law implementing the EU AI Act. Designates the National Media and Infocommunications Authority (NMHH) as the primary supervisory authority, with sectoral regulators for specific domains.
First EU member state to fully implement the EU AI Act. Designates three competent authorities, establishes penalty framework aligned with EU maximums, and grants inspection/enforcement powers. Does not add material requirements beyond EU AI Act.
Austria's digital sovereignty framework establishing Sovereignty Compass for AI audits and mandatory Digi-Check for all legislation.
World's first comprehensive risk-based regulatory framework for AI systems. Classifies AI by risk level with escalating requirements from prohibited practices to high-risk obligations.
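As a schematic illustration of the escalating tiers (simplified; the Act's annexes define the real categories and obligations):

```python
# Simplified model of the four-tier structure. Tier contents and obligations
# are illustrative paraphrases, not the Act's text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"   # e.g. social scoring
    HIGH = "high-risk system"              # e.g. listed high-risk use cases
    LIMITED = "transparency-only"          # e.g. chatbots, synthetic media
    MINIMAL = "no mandatory obligations"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
    RiskTier.HIGH: ["risk management system", "data governance",
                    "technical documentation", "human oversight",
                    "conformity assessment"],
    RiskTier.LIMITED: ["disclose AI interaction", "label synthetic content"],
    RiskTier.MINIMAL: [],
}
```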
Comprehensive platform regulation with tiered obligations. VLOPs (45M+ EU users) face systemic risk assessments, algorithmic transparency, and independent audits.
Estonia's €85M AI and Data Action Plan establishing safety testing framework and human-centered AI deployment principles.
Austria's national AI authority established within RTR (Rundfunk und Telekom Regulierungs-GmbH) for EU AI Act market surveillance coordination.
Netherlands' algorithmic risk assessment framework specifically addressing mental health chatbots in risk reports and requiring Fundamental Rights Impact Assessment (FRIA).
Establishes AESIA as Spain's national competent authority for AI supervision - the first such agency in the EU. Headquartered in A Coruña, Galicia. Creates voluntary certification framework for ethical AI systems.
Switzerland's revised data protection law with Article 21 automated decision transparency requirements, human review rights, and fines up to CHF 250,000.
Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.
Serbia's non-binding AI ethics guidelines establishing explainability, accountability, and transparency principles.
Requires hosting services to remove terrorist content within one hour of receiving a removal order. One of few regulations with real-time moderation mandates.
Portugal's Charter of Digital Rights with Article 9 requiring AI to respect fundamental rights and establishing algorithmic auditability principles.
Malta's voluntary ethical AI framework with four ethical principles and certification pathway via Malta Digital Innovation Authority (MDIA).
Serbia's GDPR-aligned data protection law with profiling safeguards and DPIA requirements.
Foundational EU data protection law with direct AI enforcement precedent. Article 22 restricts automated decision-making; Article 9 classifies mental health data as special category requiring explicit consent; Article 8 sets children's consent thresholds (13-16 by member state).
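In system-design terms, Article 22 implies a human-in-the-loop gate for significant decisions. A hedged sketch, with names and structure invented for illustration:

```python
# Illustrative Article 22 gate: a solely automated decision with legal or
# similarly significant effect is escalated to a human rather than returned.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    outcome: str
    automated: bool           # produced without meaningful human involvement
    significant_effect: bool  # legal or similarly significant effect

def finalize(decision: Decision, review_queue: list) -> Optional[Decision]:
    if decision.automated and decision.significant_effect:
        review_queue.append(decision)  # route to human review (Art. 22 right)
        return None
    return decision
```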
Requires providers of certain telemedia services to implement provider-side precautionary measures ("Vorsorgemaßnahmen"), assessed against evaluation criteria published by the federal regulator BzKJ.
Italian DPA (Garante) is most aggressive EU enforcer on AI. Precedent-setting enforcement against ChatGPT and Replika. Enforcement theory: companion AI processes special category health data.
France's 2024 "digital space" law strengthening national digital regulation and enforcement levers via ARCOM across platform safety and integrity issues.
Proposed permanent framework replacing the interim derogation. Parliament's position (Nov 2023) limits detection to known/new CSAM and excludes E2EE services. The Council has not yet agreed a General Approach.
Temporary legal bridge allowing certain communications providers to voluntarily detect/report/remove CSAM, notwithstanding ePrivacy constraints. Extended via 2024/1307 while permanent CSAR negotiated.
Estonia's Administrative Procedure Act with provisions for automated administrative acts in defined sectors and transparency requirements.
Ukraine's draft GDPR-aligned data protection law establishing 72-hour breach notification and automated processing rules.
EU directive setting baseline safety and minor-protection duties for audiovisual media services and video-sharing platform services, including measures to protect minors from harmful content.
Norway implementing full EU AI Act through EEA Agreement. Most comprehensively regulated non-EU jurisdiction for AI. Also implementing Digital Services Act with some Norwegian additions.
Poland's draft law implementing EU AI Act domestically, creating KRiBSI (national AI authority), regulatory sandboxes, and binding opinions mechanism.
Most specific international guidance on children and AI. Ten requirements for child-centered AI including development/wellbeing support, data/privacy protection, and safety.
First legally binding international AI treaty. Signed by EU, UK, US (Sep 2024), Canada, Japan (Feb 2025) and others. Requires risk/impact assessments, transparency, accountability. National security exemptions apply.
Continent-wide AI strategy endorsed by African Union Executive Council covering 55 member states. Phased implementation 2025-2030. Phase I (2025-2026) focuses on creating governance frameworks, developing national AI strategies, resource mobilization, and capacity building. Aims to harmonize AI development across Africa while respecting member state sovereignty.
First-ever UN General Assembly resolution on AI. Adopted by consensus with 125 co-sponsors (US-led). Establishes human rights as applicable across the AI lifecycle, encourages regulatory frameworks, and calls for bridging AI divides between countries. Non-binding but sets global normative expectations.
First certifiable international standard for AI management systems. Uses Plan-Do-Check-Act methodology. Third-party certification is available, and several major AI providers have achieved it.
AI risk management guidance complementing ISO 31000. Lifecycle risk management; audit/procurement language.
Global normative framework adopted by all 193 UN Member States. Policy Area 8 (Health and Social Wellbeing) directly addresses mental health AI.
WHO guidance emphasizing mental health AI often has methodological/quality flaws requiring extra scrutiny. Six ethical principles for health AI.
Five value-based principles endorsed by 47 governments. OECD definitions of "AI system" and "AI lifecycle" adopted in EU AI Act, US regulations, and CoE Convention.
11 guiding principles for advanced AI. Explicitly prohibits AI posing substantial safety or human rights risks. Code of conduct for developers.
UAE federal law establishing comprehensive child digital safety requirements for digital platforms and internet service providers, with extraterritorial reach to foreign platforms targeting UAE users. Requires age verification, privacy-by-default, content filtering, and proactive AI-powered content detection.
Israel's most significant privacy reform in 40 years, explicitly covering AI systems. Requires Data Protection Officers (DPOs) for entities processing sensitive data at scale, mandates Data Protection Impact Assessments (DPIAs) before AI deployment, and enhances Protection of Privacy Authority enforcement powers. One of first data protection laws to explicitly require DPIAs before AI development or deployment.
First enacted AI-specific regulation in the Middle East, Africa, and South Asia (MEASA) region. Establishes risk-based framework for AI systems in the DIFC financial free zone, with requirements for transparency, human oversight, and accountability.
Jordan's data protection law with medical data processing exceptions, data portability rights, and an oversight structure that includes security-services representation.
Ambitious national strategy positioning Egypt as regional AI hub for Africa and Middle East. Targets 7.7% ICT sector GDP contribution by 2030, training 30,000 AI specialists, establishing 250 AI companies. Built on six strategic pillars: governance, infrastructure, technology, data, ecosystem, and talent. Accompanied by Egyptian Charter for Responsible AI (April 2023) with ethics principles.
Saudi Arabia's comprehensive personal data protection law with extraterritorial scope, DPO requirements for sensitive processing, and National Data Governance Platform registration.
Binding AI governance requirements for Qatar's financial sector. Mandates board-level accountability, risk assessments, human-in-the-loop for high-impact decisions, and prior QCB approval for high-risk AI systems.
Kuwait's data privacy regulation requiring guardian consent for minors under 18, 72-hour breach notification, and automated decision restrictions.
Oman's data protection law with world's strictest health data regulation: outright BAN on health data processing without Ministry of Health permit. Also requires 72-hour breach notification.
⚠️ WARNING: Syria's data protection law functions as surveillance tool requiring mandatory data sharing with authorities. NOT suitable for confidential mental health services.
Comprehensive media regulation requiring licensing for all digital platforms, social media operations, and influencers. 20 binding content standards with significant penalties.
⚠️ HIGH RISK: Syria's cybercrime law requires ISPs to retain ALL content with government access. NOT suitable for confidential mental health services requiring privacy.
Bahrain's GDPR-aligned data protection law with automated decision-making restrictions and Data Protection Guardian requirement.
Lebanon's electronic transactions and data protection law lacking independent supervisory authority, relying on court remedies for enforcement.
Palestine's cybercrime law with 120-day data retention requirement and doubled penalties for crimes against minors.
Kuwait's cybercrime law criminalizing personal data breaches with 3 years imprisonment and fines of KWD 3,000-10,000.
Proposed comprehensive AI law establishing a risk-based classification system similar to the EU AI Act. Would prohibit high-risk AI practices including social scoring and real-time biometric surveillance, require transparency for AI-generated content, and establish AI regulatory authority.
Comprehensive AI governance from Saudi Data & AI Authority: Ethics Principles (Sep 2023), Generative AI Guidelines (Jan 2024), AI Adoption Framework (Sep 2024). Combined with PDPL creates binding + guidance framework.
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Omnibus data legislation covering customer data access, digital verification services, the Information Commission, and AI-related provisions including copyright/training transparency requirements and new criminal offenses for creating AI-generated intimate images (deepfakes).
UK's enforceable "privacy-by-design for kids" regime. Applies to online services likely to be accessed by children under 18. Forces high-privacy defaults, limits on profiling/nudges, DPIA-style risk work, safety-by-design.
The UK's foundational data protection law, incorporating the UK GDPR (retained EU GDPR post-Brexit). Substantively mirrors EU GDPR with ICO as sole enforcer. Article 22 restricts automated decision-making; Article 9 classifies mental health as special category data; children's consent age set at 13. Parent framework for UK Children's Code; amended by DUA Act 2025.
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
Sector-specific, principles-based approach using existing regulators. Five cross-sector principles guide regulatory application rather than horizontal AI legislation.
Executive order directing federal agencies to preempt conflicting state AI laws while explicitly preserving state child safety protections. Creates DOJ AI Litigation Task Force to challenge state laws, directs FTC/FCC to establish federal standards. Highly controversial - legal experts dispute whether executive orders can preempt state legislation (only Congress or courts have this authority).
September 2025 FTC compulsory orders to 7 AI companion companies demanding information on children's mental health impacts. Precursor to enforcement.
Coordinated state AG warnings: 44 AGs (Aug 25, 2025, led by TN, IL, NC, and SC AGs) and 42 AGs (Dec 2025, led by PA AG) to OpenAI, Meta, and others citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
First federal law addressing AI-generated intimate imagery. Criminalizes publication of nonconsensual intimate imagery (NCII) including AI "digital forgeries." Creates 48-hour takedown obligation for platforms.
Dominant voluntary AI governance framework in the US. Four functions (Govern, Map, Measure, Manage) operationalize what regulators expect. Not legally binding but heavily referenced.
Baseline US children's data privacy regime. Applies to operators of websites/online services directed to children under 13, and to general-audience services with actual knowledge they collect personal info from under-13 users.
Comprehensive federal AI policy requiring safety testing, reporting, and standards development. Revoked in January 2025 by new administration.
Would require age verification, disclosures, and broader child protections for AI chatbots. Part of emerging federal focus on companion AI safety for minors.
Explicitly defines "companion AI chatbot" and "suicidal ideation" in statutory context. Sets covered-entity obligations including age verification.
Creates federal civil remedy for victims of nonconsensual AI-generated intimate imagery (deepfake porn). Allows victims to sue creators, distributors, solicitors, and possessors with intent to distribute.
Would expand COPPA-style protections to teens (13-16) and add stronger constraints including limits on targeted advertising to minors. Often paired politically with KOSA.
FTC applies Section 5 authority against unfair/deceptive AI practices, plus AI-specific rules including the Fake Reviews Rule (Oct 2024) prohibiting AI-generated fake reviews.
Would establish duty of care for platforms regarding minor safety. Passed full Senate 91-3 in July 2024; passed Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.
Requires disclosure to minors that they are interacting with AI (not a human) and that the AI is not a licensed professional. Baseline transparency approach.
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.
Requires large GenAI providers (1M+ monthly users) to provide free AI detection tools, embed latent disclosures (watermarks/metadata) in AI-generated content, and offer optional manifest (visible) disclosures to users.
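The detection-tool duty is the mirror image of latent disclosure. Assuming metadata-based disclosures like those sketched earlier in this list, a minimal checker might look like this (real provenance systems such as C2PA use signed manifests, not bare text fields):

```python
# Hedged sketch of an AI-content detection check for metadata-based latent
# disclosures; the "AIGC" field name matches the earlier sketch and is
# equally hypothetical.
from PIL import Image

def has_latent_disclosure(path: str) -> bool:
    """True if the image carries an embedded AI-generation provenance field."""
    image = Image.open(path)
    text_fields = getattr(image, "text", {})  # PNG textual metadata, if present
    return text_fields.get("AIGC") == "true"
```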
Creates COMPLETE BAN on targeted advertising to under-18s regardless of consent. Requires AI impact assessments. Connecticut issued first CTDPA fine ($85,000) in 2025.
First comprehensive US state law regulating high-risk AI systems. Modeled partly on EU AI Act with developer and deployer obligations for consequential decisions.
First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.
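A minimal sketch of the kind of protocol contemplated here. Naive keyword matching stands in for the evidence-based detection the law requires; production systems would use validated classifiers:

```python
# Hedged sketch of a crisis-referral guardrail for a companion chatbot.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "end my life")

CRISIS_REFERRAL = (
    "It sounds like you may be in crisis. You can reach the 988 Suicide & "
    "Crisis Lifeline in the US by calling or texting 988."
)

def respond(user_message: str, model_reply: str) -> str:
    """Return a crisis referral instead of the model reply when risk is detected."""
    lowered = user_message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return CRISIS_REFERRAL  # route to crisis resources, not the model
    return model_reply
```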
Comprehensive AI governance with prohibited uses approach. Bans AI that incites self-harm/suicide, exploits children, or intentionally discriminates. Government entities have additional disclosure requirements. First-in-nation AI regulatory sandbox program.
Requires GenAI developers to publish documentation about training datasets including sources, data types, copyright status, personal information inclusion, and processing methods.
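The required documentation maps naturally onto a structured record. A sketch whose field names paraphrase the statute's categories rather than quoting its text:

```python
# Hedged sketch of a per-dataset disclosure record for a GenAI developer.
from dataclasses import dataclass

@dataclass
class TrainingDatasetDisclosure:
    sources: list[str]               # where the data was obtained
    data_types: list[str]            # e.g. text, images, audio
    contains_copyrighted: bool       # copyright status
    contains_personal_info: bool     # personal information inclusion
    processing_description: str      # cleaning / modification applied
```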
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.
Nebraska design code blending privacy-by-design with engagement constraints (feeds, notifications, time limits) aimed at reducing compulsive use.
California Privacy Protection Agency regulations establishing consumer rights and business obligations for Automated Decision-Making Technology (ADMT) that makes significant decisions including healthcare. Requires pre-use notice, opt-out rights, access rights, appeal rights, and risk assessments.
First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.
Requires disclosure when advertisements use AI-generated 'synthetic performers.' Penalties of $1,000 for first offense, $5,000 for subsequent violations.
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action; AG enforcement only.
Consumer protection law requiring disclosure that users are interacting with AI, not a human. Common precursor to crisis-harm regulation.
Requires all Arkansas public entities to create AI policies with mandatory human-in-the-loop for final decisions. Covers state departments, schools, and political subdivisions.
Amends Arkansas publicity rights law to explicitly include AI-generated reproductions of voice and likeness. Covers simulated voices and 3D generation.
Establishes ownership rules for AI-generated content and trained models. Person providing input owns generated content (if not infringing); person providing training data owns resulting model (if lawfully acquired).
Illinois law prohibiting licensed professionals from using AI systems to make independent therapeutic decisions, directly interact with clients in therapeutic communication, or detect emotions/mental states. AI limited to administrative and supplementary support with licensed professional oversight.
PROHIBITION on AI systems providing professional mental/behavioral healthcare without licensed professional oversight. AI cannot independently diagnose or provide therapy.
Consumer protection requirements for mental health chatbots including disclosure obligations and safeguards. Specifically targets AI applications marketed for mental health support.
Protects performers from exploitative digital replica contracts. Contracts for AI-generated digital replicas are void unless they describe use, performer has legal counsel or union representation, and contract doesn't replace work performer would have done.
Maryland design code modeled on UK approach: applies to covered online products reasonably likely to be accessed by minors. Pushes privacy-by-default, risk assessment, limits on "materially detrimental" data practices.
Texas AG Paxton is the MOST AGGRESSIVE enforcer against AI companion companies. December 2024 investigations launched against Character.AI, Reddit, Instagram, Discord.
Protects individuals from unauthorized AI-generated use of their name, photograph, voice, or likeness. Explicitly covers AI-generated voice simulations. Criminal and civil penalties including treble damages for knowing violations.
First major US state AI consumer protection law. Requires GenAI disclosure on request (reactive) and at outset for high-risk interactions (proactive). Entity deploying GenAI liable for its consumer protection violations. Creates AI Learning Laboratory sandbox.
First-in-world law mandating bias audits for AI hiring tools. Requires annual independent audits and public disclosure of results.
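These audits center on selection-rate comparisons. A worked sketch of the impact-ratio calculation, simplified from the rule's methodology (which also covers scoring systems and intersectional categories):

```python
# Impact ratio: each group's selection rate divided by the rate of the
# most-selected group. Simplified illustration of the audit metric.
def impact_ratios(selected: dict[str, int],
                  total: dict[str, int]) -> dict[str, float]:
    rates = {g: selected[g] / total[g] for g in total if total[g] > 0}
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Example: 120/400 male applicants selected vs. 45/300 female applicants.
# male rate 0.30 -> ratio 1.00; female rate 0.15 -> ratio 0.50.
print(impact_ratios({"male": 120, "female": 45},
                    {"male": 400, "female": 300}))
```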
Requires notice, consent, and transparency for AI analysis of video job interviews. Early state-level AI employment regulation.
Restricts algorithmically personalized ("addictive") feeds and overnight notifications for under-18 users without parental consent.
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.
Would have required safety testing and "kill switch" capabilities for frontier AI models above certain compute thresholds. Governor vetoed citing concerns about threshold-based approach.
Washington bill requiring AI companion chatbots to implement safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis. Mandates clear disclosure that chatbot is AI (not human) with additional protections for minors. Sponsored by Senators Wellman and Shewmake at Governor Ferguson's request.
Would require child-focused risk assessments (DPIA-style), safer defaults, and limits on harmful design patterns. Currently blocked on First Amendment grounds.