Zambia's comprehensive data protection law with special protections for vulnerable persons and DPIA requirements for high-risk processing.
Botswana's modernized data protection law requiring Data Protection Impact Assessment and establishing age 16 for consent.
Seychelles' modern data protection law requiring DPO for large-scale processing and recognizing Cross-Border Privacy Rules certification.
Algeria's data protection law with mandatory DPO requirement added by 2025 amendment and 5-day breach notification.
Tanzania's data protection law requiring mandatory 5-year registration and Minister approval for cross-border transfers.
Nigeria's comprehensive data protection law. Section 37 restricts automated decisions. Age of consent 13+ with "where feasible" verification. 72-hour breach notification.
First African country to adopt comprehensive national AI policy. Establishes Responsible AI Office (RAIO) under MINICT. Implements RURA ethical guidelines covering beneficence, non-maleficence, autonomy, justice, explicability, transparency. Non-binding framework.
Uganda's comprehensive data protection law requiring parental consent for children's data and immediate breach notification.
South Africa's comprehensive data protection law. Section 71 restricts automated decisions. Film and Publications Amendment Act adds chatroom safety duties.
Comprehensive data protection law regulating personal data processing in Egypt. Requires parental consent for children under 15, mandatory Data Protection Officer appointment, and 72-hour breach notification.
Kenya's comprehensive law with Section 35 rights against harmful automated decisions. DATA LOCALIZATION requirement: one serving copy on Kenyan servers for certain contexts.
Angola's data protection law establishing right to non-automated individual decisions and requiring express consent for data processing.
GDPR-aligned data protection law requiring Data Protection Officer appointment and breach notification to Commissioner.
Senegal's data protection law requiring prior authorization for health data processing with fines up to CFA 100 million.
Côte d'Ivoire's data protection law prohibiting decisions based solely on automated processing with DPO exemption pathway.
Ghana's comprehensive data protection law establishing data subject rights including right to prevent automated decision-making and requiring breach notification.
Morocco's data protection law establishing rights regarding automated decision-making and requiring parental consent for children's data processing.
Cameroon's cybersecurity and cybercrime law establishing privacy protections in electronic communications and criminalizing child grooming.
Tunisia's data protection law requiring prior authorization for personal data processing and restricting cross-border transfers.
Peru's first comprehensive AI regulatory framework, inspired by the EU AI Act. Establishes a three-tier risk-based approach: prohibited uses, high-risk systems (including healthcare), and low-risk/acceptable AI. First general AI regulation in Latin America. Requires human oversight, transparency, and risk assessments for high-risk systems.
CARICOM's 2025 regional cyber security framework establishing digital safety culture and coordinated incident response across 18 member states.
First comprehensive AI law in Latin America. Promotes AI development while establishing ethical principles and governance framework. Creates the National Agency for Artificial Intelligence (ANIA) to oversee AI development and regulation.
First cybersecurity framework law in Latin America (Law 21,663 promulgated Mar 26, 2024; published Apr 8, 2024). Creates National Cybersecurity Agency (ANCI), mandatory incident reporting, and encryption rights.
Ontario's first AI-specific legislation regulating public sector use of AI systems. Requires accountability frameworks, risk management, disclosure, and human oversight. Also addresses cybersecurity and digital information affecting minors under 18.
Non-binding AI governance guidelines establishing principles for responsible AI use. Argentina positioning as AI innovation hub with limited regulatory barriers. Emphasizes transparency, accountability, and human oversight. Multiple legislative proposals pending inspired by EU AI Act, aiming to establish formal regulatory authority.
10-point checklist for AI data processing. Mandatory PIAs for high-risk AI. Note: Separately, Colombia's Consejo Superior de la Judicatura adopted UNESCO AI Guidelines for judiciary (Dec 16, 2024).
Puerto Rico's comprehensive cybersecurity law establishing cybersecurity framework for public and private sectors, complementing Act 111-2005 breach notification.
First AI-specific law in Latin America. Privacy protection throughout AI lifecycle. Accountability for fundamental rights violations.
Ecuador's GDPR-inspired data protection law with 5-day breach notification and DPIA requirements for high-risk processing.
Quebec's major privacy reform modernizing data protection laws with extraterritorial scope similar to GDPR. First Canadian provincial framework to directly address AI implications through automated decision-making provisions requiring disclosure, explanation rights, and human intervention options.
Jamaica's comprehensive data protection law establishing rights regarding automated decision-taking and requiring parental consent for children's data.
Belize's modern data protection law establishing protection from solely automated decisions and parental consent for children under 13.
Panama's comprehensive data protection law establishing ARCO rights plus portability and requiring breach notification to ANTAI.
Barbados' GDPR-aligned data protection law with biometric data definitions, mandatory DPO, and extraterritorial scope.
Paraguay's credit data protection law with limited scope. Defines sensitive data to include psychological data.
Dominican Republic's data protection law establishing Habeas Data remedy but lacking dedicated supervisory authority.
Nicaragua's data protection law enacted but with limited enforcement due to non-operational supervisory authority (DIPRODAP).
Trinidad and Tobago's data protection law with sensitive data protections including health information. Key provisions await proclamation.
CARICOM regional model policy guidelines for harmonizing cybercrime and data protection laws across Caribbean states. Influenced 13 of 15 member states' legislation.
Costa Rica's data protection law requiring database registration with PRODHAB and establishing comprehensive data subject rights.
Uruguay's comprehensive data protection law with EU adequacy status. Establishes automated decision-making rights and requires explicit consent for sensitive data.
Bahamas' data protection law establishing right to prevent direct marketing and data subject access rights. Modernization pending.
Honduras' transparency law with Habeas Data protection provisions. Comprehensive data protection bill pending as of 2024.
Puerto Rico's medical information privacy law with breach notification required "as expeditiously as possible" - stricter than federal standards.
Risk-based framework similar to the EU AI Act. Would prohibit excessive-risk AI (social scoring, autonomous weapons) and require impact assessments for high-risk AI, with penalties up to BRL 50M or 2% of Brazilian turnover.
Most advanced AI regulatory framework in Latin America. Four-tier EU-inspired risk classification with prohibited AI including social scoring and deepfakes exploiting minors.
Would have regulated high-impact AI systems with potential penalties up to $25M or 5% of global revenue. Part of Bill C-27, which died when Parliament was prorogued.
Would have established Digital Safety Commission with platform duties for seven harmful content categories including content inducing children to harm themselves. Required 24-hour CSAM takedown.
Comprehensive AI regulation establishing CONAIA (National Commission for Artificial Intelligence) as central regulatory authority under Ministry of Economy. Risk-based framework with authorization, transparency, and accountability requirements for high-risk AI systems. Follows constitutional amendment (February 2025) granting Congress authority to legislate on AI.
First comprehensive AI law in Southeast Asia. Risk-management oriented framework with high-risk AI list updated by Prime Minister. Applies extraterritorially to foreign organizations whose AI systems impact Vietnamese users.
World's first comprehensive governance framework for agentic AI systems capable of autonomous reasoning, planning, and action. Builds on Singapore's 2020 Model AI Governance Framework with specific guidance for deploying AI agents responsibly.
First comprehensive AI legislation in Asia-Pacific and second in the world after EU. Regulates "High-Impact AI" in healthcare, energy, nuclear, transport, government, and education sectors. Requires transparency notifications, content labeling for generative AI, and fundamental rights impact assessments. Notable for lower penalties than EU AI Act and absence of prohibited AI practices.
First comprehensive AI law in Central Asia. Establishes risk-based classification (low/medium/high-risk), mandatory AI content labeling, and explicit prohibitions on manipulation, social scoring, and non-consensual emotion detection. Requires annual risk assessments for high-risk systems.
Comprehensive AI Basic Act (pending) establishes seven guiding principles and risk-based classification. Note: Taiwan already has ENACTED deepfake/election AI provisions via separate laws (Criminal Code 2023, Election Law 2023, Fraud Prevention Act 2024).
Brunei's personal data protection order requiring DPIA and imposing penalties up to 10% of annual turnover in Brunei or $1M.
First major revision of China's foundational Cybersecurity Law since 2017. Introduces formal AI governance provisions, significantly increases penalties, and expands extraterritorial application to all cybersecurity violations.
Requires licensed platforms to implement content moderation systems, child-specific safeguards, and submit Online Safety Plans. Nine categories of harmful content regulated.
World's first social media minimum age law. Platforms must prevent under-16s from holding accounts. Implementation depends on age assurance technology.
STRICTEST children's provisions in APAC. Children = under 18; verifiable parental consent MANDATORY; PROHIBITION on tracking, behavioral monitoring, targeted advertising to children.
Sets specific legal requirements under Privacy Act for collecting and using biometric data such as facial recognition and fingerprint scans. Prohibits particularly intrusive uses including emotion prediction and inferring protected characteristics like ethnicity or sex.
Creates "duty to make reasonable efforts" (not strict requirements) to follow AI principles. Establishes AI Strategy Center. Largely non-binding, consistent with Japan's "soft law" tradition.
Mandatory labeling of AI-generated content (implicit for all, explicit where applicable). Released by State Administration for Market Regulation and Standardization Administration of China. Complements existing GenAI interim measures with three national standards for AI security and governance.
Nepal's national AI policy establishing a governance framework and development priorities. Creates AI Governance Council (chaired by Minister for Communications and IT), AI Regulation Council, National AI Centre, and AI Regulatory Authority. Six pillars including ethics, human resource development, and sectoral application.
Pakistan's national AI roadmap establishing six strategic pillars: AI Innovation Ecosystem, Awareness and Readiness, Research and Development, Infrastructure, Governance, and International Cooperation. Creates National AI Fund (NAIF), Centres of Excellence in 7 cities, and targets training 200,000 individuals annually.
Myanmar's cybersecurity law requiring platforms with 100,000+ users to register and imposing data retention requirements. Enacted post-2021 coup with uncertain enforcement.
Comprehensive facial recognition regulation requiring consent, protecting minors, restricting public space use, mandating data localization, and requiring filing for large-scale processing (100K+ individuals).
Indonesia's comprehensive child online protection regulation establishing age-appropriate design requirements for electronic systems accessible to children. Most granular age classification globally (5 groups). Requires risk assessments, privacy-by-default, parental consent, DPIAs, and prohibits data profiling of children. First of its kind in Asia and Global South.
Establishes experimental legal regimes for digital innovation and AI, broadening liability for damages during testing and creating tracking mechanisms for AI-related incidents.
Requires bloggers with audiences exceeding 10,000 users to register with Roskomnadzor and restricts content reposting and advertising on unregistered pages.
Strengthens Privacy Act requirements for biometric data collection, raising the standard of conduct for collecting biometric information used for automated verification or identification. Cannot collect such information unless individual has consented and it is reasonably necessary.
First mandatory AI governance requirements in Singapore, shifting from voluntary Model AI Governance Framework to binding obligations for financial sector. Establishes three mandatory focus areas: oversight and governance, risk management systems, and development/validation/deployment protocols.
Indonesia's comprehensive data protection law. Health and children's data = "specific personal data" with enhanced protections. Criminal penalties up to 6 years imprisonment.
South Korea's deepfake law, the world's strictest: up to 7 years for creating/distributing and up to 3 years for possessing/viewing deepfake sexual content. Even viewing is criminal.
Presidential decree updating Russia's national AI development strategy through 2030, establishing key principles including human rights protection, security, technological sovereignty, non-discrimination, and accountability.
Creates Commonwealth criminal offences for "deepfake sexual material" (AI/synthetic intimate imagery) without consent. Part of Australia's layered approach: criminal law + eSafety platform enforcement.
Comprehensive AI guidance from Hong Kong Privacy Commissioner. Governance, risk assessment, human oversight, data stewardship. Three core values: respect, benefit, fairness.
Singapore's GenAI-specific guidance: risks (hallucinations, harmful outputs, IP/provenance, misuse) and operational controls (evaluation, transparency, policies, incident response).
Regional AI governance framework for 10 ASEAN member states. Non-binding guide promoting transparency, fairness, security, and human-centricity. January 2025 expansion addresses six GenAI-specific risks including deepfakes, misinformation, and vulnerable population harms.
Sri Lanka's comprehensive data protection law - first in South Asia. Establishes human review rights for automated decisions and DPIA requirements for high-risk processing.
Requires generative AI providers to ensure content "upholds Core Socialist Values," implement content controls, and file algorithms with CAC within 10 business days.
Under Broadcasting Act framework, requires major social media services to implement systems reducing exposure to harmful content. Child safety is key driver.
Criminalizes computer-generated and simulated child sexual abuse material, which includes AI-generated imagery. One of few laws globally explicitly addressing synthetic CSAM.
Controls on "deep synthesis" (deepfake) technology including labeling requirements for all deep synthesis outputs and privacy consent for biometric editing.
Pacific Islands Forum regional cybersecurity framework (18 member states) with Boe Declaration (2018) and Lagatoi Declaration (2023) establishing coordinated digital safety standards backed by $27M Australian investment.
Thailand's GDPR-style law. Health data requires explicit consent. First major fine (THB 7M) August 2024. Draft Royal Decree on AI proposes EU-style risk classification.
Mongolia's data protection law defining health, genetic, and biometric data as sensitive with cross-border restrictions and Human Rights Commission oversight.
Requires algorithm filing/registration, user notification of recommendations, and opt-out mechanisms. Prohibits price discrimination based on user profiling.
Grants eSafety Commissioner powers to issue removal notices with 24-hour compliance. Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.
Global voluntary certification for cross-border personal data protection. 9 participating economies: US, Canada, Mexico, Japan, South Korea, Singapore, Philippines, Taiwan, Australia.
Widely cited APAC governance framework: internal AI governance, risk management, human involvement, operations management, stakeholder transparency. Functions as "expected practice" in enterprise/procurement.
Fiji's online safety law covering cyberbullying, cyberstalking, and revenge porn with Online Safety Commission oversight and mandatory mediation.
Papua New Guinea's cybercrime law establishing 25+ cyber offenses with penalties up to 15 years for critical infrastructure attacks.
Establishes 10 communication principles and creates both criminal offenses and civil remedies for harmful digital communications. Amended 2022 for intimate image sharing. Note: Post-Christchurch rapid classification powers are in a separate law (Films, Videos, and Publications Classification Amendment Act 2021).
Japan's revenge porn law criminalizing distribution of private sexual images (3 years/¥500,000) but does NOT cover deepfakes - significant legal gap.
Maldives' draft data protection law proposing Data Protection Authority and defining sensitive data including health information.
Bangladesh's draft data protection law requiring DPO, imposing data localization requirements, and establishing fines up to BDT 300,000.
10 mandatory guardrails proposed for high-risk AI: accountability, risk management, data governance, testing, human oversight, transparency, contestability, supply chain transparency, record keeping, conformity assessment.
Uzbekistan's AI governance framework via amendments to Law on Informatization. Mandates AI content labeling, prohibits AI decisions affecting rights without human oversight, and establishes protections against AI harms to life, health, and dignity. Responds to 3x increase in AI-related violations (1,129 in 2023 to 3,553 in 2024).
Major amendment to Taiwan's Personal Data Protection Act establishing independent Personal Data Protection Commission (PDPC) as mandated by Constitutional Court. Significantly strengthens data protection framework for public and private sectors, aligning with EU GDPR standards. Introduces data breach notification obligations, mandatory DPOs for government agencies, and enhanced enforcement powers.
Cambodia's draft data protection law establishing human intervention rights for automated decisions (Article 34) and mandatory DPO requirement.
Would establish due diligence framework for synthetically generated information including labeling, traceability, and platform processes for handling synthetic media harms.
Modernized product liability framework explicitly covering AI systems and software as products. Shifts burden of proof in complex AI cases, allows disclosure orders for technical documentation, and addresses liability for AI-caused harm including through software updates.
Finland's EU AI Act implementation using decentralized supervision model. Traficom serves as single point of contact and coordination authority. Ten market surveillance authorities share enforcement across sectors. New Sanctions Board handles fines over EUR 100,000.
Hungary's comprehensive AI law implementing the EU AI Act. Designates the National Media and Infocommunications Authority (NMHH) as the primary supervisory authority, with sectoral regulators for specific domains.
First EU member state to fully implement the EU AI Act. Designates three competent authorities, establishes penalty framework aligned with EU maximums, and grants inspection/enforcement powers. Does not add material requirements beyond EU AI Act.
Austria's digital sovereignty framework establishing Sovereignty Compass for AI audits and mandatory Digi-Check for all legislation.
World's first comprehensive risk-based regulatory framework for AI systems. Classifies AI by risk level with escalating requirements from prohibited practices to high-risk obligations.
France's 2024 "digital space" law strengthening national digital regulation and enforcement levers via ARCOM across platform safety and integrity issues.
Requires providers of certain telemedia services to implement provider-side precautionary measures ("Vorsorgemaßnahmen") with regulator-facing evaluability via published BzKJ criteria.
Comprehensive platform regulation with tiered obligations. VLOPs (45M+ EU users) face systemic risk assessments, algorithmic transparency, and independent audits.
Netherlands' algorithmic risk assessment framework specifically addressing mental health chatbots in risk reports and requiring Fundamental Rights Impact Assessment (FRIA).
Austria's national AI authority established within RTR (Rundfunk und Telekom Regulierungs-GmbH) for EU AI Act market surveillance coordination.
Estonia's €85M AI and Data Action Plan establishing safety testing framework and human-centered AI deployment principles.
Establishes AESIA as Spain's national competent authority for AI supervision - the first such agency in the EU. Headquartered in A Coruña, Galicia. Creates voluntary certification framework for ethical AI systems.
Switzerland's revised data protection law with Article 21 automated decision transparency requirements, human review rights, and fines up to CHF 250,000.
Estonia's Administrative Procedure Act with provisions for automated administrative acts in defined sectors and transparency requirements.
Italian DPA (Garante) is most aggressive EU enforcer on AI. Precedent-setting enforcement against ChatGPT and Replika. Enforcement theory: companion AI processes special category health data.
Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.
Serbia's non-binding AI ethics guidelines establishing explainability, accountability, and transparency principles.
Requires hosting services to remove terrorist content within one hour of receiving a removal order. One of few regulations with real-time moderation mandates.
Temporary legal bridge allowing certain communications providers to voluntarily detect/report/remove CSAM, notwithstanding ePrivacy constraints. Extended via 2024/1307 while permanent CSAR negotiated.
Portugal's Charter of Digital Rights with Article 9 requiring AI to respect fundamental rights and establishing algorithmic auditability principles.
Malta's voluntary ethical AI framework with four ethical principles and certification pathway via Malta Digital Innovation Authority (MDIA).
Serbia's GDPR-aligned data protection law with profiling safeguards and DPIA requirements.
Foundational EU data protection law with direct AI enforcement precedent. Article 22 restricts automated decision-making; Article 9 classifies mental health data as special category requiring explicit consent; Article 8 sets children's consent thresholds (13-16 by member state).
EU directive setting baseline safety and minor-protection duties for audiovisual media services and video-sharing platform services, including measures to protect minors from harmful content.
Proposed permanent framework replacing interim derogation. Parliament position (Nov 2023) limits detection to known/new CSAM, excludes E2EE services. Council has not agreed General Approach.
Proposed amendments to the EU AI Act that would delay high-risk AI system obligations by up to 16 months, making compliance conditional on availability of harmonised standards and support tools.
Norway implementing full EU AI Act through EEA Agreement. Most comprehensively regulated non-EU jurisdiction for AI. Also implementing Digital Services Act with some Norwegian additions.
Poland's draft law implementing EU AI Act domestically, creating KRiBSI (national AI authority), regulatory sandboxes, and binding opinions mechanism.
Ukraine's draft GDPR-aligned data protection law establishing 72-hour breach notification and automated processing rules.
Most specific international guidance on children and AI. Ten requirements for child-centered AI including development/wellbeing support, data/privacy protection, and safety.
First legally binding international AI treaty. Signed by EU, UK, US (Sep 2024), Canada, Japan (Feb 2025) and others. Requires risk/impact assessments, transparency, accountability. National security exemptions apply.
Continent-wide AI strategy endorsed by African Union Executive Council covering 55 member states. Phased implementation 2025-2030. Phase I (2025-2026) focuses on creating governance frameworks, developing national AI strategies, resource mobilization, and capacity building. Aims to harmonize AI development across Africa while respecting member state sovereignty.
First-ever UN General Assembly resolution on AI. Adopted unanimously with 125 co-sponsors (US-led). Establishes human rights as applicable to AI lifecycle, encourages regulatory frameworks, and calls for bridging AI divides between countries. Non-binding but sets global normative expectations.
First certifiable international standard for AI management systems. Uses Plan-Do-Check-Act methodology. Third-party certification available; major AI systems have achieved certification.
11 guiding principles for advanced AI. Explicitly prohibits AI posing substantial safety or human rights risks. Code of conduct for developers.
AI risk management guidance complementing ISO 31000. Lifecycle risk management; audit/procurement language.
Global normative framework adopted by all 193 UN Member States. Policy Area 8 (Health and Social Wellbeing) directly addresses mental health AI.
WHO guidance emphasizing mental health AI often has methodological/quality flaws requiring extra scrutiny. Six ethical principles for health AI.
Five value-based principles endorsed by 47 governments. OECD definitions of "AI system" and "AI lifecycle" adopted in EU AI Act, US regulations, and CoE Convention.
UAE federal law establishing comprehensive child digital safety requirements for digital platforms and internet service providers, with extraterritorial reach to foreign platforms targeting UAE users. Requires age verification, privacy-by-default, content filtering, and proactive AI-powered content detection.
Israel's most significant privacy reform in 40 years, explicitly covering AI systems. Requires Data Protection Officers (DPOs) for entities processing sensitive data at scale, mandates Data Protection Impact Assessments (DPIAs) before AI deployment, and enhances Protection of Privacy Authority enforcement powers. One of the first data protection laws to explicitly require DPIAs before AI development or deployment.
First enacted AI-specific regulation in the Middle East, Africa, and South Asia (MEASA) region. Establishes risk-based framework for AI systems in the DIFC financial free zone, with requirements for transparency, human oversight, and accountability.
Jordan's data protection law with medical data processing exceptions, data portability rights, and oversight including security services.
Ambitious national strategy positioning Egypt as regional AI hub for Africa and Middle East. Targets 7.7% ICT sector GDP contribution by 2030, training 30,000 AI specialists, establishing 250 AI companies. Built on six strategic pillars: governance, infrastructure, technology, data, ecosystem, and talent. Accompanied by Egyptian Charter for Responsible AI (April 2023) with ethics principles.
Saudi Arabia's comprehensive personal data protection law with extraterritorial scope, DPO requirements for sensitive processing, and National Data Governance Platform registration.
Binding AI governance requirements for Qatar's financial sector. Mandates board-level accountability, risk assessments, human-in-the-loop for high-impact decisions, and prior QCB approval for high-risk AI systems.
Kuwait's data privacy regulation requiring guardian consent for minors under 18, 72-hour breach notification, and automated decision restrictions.
Oman's data protection law with world's strictest health data regulation: outright BAN on health data processing without Ministry of Health permit. Also requires 72-hour breach notification.
⚠️ WARNING: Syria's data protection law functions as surveillance tool requiring mandatory data sharing with authorities. NOT suitable for confidential mental health services.
Comprehensive media regulation requiring licensing for all digital platforms, social media operations, and influencers. 20 binding content standards with significant penalties.
Comprehensive AI governance from Saudi Data & AI Authority: Ethics Principles (Sep 2023), Generative AI Guidelines (Jan 2024), AI Adoption Framework (Sep 2024). Combined with PDPL creates binding + guidance framework.
⚠️ HIGH RISK: Syria's cybercrime law requires ISPs to retain ALL content with government access. NOT suitable for confidential mental health services requiring privacy.
Bahrain's GDPR-aligned data protection law with automated decision-making restrictions and Data Protection Guardian requirement.
Lebanon's electronic transactions and data protection law lacking independent supervisory authority, relying on court remedies for enforcement.
Palestine's cybercrime law with 120-day data retention requirement and doubled penalties for crimes against minors.
Kuwait's cybercrime law criminalizing personal data breaches with 3 years imprisonment and fines of KWD 3,000-10,000.
Proposed comprehensive AI law establishing a risk-based classification system similar to the EU AI Act. Would prohibit high-risk AI practices including social scoring and real-time biometric surveillance, require transparency for AI-generated content, and establish AI regulatory authority.
Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.
Omnibus data legislation covering customer data access, digital verification services, the Information Commission, and AI-related provisions including copyright/training transparency requirements and new criminal offenses for creating AI-generated intimate images (deepfakes).
One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.
Sector-specific, principles-based approach using existing regulators. Five cross-sector principles guide regulatory application rather than horizontal AI legislation.
UK's enforceable "privacy-by-design for kids" regime. Applies to online services likely to be accessed by children under 18. Forces high-privacy defaults, limits on profiling/nudges, DPIA-style risk work, safety-by-design.
The UK's foundational data protection law, incorporating the UK GDPR (retained EU GDPR post-Brexit). Substantively mirrors EU GDPR with ICO as sole enforcer. Article 22 restricts automated decision-making; Article 9 classifies mental health as special category data; children's consent age set at 13. Parent framework for UK Children's Code; amended by DUA Act 2025.
UK government consultation on restricting children's access to AI chatbots, banning addictive design features like infinite scrolling and auto-play, and potentially setting age restrictions for social media. Would amend the Crime and Policing Bill to bring AI chatbot providers under Online Safety Act duties.
Executive order directing federal agencies to preempt conflicting state AI laws while explicitly preserving state child safety protections. Creates DOJ AI Litigation Task Force to challenge state laws, directs FTC/FCC to establish federal standards. Highly controversial: legal experts dispute whether executive orders can preempt state legislation, an authority held only by Congress or the courts.
September 2025 FTC compulsory orders to 7 AI companion companies demanding information on children's mental health impacts. Precursor to enforcement.
Coordinated state AG warnings: 44 AGs (Aug 25, 2025, led by TN, IL, NC, and SC AGs) and 42 AGs (Dec 2025, led by PA AG) to OpenAI, Meta, and others citing chatbots "flirting with children, encouraging self-harm, and engaging in sexual conversations."
First federal law addressing AI-generated intimate imagery. Criminalizes publication of nonconsensual intimate imagery (NCII) including AI "digital forgeries." Creates 48-hour takedown obligation for platforms.
Dominant voluntary AI governance framework in the US. Four functions (Govern, Map, Measure, Manage) operationalize what regulators expect. Not legally binding but heavily referenced.
FTC applies Section 5 authority against unfair/deceptive AI practices, plus AI-specific rules including the Fake Reviews Rule (Oct 2024) prohibiting AI-generated fake reviews.
Baseline US children's data privacy regime. Applies to operators of websites/online services directed to children under 13, and to general-audience services with actual knowledge they collect personal info from under-13 users.
Would expand COPPA-style protections to teens (13-16) and add stronger constraints including limits on targeted advertising to minors. Often paired politically with KOSA.
Requires disclosure to minors that they are interacting with AI (not a human) and that the AI is not a licensed professional. Baseline transparency approach.
Explicitly defines "companion AI chatbot" and "suicidal ideation" in statutory context. Sets covered-entity obligations including age verification.
Comprehensive federal AI policy requiring safety testing, reporting, and standards development. Revoked in January 2025 by new administration.
Creates federal civil remedy for victims of nonconsensual AI-generated intimate imagery (deepfake porn). Allows victims to sue creators, distributors, solicitors, and possessors with intent to distribute.
Would establish duty of care for platforms regarding minor safety. Passed full Senate 91-3 in July 2024; passed Senate Commerce Committee multiple times (2022, 2023). Not yet enacted.
Would require age verification, disclosures, and broader child protections for AI chatbots. Part of emerging federal focus on companion AI safety for minors.
Vermont design code structured to be more litigation-resistant: focuses on data processing harms rather than content-based restrictions. AG rulemaking authority begins July 2025.
Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.
Requires large GenAI providers (1M+ monthly users) to provide free AI detection tools, embed latent disclosures (watermarks/metadata) in AI-generated content, and offer optional manifest (visible) disclosures to users.
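A minimal sketch of what a "latent disclosure" could look like in practice: a machine-readable provenance record keyed to a hash of the generated content. The function names and manifest fields here are illustrative assumptions, not terms drawn from the statute.

```python
# Hedged sketch: attaching a machine-readable provenance record
# ("latent disclosure") to generated content, keyed to a SHA-256
# hash of the content bytes. Field names are assumptions.
import hashlib
import json


def latent_disclosure(content: bytes, provider: str, model: str) -> str:
    """Build a JSON provenance manifest for a piece of generated content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_by_ai": True,
        "provider": provider,
        "model": model,
    }
    return json.dumps(manifest, sort_keys=True)


def verify_disclosure(content: bytes, manifest_json: str) -> bool:
    """Check that a manifest actually refers to these content bytes."""
    manifest = json.loads(manifest_json)
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()
```

A real deployment would embed the record in the file's metadata (or as an imperceptible watermark) rather than carry it alongside, and the free detection tools the law requires would check for such records.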
Standalone bill requiring age verification for ALL users (adults and minors) before accessing companion AI chatbots. Requires user account creation, confidential age verification, and notifications that the user is interacting with AI, plus specific actions when a user is determined to be a minor, including potential access blocking.
Creates COMPLETE BAN on targeted advertising to under-18s regardless of consent. Requires AI impact assessments. Connecticut issued first CTDPA fine ($85,000) in 2025.
Regulates companion AI chatbots with emphasis on self-harm prevention and crisis intervention. Requires suicide/self-harm detection protocols, 988 crisis referrals, prohibition on chatbots discussing self-harm with users, and annual reporting on crisis interventions. Includes minor-specific protections including AI disclosure, break reminders, and prohibition on sexually explicit content.
Establishes an 'AI Bill of Rights' for Floridians including the right to know if communicating with AI, parental controls over minors' AI chatbot access, prohibition on selling user data, disclosure requirements for AI-generated political ads, and protections against unauthorized use of name/image/likeness by AI.
First comprehensive US state law regulating high-risk AI systems. Modeled partly on EU AI Act with developer and deployer obligations for consequential decisions.
First US law specifically regulating companion chatbots. Uses capabilities-based definition (not intent-based). Requires evidence-based suicide detection, crisis referrals, and published protocols. Two-tier regime: baseline duties for all users, enhanced protections for known minors. Private right of action with $1,000 per violation.
Comprehensive AI governance with prohibited uses approach. Bans AI that incites self-harm/suicide, exploits children, or intentionally discriminates. Government entities have additional disclosure requirements. First-in-nation AI regulatory sandbox program.
Requires law enforcement agencies to disclose when AI is used to write or assist in creating official police reports, maintain audit trails, and preserve AI-generated first drafts.
Amends Illinois Human Rights Act to make it a civil rights violation for employers to use AI that discriminates based on protected classes in employment decisions. Requires notice to employees and applicants when AI is used in hiring, firing, promotion, or discipline.
Regulates AI use in healthcare prior authorization decisions. Requires adverse determinations be reviewed by licensed physicians qualified in relevant specialty. Sets decision timelines: 7 days for non-urgent, 72 hours for urgent services. Prohibits prior authorization for emergency services and opioid use disorder treatment.
First US frontier AI transparency law. Requires large AI developers (>$500M revenue) to publish governance frameworks, submit quarterly risk reports, and report critical safety incidents. Applies to models trained with >10^26 FLOP.
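The >10^26 FLOP coverage threshold is a compute figure, so whether a model is in scope can be estimated arithmetically. The sketch below uses the common (approximate) rule of thumb of ~6 FLOP per parameter per training token for dense transformer training; that heuristic is an assumption here, not language from the law.

```python
# Hedged illustration: estimating whether a training run crosses a
# 1e26 FLOP coverage threshold using the common ~6 * parameters * tokens
# approximation for dense transformer training compute.

THRESHOLD_FLOP = 1e26  # coverage threshold described in the summary above


def training_flop_estimate(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6.0 * params * tokens


def is_covered_model(params: float, tokens: float) -> bool:
    """True if the estimated training compute meets the threshold."""
    return training_flop_estimate(params, tokens) >= THRESHOLD_FLOP


# e.g. a 1-trillion-parameter model trained on 20 trillion tokens:
# 6 * 1e12 * 2e13 = 1.2e26 FLOP, above the 1e26 threshold.
```

The revenue test (>$500M) is the other prong; both would need checking against the statute's actual definitions before relying on an estimate like this.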
Prohibits AI systems from using terms, letters, or phrases that falsely indicate or imply possession of a healthcare professional license.
Requires GenAI developers to publish documentation about training datasets including sources, data types, copyright status, personal information inclusion, and processing methods.
California Privacy Protection Agency regulations establishing consumer rights and business obligations for Automated Decision-Making Technology (ADMT) that makes significant decisions including healthcare. Requires pre-use notice, opt-out rights, access rights, appeal rights, and risk assessments.
Nebraska design code blending privacy-by-design with engagement constraints (feeds, notifications, time limits) aimed at reducing compulsive use.
Requires disclosure when advertisements use AI-generated 'synthetic performers.' Penalties of $1,000 for first offense, $5,000 for subsequent violations.
Requires AI companion chatbot operators to implement protocols addressing suicidal ideation and self-harm, plus periodic disclosures and reminders to users. Uses three-part CONJUNCTIVE definition (all three criteria must be met). No private right of action; AG enforcement only.
Consumer protection law requiring disclosure that users are interacting with AI, not a human. Common precursor to crisis-harm regulation.
Amends Pennsylvania's forgery statutes to criminalize creation and distribution of non-consensual AI-generated deepfakes ('forged digital likenesses') with intent to defraud or injure. Establishes tiered criminal penalties including felony charges for fraud-related deepfakes.
Establishes civil liability for online impersonation using AI. A person is liable if they knowingly, and with intent to harm, defraud, intimidate, or threaten, use AI to impersonate another's name, voice, signature, photograph, or likeness. Civil remedies include injunctive relief, actual damages, exemplary damages ($500+ minimum), costs, and attorney's fees. Satire and parody exempted.
Requires social media platforms to provide accessible complaint system for explicit deepfake material. Platform must confirm receipt within 48 hours, investigate within 30 days (60 days if delayed), and provide mandatory updates to reporting user.
Requires healthcare practitioners using AI for diagnosis to review all AI-generated records and disclose AI use to patients. Mandates EHR data localization (Texas patient data must be physically stored in US). Applies to covered entities and third-party vendors.
Expands civil liability for AI-generated nonconsensual intimate images (deepfake pornography). Criminalizes threatening to create intimate deepfakes to coerce, extort, harass, or intimidate. Imposes penalties on individuals, websites, and payment processors involved in distributing such content. 10-year statute of limitations.
Establishes ownership rules for AI-generated content and trained models. Person providing input owns generated content (if not infringing); person providing training data owns resulting model (if lawfully acquired).
Requires all Arkansas public entities to create AI policies with mandatory human-in-the-loop for final decisions. Covers state departments, schools, and political subdivisions.
Amends Arkansas publicity rights law to explicitly include AI-generated reproductions of voice and likeness. Covers simulated voices and 3D generation.
Illinois law prohibiting licensed professionals from using AI systems to make independent therapeutic decisions, directly interact with clients in therapeutic communication, or detect emotions/mental states. AI limited to administrative and supplementary support with licensed professional oversight.
Broadens breach of privacy to include sharing AI-created or AI-altered materials that portray individuals in sexual manner without consent. Criminalizes possession, creation, and distribution of AI-generated child sexual abuse material (CSAM).
PROHIBITION on AI systems providing professional mental/behavioral healthcare without licensed professional oversight. AI cannot independently diagnose or provide therapy.
Prohibits disseminating deepfakes about candidates within 90 days of election with intent to cause injury. Class 1 misdemeanor with up to 1 year imprisonment and $2,000 fine. Affirmative defense for content with AI manipulation disclosure. Civil remedies available to AG, candidates, and depicted individuals.
S.28 creates offense of obscene visual representations of child sexual abuse, including AI-generated CSAM. S.29 addresses morphed images of identifiable minors. Both bills criminalize creation, possession, and distribution of AI-generated sexual images without consent. Offenders added to sex offender registry.
Consumer protection requirements for mental health chatbots including disclosure obligations and safeguards. Specifically targets AI applications marketed for mental health support.
Establishes AI Governance Committee within state government to develop policies for responsible AI use. Creates framework for state agency AI deployment including risk assessment, transparency, and accountability measures. Committee advises on AI procurement and implementation.
Protects performers from exploitative digital replica contracts. Contracts for AI-generated digital replicas are void unless they describe use, performer has legal counsel or union representation, and contract doesn't replace work performer would have done.
Establishes criminal penalties for distributing synthetic intimate imagery (AI-generated deepfake pornography) without consent. Creates civil remedies for victims. Protects internet service providers from liability for third-party content.
Criminalizes fraudulent use of deepfakes as a Class B felony (1-7 years imprisonment). First state law with explicit private right of action for deepfake victims. Enhanced penalties when deepfakes result in wrongful arrest. Prohibits lobbyists who violate the law from registering.
First-in-nation mandate requiring all Ohio K-12 public schools to adopt formal AI usage policies by July 1, 2026. Ohio Department of Education and Workforce released model policy on December 30, 2025 covering academic integrity, procurement/privacy, and anti-bullying. Districts can adopt state model or create their own aligned policy.
Maryland design code modeled on UK approach: applies to covered online products reasonably likely to be accessed by minors. Pushes privacy-by-default, risk assessment, limits on "materially detrimental" data practices.
Makes it illegal to distribute AI-generated deceptive media if distributor knows it falsely represents a person and intends to influence an election. Exceptions for media with disclaimers and news organizations. Class A misdemeanor for first offense, Class D felony for repeat offenders.
Creates Delaware AI Commission to advise on AI utilization and safety within state government. Commission conducts inventory of generative AI usage across executive, legislative, and judicial agencies, identifies high-risk areas, and recommends statewide AI guidelines. Advisory body without enforcement authority.
Prohibits distribution of materially deceptive media (deepfakes) in elections from February 1 through general election without disclaimer. Criminalizes violations with escalating penalties from petty misdemeanor to Class C felony if intent to cause violence. Private right of action for candidates, depicted individuals, and voter advocacy organizations.
Texas AG Paxton is the MOST AGGRESSIVE enforcer against AI companion companies. December 2024 investigations launched against Character.AI, Reddit, Instagram, Discord.
Criminalizes dissemination of election deepfakes without consent within 90 days of election with intent to injure candidates, influence results, or deter voting. Escalating penalties from 1 year/$5,000 to 5 years/$10,000 for repeat offenders or intent to incite violence. Private right of action for candidates and political parties.
Creates Government Technology Modernization Council within Department of Management Services. Criminalizes knowingly possessing, controlling, or intentionally viewing AI-generated child pornography. Prohibits viewing photographs, motion pictures, representations, images, data files, computer depictions, or presentations that include AI-generated child pornography.
Regulates AI use by New Hampshire state agencies. Prohibits AI for unlawful discrimination, real-time biometric surveillance in public spaces (except law enforcement with warrant), and malicious deepfakes. Requires human oversight for irreversible AI decisions and mandatory AI disclosure to users.
Amends Idaho child pornography law to explicitly include AI-generated sexual depictions of children where depictions appear to be real children. Addresses challenge that investigators cannot distinguish AI-generated from real images. Criminal felony penalties including imprisonment.
Protects individuals from unauthorized AI-generated use of their name, photograph, voice, or likeness. Explicitly covers AI-generated voice simulations. Criminal and civil penalties including treble damages for knowing violations.
Requires political advertisements, electioneering communications, or miscellaneous advertisements using AI to include specified disclaimer. Criminal and civil penalties for violations.
HF 2240 and SF 2243 criminalize creation of AI-generated intimate images without consent. HF 2240: Aggravated misdemeanor for adult non-consensual sexual images. SF 2243: Class D felony for AI-generated CSAM depicting minors (up to 5 years imprisonment).
Prohibits creating or distributing deceptive and fraudulent deepfakes of political candidates or parties within 90 days before an election, unless content includes clear AI disclosure. Exempts satire/parody and interactive computer services. Civil penalties for violations.
First major US state AI consumer protection law. Requires GenAI disclosure on request (reactive) and at outset for high-risk interactions (proactive). Entity deploying GenAI liable for its consumer protection violations. Creates AI Learning Laboratory sandbox.
Requires disclosure when AI is used to generate materially deceptive media in political ads. Ads must include 'This content generated by AI' notice in text and audio. Applies to candidate ads, issue advocacy ads, and referendum ads.
Requires University of Tennessee, Board of Regents, and all state university governing boards to adopt policies regarding AI use by students, faculty, and staff for instructional and assignment purposes. Policies must be implemented by July 1, 2025.
First-in-world law mandating bias audits for AI hiring tools. Requires annual independent audits and public disclosure of results.
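The core metric in these audits is an impact ratio: each category's selection rate divided by the selection rate of the most-selected category. The sketch below shows that computation on illustrative data; category names and figures are made up for the example, and the auditing rules themselves define the exact scope.

```python
# Hedged sketch of the impact-ratio computation used in bias audits of
# automated hiring tools. Data and category names are illustrative only.


def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps category -> (selected, total applicants)."""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}


def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each category's selection rate over the highest selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}


example = {"group_a": (40, 100), "group_b": (25, 100)}
# selection rates: 0.40 and 0.25
# impact ratio for group_b: 0.25 / 0.40 = 0.625
```

A low impact ratio flags a disparity for the published audit; what counts as actionable is a legal question, not something the arithmetic decides.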
Requires notice, consent, and transparency for AI analysis of video job interviews. Early state-level AI employment regulation.
Creates new felony offenses and mandatory prison sentences for criminal use, development, or distribution of AI systems. Possessing, developing, deploying, or modifying an AI system with intent to commit a crime is punishable by 8 years imprisonment.
Imposes explicit prohibitions on AI systems making therapeutic judgments, generating treatment plans without human review, or simulating emotional interaction. Violations treated as unprofessional conduct under Commonwealth licensing laws.
Ensures dangerous AI companion chatbots are inaccessible to children, including chatbots capable of encouraging self-harming behaviors, illegal activities, or sexually explicit interactions. Implements stronger safety measures to prevent targeting and exploitation of minors.
Regulates use of artificial intelligence by healthcare providers in Louisiana. Permits AI for administrative tasks but prohibits AI from making treatment/diagnosis decisions without licensed professional review, directly interacting with patients on treatment matters, or generating therapeutic recommendations without professional approval.
Washington bill requiring AI companion chatbots to implement safeguards to detect and respond to user expressions of self-harm, suicidal ideation, or emotional crisis. Mandates clear disclosure that chatbot is AI (not human) with additional protections for minors. Sponsored by Senators Wellman and Shewmake at Governor Ferguson's request.
Creates the AI Safety and Security Transparency Act requiring large AI developers to conduct regular risk assessments, third-party audits, and publicly disclose safety protocols. Targets 'critical risk' scenarios (harm to 100+ people or $100M+ damages). Applies to developers spending $100M+ annually on AI or $5M+ on individual models.
Would require child-focused risk assessments (DPIA-style), safer defaults, and limits on harmful design patterns. Currently blocked on First Amendment grounds.
Prohibits AI from making independent therapeutic decisions in mental or behavioral health settings. Requires licensed professional review of all AI treatment plans and patient interactions.
Requires health information chatbots to obtain license from North Carolina Department of Justice before operating. Comprehensive licensing requirements include technical architecture documentation, data practices, security measures, and regulatory compliance. Civil penalties of $50,000 per violation.
Restricts algorithmically personalized ("addictive") feeds and overnight notifications for under-18 users without parental consent.
Requires businesses to disclose when individuals communicate with AI in textual or spoken conversations. Prohibits deception about AI vs. human interaction. Provides private right of action with damages up to $1,000 plus attorney fees. AG can impose civil penalties up to $5 million.
Proposes a 4-year moratorium on the sale and manufacturing of toys with AI chatbot capabilities for children under 12. During the moratorium, a task force would develop safety standards with input from technologists, parents, and ethicists.
Would have required safety testing and "kill switch" capabilities for frontier AI models above certain compute thresholds. Governor vetoed citing concerns about threshold-based approach.
Prohibits AI systems from advertising or representing themselves as licensed mental health professionals. Violations constitute unlawful practice under NJ Consumer Fraud Act with penalties up to $10,000 first offense, $20,000 subsequent offenses.
Comprehensive child AI safety ballot initiative by Common Sense Media. Expands companion chatbot definitions, raises age threshold for data sale consent, prohibits certain AI products for children, establishes new state regulatory structure. Allows state and private lawsuits, requires AI literacy in curriculum, mandates school device bans during instruction, creates children's AI safety fund.