
Korea AI Act

Framework Act on AI Development and Establishment of Trust (Law No. 20676)

First comprehensive AI legislation in the Asia-Pacific region and second in the world after the EU AI Act. Regulates "High-Impact AI" in the healthcare, energy, nuclear, transport, government, and education sectors. Requires transparency notifications, content labeling for generative AI, and fundamental rights impact assessments. Notable for lower penalties than the EU AI Act and the absence of prohibited AI practices.

Jurisdiction

South Korea

KR

Enacted

Jan 21, 2025

Effective

Jan 22, 2026

Enforcement

Ministry of Science and ICT

Passed Dec 26, 2024; promulgated Jan 21, 2025

Who Must Comply

This law applies to:

  • AI developers and deployers
  • Foreign companies above user/revenue thresholds

Who bears obligations:

This regulation places direct obligations on deployers (organizations using AI systems).

Exemptions

National Defense and Security Exemption

high confidence

AI developed exclusively for national defense or security purposes is exempt from the Act

Conditions:

  • AI used exclusively for national defense
  • AI used exclusively for security purposes

Safety Provisions

  • High-Impact AI regulation in critical sectors (healthcare, energy, nuclear, transport, government, education)
  • Mandatory advance notification to users of high-impact or generative AI
  • Labeling requirements for AI-generated content (images, text, video)
  • Fundamental rights impact assessment before deployment
  • Risk management and human oversight requirements
  • Foreign company domestic representative requirement above thresholds
  • AI Safety Institute established under MSIT
  • No prohibited AI practices (unlike EU AI Act)

Compliance Timeline

Jan 22, 2026

All provisions take effect after a one-year transition period

Enforcement

Enforced by

Ministry of Science and ICT

Penalties

KRW 30M; criminal (up to 3yr)

Max fine: KRW 30,000,000 (approx. $20,700 USD)
Criminal liability (up to 3 years)

Administrative fines of up to KRW 30 million (approx. $20,700 USD) apply for failure to comply with corrective orders, failure to notify users about high-impact or generative AI, and failure to designate a domestic representative. Leaking confidential information obtained under the Act is punishable by up to 3 years' imprisonment or a fine of up to KRW 30 million. A grace period applies during initial enforcement.

Quick Facts

Binding
Yes
Mental Health Focus
No
Child Safety Focus
No
Algorithmic Scope
Yes

Why It Matters

First comprehensive AI law in the APAC region, effective January 22, 2026. Foreign companies above user/revenue thresholds must designate a domestic (Korean) representative. The penalty cap is far lower than the EU AI Act's (KRW 30 million versus fines of up to EUR 35 million or 7% of global turnover), but the Act establishes a regional precedent. A draft Enforcement Decree was published in September 2025, with a grace period for initial compliance.

Recent Developments

A draft Enforcement Decree was published on September 8, 2025, clarifying standards and procedures. A grace period for fines has been announced to minimize confusion during initial enforcement. The AI Safety Institute is being established under MSIT.

What You Need to Comply

High-impact AI operators must implement risk management plans, explanation methods for AI outputs, user protection plans, human oversight mechanisms, and documentation of safety measures. Fundamental rights impact assessments are required before deploying high-impact AI products or services.
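As a purely illustrative aid, the sketch below models these obligations as an internal pre-deployment checklist. It is not derived from the Act or its Enforcement Decree; every name and field is hypothetical, and the actual requirements are defined by the statute and MSIT guidance.

# Illustrative only: a hypothetical internal checklist for the obligations
# summarized above. Field names are our own, not terms defined by the Act.
from dataclasses import dataclass


@dataclass
class HighImpactAIChecklist:
    system_name: str
    risk_management_plan: bool = False           # risk management plan in place
    output_explanation_method: bool = False      # method for explaining AI outputs
    user_protection_plan: bool = False           # user protection plan
    human_oversight: bool = False                # human oversight mechanism
    safety_documentation: bool = False           # documented safety measures
    fundamental_rights_impact_assessment: bool = False  # completed before deployment

    def missing_items(self) -> list[str]:
        """Names of obligations not yet marked complete."""
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]

    def ready_for_deployment(self) -> bool:
        """True only when every tracked obligation is satisfied."""
        return not self.missing_items()


if __name__ == "__main__":
    checklist = HighImpactAIChecklist(
        system_name="triage-recommender",  # hypothetical high-impact system
        risk_management_plan=True,
        human_oversight=True,
    )
    print("Ready for deployment:", checklist.ready_for_deployment())
    print("Missing:", checklist.missing_items())

A real compliance program would attach evidence (documents, assessment reports, reviewer sign-offs) to each item rather than a simple flag; the boolean fields here only show the shape of the bookkeeping.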


Cite This

APA

South Korea. (2025). Framework Act on AI Development and Establishment of Trust (Law No. 20676). Retrieved from https://nope.net/regs/kr-ai-act

BibTeX

@misc{kr_ai_act,
  title = {Framework Act on AI Development and Establishment of Trust (Law No. 20676)},
  author = {South Korea},
  year = {2025},
  url = {https://nope.net/regs/kr-ai-act}
}

Related Regulations

In Effect · CN · AI Safety

China Algorithm Rules

Requires algorithm filing/registration, user notification of recommendations, and opt-out mechanisms. Prohibits price discrimination based on user profiling.

In Effect · KR · Online Safety

South Korea Deepfake Law

South Korea's deepfake law, the world's strictest: up to 7 years for creating or distributing deepfake sexual content, up to 3 years for possessing or viewing it. Even viewing is a criminal offense.

Enacted · US-NY · AI Safety

NY RAISE Act

Requires large AI developers of frontier models operating in New York to create safety protocols, report critical incidents within 72 hours, conduct annual reviews, and undergo independent audits. Creates dedicated DFS office funded by developer fees.

Enacted · US-CT · AI Safety

CT SB 1295

Creates a complete ban on targeted advertising to under-18s, regardless of consent. Requires AI impact assessments. Connecticut issued its first CTDPA fine ($85,000) in 2025.

Enacted · NZ · Data Protection

NZ Biometric Code

Sets specific legal requirements under Privacy Act for collecting and using biometric data such as facial recognition and fingerprint scans. Prohibits particularly intrusive uses including emotion prediction and inferring protected characteristics like ethnicity or sex.

In Effect · SG · Sector-Specific

SG MAS AI Governance

First mandatory AI governance requirements in Singapore, shifting from voluntary Model AI Governance Framework to binding obligations for financial sector. Establishes three mandatory focus areas: oversight and governance, risk management systems, and development/validation/deployment protocols.