
G7 Hiroshima AI Process — Guiding Principles + Code of Conduct

Eleven guiding principles for organizations developing advanced AI systems, paired with a voluntary code of conduct for developers. Calls on organizations not to develop or deploy AI that poses substantial safety or human rights risks.

Jurisdiction

International (G7)

Enacted

Oct 30, 2023

Effective

Oct 30, 2023

Enforcement

Voluntary (non-binding; no enforcement mechanism established)

Source: EU Digital Strategy

Why It Matters

G7 consensus creates an influential soft-law baseline for AI governance expectations worldwide.

Who Must Comply

  • Organizations developing or deploying advanced AI systems (voluntary commitment)

Safety Provisions

  • 11 guiding principles for advanced AI
  • Commitment not to develop or deploy AI posing substantial safety risks
  • Commitment not to develop or deploy AI posing substantial human rights risks
  • Risk management and testing expectations
  • Transparency and information-sharing
  • Security controls and abuse prevention
  • Incident reporting/response
  • Responsible deployment and monitoring


Focus Areas

Algorithmic accountability
Active safeguards required

Compliance Help

Calls for evaluations, safety controls, monitoring, incident response, and documentation aligned with emerging norms for advanced AI systems.


Cite This

APA

G7. (2023, October 30). G7 Hiroshima AI Process — Guiding Principles + Code of Conduct.

Last updated February 17, 2026. Verify against primary sources before relying on this information.