AU National AI Plan
Australia National AI Plan and AI Safety Institute
National AI policy roadmap replacing previously proposed mandatory AI guardrails. Focuses on leveraging existing legal frameworks rather than new mandatory requirements. Establishes the Australian AI Safety Institute (AISI) to monitor, test, and share information on AI risks and harms.
Jurisdiction
Australia
Enacted
Pending
Effective
Dec 2, 2025
Enforcement
TBD
National AI Plan released December 2, 2025. Australian AI Safety Institute (AISI) announced November 25, 2025 with AUD 29.9M funding; operational early 2026.
Australian Government Department of Industry
Why It Matters
Represents Australia's shift from mandatory AI guardrails to a voluntary, institution-based approach. The AISI is the primary new mechanism for AI safety, providing monitoring and testing capabilities rather than enforceable requirements.
Recent Developments
Released December 2, 2025. Replaces the previously proposed mandatory AI guardrails (au-ai-guardrails, now failed) with a voluntary approach relying on existing legal frameworks. The AISI joins the International Network of AI Safety Institutes. Critics have described the plan as 'big ambitions, light on details', noting the absence of measurable implementation milestones.
Who Must Comply
- AI developers and deployers in Australia
- Government agencies
Safety Provisions
- Australian AI Safety Institute (AISI) established for AI risk monitoring and testing
- AISI to provide guidance on responsible AI adoption
- AISI to support coordinated government action on AI safety
- National AI Centre (NAIC) expanded as coordination hub
- Relies on existing legal frameworks (Privacy Act, Consumer Law, sector rules)
Focus Areas
General regulation
Cite This
APA
Australia. (2025). Australia National AI Plan and AI Safety Institute.
Related Regulations
AU AI Guardrails
10 mandatory guardrails proposed for high-risk AI: accountability, risk management, data governance, testing, human oversight, transparency, contestability, supply chain transparency, record keeping, conformity assessment.
AU OSA Phase 2 Codes
Phase 2 industry codes under Australia's Online Safety Act extending age-restricted material obligations to AI companion chatbots, generative AI services, search engines, app stores, and gaming platforms. Requires robust age assurance, prohibits AI-generated sexually explicit conversations with minors, and mandates suicide/self-harm content safeguards.
China CSL Amendments
First major revision of China's foundational Cybersecurity Law since 2017. Introduces formal AI governance provisions, significantly increases penalties, and expands extraterritorial application to all cybersecurity violations.
India AI Governance Guidelines
Voluntary AI governance framework built on seven core principles ('sutras'): Trust, People First, Innovation over Restraint, Fairness & Equity, Accountability, Understandable by Design, and Safety/Resilience/Sustainability. Establishes AI Governance Group, AI Safety Institute, and Technology & Policy Expert Committee.
Brunei PDPO
Brunei's personal data protection order requiring DPIAs and imposing penalties of up to 10% of turnover in Brunei or $1M.
India DPDP Act
The strictest children's provisions in APAC: children are defined as those under 18; verifiable parental consent is mandatory; and tracking, behavioral monitoring, and targeted advertising directed at children are prohibited.
Last updated March 23, 2026. Verify against primary sources before relying on this information.