Online Safety Act 2021

Grants the eSafety Commissioner powers to issue removal notices with 24-hour compliance deadlines. The Basic Online Safety Expectations (BOSE) formalize baseline safety governance requirements.

Jurisdiction

Australia

Enacted

Jul 23, 2021

Effective

Jan 23, 2022

Enforcement

eSafety Commissioner

Federal Register of Legislation

Why It Matters

Australia is among the most explicit and proactive jurisdictions regulating AI chatbot safety and self-harm specifically. eSafety is actively targeting AI chatbots with legal notices and enforcement powers. Non-compliance with reporting notices can attract penalties of up to A$825,000 per day.

Recent Developments

On Oct 23, 2025, eSafety issued legal notices to AI companion providers demanding explanations of their child safety controls, explicitly citing suicide and self-harm risks. Companies notified: Character.AI (~160K Australian monthly active users as of June 2025), Nomi (Glimpse.AI), Chai Research, and Chub AI.

At a Glance

Applies to

AI Companion · Chatbot Service · Online Platform

Who Must Comply

  • Online services available to Australians

Safety Provisions

  • Removal notices with 24-hour compliance requirement
  • Class 1 material (CSAM, terrorism, extreme violence) must be removed
  • Basic Online Safety Expectations: safety by design, responsiveness, transparency
  • Industry codes for harmful content categories

Compliance & Enforcement

Key Dates

Jan 23, 2022

Act fully in force

Jan 1, 2024

Phase 1 Unlawful Material Codes in operation

Dec 27, 2025

Phase 2 first tranche codes effective (hosting, search engines)

Mar 9, 2026

Phase 2 second tranche codes effective (social media, apps)

Jun 27, 2026

Search engines implement logged-in age assurance

Sep 9, 2026

App stores implement age assurance for 18+ apps

Penalties

Up to A$825K/day for non-compliance with reporting notices

Focus Areas

Mental health & crisis
Child safety
Active safeguards required

Compliance Help

Meeting the Act's 24-hour removal deadlines requires rapid detection systems, and safety-by-design controls are needed to reduce harmful content exposure. AI chatbot operators must be able to document their suicide/self-harm detection capabilities for eSafety compliance.

Cite This

APA

Online Safety Act 2021 (Cth) (Austl.).

Related Regulations

In Effect UK

UK OSA

One of the most comprehensive platform content moderation regimes globally. Creates specific duties around suicide, self-harm, and eating disorder content for children with 'highly effective' age assurance requirements.

In Effect IE

Ireland OSMR

Establishes Coimisiún na Meán (Media Commission) with binding duties for video-sharing platforms. One of the cleaner examples of explicit self-harm/suicide/eating-disorder content duties in platform governance.

Failed CA

C-63

Would have established Digital Safety Commission with platform duties for seven harmful content categories including content inducing children to harm themselves. Required 24-hour CSAM takedown.

In Effect AU

AU Social Media Age Ban

World's first social media minimum age law. Platforms must prevent under-16s from holding accounts. Implementation depends on age assurance technology.

In Effect AU

AU Privacy Amendment 2024

Strengthens Privacy Act requirements for biometric data, raising the standard of conduct for collecting biometric information used in automated verification or identification. Such information cannot be collected unless the individual has consented and the collection is reasonably necessary.

In Effect GB

Ofcom Children's Codes

Ofcom codes requiring user-to-user services and search services to protect children from harmful content including suicide, self-harm, and eating disorder content. Explicitly covers AI chatbots that enable content sharing between users. Requires detection technology, content moderation, and recommender system controls.

Last updated February 17, 2026. Verify against primary sources before relying on this information.