Australia has announced it will include YouTube under its world-first ban on social media access for users under the age of 16, reversing an earlier decision to exempt the video-sharing platform.
The move follows a recommendation from Australia's eSafety Commissioner, based on research showing that 37 percent of children aged 10 to 15 reported seeing harmful content on YouTube, the highest rate among the platforms studied.
The ban, set to take effect on December 10, 2025, prohibits anyone under 16 from creating a YouTube account.
However, minors will still be able to watch videos on the platform without an account, for instance when an adult shares content with them or through logged-out viewing.
Prime Minister Anthony Albanese said the decision is meant to protect children from online harms and to hold tech companies responsible for their role in exposing minors to unsafe content.
Communications Minister Anika Wells has confirmed that social media firms must block underage users or face fines of up to A$49.5 million for non‑compliance.
YouTube pushed back, asserting that it is a video platform rather than a social media site and arguing its content serves educational and mental health needs.
The company had earlier lobbied for its exemption, citing its popularity with teachers and its structured content formats.
Alphabet, YouTube's parent company, is said to be considering legal action over the reversal.
The legislative change is part of the broader Online Safety Amendment (Social Media Minimum Age) Act passed in November 2024, which introduced rules barring under‑16s from social media services, with enforcement beginning in December 2025.
Platforms previously exempted on educational or health grounds—such as YouTube, Google Classroom, and health apps—are now under review for consistency with the law.
Age‑verification requirements are still being finalised, though early reports suggest a mix of automated systems and parental tools will be used to confirm age without relying solely on official IDs.
Australia is the first nation to roll out such sweeping restrictions nationwide, and the move has drawn both support and criticism.
Advocates cite the prevalence of inappropriate content—including violent, hateful, or self‑harm themes—while critics urge a more balanced approach that protects online learning and creativity while keeping children safe.
