Associate Professor Ritesh Chugh says the ban, due to take effect on December 10, is a bold and necessary move, but warns its success will hinge on fair enforcement, smarter risk reduction and giving young people the skills to protect themselves online.

“This is an important national conversation about how we balance the benefits of social connection with the need to protect younger Australians from online risks,” Chugh, whose research focuses on education, digital policy and the interplay between technology, society and ethical responsibility, said.

“The responsibility won’t fall solely on parents and schools – platforms will now have a clear duty of care.

“But the challenge will be in the detail: how to enforce it fairly, avoid unnecessary intrusion, and keep safe digital spaces open for young people.”

Under the new laws, platforms including TikTok, Instagram, Facebook, Snapchat, X and YouTube must prevent under-16s from creating accounts or interacting on their services or face fines of up to $50 million.

While most social media services currently set minimum age requirements for account holders, they often don't enforce them. That will no longer be acceptable.
Age-restricted platforms will be expected to take steps to:

  • find existing accounts held by under-16s, and deactivate those accounts;
  • prevent under-16s from opening new accounts;
  • prevent workarounds that may allow under-16s to bypass the restrictions; and
  • have processes to correct errors if someone is mistakenly caught by, or wrongly excluded from, the restrictions, so no one is removed unfairly.

Social media companies will be expected to come up with “reasonable alternatives” to government IDs (passports, driver’s licences and the like) for users to prove they are 16 or older.

In June, a preliminary report from the organisation commissioned to trial age-checking technology found that options exist to verify users’ ages privately, robustly and effectively. However, some experts have questioned the viability of some of the technology tested, casting doubt on whether the ban can actually be enforced.

One example is face-scanning technology, which, when tested on school students, could estimate their age within an 18-month margin in only 85 per cent of cases.

Chugh warns that poorly designed age verification could backfire.

“If people are asked to verify their age for every Google search, it’s like showing your passport every time you step into a library.

“People will naturally look for ways around it, and determined young users will find them.”

Late last month, Communications Minister Anika Wells told ABC News that the Federal Government was awaiting final recommendations from the age-checking technology trials – expected later this year, before the ban takes effect – to clarify what the government considers the “reasonable steps” companies must take to enforce it.

“There is technology and each platform works differently,” she said.

Wells said companies should be working directly with eSafety Commissioner Julie Inman Grant to establish verification methods.

“Reasonable steps is reasonable,” she concluded.

International experience offers a cautionary tale.

UK laws have pushed more teens to use VPNs, bypassing checks and creating “data blind spots” that make harmful activity harder to detect.

Chugh also points to the role of algorithms in pushing extreme or harmful content into mainstream feeds.

“The internet’s ‘bad corners’ have moved into the middle of the street,” he said.

“If we don’t change the incentives that drive platforms to push addictive, inappropriate content, parental controls will always be playing catch-up.”

Chugh’s solution is a multi-layered approach – parents staying engaged, schools teaching digital literacy, platforms making safety the default, and governments setting and enforcing strong standards.

“We often talk about making the internet safer for kids, but we also need to make kids safer for the internet,” he said.