Undertaken by UK-based assessment body Age Check Certification Scheme, the trial evaluated ways to verify ages, including formal verification using government documents, parental approval, or technologies to determine age based on facial structure, gestures, or behaviours – and found all were technically possible.
“But we did not find a single ubiquitous solution that would suit all use cases, nor did we find solutions that were guaranteed to be effective in all deployments,” the report said.
The report recommended that methods to enforce the ban be "layered" to create the most robust system, and noted that many technology providers were working to counter attempts to sidestep safeguards, such as document forgeries and VPNs (virtual private networks), which obscure a user's country.
More than 60 tools were assessed as part of the trial, which found technology could be used “privately, efficiently and effectively” to prevent Australians accessing explicit and inappropriate content.
But this does not mean Australia’s children will be completely protected from online harms.
“There’s going to be groups of young people that will still get around this,” Australian National University associate professor of law Faith Gordon said.
“I don’t think it’s a watertight solution at all.
“Age-assurance technology is clearly not the ‘silver bullet’ to make the digital world safer for children.”
The Federal Government’s social media ban requires platforms to take “reasonable steps” to enforce age limits, but does not specify a method.

Communications Minister Anika Wells said there was no excuse for social media platforms not to have a combination of age-assurance methods ready on their platforms by December 10.
The systems tested were “generally secure and consistent with information security standards” and could handle prickly issues including AI-generated spoofing and forgeries, members of the Age Check Certification Scheme said.
“However, the rapidly evolving threat environment means that these systems – while presently fairly robust – cannot be considered infallible,” the group made up of experts and stakeholders said.
While facial-recognition technology, often used for age assurance, was 92 per cent accurate for people aged 18 or over, it was prone to biases and misidentifying people who were not white or did not present as male, Gordon said.
Accuracy drops significantly for those within two years of the cut-off age, meaning almost 10 per cent of 16-year-olds are falsely rejected.
Dr Shaanan Cohney, a senior lecturer in the Faculty of Engineering and IT at the University of Melbourne, said while the report is expansive, its problems range from inconsistent claims (calling age estimation deployable and ready for prime time, even while documenting serious flaws in the relevant age bands) to far more serious omissions, such as failing to model a realistic spectrum of ways young people would circumvent the technology.
“In short, the report understates risk, overstates effectiveness, and falls well short of the standard security and privacy researchers expect for a high-stakes, society-wide intervention,” Cohney said.
Many Australians older than 16 could be wrongly excluded from using social media platforms, while some underage children could still have access.
The ban only prevents kids from creating accounts on social media platforms such as Facebook, Instagram, X, TikTok, YouTube and Snapchat, meaning they could still be groomed elsewhere online.
Young children would still be allowed on gaming platforms such as Fortnite, where bad actors could approach kids by offering in-game purchases, Gordon said.
The report warned unnecessary data retention could occur as tech giants anticipated future regulation.
“We found some concerning evidence that in the absence of specific guidance, service providers were apparently over-anticipating the eventual needs of regulators about providing personal information for future investigations,” it said.
This could lead to increased risk of privacy breaches because of unnecessary and disproportionate collection and retention of data.
The Greens have urged the Government to reconsider the use of age-verification technology.
“The age-assurance trial findings accidentally prove the social media age ban is unworkable and it is time to rethink this flawed approach,” Greens senator David Shoebridge said.
The trial was launched after the Federal Government announced a social media ban for people younger than 16, which will come into effect in December.
Communications Minister Anika Wells said the findings showed there were effective methods that could be used by social media platforms to enforce age limits.
Tech giants could be fined up to $49.5 million for failing to prevent people younger than 16 from having an account on an age-restricted social media platform.
“This report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm,” Wells said.
“While there’s no one-size-fits-all solution to age assurance, this trial shows there are many effective options and importantly, that user privacy can be safeguarded.”
While polling indicates most Australian adults support banning social media for children under 16, some mental health advocates say the policy has the potential to cut kids off from connection, and others say it could push children under 16 to even-less-regulated corners of the internet.
(with AAP)