Issued under Australia’s Online Safety Act, the notices were also sent to Discord, Snap, Skype and WhatsApp, and require all recipients to explain how they are tackling child abuse material, livestreamed abuse, online grooming, sexual extortion and, where applicable, the production of “synthetic” or deepfaked child abuse material created using generative AI.

For the first time, the notices will require the tech companies to report periodically to eSafety for the next two years, with eSafety publishing regular summaries of the findings to improve transparency, highlight safety weaknesses and incentivise improvements.

eSafety Commissioner Julie Inman Grant said the companies were chosen partly on the basis of answers many of them provided to eSafety in 2022 and 2023, which exposed a range of safety concerns when it came to protecting children from abuse.

“We’re stepping up the pressure on these companies to lift their game,” Inman Grant said in a statement.

“They’ll be required to report to us every six months and show us they are making improvements.”

Inman Grant explained that when the national independent regulator for online safety sent notices to the companies in 2022 and 2023, some of their answers were “alarming but not surprising”, as the body had long suspected there were significant gaps and differences across services’ practices.

“In our subsequent conversations with these companies, we still haven’t seen meaningful changes or improvements to these identified safety shortcomings,” she said.

“Apple and Microsoft said in 2022 that they do not attempt to proactively detect child abuse material stored in their widely used iCloud and OneDrive services.

"This is despite the fact it is well-known that these file storing services serve as a haven for child sexual abuse and pro-terror content to persist and thrive in the dark.”

eSafety also learnt that Skype, Microsoft Teams, FaceTime and Discord did not use any technology to detect live-streaming of child sexual abuse in video chats.

This is despite evidence of the extensive use of Skype, in particular, for this long-standing and proliferating crime.

“Meta ... admitted it did not always share information between its services when an account is banned for child abuse, meaning offenders banned on Facebook may be able to continue perpetrating abuse through their Instagram accounts, and offenders banned on WhatsApp may not be banned on either Facebook or Instagram.”

The notices were issued just as a coroner found that education about online safety can help prevent deaths like that of a teenager who fell victim to a sextortion scam two years ago.

Rohan Patrick Cosgriff was 17 when he was found dead at his home near Ballarat in July 2022.

In his pocket was a note that read: “I made a huge mistake. I’m sorry.”

Police later discovered that in the two days prior to his death, the teen was a victim of sexual extortion.

He’d been pressured into sending an intimate picture of himself to someone called “Christine” on Snapchat, who then threatened to distribute the image unless money was paid.

Investigators were unable to identify the person, but found the Snapchat account originated from Nigeria.

In a report released last week, Victorian Coroner Audrey Jamieson said a large amount of information was available for those who needed it – but there needed to be a shift in education.

“The fact remains that with all the education in the world, and no matter how many times the message ‘don’t send intimate images’ is repeated, young people will continue to do these things,” Jamieson said.

“The conversation must turn to: should you find yourself in this situation, it is going to be OK.”

A total of 11 other young people over the past decade have also taken their lives after being victims of sextortion or image-based abuse.

At least seven others died after experiencing bullying of a sexual nature that did not involve image-based abuse or sextortion.

“If a young person finds themself in a situation like Rohan did, the most important thing is that they know they have not done anything wrong,” the coroner wrote.

“And that the situation will not define the rest of their lives.”

The coroner also suggested the teen’s death could inform the statutory review of the national Online Safety Act, particularly with respect to combating sextortion led by transnational crime syndicates.

Eight different Google services, including YouTube, are not blocking links to websites known to contain child abuse material, eSafety has found, despite the availability of databases of these known abuse websites that many services use.

eSafety found that Snapchat was not using any tools to detect grooming in chats, despite eSafety investigators regularly observing the service being used for grooming and sexual extortion.

“The report also unearthed wide disparities in how quickly companies respond to user reports of child sexual exploitation and abuse on their services,” Inman Grant continued.

“Back in 2022, Microsoft said it took two days on average to respond, or as long as 19 days when these reports required re-review, which was the longest of all the providers. Snap, on the other hand, reported responding within four minutes.

“Speed isn’t everything, but every minute counts when a child is at risk.”

Inman Grant said the notices will let eSafety know whether the companies have made any improvements in online safety since 2022 and 2023, and will ensure the companies remain accountable for harm still being perpetrated against children on their services.

“We know that some of these companies have been making improvements in some areas – this is the opportunity to show us progress across the board.”

Key potential safety risks considered in this round of notices include the ability for adults to contact children on a platform, risks of sexual extortion, and features such as livestreaming, end-to-end encryption, generative AI and recommender systems.

Compliance with a notice is mandatory, and there may be financial penalties of up to $782,500 a day for services that do not respond.

The companies will have until February 15, 2025, to provide their first round of responses.