Google, Facebook’s parent company Meta, TikTok, Reddit, Discord, Twitch and Snap must answer a series of questions from online safety watchdog eSafety about the number of children on their platforms and the age-assurance measures used to prevent access by underage kids.

Most of these sites have their own age limits that bar children under 13 from using their services.

But one in four children aged eight to 10 used social media at least once a week, and almost half of all 11- to 13-year-olds accessed the sites just as often, according to eSafety research.

Commissioner Julie Inman Grant said legally imposed age limits were on the table, but noted the online sphere offered some benefits to teenagers and said more must be understood about the potential effectiveness and unintended consequences of any restrictions.

“To ensure the safety of young Australians, we need to provide them – and their parents, carers and educators – with effective education and prevention strategies,” she said.

eSafety research shows two in three teenagers aged 14 to 17 have viewed content depicting drug use, self-harm, violence or other harmful material in the past year.

“It cannot all fall on the shoulders of kids, parents and teachers – industry needs to play its part, too.”

The social media companies will have 30 days to provide their responses to the eSafety Commissioner.

The Federal Government has provided $6.5 million for a pilot program of age-assurance technology, but Prime Minister Anthony Albanese has said any age requirements must be proven to work.

Opposition Leader Peter Dutton has vowed to ban children under 16 from accessing social media should the Coalition win the next election.

The questions follow legal notices issued by the eSafety Commissioner, requiring Apple, Google, Meta and Microsoft to report to the regulator every six months about measures they have in place to tackle online child sexual abuse.

Issued under Australia’s Online Safety Act, notices were also sent to Discord, Snap, Skype and WhatsApp, requiring all of them to explain how they are tackling child abuse material, livestreamed abuse, online grooming, sexual extortion and, where applicable, the production of “synthetic” or deepfaked child abuse material created using generative AI.

For the first time, the notices will require the tech companies to report periodically to eSafety over the next two years, with eSafety publishing regular summaries of the findings to improve transparency, highlight safety weaknesses and incentivise improvements.

Skype, Microsoft Teams, FaceTime and Discord do not use any technology to detect live-streaming of child sexual abuse in video chats.

Twelve young people have taken their lives over the past decade after being victims of sextortion or image-based abuse.

At least seven others died after experiencing bullying of a sexual nature that did not involve image-based abuse or sextortion.

(with AAP)