Almost a quarter of children have seen sexually explicit content online in the past year, and more than one in 10 have seen violent sexual material, according to the eSafety Commission’s latest Keeping Kids Safe Online survey of 3454 children aged 10 to 17.
The findings have prompted eSafety to issue an Online Safety Advisory, including advice on how to protect children from seeing too much, too soon.
“Pornography is so pervasive and invasive that children are often stumbling across it by accident,” eSafety commissioner Julie Inman Grant said.
“We want children to understand the pillars of a healthy relationship begin with consent and respect.”
eSafety said schools play a critical role in education and early intervention.
“They should talk openly about respectful relationships and digital consent and update curriculum to reflect real-world online risks,” it said in the latest advisory.
“They should also support student wellbeing with counselling and clear reporting pathways and work with families to build consistent messages across home and school.”
Inman Grant recently registered new codes, drafted and submitted by industry, which require a wide range of technology services to do more to restrict children’s access to porn, high-impact violent material, and material that encourages self-harm, suicide or disordered eating, and empower users of all ages to control the material they do not want to see.
The codes cover search engine services, hosting services, internet carriage services such as telcos, app stores, device manufacturers, social media, gaming and messaging services and other apps and websites, including some generative AI services.

Julie Inman Grant says it’s crucial that kids know they won’t be in trouble if they come to parents, carers or teachers for help regarding exposure to violent and extreme pornography.
Inman Grant also raised concerns about AI chatbots that children were engaging with for hours every day.
“We’ve been concerned about AI chatbots for a while now and have heard anecdotal reports of children – some as young as 10 years of age – spending up to 5 hours per day conversing, at times sexually, with AI companions,” Inman Grant said.
“There has been a recent proliferation of these apps online and many of them are free, accessible to children, and advertised on mainstream services.
“Importantly these codes include measures to protect children from chatbots which can generate highly sexualised or pornographic material.”
Kids still able to use search platforms without logins
Meanwhile, Australians won’t need to log into accounts on online search platforms and verify their age, but the concession has come with a warning that children can easily skirt content restrictions.
Search engines like Google and Microsoft will be captured by online safety codes demanding they block children under 18 from accessing harmful content like pornography and violence.
Users will be able to verify their age by logging into accounts but this won’t be a blanket requirement.
While users can still use such platforms anonymously, searching in a logged-out state would automatically filter out age-inappropriate content, tech industry group Digi’s regulatory affairs director Dr Jennifer Duxbury said.
“There is a choice, you can also continue to search in a logged out state,” Duxbury told a parliamentary inquiry into the code on Wednesday.
“Now the experience that you’ll get there is a little bit different because the imagery, this graphic pornography and very violent imagery, will be blurred.”
But blocking content relating to suicide was harder because filters would inadvertently exclude self-help websites.
The code had to balance excluding harmful material and being mindful not to capture mental health and drug support services, Duxbury said.
She acknowledged no technology would be perfect, but said search engines were adept at filtering out content like pornography and very violent imagery, as platforms already offered safe-search options that could be turned on.
“Yes, children can figure out how to do a search in a logged out state but ... they will not be immediately confronted with a range of pornographic and very violent imagery,” she said.
This would be effective because, citing the eSafety Commission’s research, people were often being exposed to such content unintentionally, she said.
“So they’re not actually really looking for it,” Duxbury said.
Platforms were also working with porn sites on age gating material as the code “is not a sure-fire complete protection against exposure from that sort of material,” she said.
With regards to AI chatbots, Duxbury said there was “an open question as to whether those AI companions should be lawful or not”, but she wasn’t aware of any chatbots that could be used anonymously.
Not all chatbots were covered by the regulations, but they did capture those that generated harmful content.
“There is a regulation in place here that says that those providers need to make sure that under 18 year old users are not accessing those services,” Duxbury said.
(With AAP)