Figures released on Friday show complaints to the federal eSafety Commissioner’s image-based abuse reporting line have surged, with four out of five cases involving female victims.
Commissioner Julie Inman Grant believes the rapid rise in reporting among young people may only reveal part of the problem, warning the numbers did not represent “the whole picture”.
Inman Grant has issued an urgent call for schools to report deepfake incidents to appropriate authorities as the rapid proliferation of ‘nudify’ apps online takes a growing toll on communities nationwide.
“Anecdotally, we have heard from school leaders and education sector representatives that deepfake incidents are occurring more frequently, particularly as children are easily able to access and misuse nudify apps in school settings,” she said.
Deepfakes are digitally altered images of a person’s face or body, and young women and girls are often targeted in a sexual manner.
Artificial intelligence has made these tools far more accessible to perpetrators.
“With just one photo, these apps can nudify the image with the power of AI in seconds,” Inman Grant warned.
“Alarmingly, we have seen these apps used to humiliate, bully and sexually extort children in the school yard and beyond.
“There have also been reports that some of these images have been traded among school children in exchange for money.”
Aware of the string of deepfake incidents taking place nationwide, one school in South Australia has developed a clear and comprehensive plan of action, ready to roll out should staff or students be targeted.
Created by the school’s Director of Technology Enrichment, Joanne Villis, the plan includes an explicit set of instructions about how the school would respond, detailing who would take specific actions and through what channels.
Villis told EducationHQ all school leaders should now consider such a plan an essential part of their procedural toolkit if they want to minimise the harm and reputational risk that AI-generated content poses to individuals and schools alike.
“At a senior leadership and at a school level, open the discussion and the awareness about deepfakes – what they actually are and the prevalence of them,” Villis advised.
“Especially, get your staff aware that leadership have an understanding and are taking action in terms of a policy around it…”

eSafety is hosting a series of webinars throughout July and August for educators, youth-serving organisations and parents on AI-assisted image-based abuse and navigating the deepfake threat.
To help schools, the eSafety Commissioner has released a new step-by-step guide for responding to deepfake incidents.
Providing support and advice, the guide is designed to work alongside schools’ existing policies and procedures.
Along similar lines to Villis’s plan, the guide strongly encourages educators to prioritise the wellbeing of children and staff who might be targeted.
It outlines the steps for reporting image-based abuse, including contacting police and reporting to eSafety if the content has been shared online, or if there are threats to share it.
Dr Asher Flynn, Associate Professor of Criminology at Monash University, said the sharing of explicit deepfake images of minors was a deeply concerning trend.
She said the situation is complex and stressed that responsibility for addressing the issue does not lie solely with leaders, students, teachers, or parents, but also with major tech companies.
“(We need) to hold tech companies and digital platforms more accountable,” Flynn said.
“We can do this by not allowing advertisement of freely accessible apps that you can use to de-clothe people or to nudify them.”
She acknowledged that some progress is being made, but emphasised the need for clearer and stricter regulations around what can be promoted and accessed online.
Educating parents and children to identify and understand the complexity of deepfakes is also vital, Flynn said.
“These technologies are available and we can’t ignore them.
“It’s really important to also have that round table conversation, so everyone knows this is what can happen and what the consequences of doing that are for someone.”
Laws cracking down on the sharing of sexually explicit AI-generated images and deepfakes without consent were recently introduced to federal parliament.
Multiple reports have emerged of deepfake images being circulated in schools across the country, including an incident last year in which explicit deepfake images of 50 schoolgirls from a school west of Melbourne were created and shared online.
Professional learning on navigating the deepfake threat
Coming up in Term 3, on Tuesday, August 5 at 3.45pm (AEST), eSafety is running a professional learning session titled ‘AI-assisted image-based abuse: Navigating the deepfake threat’.
For educators and youth-serving professionals, the webinar covers:
- the tools, behaviours and impacts of AI-assisted image-based abuse
- what is driving the use of deepfakes
- support strategies to prevent and respond to AI-assisted image-based abuse
Registrations are open on the eSafety website.
(with AAP)
1800 RESPECT (1800 737 732)
National Sexual Abuse and Redress Support Service 1800 211 028