Created by Joanne Villis, Director of Technology Enrichment at the all-girls South Australian school, the document is one she hopes will never need to be activated.
Nevertheless, a ‘deepfake crisis response plan’ is something Villis says all school leaders should now consider as an essential part of their procedural toolkit if they want to minimise the harm and reputational risk that artificial content poses to individuals and schools alike.
A ‘deepfake’ is a digitally manipulated media file – an image, video, or audio recording – that has been created or altered using AI to convincingly portray someone doing or saying something they never actually did or said.
Aware of the string of deepfake incidents involving schools, and having just put together a data breach response plan, Villis began thinking about just how prepared her own school was in the event of a deepfake attack.
She questioned if the college was equipped with the tools to quickly identify, assess and respond to such incidents in the moment.
“I was thinking, ‘what would our response be?’ Because in that situation, normally it’s the principal who is the target – and when you have a crisis, your clarity about decision making and processes may not be as clear…”
What was needed was an explicit set of instructions about how the school would respond, detailing who would take specific actions and through what channels.
“The actual procedure you follow; who’s responsible for communication, how we communicate with the public, which platforms we use, and what we can and can’t say, needs to be clear,” she says.
Considering how you will safeguard the wellbeing of people targeted is also critical, the educator notes.
“For example, if the principal is targeted, the person who should be taking the lead should be the deputy principal so the principal is one step removed…”
Inspired by the ‘original thinking’ and broader policy outline from Brad Entwistle, who founded school marketing firm Imageseven, Villis set about piecing together a concrete response plan.
“I first came up with a policy and procedure, but when I submitted it to our Policy Committee, some members didn’t know what a deepfake was. So, definitions were needed at the start.”
The review process led Villis to add specific examples of potential deepfake incidents, helping staff understand a range of scenarios and appropriate responses.
“We included examples of both teachers and students being targeted to set the scene,” she says.
And while a deepfake incident might trigger the school’s wider critical response policy, the situation requires something more specific to work from, Villis says.
“I’ve already written up, for example, a letter of response to our community which the principal can send out if there’s a deepfake (incident).
“We wrote it, she’s checked it, so rather than having to write that in a critical time it’s there ready to go and adapt as needed.”
Guidelines for communication with the school community are a key part of the plan.
“Issue an initial communication within two hours of the crisis identification, confirming the synthetic nature of the content and advising against its distribution,” the final document reads.
“Aim for daily updates in ongoing situations. In single occurrences, one initial advisory communication followed by a more comprehensive summary of findings and actions may suffice,” it continues.
A stipulation that ‘balance is key’ here is bolded for emphasis.
“Maintain transparency and availability, while avoiding over dramatising or excessive communication that may heighten concerns,” the plan dictates.
It was decided that email would be the best and primary form of communication with the community in the event of a deepfake crisis, Villis notes.
“…because then the whole of the community can get that information they need. We also communicate through School Stream, which is kind of like your email, and so then you can actually track who has actually opened the email, who has the information and who doesn’t.”
Engaging in detailed discussions on social media, including via private messaging platforms, is to be avoided.
“But if you put [any communications up] on a social media platform you’re not capturing the whole entire community, and in the times of a crisis, that’s when people start talking and you want them to have the correct message,” Villis says.
To other school leaders considering developing their own deepfake response plan, Villis has a few words of advice.
“At a senior leadership and at a school level, open the discussion and the awareness about deepfakes – what they actually are and the prevalence of them.
“Especially, get your staff aware that leadership have an understanding and are taking action in terms of a policy around it…”
The passing of the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 has only reinforced the importance and relevance of this work for schools, Villis says.
According to the school leader, the Bill represents a ‘significant improvement’ in the legal protections available to teachers and principals who may be targeted by deepfake abuse.
“One of the most important advancements is the introduction of a federal criminal offence that explicitly criminalises the transmission of deepfake sexual material without consent,” Villis recently shared on LinkedIn.
Previously, victims such as educators had to rely on civil remedies, which could be time-consuming, expensive, and emotionally distressing, Villis says. With this Bill, law enforcement can now actively pursue perpetrators, shifting the burden away from victims.
However, Villis points out limitations in the Bill’s protections.
Firstly, the offence applies only when material is transmitted. If a deepfake is created but not distributed, no federal criminal offence has occurred.
This leaves a gap in prevention, especially in schools, where the mere existence of such material can cause significant distress or reputational damage.
Additionally, Villis warns the Bill does not criminalise threats to share deepfake content, which can be used to intimidate or silence educators even if the material is never published.
Recently it was reported that male students from a private school in Sydney were caught selling deepfake nude images of female students via social media.
The students reportedly used AI to superimpose the faces of their female peers onto sexually graphic images, also targeting girls from two other independent schools.
The images were reportedly being sold for less than $5 within group chats on Instagram and Snapchat.
Meanwhile, an eSafety spokesperson told 7NEWS.com.au the regulator had received 38 complaints about explicit deepfake images involving children under 18 in NSW since January 2023.