And as companies tussle to market their products to schools in the AI arms race, there’s a lot on the line if we fail to get this right, Dr James Curran, founder and CEO of Grok Academy, says.

We sat down with the expert following his talk at EduTECH to unpack the pressing issues facing teachers and school leaders when it comes to generative AI tools. 


SD: Hi James, you’ve said there’s great value for teachers if they can better understand the ‘intuition’ underpinning generative AI. Do you think that enough of the profession are across how this technology works at this fundamental level?

JC: I think that most of the explanations that exist out in the general public are either very high level … or they’re highly technical, deep in the mathematics of how it works. And there’s not a lot of great descriptions that, I think, build that intuition.

To be fair to teachers, I don’t think we’ve given them the materials yet that they need to do this. And then there’s a general problem with the teaching profession: there’s not enough time to do learning.

"... while as urgently as we can, every teacher needs to develop an intuitive understanding of [generative AI], there are some schools, for example, a metro private school where probably every child in Year 9 has got an account for ChatGPT and other things, and is maybe using them extensively – and there are other schools, let’s say regional schools, lower socio-economic schools, where generative AI is not in the top 100 problems that the school leaders are worrying about.

So, I think there is a bit of a two-speed thing going on here, but fundamentally, my message is that the technology is going to keep changing. What we say about the technology, and exactly what it’s capable of doing today versus tomorrow, will continue to evolve.

Every edtech product out there is racing to include generative AI, or at least to market that it does, and the only way for our teachers to really navigate all of this change is to have some idea about the fundamentals.

We know that generative AI uses probability to model language. How can teachers and school leaders harness this to help guide their use of the tech?

The first thing is that the intuition allows you to see why the tools are not reliable generators of fact, because while much of the time … it has a high chance of generating something that is correct, there is still a reasonable chance that it will produce something nonsensical: something that sounds like fluent, thought-out language but, in fact, is not factually or logically correct, because the statistical models don’t think.

They don’t actually process facts in that way; their whole way of operating is to ask, ‘what is the next most useful word, probabilistically, to generate in response to what the user prompted?’
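To make that intuition concrete, here is a minimal sketch of generation as next-word sampling. It is a deliberate toy with invented probabilities (real systems learn distributions over tokens with large neural networks), but the generation loop captures the same idea:

```python
import random

# Toy "language model": for each context word, a hand-written
# probability distribution over possible next words. Real systems
# learn these distributions from data; the numbers here are invented.
next_word_probs = {
    "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.3, "ran": 0.7},
    "moon": {"sat": 0.1, "ran": 0.9},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
}

def generate(word, max_words=5):
    """Repeatedly sample a likely next word given the previous one."""
    out = [word]
    for _ in range(max_words):
        dist = next_word_probs.get(word)
        if dist is None:  # no known continuation: stop generating
            break
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog ran away"
```

The output reads as fluent language because every step follows the probabilities, yet nothing in the loop ever checks whether the sentence is true, which is exactly why these tools can produce confident rubbish.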

Say a teacher asks it to generate a lesson plan for a particular content description in the Australian Curriculum. It may produce something that is reasonable output, but it may also randomly generate something that doesn’t align to that content description at all, and I’ve seen that.

I've put all of the Australian Curriculum – Digital Technologies content descriptions into ChatGPT and other tools, and I can tell you with a high probability they’re going to produce output that doesn’t actually directly address the content description, although it will say very authoritatively that it does.

These tools can talk a big game, but in reality, they’re not actually taking into account the things that they say that they are. The marketing for these things is already promising to solve all of teachers’ problems.

I think the only way you can truly distinguish reliable and safe uses of generative AI from examples where it produces rubbish is intuition. It is the only thing that helps you understand that.

Ultimately, this is what we want the students to [grasp]. They should be using generative AI in school because they need to see its strengths and weaknesses.

They need to see it produce rubbish and understand why, and understand that they and teachers are professionally responsible for the output, and so they should be checking and fact checking things carefully before they stand behind [the generated] words.


To what extent, do you think, are commercial interests clouding the picture for schools with generative AI?

I think this is a technology that has a huge amount of hype behind it at the moment, but it also has significant transformative power, and only time will truly tell what things will change and what things won’t change with generative AI built into so many of the products that we use.

To put it bluntly, when companies like OpenAI and Microsoft and all of the big tech companies are investing heavily in a generative AI arms race, they all need to turn that into a commercial success to be able to continue justifying it…

From my perspective, many of these tools are not what I would call ‘educational technology’; they are general technology. The data that these language models are trained on, and so on, is general text slurped up from the internet.

It’s not a model that’s been specifically designed to be educationally targeted…

Students will inevitably be the ones left to grapple with the ethical conundrums generative AI presents. What are some of the big ones they’ll face, do you think?

None of us know what the set of new ethical conundrums will be, but we do know that each kind of new technology introduces a new set of ethical considerations.

I suppose we have all learnt, and I’ll just use social media as one example, that if the technology develops, and we’re not having those ethical conversations and we’re not developing the ethical ‘muscles’ of our kids to ask these questions, then the technology will continue to develop in a way that suits the big tech companies and not necessarily in a way that results in a better future that we all want.

To me, these tools really mean that the ethical understanding, the general capability in the Australian Curriculum, is absolutely something that we need to keep exercising with kids…

I think there are ethical problems in how the data is created, filtered, curated, corrected, all of those things we should be asking about. And there are people out there asking, but I think the vast majority of consumers and users would have no idea about these things.

And then a really critical question to always ask is, ‘just because we can, does it mean we should do something with generative AI?’

For example, it takes more energy to write than it does to read. I’m going to spend more time writing an email than it’s going to take for me to read an email.

But with generative AI, I can put down two or three dot points and ask it to produce an email that then takes the recipient longer to read than it took me to write, especially if I don’t bother reading it as a human before I send it.

So, it actually flips the equation. Say I want to write a bunch of marketing content, or I want to send out an EDM to my customers, the amount of effort that I now need to put into producing those is way lower than it used to be, but the cost of other people reading that has still stayed high ... and I think that’s quite a dangerous thing, and not something that’s been discussed very often.

There’s an ethical question of ‘just because I can produce more texts, does it mean that I should?’ We all know that the world is full of marketing guff and our inboxes are full of emails that we don’t have time to read already…

It sounds like an efficiency trap…

The main winner in that trap is actually the folks who are selling us the technology on both ends, to do the generating and then the summarising of that text.

Just because a teacher can generate emails for parents, or can read and give feedback on a student’s work using Gen AI, doesn’t mean that’s actually the most valuable thing.

The teacher could ‘avoid’ doing that work, but they’ve also lost all of the intuition that comes from having read that student’s work closely or making that connection with the parent.

I am very supportive of using generative tools to help teachers, but I think it’s very dangerous to say it’s ‘to help teachers with their workload’, because the first question you should ask is, ‘why does that work have to be done at all, if it can be done by a tool rather than a human?’

A lot of the time the answer is, ‘well, there’s a lot of paperwork, red tape and so on that we actually don’t need to do’. We should be re-examining what we’re asking teachers to do, not saying, ‘I can use a tool to generate content that I probably won’t read and I’ll upload into a system where someone else won’t read it either’.

This is one [issue] our current teacher workforce will need to consider, but ultimately, our kids will be navigating a world that these things happen in, and I think the line between what is useful and what is not is a very subtle one ... it’s a very slippery and subtle slope between that and scenarios where we’re just generating huge amounts of text.

Is there anything else you’d like to mention?

One of the things that is really important to put in these articles is it’s not surprising that teachers are turning to generative tools to manage their workload. It is a profession that’s under profound work stress where we don’t do a great job of providing resources for teachers.

I think teaching is one of the few professions where you could turn up as, let’s say, a new Year 3 teacher at a school, and find that the previous Year 3 teacher left the school with their teaching program, and there was [nothing] left behind.

I don’t know of any other industry where staff would be allowed to walk off with the products or part of the products, meaning that the next person who came in had to start from scratch.

At the same time, while we have a national curriculum, we don’t actually have a national minimum standard set of resources and teaching program that any teacher could have access to if they need it – if they’re a grad teacher struggling through their first few years, if they’re new to teaching a subject, if they’re new to teaching a year level.

We don’t have those high quality, standardised sets of resources, the kind of things that the Grattan Institute report was recommending.

And so, it’s no surprise to me at all that teachers would turn to whatever technology is available to help them navigate what is still an incredibly silly system.

If we’re going to worry about the quality of generative AI output, the first thing we should say is, ‘well, why are teachers actually turning to generative AI to do these things when there are 380,000 teachers in Australia all reinventing the wheel on a day-to-day basis?’

We should actually be doing this in a much more effective, collective manner.