A new paper titled The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI outlines how tools like ChatGPT can actually short-circuit the retrieval, error correction and schema-building processes needed for ‘robust’ neural encoding to occur.

“In the age of generative AI and ubiquitous digital tools, human cognition faces a structural paradox: as external aids become more capable, internal memory systems risk atrophy,” the researchers flag.

One of the authors, Dr Michael Johnston from The New Zealand Initiative, says when students rely on AI as a mental crutch from the outset, this damages systems in the brain that are essential for mastery, critical thinking and long-term retention.

“The foremost thing really, whether it’s AI or any other technology, is that if young people are able to use technology to do things before they’ve learned to do [them] themselves, and if those things are important, either to further learning or to their lives and prosperity, then we’re actually selling them short by letting them use the technology,” the cognitive psychologist tells EducationHQ.

This is not about demonising AI or other tech tools used in schools, the expert clarifies.

Rather, it’s recognising what’s at stake if young people avoid doing critical mental heavy lifting.

“When we learn, we can think of that as a process of storing information and knowledge and skills and things in long-term memory, and it has to be organised well into things called cognitive schema, which basically are linked bits of knowledge,” Johnston explains.

“That’s what supports us being able to think critically. It supports creativity as well – you can’t think critically or be creative without personally-held knowledge.

“And so if we’re relying on ChatGPT or any other AI tool to give us knowledge as and when we need it, rather than holding it ourselves, or to produce our thinking as it were for us by writing documents, then we’re not building those schema in long-term memory that are going to support us being able to think for ourselves.”

There is a big question to be asked here: could our increased reliance on AI as a ‘memory aid’ be eroding our overall cognitive capacity as a society? Are we getting dumber the more we offload to tech?

Leading UK assessment expert and author Daisy Christodoulou, director of education at No More Marking, suggests this may well be the case, noting a reversal of the ‘Flynn effect’ appears to have taken place.

“For most of the 20th century, scores on IQ tests steadily increased. This trend was termed ‘the Flynn effect’, after James Flynn, who described it. But in developed countries, this increase appears to have stopped and even reversed for cohorts born in the latter decades of the 20th century,” Christodoulou writes on Substack.

She notes that Flynn himself analysed historical UK data and concluded that there was a drop in teens’ IQ scores from the 1980s onwards.

Could our ‘cognitive offloading’ to tech, and now AI, be making us less smart?

“Most people don’t have to do hard cognitive labour, and technology can provide us with so much easy entertainment and distraction.

“But these improvements might have created a new problem: it’s now easy to be stupid,” Christodoulou argues.

It’s for this reason that schools ought to respond by becoming “gymnasia for the mind”, the expert proposes.

“They should be places that allow you to practise and acquire the basic skills that are no longer directly rewarded in daily life, but which are still vital.”

Johnston agrees entirely.

Yet, as Christodoulou reminds us, others view the emergence of AI technologies as a chance for students to stop having to learn the basics and to instead focus on those things machines are unable to do.

This line of thinking has posed a threat to education well before AI came into the picture, Johnston says.

Over the last 20 years or so, he says, schools in Australia, New Zealand and in most English-speaking countries adopted a ‘child-centred learning’ philosophy with roots in social constructivism.

“[It’s] the idea that teachers are relegated to being facilitators of learning rather than leaders of learning in the classroom, that children will discover knowledge for themselves and so on.

“And with anything complex, that’s just not true, it doesn’t work. And we’ve seen it in the educational data from around the world in studies like PISA, where most of these countries have been going down, and down, and down over time...”

“But once AI comes along and gets applied in a situation like that, the temptation becomes to think, ‘oh, this is just another tool that they can use to learn for themselves…”

Yet with a sound explicit teaching approach based on cognition and neuroscience in place, “we are able to see clearly both the risks and opportunities that AI and other tools offer,” the expert says.

When used judiciously, AI can boost student learning significantly, Johnston adds.

Take feedback, for example.

“One of the most powerful things that supports learning is (teacher) feedback … and Professor John Hattie published a book a couple of decades ago called Visible Learning, and in that he found that feedback was the most effective teaching technique – bar none. Now AI, if it’s properly set up, can do that really well,” Johnston says.

Say a student writes an essay and feeds it into AI. Instead of simply correcting any errors, the tech could offer something far more sophisticated and useful, he explains.

“It might … be able to analyse the argument of the essay and say, ‘have you thought about this part, it doesn’t really make sense’, or, ‘you could explain this better’, or ‘the introduction isn’t very strong because it’s not very engaging’, whatever it might be, and then the student is able to take that feedback and apply it.

“So, in that way the AI can become a really useful teaching assistant in the classroom.”

Of course, there is a big caveat to this, the expert says.

“It should always be under the supervision of a human teacher, because I think ethically we need to maintain human control of these things and not just relegate everything to the machine.

“Otherwise, we will end up being ruled by [it], not because the machine has any great desire for power – because of course it doesn’t – but once we get lazy, we get less capable, and we just become overly reliant on the technology.”