Having cast a critical eye over the historical context of Running Records and the research that was used to justify its widespread use, La Trobe University PhD candidate Annie Unger advises school leaders to look elsewhere if they want an evidence-based assessment to gauge students’ reading progress and inform teachers’ practice.

Running Records is based on a theory of reading and reading acquisition tied to the (now discredited) ‘three-cueing’ instructional approach developed by New Zealand educator and psychologist Marie Clay.

Limited to studying cognitive processes through observable behaviours, Clay developed an ‘error analysis’ approach in the 1960s that homed in on the errors and self-corrections children made when reading in the early years as a means of tracking their progress, explains Unger, a former literacy interventionist.

But when a cognitive science lens is applied, the approach doesn’t stack up, the researcher warns.

“When you look at that time in the ’60s, they didn’t have access to [things like] neuroimaging or computer models that we have today,” Unger tells EducationHQ.

“So, they had to find whatever they could to learn more about what was happening in kids’ brains … so they would say, ‘well, what is one thing we can analyse? We can analyse these errors and look for the pattern’.

“And when they analysed the errors, they did find these really consistent patterns of kids making certain categories of mistakes…”  

Part of the problem is that Running Records seem to make ‘intuitive sense’ to teachers, she says.

“Running Records are often like furniture in a school, they’re so familiar and I think they have a lot of trustworthiness built on that familiarity, and they often make intuitive sense to people.

“Every time you do a Running Record you get evidence of kids making meaning, structure, and visual errors. It’s a very consistent pattern.”

Unfortunately, the theory behind the assessment is based on the misguided premise that by analysing children’s reading errors you’re granted a ‘window’ into hidden mental reading processes, Unger notes. 

“It’s a logical flaw – even though the pattern makes it seem like you’re getting very trustworthy evidence, it’s actually directing your attention in the wrong spot.” 

“What we actually see is that when kids ... can’t read [a given] word, they’re essentially having to [draw on strategies] other than reading.”

When children struggle to nail a word, they have to draw on the information they do have to make a ‘good guess’ at what it could be – this might mean using the meaning or grammatical context, or various letter-sound relationships, to try to find the correct answer, Unger adds.

“But because they’re not particularly successful strategies, a lot of the time kids make mistakes – and so that’s why we see these patterns so much when kids are making mistakes, and it’s why it feels like we get such consistent data and such consistent evidence on Running Records.”

Despite a shift towards structured literacy and instructional approaches informed by the science of reading in schools across Australia, Unger suspects many still use Running Records.

The scale of the assessment’s use is hard to quantify, she adds.

“In terms of academic research, there isn’t much to tell us.

“But just in terms of what you hear when you go on a Facebook group of teachers and you see discussions, I think they are certainly still being used.”

School leaders have made “quite substantial” investments in the assessment tool as well, the researcher notes.

“I think there has been a really big shift away, but for a lot of schools … often it’s much easier to introduce something than to stop using [it].”

Today, multiple commercial programs and assessment systems continue to use Running Records or assessments based on them.  

And when it comes to the research base supporting their effectiveness over the decades, Unger found ‘circular reasoning’ at work.

“[This research] ties in very much with the teaching approaches that were based on [Running Records], so it was all very cohesive and sort of coalesces together, so people didn’t necessarily have a reason to question it,” she shares.

The core assumption behind Running Records was left unchallenged for a long time – a phenomenon that speaks to a broader trend in education, Unger suggests.

“When I looked at a lot of the research around Running Records, the peer-reviewed research from the academic journals, the research is – fair enough – based on the assumption that Running Records are a helpful, valid tool for assessing kids’ reading, rather than [questioning], ‘is this a valid tool?’ as the starting point.

“And so what happens is you can build quite a large body of literature around the topic, but you haven’t taken that first step of going, ‘is this a useful body of literature to build?’”

Unger reminds us that even the most promising hypotheses, built on the best available evidence at a given moment in time, can ultimately turn out to be unsupported.

Her research offers a ‘cautionary tale’ about adopting practices at scale without rigorous evidence to support them, and warns we must be prepared to change our minds when it becomes clear that a favoured approach lacks sufficient evidence, or when a new body of evidence emerges against it.

“In education, we must be cautious not just of failing to adopt evidence-based practices but also of failing to monitor and critically appraise the evidence supporting practices we do adopt,” she writes.

As for an alternative to Running Records, Unger nominates normed tests of oral reading fluency, which give teachers an opportunity to listen to children read and a means of analysing their performance against standardised norms.

“They can be used to help measure whether students are at risk in terms of reading – they are another way that you can listen to a child read and get data from that.

“And just listening to a child read is a really helpful thing for a teacher. Certainly, saying Running Records aren’t the best way to do that isn’t to say that listening to kids read isn’t very valuable, and knowing [a student] found a text easier or harder to read is really insightful information for a teacher.

“But we don’t need the process of the Running Record to do that...”

Unger has some final words of advice for school leaders:

“I would say that if you’re looking for an evidence-based reading assessment, I’d look beyond Running Records.”