Led by Jennifer Meyer from the University of Vienna, the research examined how successfully 937 German students from Years 7, 8 and 9 engaged with computer-based feedback given on a writing task.
Students were asked to revise their texts in accordance with the feedback.
Some 20 per cent totally ignored the feedback and made no changes to their texts.
Meanwhile, 47 per cent of students who did engage with the feedback showed no improvement in their final work.
Commenting on the ‘sobering’ research in a Substack post, Dr Carl Hendrick noted there were several ‘paradoxical’ findings that challenge conventional understandings of teacher feedback processes.
“Perhaps most surprising is that higher-performing students were actually less likely to improve after receiving feedback.
“This counterintuitive result suggests that there’s either a motivation issue where these students may feel less compelled to make substantive changes, or there is some kind of ceiling effect of diminishing returns where further improvement becomes increasingly difficult,” he wrote.
Hendrick, a science of learning expert and Professor of Education at Academica University of Applied Sciences in Amsterdam, also called out a ‘troubling disconnect’ that became apparent in the study.
“…many students reported finding feedback useful but still failed to implement it effectively.
“In other words, even when they know it’s useful, they still either ignore it or don’t know what to do to improve.”
Male students and those with lower cognitive abilities were more likely to show disengagement with feedback, the study found.
This ‘significant’ gender gap remained even after controlling for cognitive abilities, motivation and prior performance, “suggesting deeper socialisation factors at play,” Hendrick said.
Male students might be more likely than female students to ‘drop out’ in the initial phases of feedback processing, researchers proposed, leading them to decide not to accept and act upon the feedback.
Prior research indicates that male students might read the feedback but still not actively engage with it because they don’t see the value in it.
Referencing one meta-analysis from the ‘90s, researchers noted that “male students may be particularly likely to respond to the competitive nature of evaluative achievement and, hence, to adopt a self-confident approach that leads them to deny the informational value of others’ evaluations”.
Indeed, the research found female students perceived the feedback as more useful than male students did, but these differences disappeared when controlling for non-cognitive measures.
“Stereotypes also relate to teacher feedback practices: Teachers provide more feedback to the group they have negative stereotypes about in the given domain; this indicates that feedback practices can depend on domain and gender stereotypes and, therefore, influence the feedback that students receive across their schooling careers,” researchers added.
This might shape how students react to feedback, they pointed out.
“Accordingly, it would be interesting to investigate whether female students also tend to engage more with feedback in stereotypically male domains.
“For example, research on feedback perceptions has shown that male students tend to perceive that they receive more feedback in maths compared to female students.”
The nature of teachers’ feedback might also account for the different responses between girls and boys.
Earlier findings indicate gender differences appear only for specifically unfriendly statements, the study notes, with women perceiving such feedback as more constructive than men do.
Hendrick argued that we’ve known for a long time that not all feedback is effective.
“Kluger and DeNisi’s 1996 meta-analysis found that while feedback interventions overall had a modest positive effect on performance, 38 per cent of the feedback interventions actually decreased performance,” he wrote.
He warned that without addressing the factors causing student disengagement, many educators “may be unknowingly performing a laborious, time-intensive ritual that serves institutional expectations rather than actual student learning”.
OECD survey data suggest Australian teachers spend 33 per cent of their time each week on ‘core’ teaching activities – such as correcting student work, preparing lessons, teamwork and professional development – four times the share (8 per cent) they spend on general administrative activities, according to the Grattan Institute.
Marking requirements in particular reportedly consume much of teachers’ time outside school hours and during school holidays.
EducationHQ has heard reports that teachers are taking sick leave solely so they can finish a backlog of student marking and feedback tasks.
Yet despite the study’s findings, Hendrick argues that marking is not a total waste of teachers’ time.
“Carefully reading a student’s work demonstrates respect for their efforts and helps teachers gain deeper insights into their thinking, misconceptions, and progress.
“This close analysis allows teachers to better understand individual students’ strengths and challenges,” he noted.
Crucially, marking also offers clear guidance that can direct instruction, he added.
“When teachers analyse student work, they can identify patterns across the classroom, spot common misconceptions, and adjust their teaching accordingly.
“This diagnostic function helps teachers make informed decisions about what to teach next, which concepts need reinforcement, and which students might need additional support.”
Yet if teacher feedback is not effective or students are not engaging with it, is there a solution at hand?
Yes, there is, Hendrick argued.
“The future of marking and feedback may lie in combining two emerging approaches: artificial intelligence and comparative judgment.
“AI tools can now analyse student work with increasing sophistication, identifying patterns, providing targeted feedback, and even adapting to individual student needs.
“Meanwhile, comparative judgment – where teachers make quick relative assessments between pieces of work rather than marking against complex rubrics – has been shown to produce more reliable and efficient evaluations,” he offered.