We’ve heard a lot about feedback being important, but so often we’re basing assumptions on qualitative responses from a few enthusiastic learners. What about in a class of 600? Do the numbers stack up? Further, feedback takes many forms – how do we know which approach works best? And how can we teach a large cohort of teaching associates and students to give good feedback?

Testing the evidence

A team of us (Simon Knight, Andy Leigh, Yvonne Davila, Leigh Martin, Dan Krix, and Alexandra Thompson) set about exploring the data from an innovative benchmarking task run in a large (~500-student) first-year undergraduate life sciences subject: Biocomplexity. You can check out the slides below (presented at the Teaching and Learning Forum), or read the paper online:

Knight, S., Leigh, A., Davila, Y.C., Martin, L.J., & Krix, D.W. (forthcoming). Calibrating Assessment Literacy Through Benchmarking Tasks. Assessment & Evaluation in Higher Education.


In this subject, students develop their evolutionary biology knowledge alongside their academic and communication skills in that context. To support students' understanding of the assessment criteria, they: (1) complete a benchmarking task, in which they use SPARKPlus to assess three exemplars, subsequently receiving feedback on their accuracy and viewing all of the other feedback given, including the instructor's; and (2) complete a self-assessment of their own assignment, using the same criteria as those assessed in the benchmarking task.

Benchmarking

Benchmarking is intended to: (1) engage students with the assessment criteria and their application; (2) critically expose students to exemplars of varying quality, and to the evaluation of these exemplars; and (3) provide diagnostic information to students and the teaching team regarding the calibration of their evaluative judgement against the assessment criteria.

Data from the 2012–15 implementations of this innovation were analysed to investigate the relationship between the accuracy of students' assessments and their learning outcomes, and to understand the features of quality feedback in these tasks. Analysis indicates that:

  • students who complete the benchmarking task perform better
  • students who are more accurate self-assessors perform better
  • students who are more accurate in the benchmarking task are also more accurate in the self-assessment task
  • students are overwhelmingly positive about the task, and are able to articulate its key intended learning outcomes
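To make the idea of "calibration" a little more concrete, here is a minimal sketch of how one might quantify how closely a student's benchmarking marks track the instructor's, and relate that to assignment performance. The data, the metric (mean absolute deviation), and the use of a simple correlation are illustrative assumptions for this post only; they are not the SPARKPlus export format or the analysis reported in the paper.

```python
# Illustrative sketch only: hypothetical data and metric, not the actual
# SPARKPlus data format or the analysis used in the study.
from statistics import mean, correlation  # correlation requires Python 3.10+

# Hypothetical marks (0-100) that three students and the instructor gave
# to the same three exemplars in the benchmarking task.
instructor_marks = [45, 68, 85]
student_marks = {
    "s1": [50, 70, 80],
    "s2": [30, 90, 60],
    "s3": [48, 65, 88],
}

# Hypothetical final assignment marks for the same students.
final_marks = {"s1": 72, "s2": 55, "s3": 81}

def calibration_error(student, instructor):
    """Mean absolute deviation from the instructor's marks: lower = better calibrated."""
    return mean(abs(s - i) for s, i in zip(student, instructor))

errors = {sid: calibration_error(marks, instructor_marks)
          for sid, marks in student_marks.items()}

# A negative correlation here would suggest that better-calibrated students
# (smaller error) tend to score higher on their own assignment.
students = sorted(errors)
r = correlation([errors[s] for s in students], [final_marks[s] for s in students])
print(errors, round(r, 2))
```

In this toy data the correlation comes out negative, which is the pattern the findings above describe: students whose judgements sit closer to the instructor's tend to do better on their own work.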

Feature image by José Alejandro Cuffia.
