Authored by Jenny Wallace with contributions from Kathy Egea, Alisa Percy and Joshua Dymock

The recent First and Further Year Experience (FFYE) Forum explored the importance of students evaluating their own learning processes and work in order to become lifelong, adaptable learners. This capability, known as evaluative judgement, is increasingly urgent in the age of generative AI.

Participants in the Forum posed open questions about how generative AI is changing our conversation about learning and assessment, including what kind of evaluative judgement is needed to engage with this evolving technology in transparent and critical ways.  

“Experiential, practised and dialogic”

The Forum began with a provocation:

How are we developing students as critical and ethical consumers and producers of knowledge in civil society? To what extent are universities producing ‘professionals’ that reproduce the status quo, or educating ethical and responsible global citizens and changemakers?

Dr Alisa Percy argued that the growth of generative AI means critical literacy and evaluative judgement now need to be prioritised in the ways we design for learning. Locating critical literacy and evaluative judgement at the centre of our curriculum design requires a greater focus on students’ agency: their capacity to ask difficult questions about the information they access, and to evaluate the quality of their own work and the work of others (Boud et al., 2018).

We can scaffold these capacities with activities that are experiential, practised and dialogic (Sadler, 2010). Such activities include evaluating exemplars, giving and receiving peer and tutor feedback, and student self-assessment and reflection (Tai et al., 2018).

The Forum was framed around the critical role that dialogue plays in learning: what kind of dialogue are we fostering, and how does generative AI fit in? 

Keep students at the centre of the conversation 

To explore this question, we heard from academics who have implemented evaluative judgement strategies in their teaching. Dr Jessica Appleton and Sonia Matiuk from the School of Nursing and Midwifery explained how they used scaffolding and structured analysis to develop nursing students’ critical dialogue with their subject readings. Jessica and Sonia found that students showed a greater understanding of the texts and worked collaboratively to present their conclusions.

Dr Evana Wright from the Faculty of Law told the Forum how she used a ChatGPT-generated response to an essay question as the basis for a student assessment. Students wrote a critical review of the response and then a self-reflection on the limitations of AI systems, with a focus on legal practice. From student feedback, Evana discovered that while students needed more scaffolding with reflection, the assessment gave them an improved understanding of the implications of using generative AI in legal practice. 

More feedback, please!

For our student panel, generative AI is a critical tool in the development of evaluative judgement. Lucas, Shaowei, Bianca and Ashleigh shared experiences of in-class activities that help them evaluate the quality of their own work, from analysing and applying marking criteria to discussing their assignments with their tutors. The students talked about the ways they use generative AI in their learning, emphasising its value in helping them refine their writing. They also stressed that they approach AI output with caution, as it does not always represent diverse perspectives.

These responses highlight students’ desire for more dialogue about their subjects and their learning. They’re looking for supportive, actionable and personalised feedback from their teachers, and for guided use of generative AI to augment it. The panel finished with an appeal that echoed Alisa’s initial call for critical engagement: more open conversation about the responsible use of generative AI in higher education, particularly around how AI sources and presents its information.

AI as creative co-designer

To wrap up the themes of the Forum with some practical takeaways, Joshua Dymock explored how we can manage our dialogue with generative AI in a way that develops students’ evaluative judgement. He suggested that while generative AI can assist students with brainstorming or polishing their writing, its real value lies in human-GenAI co-design, where there is a high degree of dialogue between the user and the AI tool.

The effectiveness of this dialogue depends on the prompts we give the AI. For example, if we want generative AI to help us develop a set of educational guidelines, we should define the ‘role’ we’d like the AI to emulate (e.g. academic writing tutor), tell it the goal of the task, and give it concise instructions. Adding this detail helps elicit a better response, and also gives the user feedback on their own understanding of the task.
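To make this concrete, here is one illustrative prompt that combines all three elements. The wording is a hypothetical sketch, not an example presented at the Forum:

“You are an academic writing tutor. My goal is to develop a set of educational guidelines on the responsible use of generative AI for first-year students. Draft five guidelines, each a single sentence in plain language, and briefly explain the reasoning behind each one.”

Spelling out the role, goal and instructions in this way not only shapes the AI’s response; it also tests whether we can articulate the task clearly to ourselves.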

Take a look at these resources to learn more about the dialogue around evaluative judgement and the principles for effective and ethical use of GenAI. You can also hear more about the student view of GenAI, and explore these topics more broadly with a review of events from AI x L&T week.

Join the discussion