This post was co-authored by Ann Wilson and Rosalie Goldsmith.
In the previous blogs in this series we offered a three-tier strategy for thinking about assessment in the age of GenAI, suggesting that the three tiers might be assessments:
- where GenAI could be used as an assistant,
- where GenAI could be used as an integral part of the assessment, and
- where GenAI could not be used.
(From UCL: Using AI tools in assessment).
In this blog we will explore assessments where GenAI is integral to the assessment task. These could include work such as:
- drafting and structuring content
- generating ideas
- comparing content (AI-generated and human-generated)
- creating content in particular styles
- producing summaries
- analysing content
- reframing content
- researching and seeking answers
- creating artwork (images, audio and videos)
- playing a Socratic role and engaging in a conversational discussion
- developing code
- translating content
- generating initial content to be critiqued by students
Lund University in Sweden has what it terms a ‘permissive’ approach to the use of GenAI in assessment. In its words:
“…you can ban them if you like (good luck, but it’s part of the range of options!) or use them if you meet certain criteria:
- The assessment must still be valid (meet the intended outcomes) and secure (you know the student did the work)
- Tools must be accessible and compliant
- There must be clear communications with students – we provide a template.”
What would be the advantage to students of using GenAI as an integral part of the assessment?
The best-performing students will be those that develop the critical thinking and information literacy skills to appropriately enter inputs and analyse the outputs that … AI tools produce.
Neumann et al. (2023), We need to talk about ChatGPT: The future of AI and higher education.
The following examples are from UTS academics integrating AI into their assessment activities. You may remember an earlier blog by Anna Lindfor Lindqvist – a case study about how she is using AI as an integral part of the assignment in FEIT.
In a blog earlier this year, three FEIT academics talked about how they were using GenAI, and specifically ChatGPT, in their assessments.
Other ideas for using GenAI in assessment
Lydia Arnold has developed the following Jisc resource that is full of ideas about how you might use GenAI in assessments (log in with staff ID and password to view).
The table below presents one such idea, which will give you a flavour of the richness within. Is this something you could adapt to your subject or discipline? Do let us know – we are keen to build our knowledge about how this can improve our assessment practices:
Accessible text version: AI prompt competition (.docx, 16KB).
Ian Farmer from FEIT has some very useful advice about using the ROSE model to prompt AI in this video.
Ethan Mollick, in his article on whether AI will reshape the future of work (spoiler alert: the answer is yes), describes the Jagged Frontier of AI:
Imagine a fortress wall, with some towers and battlements jutting out into the countryside, while others fold back towards the centre of the castle. That wall is the capability of AI, and the further from the centre, the harder the task. Everything inside the wall can be done by the AI, everything outside is hard for the AI to do.
Ethan Mollick, Centaurs and Cyborgs on the Jagged Frontier
The wall is invisible, so it is difficult to judge what GenAI can and can’t do well. This means we need to guide our students to develop their evaluative judgement about what a good assessment task, assignment or artefact looks like. Even when GenAI develops a piece of work, our students still need to be able to judge whether it is a good version of that work. In this way we can think of GenAI as augmenting our students’ capabilities, rather than automating them.