This post was co-authored by Caroline Havery, Rosalie Goldsmith and Ann Wilson.
Amir Ghapanchi makes a strong case for rethinking assessment in the light of GenAI in his recent Times Higher Education article, How generative AI like ChatGPT is pushing assessment reform. The story is not only about academic integrity, but also about learning and how we capture the process of learning. This brings us to David Boud's three purposes of assessment. The first is to assure that the learning outcomes have been met (summative assessment); to do this, we need to know that the work we are marking has been done by our students. We don't need this assurance every time, but we certainly need to be able to get it some of the time. The other two purposes Boud identifies are that assessment enables students to use information to aid their learning now (formative assessment) and that it builds capacity in students to judge their own learning (sustainable assessment).
In the first blog in this series, we explored a three-tier strategy for thinking about assessment in the age of GenAI, and suggested that the three tiers might be:
- assessments where GenAI could be used as an assistant,
- assessments where GenAI could be used, and
- assessments where GenAI could not be used.
(From UCL: Using AI tools in assessment).
It is this last type that we will discuss in this blog. In the TEQSA document Assessment reform for the age of Artificial Intelligence this assessment is referred to as “…security at meaningful points across a program to inform decisions about progression and completion”. It is the type of assessment that you are probably most familiar with.
It is a type of assessment where you witness a student's performance and know that they have done the work themselves without assistance; the student will usually be co-located with you, or your proxy, to ensure that the work is their own. As this form of assessment requires a great deal of time and personnel to manage, you will use it sparingly in your course. Use it when you are assessing ideas that bring together thinking essential to your course: the course intended learning outcomes, and the knowledge essential to student understanding and future success. You could think about this in relation to programmatic, or course-wide, assessment, where this type of assessment is used sparingly but strategically across the course. Some examples of this type of assessment are:
- In-person unseen examinations
- Class tests
- Some online tests
- Vivas or oral presentation/questions
There are some other types of assessments you may not yet have tried, which we explain below.
Interactive oral
An interactive oral is a conversation, designed to be as close to a real-life conversation as possible, and therefore more authentic. It is less like the question-and-answer format of a viva, and more like a conversation, or exchange of ideas. Hannelie Du Plessis-Walker (Coventry University) explores a range of options for these assessments in the Interactive Orals Guide (PDF file).
Deakin University uses interactive orals in the business school – learn more from their Interactive Oral Assessment page.
In their paper Evaluating competency development using interactive oral assessments (PDF download), the authors describe using interactive orals in Engineering assessments. The paper Scalable Vivas: Industry Authentic Assessment to Large Cohorts through “Debate Club” Tutorials (PDF download) also shows the use of similar assessments in Engineering.
Objective Structured Clinical Exam (OSCE)
An OSCE is used frequently in nursing and midwifery assessments. It is a performance exam where students participate in a simulated clinical scenario. Leslie McInnes explained the format and structure of an OSCE in the blog Remote oral presentations: bringing the OSCE online, and an Online OSCE for Nursing Education is presented in our Adaptable resources for teaching with technology on LX Resources. Perhaps you could adapt the OSCE to your discipline?
Discussion-based assessments
Group interviews where the interviewees are given a task to solve collaboratively: students can be evaluated on several professional attributes during the task, e.g. communication, problem solving, teamwork and negotiation.
Course-based assessment
Before we finish, it might be useful to think about course-based (sometimes called programme-based) assessment, where the assessment may not necessarily sit in each subject but focuses on the achievement of the CILOs (course intended learning outcomes). This might be a way of reducing the time and personnel costs of these types of assessment, by ensuring that they assess the CILOs rather than subject-level outcomes. A good starting point might be to watch the University of Bradford's video on Programme Assessment Strategies.