The Society for Learning Analytics Research (SoLAR) is an interdisciplinary network of leading international researchers who are exploring the role and impact of analytics on teaching, learning, training and development. This upcoming SoLAR panel session brings together expertise from multiple quarters to explore arguments and evidence about the use of online proctoring.

A pandemic legacy, continued?

The emergence of online exam proctoring (aka remote invigilation) in higher education may be seen as a function of multiple interacting drivers, including:

  • the rise of online learning
  • emergency exam measures required by the pandemic 
  • cloud computing and the increasing availability of data for training machine learning classifiers 
  • university assessment regimes
  • rising concerns around student cheating
  • accountability pressures from accrediting bodies

Commercial proctoring services that claim to automate the detection of potential cheating are among the most complex forms of AI deployed at scale in higher education, requiring various combinations of image, video and keystroke analysis, depending on the service. Moreover, due to the pandemic, they were introduced in great haste at many institutions in order to permit students to graduate, with far less time for informed deliberation than would normally be expected. Consequently, this form of automation has proven controversial: protests at some universities led to the withdrawal of the services, and research is beginning to clarify the ethical issues and produce new empirical evidence.
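To give a concrete, deliberately simplified sense of what "keystroke analysis" can mean in this context, the toy Python sketch below flags an exam session whose typing rhythm deviates sharply from a student's own baseline. It is purely illustrative: the function names, the single feature and the threshold are assumptions made for this example, not how any commercial proctoring service actually works, and in practice such a flag would at most prompt human review rather than indicate cheating.

    # Toy illustration only: one possible "keystroke dynamics" signal.
    # Real proctoring services combine many richer signals (video, audio,
    # screen activity) and human review; nothing here reflects any vendor.
    from statistics import mean, stdev

    def inter_key_intervals(timestamps_ms):
        """Millisecond gaps between successive keystrokes."""
        return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

    def typing_rhythm_flag(baseline_ms, exam_ms, z_threshold=3.0):
        """Flag the exam session if its mean typing gap deviates strongly
        from the student's own baseline (a simple z-score comparison)."""
        base_gaps = inter_key_intervals(baseline_ms)
        exam_gaps = inter_key_intervals(exam_ms)
        mu, sigma = mean(base_gaps), stdev(base_gaps)
        if sigma == 0:
            return False  # no baseline variation to compare against
        z = abs(mean(exam_gaps) - mu) / sigma
        return z > z_threshold  # True = "route to human review", not "cheating"

    # Example: baseline typed at ~150 ms per key, exam session at ~600 ms per key.
    baseline = [0, 150, 310, 450, 600, 760, 900]
    exam = [0, 600, 1190, 1820, 2400, 3010, 3600]
    print(typing_rhythm_flag(baseline, exam))  # likely True

Even this trivial check raises the questions the panel will discuss: whose baseline is "normal", what counts as a meaningful deviation, and what happens to the students who are flagged.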

Numerous institutions are satisfied that the services they procured met the emergency need, and are continuing with them, which would make this one of the ‘new normal’ legacies of the pandemic. Critics ask, however, whether this should become ‘business as usual’. Regardless of one’s views, the rapid introduction of such complex automation merits ongoing critical reflection.

Exploring the arguments and the evidence

The panel brings together a range of perspectives to explore the key questions and arguments, and what the evidence is telling us. Questions under examination include:

  • This is just exams and invigilation in new clothes, right? They’re not perfect, but universities aren’t about to drop them anytime soon, so let’s all get on with it…
  • Are there quite distinct approaches to the delivery of such services that we can now articulate, to help people understand the choices they need to make?
  • What ethical issues do we now recognise that were perhaps poorly understood two years ago — or that we simply couldn’t afford to engage with in the emergency, but which we must address now? 
  • What evidence is there about the effectiveness of remote proctoring — automated, or human-powered — at reducing rates of cheating? 
  • What answers are there to the question, “Should we trust the AI?” Are we now over (yet another) AI hype curve, and ready for a reality check on what ‘human-AI teaming’ looks like for online proctoring to function sustainably and ethically?
  • What (new?) alternatives to exams are there for universities to deliver trustworthy verification of student ability, and what are the trade-offs?
  • Who might be better or worse off as a result of the introduction of proctoring?

Introducing the panel

Our panel brings rich experience on the frontline of practice, business and academia:

[Image: the five panel speakers]
  • Phillip Dawson is a Professor and the Associate Director of the Centre for Research in Assessment and Digital Learning, Deakin University. Phill researches assessment in higher education, focusing on feedback and cheating, predominantly in digital learning contexts.
  • Jarrod Morgan is an inspiring entrepreneur, award-winning business leader, keynote speaker, and chief strategist for the world’s leading online testing company. Jarrod founded ProctorU in 2008, and in 2020 led the company through its merger and evolution into Meazure Learning.
  • Jeannie Paterson is Professor of Law and Co-Director of the Centre for AI and Digital Ethics, University of Melbourne. She teaches and researches in the fields of consumer protection law, consumer credit and banking law, and AI and the law.
  • Lesley Sefcik is a Senior Lecturer and Academic Integrity Advisor at Curtin University. She provides university-wide teaching and advice, and conducts academic research, in the field of academic integrity.

Join the webinar

You are invited to join this free online session, which is chaired by Simon Buckingham Shum, Professor of Learning Informatics and Director of the Connected Intelligence Centre at UTS. Simon’s team researches, deploys and evaluates Learning Analytics/AI-enabled ed-tech tools, and Simon recently coordinated the UTS “EdTech Ethics” Deliberative Democracy Consultation, in which online exam proctoring was one of the examples examined by students and staff.

To join the session on Thursday 28 April, register here.

Feature image by Christin Hume
