The following post is simply a recap of the key ideas and issues Phil Dawson presented at the webinar Detecting and addressing contract cheating in online assessment. Almost all of the research cited is his, and the ideas are taken directly from his work.
How can we learn to detect contract cheating?
Just like this series, the advice on detection falls into three neat parts:
- Indicators
- Training
- Software
Indicators
Step one is simple. Start looking for contract cheating. If you’re not looking for it, you won’t ever find it (Lines, 2016; Medway et al., 2018).
Seriously, the second-best way to find it is simply to start looking for it. Looking for contract cheating increased markers’ detection accuracy by 60% (Dawson & Sutherland-Smith, 2018, Can markers detect contract cheating?).
Following this study, the same team asked: can training improve marker accuracy at detecting contract cheating? Spoiler alert! It can. The very best way to find contract cheating is to get training in what to look for (Dawson & Sutherland-Smith, 2019).
You already have all the skills you need, but training can increase your chances of identifying cases. It can also reduce the rate of false positives.
Training
Most of the indicators that have allowed participants in studies to detect contract cheating are very task-specific. That is, they are specific to the particular assessment task in the particular discipline.
This is heartening. As the assessment designer and subject specialist of many years, you know the task better than anyone. That said, there also seem to be some generic indicators:
- Reflection done poorly
  - Contract cheating is often outsourced to other countries, so the style of reflection frequently doesn’t mesh with what is usual for reflective tasks in Australia.
- Strange formatting
  - A running head is a dead giveaway.
  - Anytime the APA formatting is perfect.
- Unusual mistakes
  - There’s a set of mistakes that you know, as a marker, students make year after year. Any mistakes that wildly deviate from those familiar patterns could warrant a second look.
- Metadata (or lack thereof)
  - Check the ‘author’ field in a Word document. If it has been erased, this may warrant questions.
- Does not address the question
  - Very, very generic work.
- Not using materials from class
  - Work that lacks the key reading from the class, or doesn’t even mention a resource you know everyone has read.
- Wrong task type
  - This happens quite often: you asked for a fact sheet and they gave you a PowerPoint presentation.
- Reads as something written by a generalist
  - Work that seems alien to the discipline of the task and lacks discipline-specific ways of looking at the problem.
What else can we do to get an edge?
Read the CRADLE Suggests series:
Study the techniques of contract cheating writers:
Hold a workshop with your marking team to train them in what to look for and to establish your discipline-specific indicators (draw on this Marker contract cheating detection workshop outline). You will need access to examples of contract cheating to do this, so take into account the ethical dimensions of purchasing such work, or check whether you have proven cases from previous assessments to use.
The first study into contract cheating in vivas (authentic online oral assessment) yielded a 100% detection rate. The study needs to be replicated, but it certainly seems promising (Sotiriadou et al., 2019).
Software can help
Turnitin may help you identify some cases, but only if you know what to look for. Its text-similarity detection product does not assess the document metadata, which is what is really needed to identify contract cheating.
A pilot study into Turnitin’s new Authorship Investigate product, Can software improve marker accuracy at detecting contract cheating?, reveals that it can make a significant difference to detection, especially for large groups of students. But it raises other problems as well.
Zooming out
We’ve read a lot about individual acts of assessment. Now it’s important to zoom out and think at the degree, program and course level. Universities graduate people with degrees, so degrees are ultimately what we need to secure, rather than just individual assessment tasks. We don’t want to think of cheating only in terms of each individual task.
‘Cheat-proofing’ every act of assessment is likely impossible and certainly not a good idea. We want some tasks where we can focus on developing academic integrity rather than trying to identify who has cheated.
Think at a course and program level about securing those acts of assessment that matter to the degree outcomes, or Course Intended Learning Outcomes (CILOs). Which matter for accrediting someone’s degree, and which are simply stepping stones along the way?
Consider how we can focus our assessment security resourcing on the times when it really matters and focus on developing academic integrity all the other times.
More resources
A short guide on The prevention of contract cheating in an online environment.
A more comprehensive guide specific to assessment during the COVID-19 pandemic: Ensuring academic integrity and assessment security with redesigned online delivery.
Future events from Transforming Assessment
- Defending assessment against e-Cheating: design and standards: 18 November, 2020 7am
- Re-imagining assessment to “robot-proof” our students: 19 November, 2020 7am
And that’s all.
This is the third and final post in a three-part series.
If you missed either of the other posts in this series, check them out in the links below:
Feature photo by Tim Savage from Pexels