Deputy Vice-Chancellor (Education and Students) Professor Kylie Readman recently presented a town hall webinar that addressed assessment and academic integrity at UTS as impacted by AI. There was a lot to cover, and many of the questions that flowed in from attendees could not be answered within the timeframe. Below, Kylie responds to the remaining questions and offers some additional guidance on delivering online exams.

Sharing ideas and examples

How can we reach out to offer suggestions or ideas?

We’d love to hear from you – you can share your suggestions and ideas via this form.

There are now numerous AI tools that generate images – what can we do about non-text deliverables? 

This case study from DAB’s Dr Frederique Sunstrum may be of interest to you. Internationally there are academics now sharing their thinking and practice around image-based GenAI, but we would like to build this community within UTS.

Where can one go to discuss some of the research and ideas about assessment, fairness and justice for students’ learning? 

UTS will be hosting the event Research on AI for Teaching and Learning on 1 August, where you can discuss some of the research in this area. There are several other opportunities for engagement during the AI x L&T week and more planned with a research focus in the coming months. For deeper conversations about the research (theoretical and practical) aspects of social justice in education, you might also consider enrolling in the Graduate Certificate in Higher Education Teaching and Learning.

Training and resources

Could we generate a training module (similar to Academic Integrity or Avoiding Plagiarism) for this?

Rather than learning about AI generally via a module/quiz, students will be guided through GenAI usage in the context of their subjects. There are a number of ways subject coordinators can do this, drawing on a range of resources in Canvas and LX Resources. This includes a new ‘Use of Generative AI in this subject’ template for Canvas, which can be tailored for your subjects. Students will also be able to access general information on GenAI via a new Study Guide, and there are now PowerPoint slides available for classroom communication about AI and academic integrity.

Will there be bolstered resources for the academic integrity faculty staff for when we detect a likely misuse of AI? 

There is a range of ‘surge’ activities underway, including faculty-deployed staff who are working with subject coordinators and casual teaching staff to upskill them in a range of areas related to GenAI and assessment, and to redevelop ‘vulnerable’ assessments. The intention is to prevent and reduce the incidence of academic integrity issues.

We have to be mindful (see below on ‘Detection’) that there are a number of uses of GenAI that would not automatically be considered to be academic misconduct. This is why it is critical for staff to communicate expectations with students from the beginning of the teaching period.

In 2024, we hope to establish a function that supports academic integrity practices university-wide, moving it from its current project status to business as usual. Faculty resourcing and staffing will be managed by the relevant faculty, working with the team supporting academic integrity centrally and the Governance Support Unit’s misconduct team.


Turnitin has said that more false positives occur when there is 20% or less of AI content detected. Could there be an acceptable percentage of AI content, as there (informally) is with similarity percentages? 

We have not yet turned on the AI-writing detection module – but if we do, the answer to this question depends upon two critical factors: 

  1. Your assessment design and what precisely students are being asked to do. For example, if you have clearly explained that students are not to use any GenAI in a particular assessment task, you should expect a very low rate of detected AI content when submissions are run through Turnitin. Conversely, if students were expected to use GenAI and critically evaluate its responses in a submission, you would expect a much higher rate of detection.
  2. How any potential flagging of students by Turnitin is itself managed by the teaching team. We would not recommend course teams set an absolute threshold and work from there. Human oversight is necessary. Ideally this would be accompanied by some understanding of the student’s writing and capabilities, and triangulated against other evidence of student work.

As with all AI-based tools that work on statistical principles, it is important to treat the Turnitin detector’s classifications with healthy scepticism. If Turnitin indicates an unacceptable level of AI content within the context of that particular assessment task, you should approach students in accordance with UTS misconduct processes.
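One reason for this scepticism is the base-rate effect: even a detector with a low false-positive rate will produce many spurious flags when genuine AI use is uncommon in a cohort. The sketch below works through this with illustrative numbers (the sensitivity, false-positive rate and prevalence figures are assumptions for demonstration, not published Turnitin statistics):

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """Fraction of flagged submissions that genuinely contain AI-generated text.

    sensitivity: proportion of AI-written submissions the detector flags
    false_positive_rate: proportion of human-written submissions wrongly flagged
    prevalence: proportion of submissions that actually use AI
    """
    true_flags = sensitivity * prevalence
    false_flags = false_positive_rate * (1 - prevalence)
    return true_flags / (true_flags + false_flags)

# Assumed numbers for illustration only: the detector catches 90% of AI text,
# wrongly flags 2% of human text, and 5% of the cohort actually used AI.
ppv = positive_predictive_value(0.90, 0.02, 0.05)
print(f"Share of flags that are genuine: {ppv:.0%}")
```

Under these assumed numbers, roughly seven in ten flags would be genuine, meaning about three in ten flagged students wrote their own work. This is why a flag should be treated as a prompt for human review and triangulation, never as a verdict.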

Can you tell us more about accessibility tools and why use of these might be flagged as GenAI in Turnitin? Are these accessibility tools based in GenAI? 

Many students use accessibility tools (also known as assistive technology) that rely on artificial intelligence to access materials online. For example, speech-to-text tools such as Dragon NaturallySpeaking or Read & Write allow students to dictate text as an alternative to using a keyboard, which benefits students who are neurodiverse or experience mobility impacts. Tools like these could potentially be flagged by Turnitin because they may use copy-and-paste functionality to input the text.

GenAI can also be used by students as an accessibility tool – for example, if a student needs assistance with planning, they can use ChatGPT to create a study plan. Learn more about accessibility in learning and teaching in this resource collection or submit an Inclusive Practices support ticket for further assistance.


Delivering online exams

I talked about exams at the Town Hall, and I’d like to share some further information that addresses additional questions on this topic that were asked on the day.

Prior to the pandemic and its impact on exam delivery, UTS had already decided to reduce its reliance on exams as a form of assessment. This is because authentic forms of assessment are core to our approach to professional practice education. While a small number of specific contexts may necessitate in-person invigilated exams (e.g. professional accreditation requirements), even these types of exams are not immune to identity fraud and other forms of cheating.

At the Town Hall, I referenced a large-scale study into contract cheating (Harper, Bretag & Rundle, 2019) which reported that the majority of common student cheating behaviours (seven out of ten) occur in exams, particularly multiple-choice exams. Given the limited capacity of exams to assess learning outcomes, we aim to support staff and students to explore alternative assessment methods rather than relying primarily on exams.

Credentialing our students requires us to ensure they have achieved the course learning outcomes. These outcomes are nuanced, developed over time and demonstrated in various contexts. We have numerous authentic methods available to verify student identity, such as work-integrated learning, Objective Structured Clinical Exams (OSCEs), studio and lab-based assessments with direct observation by tutors, vivas and in-person presentations.

Progressive assessments that track students’ progress through larger tasks and include written and in-person reflections are another good example of how identity can be verified in non-invigilated settings. When multiple task types are used in a subject, they corroborate the authenticity of a student’s learning outcomes and offer greater validity than in-person exams.

The rise of generative AI tools has also presented a timely opportunity for us to reassess our overall approach to assessment. Thus far, we have taken an educative approach, urging subject coordinators to adapt assessments that are susceptible to improper use of AI tools, clearly communicate expectations to students and closely monitor student responses. In our pursuit of understanding, we have actively partnered with our student body to gain insights into their experiences and expectations. Their advice to us is clear: they seek guidance to avoid cheating, they desire to learn how to effectively use AI tools and they acknowledge the need for adaptation in assessment practices in light of generative AI. 

We recognise the importance of moving forward and are working with students, staff and industry to holistically reassess our approach to learning and assessment across a course: de-emphasising focus on knowledge and increasing emphasis on capabilities through application of knowledge and skills across a range of contexts. 

Other considerations and moving forward

I like the idea of the students showing their ability in OSCEs or roleplay, but do we need to incorporate the student’s ability to write academically?

At a course level, we do need to be supporting students’ development of academic literacy rather than assuming they arrive at university with the necessary skills to research, read and write in an academic and disciplinary context. Consideration needs to be given to the Course Intended Learning Outcomes and how academic writing, in the context of graduate attributes, skills and knowledge, is represented in them. This should provide the direction needed as to how and when academic writing should be assessed. We recommend that you reach out to the IML Academic Language and Learning representative for your Faculty, via this form.

VR/AR/XR are being combined with AI and open language models to produce greater opportunities for organisations in several industries. Will our position on AI be applicable to these platforms? 

We hope that the broad principles we are proposing, alongside the guidance in our resources, are agnostic to specific technologies (except where they specifically concern writing, for instance) and so will apply to GenAI in VR/AR/XR, but we welcome feedback if you think otherwise. We would also welcome examples of teaching practice, assignments and student guidance on the use of GenAI in VR/AR/XR. Please share any ideas or suggestions via this feedback form.

What are you most excited about for our new future where this type of AI is woven into our lives and learning? 

Firstly, it’s important to note that generative AI is a set of tools that is indifferent to ethics, responsibility or trust, and that can be applied to almost anything because of the massive amount of data available – if not now, then in a future that is closer than most of us can imagine. It is up to people, systems and governments to make sure these tools are used to improve the quality of human life and the future of our planet.

That being said, I am excited about the opportunity for us to use productivity tools to do work that can be easily automated, particularly those tasks that, when automated, provide greater insights into human experience. I’m excited about having more time to do the things that really matter for better wellbeing and relationships. In the field of education, because what we consider ‘work’ is going to change significantly, how we engage students in learning, what we need to include in the curriculum and how we partner with industries and professions to educate workers into the future will have to change too. Our future curriculum and delivery models will most likely see a different balance between self-directed, teacher-directed and professional workplace learning. Additionally, there will be changes that relate to duration, location and certification, as well as changes to students’ sense of community, wellbeing and belonging.

Ultimately this is a moment of great expansion and, as educators and learners, we must stay engaged, look forward and contemplate a learning and living ecosystem significantly different to what we experience today. 

Join the discussion