If proof were needed of the current demand to learn about how generative AI tools are changing the nature of education, it could be found in the large turnout for the Faculty of Health workshop on Generative AI in Learning and Teaching on Thursday 4 May.
Mixed feelings about generative AI were revealed in an opening Menti poll where ‘excited’, ‘anxious’ and ‘nervous’ were the most common words participants used to describe their feelings toward ChatGPT.
Risks and opportunities
Ann Wilson from IML kicked things off by discussing some of the risks and opportunities offered by generative AI. Risks include a possible increase in the number of students submitting work that is not their own, with a resulting reduction in the quality of learning and the integrity of the qualifications they receive. Opportunities include the chance to rethink assessment design to focus on the process of learning rather than on products that often serve only as proxies for learning.
Sylvia Singh from the LX.lab then gave an overview of some of the different AI tools and what they can do (HINT: they can create your next presentation for you, including a catchy video). Sylvia also pointed out that although the University doesn’t officially support ChatGPT at this stage, there are many resources available to support staff in working out how best to use it in their subjects (scroll down to the end of this post for LX resources on AI).
Dr Daniel Demant from the School of Public Health gave an overview of his research into who uses chatbots and why. Although data from the 150 students who participated in the study is still being analysed, Daniel shared some initial student responses indicating a high level of awareness that generative AI tools can’t be relied upon to give accurate information, but can be used for less cognitively demanding tasks such as ensuring that reference lists are accurately formatted. Daniel also introduced us to ChatGPT’s lesser-known cousin, CatGPT (google it – it’s well worth the time!).
ChatGPT and assessment
Dr Michael Rennie from the School of Sport and Exercise Science then showed us how he has incorporated ChatGPT into an assessment task. Students were asked to create their own plan for a clinical intervention, then get ChatGPT to propose a clinical intervention for the same scenario, and critically compare the two plans. Encouraging students to compare their own ideas with a response given by ChatGPT seems a rich area for assessment design, as it encourages the higher-order thinking skills we want our graduates to demonstrate.
Professor Bronwyn Hemsley gave a brief overview of several ways that ChatGPT is being used in Speech Pathology, both as a learning and teaching tool and as a way to help clients. Bronwyn has used ChatGPT to create case studies, write multiple choice quizzes, and draft Canvas pages, while also using it as a classroom tool to help students build their knowledge. She also gave an example of how it can be used as a tool for people with stroke who have reading and writing difficulties. The potential accessibility benefits of these technologies are enormous.
Academic integrity and quality
Professor Amanda Wilson gave initial findings from two different studies she’s been involved in. The first study examined the extent to which multiple texts created by ChatGPT in response to the same question would generate sufficiently similar answers to be flagged as matching by Turnitin. The results indicate that there may be enough similarity in ChatGPT’s responses to generate at least some text that would be flagged by Turnitin’s text-matching tool. The second study examined the variation in vocabulary, sentence complexity and readability in texts generated by ChatGPT. This study found that ChatGPT can produce a surprising variety of vocabulary and sentence complexity even when given the same prompts, meaning that tools designed to detect the use of AI may not be as reliable as some might hope.
In the final session, Jenny Wallace and Lucian Sutevski from the LX.lab asked participants to grade a text that had been generated by ChatGPT in response to a given assessment task. Lucian had engaged in a lengthy interaction with ChatGPT, refining prompts to produce as effective a final text as possible. The general consensus amongst the group was that the generated text would pass the assessment task based on the given marking criteria, but that it showed little critical engagement with the topic and lacked sufficient depth when attempting to make links between theory and practice.
In a final summary by Ann Wilson and Associate Professor Caroline Havery from IML, participants were encouraged to continue exploring Generative AI tools and to upskill as much as possible, especially given that students who start University from next year onwards are likely to have been using these tools on a regular basis. With this in mind, participants were encouraged to have open discussions with their students about how Generative AI tools can be used in their subjects, and in their professions, with the aim of increasing the AI literacy of everyone involved.
You can find resources on AI in learning and teaching in the LX Resources collection, including guidelines and case studies from UTS academics.