This blog post was co-written by Kylie Readman (Deputy Vice-Chancellor, Education & Students), Simon Buckingham Shum (Director, Connected Intelligence Centre), Jan McLean (Director, Institute of Interactive Multimedia & Learning) and Chris Girdler (Digital Content Lead, LX.Lab)
Generative AI such as ChatGPT and DALL-E has potential applications across many disciplines. It can do much of the work of sourcing, structuring and rendering materials (texts, images, music, videos, etc.), work that until now only students could do. A key question, then, is whether it is doing too much.
We can expect to access it increasingly via everyday products, leading to a range of changes in our personal and professional lives. Like many of the digital tools we appropriate as educational technologies, it should be applied as a means to enrich learning, not undermine it.
Engagement that is both effective and ethical
So, where do we begin with upskilling ourselves in how generative AI can be used as a learning technology? A simple method that can be used by both academics and students is to think about engaging with the tool in a way that’s both effective and ethical.
To engage effectively, consider the following:
- Is it deepening learning or undermining it?
- How do we play to its strengths and guard against its weaknesses?
- Do we understand which prompts elicit the best responses?
- Are we setting assignments that can’t be passed by an AI?
- Does AI improve learning outcomes – and are these the right outcomes?
To engage ethically, consider the following:
- Do students understand how to maintain their academic integrity with these new tools?
- Do we all understand how ChatGPT data has been sourced and cleaned?
- Can students reflect more critically on ChatGPT in their contexts, than ChatGPT itself can?
- Do assignments incentivise students’ critical engagement with AI?
Impacts on academic integrity
Much of the anxiety around AI for universities stems from the potential for students to submit work that uncritically reproduces the output of generative AI, claiming it as their own. The effectiveness of detection software at this early stage is variable, and detection is only one part of the bigger picture. A policing mindset alone is a blinkered approach – educational institutions can’t win an ‘AI arms race’ with big tech, and there will be many tricks for gaming the system. A holistic approach to academic integrity is needed, where students are encouraged to create original work in a supported learning environment.
We’re already well positioned, with foundations in the UTS model of learning and the learning.futures strategy for teaching and learning (one core aspect being authentic assessment, another being automated feedback). This is an opportunity to go further in enhancing your assessment – to design in aspects that strengthen academic integrity and design out practices that encourage cheating.
AI: an ongoing conversation
We encourage everyone to make the most of this time to harness the huge expertise and creativity already evident across the UTS community and beyond. UTS is well placed to contribute to the conversation, having used learning analytics and AI to automate student feedback for years, and having built research-based evidence of what works and what doesn’t. We’ve already heard from academics at FEIT on how AI can be applied, and we’d love to hear more ideas and examples of how generative AI is being used in your field. In what ways have you considered evaluating and adapting your assessments in light of the advancements of ChatGPT?
We’ll be actively supporting all teaching teams with resources and opportunities to connect with IML, CIC, and your peers. Subscribe to the LX.lab digest for weekly newsletters communicating the latest news, resources and events.
February 16 update: the first resource in a new collection on ‘Artificial Intelligence in learning and teaching’ is now available, offering some advice and guidelines to faculties.