Need a quick introduction to Generative AI and its impact on learning and teaching? In our latest ‘in a nutshell’ explainer, we cut through the ChatGPT noise with some need-to-know basics.

Until fairly recently, AI applications in Higher Ed were designed to analyse and interpret data in order to make predictions, decisions or recommendations for staff and students. Generative AI is the latest wave of AI excitement – rather than simply recognising or classifying data, it appears to create new images, video, audio and text using deep learning techniques. Unsurprisingly, early adopters of Generative AI are overwhelmingly enthusiastic and voice few concerns – its performance, and its potential to assist across a range of domains, is impressive.

What are some popular examples of Generative AI?

  • OpenAI is the creator of the topical ChatGPT (for text generation) and DALL·E 2 (for image generation), amongst a suite of other Generative tools.
  • Created by Stability AI, Stable Diffusion is both an image generator (text to image) and an image modifier (image to image). Unlike other similar tools, it is not solely cloud-based and its source code is openly available. This also means its guard rails can be bypassed to produce harmful content.
  • You can explore a plethora of other AI tools in this extensive list.

What are its potential impacts and uses at UTS?

Academic Integrity, the other AI, is the big one that has us taking notice. It impacts assessment design by closing down certain pathways and opening up others – for example, co-designing an assessment with students in light of ChatGPT. If you’re concerned about whether students are writing their own work, consider UTS Startups member Aaron Shikhule’s tool, AICheatCheck.

As with any online service, the latest AI innovations raise privacy concerns, such as data collection for use by third parties and security breaches. If you are using Generative AI, make sure you are aware of these risks and also brief your students about them.

Where to from here?

There will be many ways in which tools like ChatGPT (and its inevitable successors) will enrich our lives – as well as our learning and teaching efforts. Our graduates will enter a world of work where working with AI will be an expected and normal dimension, so our role is to demonstrate how it can be used ethically to advance and deepen knowledge. More immediately, as a university and teaching community, we are giving deep and considered thought to the implications of ChatGPT and similar AI tools, from teaching practice to assessment, academic integrity and governance.

Kylie Readman, Deputy Vice-Chancellor (Education and Students)

The LX.lab is testing how Generative AI performs against various assessment types sampled from across UTS. In collaboration with other teams at UTS, we are developing new learning resources and workshops to help academics address the challenges posed by ChatGPT and other emerging Generative AI services.

In the meantime, here are some considerations:

  • Design assignments that ask students to actively engage with AI productively and ethically; you’ll likely be in the position of learning alongside your students.
  • Generative AI does not perform as well when asked to refer to specific cases, situations or source materials, and it cannot (yet) produce citations – consider working these elements into an open-book assessment.
  • Consider an oral assessment – this could take the form of a marked tutorial discussion or debate, a post-presentation Q&A session or a viva. While these assessments may require more marking time than automatically marked quizzes, they greatly reduce the risk of students passing without having understood course content.

For more ideas, check out this collection of short-term solutions compiled by Cynthia Alby. And if this piece has triggered any thoughts or concerns for your learning and teaching, contact the LX.lab for advice and support.
