Co-authored by Simon Buckingham Shum (Connected Intelligence Centre), Sharon Coutts, Ann Wilson, Michaela Zappia (Institute for Interactive Media and Learning) and Susan Gibson (Data Analytics and Insights Unit)
2024 will see universities spinning up their own AI environments, applying and tuning Generative AI within a secure enterprise setting that can call on internal data and services. Expect a growing number of announcements as universities partner with different providers. For instance, our colleagues at the University of Sydney have developed Cogniti, Arizona State University is partnering with OpenAI, and Making AI Generative for Higher Education is a US-based project tracking sector developments.
Secure GenAI at UTS
At UTS, we’re testing Microsoft Azure OpenAI’s capability to help us build internal ChatGPT-style chatbots for specific tasks. In this post, we focus on one instructional task that harnesses the ability of large language models (LLMs) to distil complex text into key themes. The text in this case is a specific ‘genre’ of writing: the Course Intended Learning Outcome (CILO).
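For readers curious about the plumbing, here is a minimal sketch of how a task-specific chatbot can be driven from an Azure OpenAI deployment using the openai Python library. It is illustrative rather than our production setup: the endpoint, deployment name and prompt wording are placeholders.

```python
# Minimal sketch of a task-specific chatbot on Azure OpenAI (illustrative only).
# Assumes the openai Python package (v1+) and an existing Azure OpenAI deployment;
# the endpoint, deployment name and system prompt below are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
)

SYSTEM_PROMPT = (
    "You are an assistant that distils long lists of learning outcomes "
    "into a small set of clear, well-structured course-level outcomes."
)

def ask(user_text: str) -> str:
    """Send one conversational turn to the chatbot and return its reply."""
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # name of the Azure model deployment
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
        temperature=0.3,  # keep drafts relatively conservative
    )
    return response.choices[0].message.content
```

The system prompt is what turns a general-purpose model into a task-specific assistant; everything else is standard chat-completion plumbing.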
CILOs define what students will know and be able to do on successful completion of a course. As part of a well-designed curriculum, every part of a course – subjects, modules and assessments – should respond to its CILOs. Effective implementation of CILOs requires both the subject matter expertise of academics and the pedagogical knowledge of learning designers.
CILOs can vary widely in quality and quantity between academics. At UTS, we’re working towards summarising every course consistently in around six CILOs: this gives students a better experience as they make enrolment decisions, helps course teams develop assessment that evidences the CILOs, and assists teaching teams in their course design and reviews. Of course, these CILOs can be refined into sub-CILOs as required, but we want to introduce each course with around six.
How LLMs can help
Distilling a list of 20-30 CILOs (which is not uncommon) down to six well-designed CILOs is an intellectually and linguistically demanding task, so this is where we believed LLMs could assist. In a 2-day hackathon, iterative prompt engineering, informed by feedback from academics and learning designers, led to a refined system prompt configuring ‘CILObot’, a ChatGPT-style chatbot to aid in drafting these new CILOs. The system prompt incorporates widely recognised design principles (e.g. open each CILO with a verb from Bloom’s Taxonomy) along with internal requirements (e.g. UTS Indigenous-CILOs), and the chatbot is grounded in a corpus of documents about CILO design. An illustrative sketch of such a prompt follows.
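We haven’t reproduced the actual CILObot system prompt here, but a hypothetical sketch of the kind of structure such a prompt can take (design principles, internal requirements, and instructions about the grounding corpus) might look like this:

```python
# Hypothetical CILObot-style system prompt, for illustration only – the wording
# of the actual UTS prompt was refined during the hackathon and is not shown here.
CILOBOT_SYSTEM_PROMPT = """
You are CILObot, an assistant that helps course teams draft Course Intended
Learning Outcomes (CILOs).

When given the existing outcomes for a course:
1. Distil them into around six CILOs that together cover the course.
2. Open each CILO with a single action verb drawn from Bloom's Taxonomy.
3. Ensure each CILO describes what students will know or be able to do on
   successful completion of the course.
4. Flag where an Indigenous-focused CILO may be required, and draft one.
5. Base your advice on the provided documents about CILO design; if they do
   not cover a point, say so rather than inventing guidance.

Return the draft CILOs as a numbered list, followed by brief notes on the
design choices made.
"""
# In the Azure Chat Playground, the corpus of CILO-design documents is attached
# via the 'Add your data' grounding option rather than pasted into the prompt.
```

A prompt along these lines would then be supplied as the system message in a chat-completion call like the earlier sketch, with the teaching team’s existing list of outcomes as the user message.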
The prototype is showing promise: after a day’s intensive work in the Azure ‘Chat Playground’ (the design environment), the results for several programs in our Health faculty were validated by disciplinary experts.
Going forward with CILObot
CILObot may not be perfect, but in 30 seconds it can generate a coherent first draft, which can be refined conversationally or edited by the teaching team. We reckon that this would normally be a minimum of 3 hours’ work by the course director working with the lead academics in the program – that’s a pretty impressive trade-off! Next steps will put CILObot through its paces with other degree courses, and in the coming months we expect to share our experiences consulting with teaching teams on other prototypes.