Generative AI’s ability to produce rapid solutions is transforming coding education, making careful assessment design and personalised approaches crucial to ensuring authentic comprehension and meaningful learning outcomes for students.
Matthew Beauregard is a lecturer and subject coordinator of Data Processing Using SAS in the Faculty of Business. His primary focus is to provide students with a learning experience that empowers them to rapidly develop their coding skills to a level where they can confidently self-assess the integrity of their work.
Generative AI is revolutionising software development and changing how coding is taught, offering swift solutions and making once-impossible tasks accessible to students. However, these advancements pose challenges for the assessment of students’ learning outcomes, as correct answers no longer necessarily indicate an understanding or mastery of the processes and concepts traditionally taught.
Prepare students to incorporate generative AI into their practice, and design a comprehensive assessment for statistical analysis that accounts for the impact of generative AI on learning outcomes and promotes authentic comprehension.
Students were required to complete a data analysis report and video reflection task worth 50% of their final grade, involving the analysis of a dataset and producing reliable statistical information. However, determining whether generative AI has affected the learning outcomes of reaching a realistic analysis proved challenging.
To adapt to AI, the task incorporated the following two aspects into the assessment:
Each student received a unique question requiring them to analyse a small code sample and the given dataset from the course. This personalised approach encouraged original thinking and deeper engagement.
Instead of written reflections, students were asked to create a two-minute video analysing their code development process. Canvas randomly assigned each student two questions from a pool of 60, specifically designed to prompt students to compare results, propose improvements to their presentation, identify the SAS analytic software procedures they employed, and explain their understanding of the variables chosen for coding analysis. This video reflection allowed accurate assessment of the skills taught in the course while reducing the risk that generative AI could supply ready-made responses.
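The randomised allocation Canvas performs here can be illustrated with a short sketch. This is not how Canvas is implemented — the function and identifiers below are hypothetical, shown only to make the idea of drawing per-student questions from a shared pool concrete:

```python
import random

def assign_questions(student_ids, question_pool, per_student=2, seed=None):
    """Illustrative sketch: draw `per_student` distinct questions from a
    shared pool for each student. A fixed seed makes the draw reproducible."""
    rng = random.Random(seed)
    return {sid: rng.sample(question_pool, per_student) for sid in student_ids}

# A hypothetical pool of 60 reflection questions, as described above.
pool = [f"Q{n:02d}" for n in range(1, 61)]
students = ["s1001", "s1002", "s1003"]
allocation = assign_questions(students, pool, per_student=2, seed=42)
```

Because each student draws independently from a large pool, neighbouring students are unlikely to receive the same prompts, which supports the personalised approach described above.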
To enhance the assessment’s robustness:
The new assessment design held up well against current-generation GPT models, drew strongly positive feedback from students, and improved the subject overall.
Matthew plans to create an AI chatbot (a virtual tutor) trained on course and Canvas materials, serving as an additional resource for students through personalised assistance. He also plans to use generative AI to help expand and diversify the pool of video assessment questions. Additionally, he intends to test the effectiveness of the changes made in the previous semester against newer versions of generative AI, ensuring ongoing adaptability and meaningful learning outcomes in statistical analysis.