This post is co-authored by Kirsty Kitto, Lucy Blakemore and Mais Fatayer.

The UTS Learning Design Meetup recently gathered online to share reflections and examples of learning analytics impact from practitioners Danny Liu (University of Sydney), Kirsty Kitto (Connected Intelligence Centre, UTS), and Donna Rooney (Faculty of Arts & Social Sciences, UTS).

Our first recap from this event, Fostering human relationships through learning analytics, captured highlights from speaker Danny Liu. This second post features key points from Kirsty’s presentation on ethics in learning analytics; you can hear more by revisiting the recording of her session below.

Ethics is already baked in…

“Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care”

(Bowker, Memory Practices in the Sciences, p. 184)

This famous quote guides a lot of the work at CIC: there is no raw data; our assumptions are baked into how we collect it, and into how we use it. So we need to be careful about how we ‘cook’ data into our work with learning and learners. A lot of work has already been done here: many well-known LA researchers have written papers laying out their positions on ethics and privacy – see the further reading below.

The field is deeply concerned with ethics, privacy and treating data well, and many different frameworks have been developed to this end. One example is the ‘DELICATE’ checklist, which emerged from the Learning Analytics Community Exchange (LACE) European project in 2016.

Great principles – but how do I apply them?

When we’re building learning analytics systems, principles and frameworks that stand well on their own often conflict with other principles that we also consider important. Student privacy should be preserved, for example – a great principle, but respecting privacy might make it impossible to link data from a student’s learning over a lifetime to personalise their learning journey, something that is increasingly important in an era of workforce disruption and adjustment.

Another example arises if we give students the choice to opt out of data collection – which seems like a great idea until we realise that whole cohorts of students might choose to opt out, at which point any model we build on the remaining data is likely to end up biased!
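To make the opt-out problem concrete, here is a toy simulation in Python (a sketch of ours rather than anything from Kirsty’s talk, with invented numbers): if struggling students opt out more often than their peers, the data that remains under-represents them, and any model trained on it inherits that skew.

```python
# Toy simulation of opt-out bias (hypothetical rates, illustration only).
import random

random.seed(0)

students = []
for _ in range(10_000):
    struggling = random.random() < 0.20        # assume 20% of students are at risk
    # Assumption: struggling students are three times as likely to opt out
    opt_out_p = 0.30 if struggling else 0.10
    opted_out = random.random() < opt_out_p
    students.append((struggling, opted_out))

true_rate = sum(s for s, _ in students) / len(students)
opted_in = [(s, o) for s, o in students if not o]
observed_rate = sum(s for s, _ in opted_in) / len(opted_in)

print(f"True at-risk rate:          {true_rate:.1%}")
print(f"At-risk rate in kept data:  {observed_rate:.1%}")
```

Under these assumptions the opted-in data shows roughly 16% of students at risk against a true rate of 20% – and no amount of clever modelling on the remaining data can recover the students who quietly disappeared from it.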

Practical ethics: privacy at the core, and at the edges

A few years ago we started to explore a practical ethics for LA (Kitto & Knight, 2019), which aims to help people developing LA solutions navigate these ethical tensions. We need to make sure that LA practitioners are thinking about ethics right at the core of their work, rather than leaving that space for ethicists and lawyers to worry about. We believe that everyone in the LA cycle needs to think about the possible harms that might come from developing analytics systems, and plan to mitigate those harms, while also considering what kinds of benefits they could engineer into well-designed LA systems and tools.

It’s useful to think about ethical edge cases here: where do principles conflict with each other, and what are the unintended consequences of our work? A Canvas analytics dashboard seems like a great idea for identifying students at risk, but it can easily be misinterpreted if we do not know the learning design behind a scenario. For example, a course might run in block mode, where students engage intensively on campus at occasional points in the session; a dashboard tracking LMS activity would show them as disengaged for weeks at a time, even though they were highly engaged in on-campus learning. Those same students may also be learning across many different environments, using a wide variety of tools and systems that are not necessarily integrated into the LMS or other analytics measures. Are we collecting all the data that actually describes that rich learning experience? If we are building predictive models on this incomplete data, how accurate are they likely to be?

Understanding data through human interaction

Even when we have access to lots of data, it is not always useful (Kitto et al., 2020). The best person to interpret a complex data trace is often the person who generated it – the student. If they don’t show up to class for a couple of weeks, they might know it’s because they had a sick grandmother and that they’re planning to catch up now. Alternatively, they may be really struggling. Context can mean everything for a complex data trace, and yet it is rare that a data analyst has access to it.

A word of warning, though: if we’re going to return data to students, we have to be really careful. For example, a predictive model automatically alerting a student who may already be feeling worried could exacerbate the situation – indeed, it could generate the very outcome we were trying to avoid and lead to the student dropping out! And yet, as Prinsloo and Slade (2017) famously argued, we have an obligation to act if the data suggests that a student might be struggling… One way through is to route such alerts through people rather than automated messages: experienced student support staff around the university are likely much better placed to work through issues and interventions in ways that show empathy and understanding, and to point students towards potential solutions.

View the full recording of this presentation, including a brief Q&A with the Meetup audience:

Further reading and resources

Stay connected with the UTS Connected Intelligence Centre (CIC) to see their latest work on data science and human-centred design, including learning analytics. Read more about the ‘deliberative democracy’ approach mentioned in the session and the report on edtech ethics from CIC.

Other recommended papers and resources include:
