The introduction of any new technology will always have social as well as material impacts. A recent dispute between Woolworths and its warehouse workers over the use of AI-driven monitoring highlights how new AI technologies can threaten to dehumanise the workplace. The impact of algorithmically driven efficiency metrics on workers and wages also raises questions about the morality of designing such tools in the first place. Who benefits from such changes to a workplace? The tools we choose and how we use them therefore increasingly foreground questions about the kind of future we want to shape with AI technologies.

We should be asking how – as practitioners and as a society – we can be more proactive and less reactive to such changes and impacts. We need to embed practices for designing and developing such AI tools responsibly, integrating careful exploration of the social as well as the technical contexts into which new innovations are introduced. Attitudes seem to be changing: discussion of AI ethics is more commonplace in organisations today than it was even five years ago. As a recent Computerworld article flags, for instance, AI ethics is increasingly being recognised as beneficial not only for regulatory compliance but also for driving ethical innovation with these new tools.

As calls for more ethical applications of AI grow, so too does the need for guidance about what it means to work responsibly with these technologies. There are many ways they can be used to ‘do good’ in our world, but defining – and evidencing – how we seek to use those tools ethically and responsibly is not something we can outsource to them. Wanting to be ethical and responsible leaves many questions unanswered. What does it mean to be ‘responsible’ and ‘ethical’ when working in these data- and AI-informed contexts? Who decides what is ‘good’? For whom? How? At what cost?

These are the questions I raised in a seminar a few months ago at UTS, where I talked about the importance of being human in the midst of working with evolving AI-enhanced data systems. You can watch the full presentation in this 1.5-hour recording. Below, I share a brief overview of seven practical actions that I believe shape an ethical approach to data science and AI – one that privileges human and planetary flourishing.

Being ethical is more than just following a regulation or adhering to a standard; it’s determined by how you handle the unknown, the abstract and the grey areas. While guidelines and standards can support our development and application of safe and responsible AI, we also need to make time for ongoing reflection on, and awareness of, our intentions. Paying attention to these very human practices will help us shape what we are doing, individually and collectively, to design GenAI tools that put people and planet first.

Emerging from my own research and practice, these seven actions are designed to move past traditional dichotomies – fast and slow, human and machine, head and heart – so we are better equipped to handle the inevitable uncertainties that accompany working with emerging technologies. As we transform our intentions into action, these everyday practices can also help bring imagination and compassion into our work in meaningful ways.

1. Appreciate the fluidity of boundaries

There is power in the names and categories used to classify data, and that power can all too easily contribute to abuse and unintended disadvantage when the data is fed into the AI technologies we create and use. A first step to reversing this trend is to become sensitised to the fluidity of the boundaries around what you are defining – or how you are being defined. Classification is power. We need to recognise that while a categorisation may be effective at a given time and in a particular context, it is dangerous to assume that its meaning is static. We also need to remain aware that data representations of people and ideas can never fully do justice to the complexity of our social worlds.

2. Embrace the opportunities in uncertainty

Our work with AI is dynamic and uncertain. Things change fast – data is never complete, information is never certain, and action is still required. In my research I have found that one of the best ways to navigate uncertainty and risk is through honest and open sharing of our experiences. When we make time to sit with uncertainty before acting, we become more capable of exploring a situation for opportunities. Based on my research into the ways perspectives on uncertainty contribute to innovation, I developed the SHARE heuristic to offer suggestions for individual reflection and for sharing in trusted settings:

  • See uncertainty as signal – the edge of understanding is a chance for growth
  • Have hope and optimism – building our resilience helps us embrace unknowns
  • Aim for a mindset of abundance – recognise the benefits of sharing knowledge
  • Reflect and embrace the pause – make time to sit with the uncertainty before acting
  • Explore uncertainties for opportunities – explore options imaginatively and openly

3. Make ‘green spaces’ for the mind

My research explores the conditions contributing to creativity and innovation through four distinct but interrelated phase states (plan, pressure, play and pause). More often than not, the plan and pressure elements in our lives are privileged at the expense of the space needed for play and pause. AI can speed up processes, luring us into a “more, faster, better” mindset. But what do we need for ourselves? How and where do we do our best thinking?

We have to make more time to nurture pauses in our lives, both personally and professionally. There is real power in allowing ourselves to think at the edge and to use the untapped potential of intuitive judgement. Listen to that intuition, give yourself time to think, and remain alert to things that have stayed hidden, unnoticed or incomprehensible. You can build in pauses through breath work, reflective writing or stillness. In the case of the AI-driven monitoring technology referred to earlier, for example, imagine how building time to think into the development process might have helped the designers better understand and anticipate the reactions of the workers.

4. Nurture personal and collective creativity

This is an extension of the ‘pauses’ I referred to earlier. As individuals and within our communities, making more time to engage playfully with the world around us is a powerful reminder of essential human qualities. As automation and AI tools continue to infuse our daily activities, we need to nurture such connections more than ever. Aim to challenge your imagination and the ‘obviousness’ of the world by celebrating uncertainty and playfully venturing down unknown pathways. As you slow your thinking, you will find it easier to tap into a ‘felt sense’ that emerges through personal insight and reflection. Such insight connects our values to our actions.

5. Disrupt ‘big, loud and first’ behaviours

If we are to build a more equitable world, we need to create safe spaces for collaboration and sharing. How can you step out of your individual practice and find ways to build the world differently? In the workplace and the classroom, consider the consequences of being ‘the first in’, and how stepping back for others can disrupt inequity. Brainstorming sessions, however well intentioned, can quickly become opportunities for the loudest and most confident members of a group to drown out alternative perspectives. Take time to sit with your own thoughts, then confer with one other person before moving into a larger group setting. In the classroom, consider a Think-Pair-Share activity.

6. Create opportunities to break cycles of disadvantage

We each have the ability to address bias and inequity through our daily actions – to help someone break free of limitations imposed upon them. With the volume turned down on the ‘big, loud and first’, amplify the ‘quieter voices’ and those who have been left out altogether. Think about the ways you can open doors to collaboration and create new opportunities for people who are underrepresented or misrepresented. While such daily practice does not in itself address significant systemic inequities, small personal actions should not be underestimated for their power to break cycles of disadvantage. Such actions also heighten our awareness of ways the data and technologies we use could be reconfigured.

7. Find ways to design with people, not just for them

A strong and engaged civil society can be a key enabler of responsible and more equitable use of new technology. Increasingly, the people represented in the data sets we work with expect – justifiably so – to have a say in how data about them is collected and used. How do we ‘turn data around’ to build informed, educated and activated communities? I believe the erosion of public trust is one of the biggest threats to the successful and appropriate use of data and AI technologies. Strategies for building trust are therefore central to my current work with practitioners and communities. Participatory approaches that involve the community in the design process from start to finish are powerful tools for building trust into the network.

“We must remember that, in the real world of technology, most people live and work under conditions that are not structured for their wellbeing.”

– Ursula Franklin

As automated and AI-informed toolkits become ever more accessible, we need to give very deliberate thought to the ways the technologies we are creating can help people and planet. These machines can be excellent partners in our work, but they are very poor proxies for the ethical choices we have to make along the way. Thinking and behaving ethically does not have an on/off switch – it should be part of our daily practice. The seven actions presented here can help us become more aware of how to do that.

I invite you to listen to the full explanation of these seven actions, to ‘road test’ them further, and to reach out to me to share your thoughts about what I have written here.

Contact Details: Associate Professor Theresa Anderson, Adjunct Fellow, Connected Intelligence Centre (LinkedIn | Theresa.Anderson@uts.edu.au)
