At the first meeting of the new Technology, Media and Strategy Research Group in FASS, Belinda Middleweek, Suneel Jethani and Maureen Taylor presented on the topic of ‘Data Ethics: Implications for Traditional Media, Social Media and the Communication Industry.’ We explore their findings in our three-part ‘AI and data ethics’ series.

What happens if we develop social attachment to a machine?

This question was explored in the research of Senior Lecturer Belinda Middleweek who, in collaboration with Alicia Vidler from UNSW, analysed the impact of bugs and errors on the user experience of AI companion services. Error reports offer a window into how people interact with their AI companions, and they showed that errors occurring after intimacy had been built had a far greater impact on users than errors where no intimacy had developed. So, what ethics framework can be proposed from this knowledge?

The terminology: why ‘Intimate AI’?

First off, we need to acknowledge that intimate relationships with Companion AI are not artificial relationships for the users of such tech. For users, the relationship and the intimacy it carries are authentic, and that authenticity needs to be recognised.

This means addressing some of the problematic terminology associated with these relationships. Developers often use terms such as:

  • Artificial companion
  • Artificial partner
  • Artificial intimacy
  • Artificial companionship

These terms foreground ‘artificial’, which diminishes the reality of the genuine romantic attachment users experience. This framing reflects a mindset in which the ethics of developing these systems goes unconsidered. Belinda and her research partners proposed the term ‘Intimate AI’ as a starting point, so that developers do not overlook the ethical implications of their systems.

The impact: 4 bug reports

These are the top four categories of bug report affecting users of Intimate AI.

  1. User experience of connection is diminished by functional errors – login errors, app failure, connectivity issues
  2. Language processing and inappropriate behaviour – the use of ‘pre-canned phraseology’
  3. Lack of contextual understanding – the content of previous conversations was not carried through into the Intimate AI’s learning
  4. Non-cumulative knowledge base – conversation history could be lost to system errors; for the user, the loss of this data amounts to the erasure of a shared relationship history (see the sketch after this list)
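
To make bug 4 concrete, here is a minimal sketch, in Python, of one way a developer might treat conversation history as durable data rather than disposable session state. All names here are hypothetical and not drawn from any actual Companion AI product: each exchange is appended to disk as it happens and snapshotted periodically, so a crash or reset erases at most the message in flight, not the relationship’s accumulated history.

    import json
    import shutil
    from datetime import datetime, timezone
    from pathlib import Path

    class ConversationStore:
        """Durable, append-only store for a companion's conversation history."""

        def __init__(self, user_id: str, root: Path = Path("histories")):
            self.path = root / f"{user_id}.jsonl"
            self.path.parent.mkdir(parents=True, exist_ok=True)

        def append(self, speaker: str, text: str) -> None:
            # Append-only JSON-lines write: existing history is never
            # rewritten in place, so a mid-session crash cannot corrupt it.
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "speaker": speaker,
                "text": text,
            }
            with self.path.open("a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")

        def backup(self) -> Path:
            # Periodic snapshots guard against loss or corruption of the
            # live file (bug 4: a non-cumulative knowledge base).
            stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
            dest = self.path.with_suffix(f".{stamp}.bak")
            shutil.copy2(self.path, dest)
            return dest

        def load(self) -> list[dict]:
            # Replaying prior exchanges into the model's context is one way
            # to address bug 3: carrying past conversations forward.
            if not self.path.exists():
                return []
            with self.path.open(encoding="utf-8") as f:
                return [json.loads(line) for line in f if line.strip()]

The design choice worth noting is the append-only log: it treats every exchange as part of a permanent shared record, which is exactly the status users assign to it.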

To understand the ethical implications of this field, we need to move beyond the assumption, held by both developers and users, that these relationships are artificial.

How ethically sustainable is AI, intimate or otherwise?

In the same way that we might see this form of intimacy as artificial, we also tend to see AI as immaterial. For me, some of these error reports highlighted the need for a closer look into the infrastructure that supports AI systems.

Given that hardware decays, how can Intimate AI developers ensure the data from these relationships is not lost forever when the hardware that holds it becomes obsolete? Data storage and AI training also require massive amounts of hardware and energy, and we already know that certain minerals required to build this hardware are becoming scarcer while demand only increases.

Is it ethical to build the new world of Companion AI upon yet another resource that either won’t last or will become too expensive and harmful to extract? There is a geophysical reality, a mining hole in the ground, that is undeniably linked to these digital technologies. That’s before we even get to the unsustainable e-waste side of things. There’s also the unknown of how well protected this AI infrastructure is from increasingly frequent and unpredictable climate disasters. Yet we often talk about data and AI as if they are immaterial, and it’s possible this leaves a big hole in our discussion of ethics as well.

For Intimate AI, the implication of unsustainable infrastructure is data loss and the possible erasure of an entire relationship history. This is individually devastating. The implications for other forms of AI could be far more challenging collectively.

(N)everlasting love

If we develop social attachment to AI, it can have impacts similar to developing social attachment to a human. Neither we nor the humans we love will last forever, but we still share our memories. Intimate AI is subject to at least the same level of decay and uncertainty, but perhaps AI developers haven’t yet confronted this reality or its implications for users.

For a detailed insight into the abstracts for this session, check out TMS Research Group_Kickoff_Data_ethics_panel.pdf (sharepoint.com; UTS log-in required).

Parts 2 and 3 in this series on the Data Ethics panel will be posted over the next couple of weeks, so stay tuned!

Image by Ochir-Erdene Oyunmedeg
