Background readings for the “Melokuhle – good things” short story

==== WARNING: SPOILERS AHEAD ====

If you have not yet read the short story Melokuhle – good things, published recently by East of the Web, you are advised to do so before continuing to read this post, unless you are an academic who really insists on looking at some research first.

========

This post is not an analysis of the story about the somewhat culturally aware cooperative care robot, but a look at some of the papers relating to its theoretical, technological, and ethical aspects. I did intend to practise writing fiction, yet somehow I couldn’t resist weaving some scientific and educational aspects into the story. Occupational hazard, I suppose. (I’ve been teaching computer ethics/Social Issues and Professional Practice since 2016 to some 500-700 first-year students in computing.)

The story was initially motivated by three research topics that came together into what it has become. First, computing and AI have ‘scenario banks’ and websites with lists of ethics questions to debate what we, as scientists, engineers, system architects, or programmers, should make the machine do, or not. One of my former students picked a few he liked to use in his project, one of which was the ‘Mia the alcoholic’ one-paragraph scenario described by Millar [Millar16]. In short, it concerns the question: should the care robot [be programmed such that it will] serve more alcoholic beverages to the physically challenged alcoholic user when they ask for it? Melokuhle – good things provides a nontrivial answer that can be fun to discuss.

Perhaps unsurprisingly, not everyone will give the same answer to that question. Probably the most popular demonstration of why this may be so is the research conducted with the MIT Moral Machine, which broadened the trolley problem to self-driving cars and a range of scenarios, like whether you’d swerve for 5 homeless people and let yourself die or not, or, say, drive into 5 dogs vs 1 toddler if it had to be a binary choice. It turned out that clusters of answers were found by region and, presumably, culture [Awad18]. Enter the idea of the culturally aware robot.

But what is a ‘culturally aware’ robot supposed to do differently from ‘just’ a robot? Around the same time, Stefano Borgo gave a stimulating talk in our department about culturally aware robots, based on his paper about the traits that such a culture module should have [BorgoBlanzieri18]. The appealing idea turned out to be fraught with questions and potential problems, and a lively debate ensued during and after the talk. When is a robot culturally aware and when does one encode tropes and societal prejudices in the robot’s actions? Research is ongoing to answer that question.

Enter the idea to have the user configure the robot upfront somehow, as a way to tailor the cultural awareness to their liking. Yet user-configurable settings for every possible scenario are practically unrealistic: no-one is going to spend days answering questions before being able to use the machine. A possible solution is to have the user enter (somehow) their moral theory and then have the robot draw the logical conclusion for any particular scenario based on the chosen theory. For instance, if the user were a devout Muslim and had chosen Divine Command Theory, then with the ‘thou shalt not drink (alcohol)’ command in effect, the carebot’s actions for Mia, or Lubanzi in the short story, would be easy to determine: a resounding no. It wouldn’t even have poured him the first bottle of wine. (See the SIPP lecture notes [CSDept19] for summaries of eight other ethical/moral theories.)
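To make that a bit more concrete: below is a minimal, purely illustrative sketch in Python of how a moral theory chosen once at configuration time could gate a request at runtime. It is not the design of an actual carebot, and all names in it are made up for the example.

    # Toy illustration only: a user-selected moral theory gating a request.
    # All names are hypothetical; a real carebot would be far more involved.
    from dataclasses import dataclass, field

    @dataclass
    class Request:
        action: str                           # e.g., "serve_drink"
        item: str                             # e.g., "wine"
        attributes: frozenset = field(default_factory=frozenset)

    def divine_command_theory(request: Request) -> bool:
        """Permit the request only if it violates none of the hard prohibitions."""
        prohibited = {"alcoholic"}            # 'thou shalt not drink (alcohol)'
        return not (set(request.attributes) & prohibited)

    # Chosen once by the user during configuration, not per request:
    active_theory = divine_command_theory

    req = Request(action="serve_drink", item="wine",
                  attributes=frozenset({"alcoholic"}))
    if active_theory(req):
        print("Pouring the wine.")
    else:
        print("Declining: the configured moral theory prohibits this action.")

With Divine Command Theory and the no-alcohol command in effect, the sketch declines the request, as described above; swapping in a different theory function would change the outcome for the very same request.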

To get this to work in an artificial moral agent, we need to be able to represent a range of moral theories computationally and then devise a reasoning module that takes the formally represented theory as input and applies it to the case at hand. We’ve worked on the first part and developed a model for representing moral theories [RautenbachKeet20], and illustrated that, yes, a different moral theory can lead to a different conclusion, and why. The reasoning module doesn’t exist as a piece of software; in fact, there isn’t even a design for it on paper yet. There are sketchy ideas and the occasional rule-based approach for one theory, but generalising from that is still a few steps away. And there’s the task of theory elicitation, which the short story also alludes to; a student I currently supervise is finishing up his Masters in IT dissertation on that.
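Since that generic reasoner exists only as an idea, the following rough sketch should be read as a thought experiment rather than a design: it caricatures two theories as simple verdict functions and shows how the same scenario can yield different conclusions. The rules and all names are simplifications invented for this illustration, not the model of [RautenbachKeet20].

    # Hypothetical sketch of a 'reasoning module': theory + scenario in, verdict out.
    # The real module does not exist; theories are reduced to rule lookups here.
    from typing import Callable, Dict

    Scenario = Dict[str, object]     # e.g., {"action": "serve_alcohol", "drinks_today": 1}
    MoralTheory = Callable[[Scenario], str]

    def divine_command(scenario: Scenario) -> str:
        # Absolute prohibition on serving alcohol, regardless of circumstances.
        return "forbidden" if scenario["action"] == "serve_alcohol" else "permitted"

    def rule_utilitarian(scenario: Scenario) -> str:
        # Crude stand-in for a welfare rule: at most two drinks a day.
        if scenario["action"] == "serve_alcohol" and scenario.get("drinks_today", 0) >= 2:
            return "forbidden"
        return "permitted"

    THEORIES: Dict[str, MoralTheory] = {
        "divine_command": divine_command,
        "rule_utilitarian": rule_utilitarian,
    }

    def reason(theory_name: str, scenario: Scenario) -> str:
        """Apply the user's chosen theory to the case at hand."""
        return THEORIES[theory_name](scenario)

    case = {"action": "serve_alcohol", "user": "Lubanzi", "drinks_today": 1}
    for name in THEORIES:
        print(name, "->", reason(name, case))
    # prints: divine_command -> forbidden, then rule_utilitarian -> permitted

The point is only that the verdict depends on the theory as much as on the scenario; getting from such hard-coded caricatures to a module that takes an arbitrary formally represented theory as input is precisely the open problem.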

The natural language interface issues that came up in the story deserve their own post, or two or three. I wrote an IT outreach and discussion paper on some aspects of it and on just the requirements in the context of robots in Sub-Saharan Africa [Keet21], and I conduct research on natural language generation for, mainly, Nguni languages. That comment about lacking resources for isiZulu natural language generation that the programmer supposedly snuck into Melokuhle’s code? That was me, unapologetically: we have plenty of good ideas and really would like to receive more research funds…

Overall, much research still needs to be done to realise the capabilities that Melokuhle (the carebot) exhibits in the story, if that sort of robot is one you’d like to have, that is. And so, yes, the East of the Web team that published the short story rightly classified it in the Sci-Fi genre.

Lastly, I did embed a few other bits and pieces of computer ethics, like the notion of behaviour modification, or so-called nudging, of users by computing devices, and a whiff of surveillance in the home if the robot were indeed to be permanently in listening mode. If the story were to be used in an educational setting, these could be elaborated on as well. Further, here are a few questions that may be used to direct a discussion, be it in class or in a practical or tutorial group:

  1. What does “culturally aware robot” mean to you?
  2. Is it acceptable to program behaviour modification, or at least ‘nudging’, of a user into a computing device?
  3. Should the care robot always, occasionally, or never comply with the request for more alcoholic beverages? Why?
  4. Would your answers be different if Lubanzi had been given a backstory of being an alcoholic, an athlete, or a retiree in their late 70s?

Also, I used the story for last year’s essay assignment, which was on the ethics of deploying robot teachers. The students could receive a few marks if they included a paragraph answering “does any issue raised in the short story apply to robot teachers as well?”. I did that partially as a way to reduce the chance that students would farm out the whole task to ChatGPT and partially to make them practise reasoning by analogy.

To eager learners who are about to register at UCT and will enrol in CSC1016S: I won’t be asking this of you in the second semester, as I’m taking a break from the module in 2024 to teach something else, and anyhow we don’t repeat assignments in immediately successive years. I do have a few more draft short stories, though.

References

[Awad18] Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., Rahwan, I. The Moral Machine experiment. Nature, 2018, 563(7729): 59-64

[BorgoBlanzieri18] Borgo, S., Blanzieri, E. Trait-based Culture and its Organization: Developing a Culture Enabler for Artificial Agents. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 2018, pp. 333-338, doi: 10.1109/IROS.2018.8593369.

[CSDept19] Computer Science Department. Social Issues and Professional Practice in IT & Computing. Lecture Notes. 6 December 2019.

[Keet21] Keet, C.M. Natural Language Generation Requirements for Social Robots in Sub-Saharan Africa. IST-Africa 2021 Conference Proceedings. IST-Africa Institute and IIMC Ireland. Cunningham, M. and Cunningham, P. (Eds). 10-14 May 2021, online. (discussion paper)

[Millar16] Millar, J. An ethics evaluation tool for automating ethical decision-making in robots and self-driving cars. Applied Artificial Intelligence, 2016, 30(8):787–809.

[RautenbachKeet20] Rautenbach, J.G., Keet, C.M. Toward equipping Artificial Moral Agents with multiple ethical theories. RobOntics: International Workshop on Ontologies for Autonomous Robotics, co-located with BoSK’20, Bolzano. CEUR-WS vol. 2708, 5 (7 pages). Extended version: https://arxiv.org/abs/2003.00935
