The Stanford Encyclopedia of Philosophy occasionally gains new entries that have to do with computing, such as the one on the philosophy of computer science (about which I blogged before), the ethics of, among others, Internet research, and now Computing and Moral Responsibility by Merel Noorman. The remainder of this post is about the latter entry, which was added on July 18, 2012. Overall, the entry is fine, but I had expected more from it, which may well be because the ‘computing and moral responsibility’ topic needs some more work to mature, and then maybe it will give the answers I was hoping to find already.
Computing—be this the hardware, firmware, software, or IT themes—interferes with the general notion of moral responsibility and hence affects every ICT user at least to some extent; the computer scientists, programmers, etc., who develop the artifacts may themselves be morally responsible, and perhaps even the produced artifacts, too. This area of philosophical inquiry deals with questions such as “Who is accountable when electronic records are lost or when they contain errors? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomous can or should humans still be held responsible for the behavior of these technologies?”. To this end, the entry has three main sections, covering moral responsibility, the question whether computers can be moral agents, and the notion of (and the need for) rethinking the concept of moral responsibility.
First, it reiterates the general stuff about moral responsibility without the computing dimension, like that it has to do with the actions of humans and their consequences: “generally speaking, a person or group of people is morally responsible when their voluntary actions have morally significant outcomes that would make it appropriate to praise or blame them”, where the SEP entry dwells primarily on the blaming. Philosophers roughly agree that the following three conditions have to be met regarding being morally responsible (copied from the entry):
1. There should be a causal connection between the person and the outcome of actions. A person is usually only held responsible if she had some control over the outcome of events.
2. The subject has to have knowledge of and be able to consider the possible consequences of her actions. We tend to excuse someone from blame if they could not have known that their actions would lead to a harmful event.
3. The subject has to be able to freely choose to act in certain way. That is, it does not make sense to hold someone responsible for a harmful event if her actions were completely determined by outside forces.
But how are these to be applied? A few case examples of the difficulty of applying them in practice are given; e.g., the malfunctioning Therac-25 radiation therapy machine (three people died from overdoses of radiation, primarily due to issues regarding the software), the Aegis software system that in 1988 misidentified an Iranian civilian aircraft as an attacking military aircraft, whereupon the US military decided to shoot it down (contrary to two other systems that had identified it correctly), killing all 290 people on board, the software to manage remote-controlled drones, and perhaps even the ‘filter bubble’. Who is to blame, if at all? These examples, and others I can easily think of, are vastly different scenarios, but they have not been identified, categorized, and treated as such. If we do, then perhaps at least some general patterns can emerge, and even rules regarding moral responsibility in the context of computing. Here’s my initial list of different kinds of cases:
- The hardware/software was intended for purpose X but is used for purpose Y, with X not being inherently harmful, whereas Y is; e.g., the technology of an internet filter for preventing kids from accessing adult-material sites is used to make a blacklist of sites that do not support government policy, and subsequently the users vote for harmful policies, or, a simpler one: using mobile phones to detonate bombs.
- The hardware/software is designed with malicious intent; ranging from so-called cyber warfare (e.g., certain computer viruses, denial-of-service attacks) to computing for physical warfare to developing and using shadow-accounting software for tax evasion.
- The hardware/software has errors (‘bugs’):
- The specification was wrong with respect to the intentionally understated or mis-formulated intentions, and the error is simply a knock-on effect;
- The specification was correct, but a part of the subject domain is intentionally wrongly represented (e.g., the decision tree may be correctly implemented given the wrong representation of the subject domain);
- The specification was correct, the subject domain represented correctly, but there’s a conceptual error in the algorithm (e.g., the decision tree was built wrongly);
- The program code is scruffy and doesn’t do what the algorithm says it is supposed to do;
- The software is correct, but has the rules implemented as alethic or hard constraints versus deontic or soft constraints (not being allowed to manually override a default rule), effectively replacing human-bureaucrats with software-bureaucrats;
- Bad interface design that makes the software difficult to use, resulting in wrong use and/or overlooking essential features;
- No or insufficient training of the users in how to use the hardware/software;
- Insufficient maintenance of the IT system that causes the system to malfunction;
- Overconfidence in the reliability of the hardware/software:
- Assuming the correctness of the software, i.e., pretending that it always gives the right answer when it may not; e.g., assuming that the pattern-matching algorithm for fingerprint matching is 100% reliable when it is actually only, say, 85% reliable;
- Assuming (extreme) high availability, when no extreme high availability system is in place; e.g., relying solely on electronic health records in a remote area whereas the system may be down right when it is crucial to access it in the hospital information system.
- Overconfidence in the information provided by or through the software; this is partially akin to 8-i, or the first example in item 1, and, e.g., willfully believing that everything published on the Internet is true despite the so-called ‘information warfare’ regarding the spreading of disinformation.
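The alethic-versus-deontic distinction in the list above can be made concrete with a small sketch. This is a hypothetical benefits-claim rule of my own invention, not anything from the SEP entry: the hard variant rejects outright with no recourse, while the soft variant applies the same default rule but lets a human case-worker record a documented override.

```python
class HardConstraintError(Exception):
    """Raised when an alethic (hard) constraint blocks an action outright."""

def approve_claim_hard(age: int) -> bool:
    # Alethic/hard: the rule is absolute; no human override is possible.
    # This is the 'software-bureaucrat' that cannot handle exceptions.
    if age < 65:
        raise HardConstraintError("claimant under 65: rejected, no appeal")
    return True

def approve_claim_soft(age: int, override: bool = False, reason: str = "") -> bool:
    # Deontic/soft: the same default rule applies, but a case-worker may
    # override it for a documented exceptional case.
    if age >= 65:
        return True
    if override and reason:
        print(f"default rule overridden: {reason}")
        return True
    return False
```

The design choice between the two versions is exactly where part of the moral responsibility gets baked in: the hard variant forecloses the exceptional cases before any user ever sees them.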
Where the moral responsibility lies can be vastly different depending on the case, and even within a case it may require further analysis. For instance (and my opinions follow, not what is written in the SEP entry), regarding maintenance: a database for the electronic health records outgrows its projected size, or the new version of the RDBMS requires more hardware resources than the server has, with the consequence that querying the database becomes too slow in a critical case (say, checking whether patient A is allergic to medicine B that needs to be administered immediately). Perhaps the system designer should have foreseen this, or perhaps management didn’t sign off on the purchase of a new server, but I think the answer to the question of where the moral responsibility lies can be found. For mission-critical software, formal methods can be used, and if, as an engineer, you didn’t use them and something goes wrong, then you are to blame. One cannot be held responsible for a misunderstanding, but when the domain expert says X about the subject domain and you have some political conviction that makes you prefer Y and build that into the software, and that then results in something harmful, you can be held morally responsible (item 3-ii). On the human vs. software bureaucrat (item 4), the blame can be narrowed down when things go wrong: was it the engineer who didn’t bother with the possibility of exceptions, was there no technological solution for it at the time of development (and was that knowingly ignored), or was it the client who was happy to avoid such pesky individual exceptions to the rule? Or, another example, as the SEP entry asks (an instance of item 1): can one hold the mobile phone companies responsible for having designed cell phones that can also be used to detonate bombs? In my opinion: no. Just in case you want to look for guidance, or even answers, in the SEP entry regarding these kinds of questions and/or cases: don’t bother, there are none.
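On the formal-methods point: even far short of full verification, making the engineer’s assumptions explicit helps locate responsibility when they are violated. A minimal, hypothetical sketch of the allergy-check scenario (the names and toy data are mine, purely for illustration):

```python
# Hypothetical sketch: the allergy check from the health-records example,
# with the engineer's assumptions made explicit as checked preconditions.
# ALLERGY_DB is an invented toy stand-in for the real patient records.

ALLERGY_DB = {"patient_A": {"penicillin"}}

def is_allergic(patient_id: str, medicine: str) -> bool:
    # Precondition: the patient must have a record. Failing loudly here is
    # safer than silently answering "no allergy" for an unknown patient.
    if patient_id not in ALLERGY_DB:
        raise KeyError(f"no record for {patient_id!r}; cannot assume 'no allergy'")
    return medicine.lower() in ALLERGY_DB[patient_id]
```

The point is not the three lines of code but the documented contract: if the system answers “no allergy” for a patient it has never seen, that was a design decision someone made, and someone can be held to account for it.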
More generally, the SEP entry mentions two problems for attributing blame and responsibility: the so-called problem of ‘many hands’ and the problem with physical and temporal distance. The former concerns the issue that there are many people developing the software, training the users, etc., and it is difficult to identify the individual, or even the group of individuals, who ultimately did the thing that caused the harmful effect. It is true that this is a problem, and especially when the computing hardware or software is complex and developed by hundreds or even thousands of people. The latter concerns the problem that the distance can blur the causal connection between action and event, which “can reduce the sense of responsibility”. But, in my opinion, just because someone doesn’t reflect much on her actions and may be willfully narrow-minded to (not) accept that, yes, indeed, those people celebrating a wedding in a tent in far-away Afghanistan are (well, were) humans, too, does not absolve one from the responsibility—neither the hardware developer, nor the software developer, nor the one who pushed the button—as distance does not reduce responsibility. One could argue it is only the one who pushed the button who made the judgment error, but the drone/fighter jet/etc. computer hardware and software are made for harmful purposes in the first place. Its purpose is to do harm to other entities—be this bombing humans or, say, a water purification plant such that the residents have no clean water—and all developers involved very well know this; hence, one is morally responsible from day one that one is involved in its development and/or use.
I’ll skip the entry’s section on computers as agents (AI software, robots), and whether they can be held morally responsible, just responsible, or merely accountable, or none of them, except for the final remark of that section, credited to Bruno Latour (emphasis mine):
[Latour] suggests that in all forms of human action there are three forms of agency at work: 1) the agency of the human performing the action; 2) the agency of the designer who helped shape the mediating role of the artifacts and 3) the artifact mediating human action. The agency of artifacts is inextricably linked to the agency of its designers and users, but it cannot be reduced to either of them. For him, then, a subject that acts or makes moral decisions is a composite of human and technological components. Moral agency is not merely located in a human being, but in a complex blend of humans and technologies.
Given the issues with assigning moral responsibility with respect to computing, some philosophers ponder doing away with it and replacing it with a better framework. This is the topic of the third section of the SEP entry, which relies substantially on Gotterbarn’s work. He notes that computing is not an ethically neutral practice, and that the “design and use of technological artifacts is a moral activity” (because the choice of one design and implementation over another does have consequences). More interesting is that, according to the SEP entry, he introduces the notions of negative responsibility and positive responsibility. The former “places the focus on that which exempts one from blame and liability”, whereas the latter “focuses on what ought to be done” and entails that one should “strive to minimize foreseeable undesirable events”. Computing professionals, according to Gotterbarn, should adopt the notion of positive responsibility. Later in the section, there’s a clue that there’s some way to go before achieving that: accepting accountability is more rudimentary than taking moral responsibility, or at least a first step toward it. Nissenbaum (paraphrased in the SEP entry) identified four barriers to accountability in society (at least back in 1997, when she wrote about it): the above-mentioned problem of many hands, the acceptance of ‘bugs’ as an inherent element of large software applications, using the computer as a scapegoat, and claiming ownership without accepting liability (read any software license if you doubt the latter). Perhaps these need to be addressed before moving on to moral responsibility, or does one reinforce the other? Dijkstra vents his irritation in one of his writings about software ‘bugs’—the cute euphemism dating back to the ‘50s—and instead proposes to use the correct term: they are errors.
Perhaps users should be less lenient with errors, which might compel developers to deliver a better, or even error-free, product, and/or we have to instill in students more of that positive responsibility and reduce their tolerance for errors. And/or what about rewriting the license agreements a bit, like accepting responsibility provided the software is used in one of the prescribed and tested ways? We already had that when I was working for Eurologic more than 10 years ago: the storage enclosure was supposed to work in certain ways, was tested in a variety of configurations, and that is what we signed off on for our customers. If it was faulty in one of the tested system configurations after all, then that was our problem, and we’d incur the associated costs to fix it. To some extent, the same held with our suppliers. Admittedly, for software that is somewhat harder, but one could include in the license something along the lines of ‘X works on a clean machine and when common other packages w, y, and z are installed, but we can’t guarantee it when you’ve downloaded weird stuff from the Internet’; not perfect, but it would be a step in the right direction. Does anyone have better ideas?
Last, the closing sentence is a useful observation, effectively stretching the standard notion of moral responsibility thanks to computing (emphasis added): “[it] is, thus, not only about how the actions of a person or a group of people affect others in a morally significant way; it is also about how their actions are shaped by technology.” But, as said, the details are yet to be thought through and worked out into general guidelines that can be applied.
Merel Noorman. (2012). Computing and Moral Responsibility. Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Zalta, E.N. (ed.). Stable URL: http://plato.stanford.edu/archives/fall2012/entries/computing-responsibility/.