Moral responsibility in the Computing era (SEP entry)

The Stanford Encyclopedia of Philosophy now and then adds new entries that have to do with computing, such as the one on the philosophy of computer science (about which I blogged before), the ethics of, among others, Internet research, and now Computing and Moral Responsibility by Merel Noorman [1]. The remainder of this post is about the latter entry, which was added on July 18, 2012. Overall, the entry is fine, but I had expected more from it, which may well be because the ‘computing and moral responsibility’ topic still needs some work to mature; perhaps it will then give the answers I had hoped to find already.

Computing—be this the hardware, firmware, software, or IT themes—interferes with the general notion of moral responsibility and hence affects every ICT user at least to some extent; the computer scientists, programmers, etc. who develop the artifacts may themselves be morally responsible, and perhaps even the produced artifacts, too. This area of philosophical inquiry deals with questions such as “Who is accountable when electronic records are lost or when they contain errors? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomous can or should humans still be held responsible for the behavior of these technologies?”. To this end, the entry has three main sections, covering moral responsibility, the question whether computers can be moral agents, and the notion of (and the need for) rethinking the concept of moral responsibility.

First, it reiterates the general stuff about moral responsibility without the computing dimension, such as that it has to do with the actions of humans and their consequences: “generally speaking, a person or group of people is morally responsible when their voluntary actions have morally significant outcomes that would make it appropriate to praise or blame them”, where the SEP entry dwells primarily on the blaming. Philosophers roughly agree that the following three conditions have to be met for someone to be morally responsible (copied from the entry):

1. There should be a causal connection between the person and the outcome of actions. A person is usually only held responsible if she had some control over the outcome of events.

2. The subject has to have knowledge of and be able to consider the possible consequences of her actions. We tend to excuse someone from blame if they could not have known that their actions would lead to a harmful event.

3. The subject has to be able to freely choose to act in a certain way. That is, it does not make sense to hold someone responsible for a harmful event if her actions were completely determined by outside forces.

But how are these to be applied? A few examples of the difficulty of applying them in practice are given; e.g., the malfunctioning Therac-25 radiation machine (three people died from radiation overdoses, primarily due to problems with the software), the Aegis software system that in 1988 misidentified an Iranian civilian aircraft as an attacking military aircraft, upon which the US military decided to shoot it down (contrary to two other systems that had identified it correctly), killing all 290 people on board, the software to manage remote-controlled drones, and perhaps even the ‘filter bubble’. Who is to blame, if at all? These examples, and others I can easily think of, are vastly different scenarios, but they have not been identified, categorized, and treated as such. If we do, then perhaps at least some general patterns can emerge, and even rules regarding moral responsibility in the context of computing. Here’s my initial list of different kinds of cases:

  1. The hardware/software was intended for purpose X but is used for purpose Y, with X not being inherently harmful, whereas Y is; e.g., the technology of an internet filter for preventing kids from accessing adult-material sites is used to blacklist sites that do not support government policy, and subsequently the users vote for harmful policies; or, as a simpler one: using mobile phones to detonate bombs.
  2. The hardware/software is designed for malicious intents; ranging from so-called cyber warfare (e.g., certain computer viruses, denial-of-service attacks) to computing for physical war to developing and using shadow-accounting software for tax evasion.
  3. The hardware/software has errors (‘bugs’):
    1. The specification was wrong with respect to the intentionally understated or mis-formulated intentions, and the error is simply a knock-on effect;
    2. The specification was correct, but a part of the subject domain is intentionally wrongly represented (e.g., the decision tree may be correctly implemented given the wrong representation of the subject domain);
    3. The specification was correct, the subject domain represented correctly, but there’s a conceptual error in the algorithm (e.g., the decision tree was built wrongly);
    4. The program code is scruffy and doesn’t do what the algorithm says it is supposed to do;
  4. The software is correct, but the rules are implemented as alethic or hard constraints rather than as deontic or soft constraints (i.e., a default rule cannot be manually overridden), effectively replacing human-bureaucrats with software-bureaucrats (see the sketch after this list);
  5. Bad interface design that makes the software difficult to use, resulting in wrong use and/or overlooking essential features;
  6. No or insufficient training of the users in how to use the hardware/software;
  7. Insufficient maintenance of the IT system that causes the system to malfunction;
  8. Overconfidence in the reliability of the hardware/software;
    1. Overconfidence in the correctness of the software: pretending that it always gives the right answer when it may not; e.g., assuming that the pattern-matching algorithm for fingerprint matching is 100% reliable when it is actually only, say, 85% reliable;
    2. Assuming (extremely) high availability when no high-availability system is in place; e.g., relying solely on electronic health records in a remote area, whereas the hospital information system may be down right when it is crucial to access them.
  9. Overconfidence in the information provided by or through the software; this is partially akin to 8-i and to the first example in item 1; e.g., willfully believing that everything published on the Internet is true despite the so-called ‘information warfare’ of spreading disinformation.
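To make item 4 a bit more concrete, here is a minimal sketch (in Python, with made-up function and variable names; the SEP entry contains no code) of the difference between a rule hard-coded as an alethic constraint and the same rule as a deontic default that a named person may override, with the override logged so that responsibility remains traceable to a human decision:

```python
# A minimal sketch, not from the SEP entry: names and numbers are illustrative.
# It contrasts a hard (alethic) constraint, which no user can override, with a
# deontic (soft) constraint, where an identified person may override the
# default and the override is recorded.

from typing import List, Optional


class HardRuleViolation(Exception):
    """Raised when the hard-coded rule blocks the request outright."""


def approve_claim_hard(amount: float, limit: float = 1000.0) -> bool:
    # Alethic/hard constraint: the software simply refuses; the clerk at the
    # counter cannot intervene, however reasonable the exception may be.
    if amount > limit:
        raise HardRuleViolation(f"claim of {amount} exceeds the limit of {limit}")
    return True


def approve_claim_soft(amount: float, limit: float = 1000.0,
                       override_by: Optional[str] = None,
                       audit_log: Optional[List[str]] = None) -> bool:
    # Deontic/soft constraint: the same default applies, but a named person
    # may override it, and the override is logged.
    if amount <= limit:
        return True
    if override_by is not None:
        if audit_log is not None:
            audit_log.append(f"{override_by} overrode the {limit} limit for a claim of {amount}")
        return True
    return False


if __name__ == "__main__":
    log: List[str] = []
    print(approve_claim_soft(1500.0, override_by="clerk-042", audit_log=log))  # True
    print(log)  # records who took the decision to deviate from the default
```

In the hard-constraint version the only way out is to change the software, which is precisely what turns the programmer into the bureaucrat of item 4.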

Where the moral responsibility lies can be vastly different depending on the case, and even within a case it may require further analysis. For instance (and my opinions follow, not what is written in the SEP entry), regarding maintenance: a database for the electronic health records outgrows its projected size, or the new version of the RDBMS requires more hardware resources than the server has, with the consequence that querying the database becomes too slow in a critical case (say, checking whether patient A is allergic to medicine B that needs to be administered immediately). Perhaps the system designer should have foreseen this, or perhaps management didn’t sign off on the purchase of a new server, but I think that the answer to the question of where the moral responsibility lies can be found. For mission-critical software, formal methods can be used, and if, as an engineer, you didn’t use them and something goes wrong, then you are to blame. One cannot be held responsible for a misunderstanding, but when the domain expert says X about the subject domain and you, out of some political conviction, prefer Y and build that into the software, and that then results in something harmful, then you can be held morally responsible (item 3-ii). On the human vs. software bureaucrat (item 4), the blame can be narrowed down when things go wrong: was it the engineer who didn’t bother with the possibility of exceptions, was there no technological solution for it at the time of development (and was that knowingly ignored), or was it the client who was happy to willfully avoid such pesky individual exceptions to the rule? Or, another example, as the SEP entry asks (an instance of item 1): can one hold the mobile phone companies responsible for having designed cell phones that can also be used to detonate bombs? In my opinion: no. Just in case you want to look for guidance, or even answers, in the SEP entry regarding such kinds of questions and/or cases: don’t bother, there are none.

More generally, the SEP entry mentions two problems for attributing blame and responsibility: the so-called problem of ‘many hands’ and the problem of physical and temporal distance. The former concerns the issue that many people are involved in developing the software, training the users, etc., and it is difficult to identify the individual, or even the group of individuals, who ultimately did the thing that caused the harmful effect. It is true that this is a problem, especially when the computing hardware or software is complex and developed by hundreds or even thousands of people. The latter concerns the problem that distance can blur the causal connection between action and event, which “can reduce the sense of responsibility”. But, in my opinion, just because someone does not reflect much on her actions and may be willfully narrow-minded enough not to accept that, yes, indeed, those people celebrating a wedding in a tent in far-away Afghanistan are (well, were) humans too, does not absolve her from the responsibility—neither the hardware developer, nor the software developer, nor the one who pushed the button—as distance does not reduce responsibility. One could argue it is only the one who pushed the button who made the judgment error, but the drone/fighter jet/etc. computer hardware and software are made for harmful purposes in the first place. Their purpose is to do harm to other entities—be this bombing humans or, say, a water purification plant so that the residents have no clean water—and all developers involved know this very well; hence, one is morally responsible from day one that one is involved in its development and/or use.

I’ll skip the entry’s section on computers as agents (AI software, robots), and whether they can be held morally responsible, just responsible, or merely accountable, or none of them, except for the final remark of that section, credited to Bruno Latour (emphasis mine):

[Latour] suggests that in all forms of human action there are three forms of agency at work: 1) the agency of the human performing the action; 2) the agency of the designer who helped shape the mediating role of the artifacts and 3) the artifact mediating human action. The agency of artifacts is inextricably linked to the agency of its designers and users, but it cannot be reduced to either of them. For him, then, a subject that acts or makes moral decisions is a composite of human and technological components. Moral agency is not merely located in a human being, but in a complex blend of humans and technologies.

Given the issues with assigning moral responsibility with respect to computing, some philosophers ponder doing away with it and replacing it with a better framework. This is the topic of the third section of the SEP entry, which relies substantially on Gotterbarn’s work. He notes that computing is ethically not a neutral practice, and that the “design and use of technological artifacts is a moral activity” (because the choice of one design and implementation over another does have consequences). Moreover, and more interestingly, according to the SEP entry he introduces the notions of negative responsibility and positive responsibility. The former “places the focus on that which exempts one from blame and liability”, whereas the latter “focuses on what ought to be done” and entails that one should “strive to minimize foreseeable undesirable events”. Computing professionals, according to Gotterbarn, should adopt the notion of positive responsibility. Later on in the section, there’s a clue that there’s some way to go before achieving that. Accepting accountability is more rudimentary than taking moral responsibility, or at least a first step toward it. Nissenbaum (paraphrased in the SEP entry) identified four barriers to accountability in society (at least back in 1997 when she wrote it): the above-mentioned problem of many hands, the acceptance of ‘bugs’ as an inherent element of large software applications, using the computer as a scapegoat, and claiming ownership without accepting liability (read any software license if you doubt the latter). Perhaps those need to be addressed before moving on to moral responsibility, or does one reinforce the other? Dijkstra vents his irritation in one of his writings about software ‘bugs’—the cute euphemism dating back to the ‘50s—and instead proposes to use one of its correct terms: they are errors. Perhaps users should not be lenient with errors, which might compel developers to deliver a better, error-free product, and/or we have to instill more of the positive responsibility in our students and reduce their tolerance for errors. And/or what about rewriting the license agreements a bit, like accepting responsibility provided the product is used in one of the prescribed and tested ways? We already had that when I was working for Eurologic more than 10 years ago: the storage enclosure was supposed to work in certain ways and was tested in a variety of configurations, and that is what we signed off on for our customers. If it was faulty in one of the tested system configurations after all, then that was our problem, and we would incur the associated costs to fix it. To some extent, the same held for our suppliers. Indeed, for software that is slightly harder, but one could include in the license something along the lines of ‘X works on a clean machine and when common other packages w, y, and z are installed, but we can’t guarantee it when you’ve downloaded weird stuff from the Internet’; not perfect, but it is a step in the right direction. Does anyone have better ideas?

Last, the closing sentence is a useful observation, effectively stretching the standard notion of moral responsibility thanks to computing (emphasis added): “[it] is, thus, not only about how the actions of a person or a group of people affect others in a morally significant way; it is also about how their actions are shaped by technology.” But, as said, the details are yet to be thought through and worked out into general guidelines that can be applied.

References

[1] Merel Noorman. (forthcoming in 2012). Computing and Moral Responsibility. Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Zalta, E.N. (ed.).  Stable URL: http://plato.stanford.edu/archives/fall2012/entries/computing-responsibility/.


UNESCO’s take on engineering and development

UNESCO’s report Engineering: Issues, Challenges, and Opportunities for Development, which was published recently, does not have any particular section about or message for computing, but that did not deter me from flicking through the roughly 400 pages and reading a few sections. In short (according to the exec summary), the report “is an international response to the pressing need for the engineering community to engage with both these wider audiences and the private sector in promoting such an agenda for engineering – and for the world.”, given that “engineering, innovation and technology are part of the solution to global issues”.

Aside from the need for better statistics and more precisely identifying who is an ‘engineer’, there are sections on the national and international engineering bodies, engineering ethics with, among others, the World Federation of Engineering Organizations’ model code of ethics, engineering and the Millennium Development Goals (MDG), and several country-specific assessments.

Some SA statistics

It was through the latter topic that I stumbled upon the answer to questions and criticisms raised during the Annual NACI symposium on the leadership roles of women in science, technology and innovation that I reported on last summer. Several participants of the symposium wanted to see a breakdown of the number of publications by age group, as the suspicion was that it is old white men who produce most papers. Page 182 of the UNESCO report has the details, provided by Johann Mouton and Nelius Boshoff from Stellenbosch University. In 1990-1992, the share of engineering papers produced by researchers under 30 years of age was 10%, but that gradually went down to a mere 5% in 2002-2004, whereas for the 50-and-over age group it went up over the same years from 26% to 39%, all the while the percentage of engineering articles went from 5% to 7% over the same period. It has to be noted that the percentages for female authors went up from 6% to 11% and for African authors from 3% to 10%. There is also a table with a race-by-gender distribution of graduates at all third-level education degrees, comparing 1996 and 2006. Many things can be read into the numbers (see table, below), and I will not burn myself on my relatively uninformed interpretation of these data, except to note that, given the doubling of doctorates, it seems odd to me that the relative output of young researchers is lower. If anyone has an informed explanation, feel free to leave a comment.

Engineering and the MDG

Section 6 of the report looks into engineering and the MDGs. The role of engineering in meeting the MDGs ranges from building infrastructure for clean water and sanitation to roads (p. 253 has a large table with the relationship between physical infrastructure and the MDGs, in case you have any doubts that they are related). Instead of the earlier mega-projects that leave loose ends when it comes to ongoing operation and maintenance, one has to move to a needs-based approach using a so-called unified-design approach, argue Jo da Silva and Susan Thomas from Arup (pp. 250-252): “Taken seriously, a unified approach requires us to address issues in depth, in breadth, at their intersections, and over time. Behavioural psychologists, sociologists, physicists, anthropologists, economists, and public health officials all need to be engaged in a broader definition of the design and engineering.”. The remainder of Section 6 considers the main MDGs and several case studies, touching upon, among other things, the greening of engineering, education and capacity-building, and the issues and challenges of the WEHAB agenda (Water and sanitation, Energy, Health, Agriculture productivity, and Biodiversity and ecosystem management) that are summarized on p. 262.

Ron Watermeyer from Soderlund and Schutte illustrates differences in priorities between “the ‘North’ (developed nations) and ‘South’ (developing nations)”, which have “‘green’” and “‘brown’” agendas, respectively (see figure). The former “focuses on the reduction of the environmental impact of urban-based production, consumption and water generation on natural resources and ecosystems, and ultimately on the world’s life support system. As such it addresses the issue of affluence and over consumption”, whereas the latter “focuses on poverty and under development. As such, it addresses the need to reduce the environmental threats to health that arise from poor sanitary conditions, crowding, inadequate water provision, hazardous air and water pollution, and the accumulation of solid waste. It is generally more pertinent in poor, under-serviced cities or regions.” (pp. 265-266). Now, link that to da Silva and Thomas’ needs-based approach mentioned above.

Further points are made in Section 6.1 about the prevention and mitigation of risks, disasters, and emergencies, where engineering can help out in ways that certainly cost less than doing nothing.

Gaps

Perhaps I am a bit biased by my education, but I find it a pity that there is only a single one-page paragraph dedicated to agricultural engineering; a sustainable production chain that produces healthy food accessible to the people is of vital importance and can prevent many other problems.

Anyway, throughout the text, agriculture is still referred to a lot more often than the “computer and systems engineering” and “software engineering” that are mentioned in Section 1.1 as types of engineering but somehow did not make it into the assessment of opportunities for development. That short-changes ICT a bit, I think, as there are many issues and opportunities for using ICT for development and for contributing to meeting the MDGs, which I wrote about in earlier posts, such as on ISF’s projects outlined during the Aperitivo Informatico at FUB, micro-credit with Kiva, ICT & peace & gender & Africa, ICT for development and sovereignty, and mobile electronic health records in Kenya; there are also many other online information sources and books on the topic, and the ICT and Development Conference (ICTD 2010) starts today in London.

Aperitivo Informatico at FUB: new ways of inclusion and participation

One of the 28 events during the 5-day long “UniDays” (it, de) at the Free University of Bozen-Bolzano (FUB) was the “Aperitivo Informatico” (held this morning from 11am to about 2pm), which had as its theme informatics & democracy, with new forms of inclusion and direct participation, closing the digital divide, and online social networks.

The invited guests were: Gabriella Dodero, rector’s delegate for the “diversamente abili” (differently abled); Rosella Gennari, FUB PI of the EU FP7 project TERENCE (Technology Enhanced Learning area) and of the national project LODE (a LOgic-based e-learning tool for DEaf children); Luca Nicotra, Secretary of Agorà Digitale; Paolo Campostrini, a journalist of the Alto Adige newspaper; and Paolo Mazzucato, a journalist for Radio Rai. My role as invited guest was to represent Informatici Senza Frontiere (ISF, Informaticians without borders, an Italian NGO).

The first topics that came up concerned what FUB does for the differently abled, noting that there is (and has been) support for blind and deaf people, both in the FUB facilities and by providing suitable software etc., and that Gabriella Dodero is also looking into support for people with dyslexia (even though in Italy it is not categorized as a disability). Rosella Gennari zoomed in on deaf children and the development of suitable computer-supported learning environments for young poor comprehenders. Luca Nicotra introduced Agorà Digitale, a political/lobby organization concerned with democracy, privacy, net neutrality, and the dissemination of information, which is an essential component of a well-functioning democracy.

I introduced various projects of ISF, which does not focus so much on so-called trash-ware (shipping [dumping?] old hardware to less computerized locations), but rather, among other things, puts effort into developing suitable software for the locale, such as the (open source) openHospital, deployed in countries such as Kenya, Uganda, and Benin for the day-to-day management of hospital data; installing financial software for managing microcredit in Madagascar; reconnecting Congo to the internet (hospitals and the University Masi Manimba in particular); openStaff in Chad to, among others, provide assistance to refugees; developing controlInfantil in Ecuador; as well as projects in Italy to narrow the digital divide, such as connecting hospitalized children with a long-term illness to their family, friends, and school in a hospital in Brescia, and setting up a computer room and providing basic IT courses at the casa dell’ospitalità in Mestre. (Note: some information about these and other projects is also available in English.)

Other topics that came up were what the future might bring us regarding the Internet & democratization, and whether the Internet merits being awarded the Nobel Peace Prize (see also Internet for peace). Responses to the second topic were diverse: Dodero would continue her work regardless of whether it were awarded a prize or not, Gennari jokingly mentioned that after Obama, then, well, why not, whereas Nicotra was not at all positive about the idea, because the Internet can be used for the worse as well and may become monopolized like TV and radio before it. As with most, if not all, technologies, they can be used both for the benefit and to the detriment of society and humankind, and this holds for the Internet just as much, in all three of its principal components: the hardware (and limitations on connecting due to, e.g., blockades), the software for accessibility by diverse groups of people, and the generation & dissemination of (dis)information. That is, Internet technologies themselves are not intrinsically good and just (the first computer networks were funded by DARPA, a research agency of the US Department of Defense). And perhaps it is not too far-fetched to stretch the information component to the ‘Web of Knowledge’ with its current incarnation as the Semantic Web—thus far, it has been used mostly to indeed share, link, and integrate data, information, and knowledge more efficiently and effectively; let us keep focussing on the positive, constructive side of the usage of Semantic Web technologies.

Peer-to-peer micro-credit over the Internet with Kiva

Creative software applications for development are on the rise. I mentioned a few such tools in earlier posts (here, here, and here), ranging from electronic health records to Web 2.0 social networking in post-conflict situations, and even Google.org is extending its toolset (see also the recent Google.org blog post about it). The earlier-mentioned tools operate at the destination and might seem ‘far’ away. Recently I stumbled upon the Kiva site through ISF, which brings one such application to your fingertips and lets you connect with those people ‘far’ away.

Kiva is based on the micro-credit financing approach pioneered by the Grameen Bank, for which Grameen and its founder, Muhammad Yunus, received the Nobel Peace Prize in 2006. With Kiva, it is not a single bank in Bangladesh that lends to the local entrepreneurs, but an amalgamation of individuals, groups, and companies from across the world that—through the Kiva website—lend $25 at a time to people’s entrepreneurial projects presented on the website; the entrepreneurs then receive the actual money through ‘field representatives’, i.e., local micro-credit finance organizations (MFIs) in the countries that Kiva collaborates with.

That is: you choose how and where your money gets used, and, in almost all cases, have your investment paid back in full—and all that with a few mouse-clicks (see the graphical explanation of the business process). In addition, you don’t put all your eggs into one basket, but pool together with other lenders to reach the requested funds; put differently: the risk of not seeing the full amount of your investment back is spread out over the whole group of lenders to the chosen project. Further, the lender lends without interest (and I sure hope the local MFIs charge no or only low interest), and the financial ‘return on investment’ can also vary due to fluctuations in currency exchange rates. The human and social ‘return on investment’ surely is positive.
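As a rough sketch of the pooling just described (this is not Kiva’s actual code or API; the names and amounts are made up for illustration), each lender pledges a small amount toward the requested sum, and every repayment instalment is then distributed over the lenders in proportion to their contributions:

```python
# A rough sketch of pooled micro-lending, not Kiva's implementation: lenders
# pledge small amounts toward a requested loan, and each repayment instalment
# is split over the lenders in proportion to what they contributed.

from typing import Dict


def fund_loan(requested: float, pledges: Dict[str, float]) -> float:
    """Return the amount still missing (0.0 means the loan is fully funded)."""
    raised = sum(pledges.values())
    return max(requested - raised, 0.0)


def distribute_repayment(pledges: Dict[str, float], instalment: float) -> Dict[str, float]:
    """Split a repayment instalment over the lenders, proportional to their pledge."""
    total = sum(pledges.values())
    return {lender: instalment * amount / total for lender, amount in pledges.items()}


if __name__ == "__main__":
    pledges = {"lender_a": 25.0, "lender_b": 50.0, "lender_c": 25.0}
    print(fund_loan(100.0, pledges))            # 0.0: the $100 request is covered
    print(distribute_repayment(pledges, 20.0))  # {'lender_a': 5.0, 'lender_b': 10.0, 'lender_c': 5.0}
```

The proportional split is also where the risk-spreading comes in: if the entrepreneur repays only part of the loan, each lender loses only part of her (small) pledge rather than one lender losing everything.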

Thus far (today), some of the Kiva statistics are: the total value of all loans made is $125,340,035, the current repayment rate is 98.48%, the number of entrepreneurs that have received a loan is 316,314, of which 82.32% are women, and the average loan is $397.62. Lenders come from 194 countries, and Kiva’s field partners operate in 53 countries.

The business proposals presented on the website are accompanied by a brief background of the entrepreneurs, a photo, and what they want to use the funds for. Those aims range from agriculture (e.g., buying two cows and selling the milk), to groceries (e.g., extending a well-running shop, spare parts for a car to increase home deliveries), to clothing (e.g., buying more and better equipment to make it), and much more. The projects are located primarily in Asia, Africa, or Central and South America, and a large majority of the entrepreneurs are women.

The website also has a blog and email updates with short news about ongoing projects. Although the Kiva concept probably will not beat the most profitable industries on the Internet, it at least tries to put the networking to constructive use, and it likely will do so even more once the site becomes available in more languages and entrepreneurial projects from more countries are listed. With such an increase, though, the currently reasonable search function will have to be improved so that one can keep finding information quickly. Overall, perhaps it may become an example of the ‘Internet for peace’.

P.S.: True, the Kiva approach is not without baggage, and it surely is not, nor should it be, the only means to narrow the disparities in living circumstances between the entrepreneurs and the (potential) lenders, but, in my opinion, it deserved the benefit of the doubt. So, yes, I did give it a try. At the moment I write this, with my loan and those of 112 other lenders, a clothing salesperson in Honduras, teachers in Sierra Leone, and a seamstress in Nigeria have the opportunity to realize their ideas to have a more fulfilling life and improve their lot, and I wish them all the best with it. It is not ‘we’, who are relatively rich, who tell them what to do, but the entrepreneurs themselves who decide how to make the best use of the money, which hopefully is empowering. As isolated projects this may seem insignificant in ‘the big picture’, but it is significant for the people involved, and many little bits do amount to a lot.