ICTs for South Africa’s indigenous languages should be a national imperative, too

South Africa has 11 official languages, with English as the language of business, as decided during the post-Apartheid negotiations. In practice, that decision has resulted in the other 10 being sidelined, and even more so the nine indigenous languages, as they were already under-resourced. This trend runs counter to citizens’ constitutional rights and the state’s obligations, as it “must take practical and positive measures to elevate the status and advance the use of these languages” (Section 6(2)). But the obligations go beyond just language promotion. Take, e.g., the right of access to the public health system: one study showed that only 6% of patient-doctor consultations were held in the patient’s home language[1], with the other 94% essentially not receiving the quality care they deserve due to language barriers[2].

Learning three or four languages to the point of practical multilingualism is an obvious step toward effective communication, which reduces divisions in society, fosters cohesion-building and inclusion, and may contribute to redressing the injustices of the past. This route ticks multiple boxes of the aims presented in the National Development Plan 2030. How to achieve all that is another matter. Moreover, just learning a language is not enough if there’s no infrastructure to support it. For instance, what’s the point of searching the Web in, say, isiXhosa when there are only a few online documents in isiXhosa and the search engine algorithms can’t process the words properly anyway, hence not returning the results you’re looking for? Where are the spellcheckers to assist writing emails, school essays, or news articles? Can’t the language barrier in healthcare be bridged by on-the-fly machine translation for any pair of languages, rather than using the Mobile Translate MD system that is based on canned text (i.e., a small set of manually translated sentences)?


Rule-based approaches to develop tools

Research is being carried out to devise Human Language Technologies (HLTs) to answer such questions and contribute to realizing those aspects of the NDP. This is not simply a case of copying-and-pasting tools built for the more widely spoken languages. For instance, even just automatically generating the plural noun in isiZulu from a noun in the singular required a new approach that combined syntax (how it is written) with semantics (the meaning) through inclusion of the noun class system in the algorithms[3] [summary]. In contrast, for English, syntax-based rules alone can do the job[4] (more precisely: regular expressions in a Perl script). Rule-based approaches are also preferred for morphological analysers for the regional languages[5], which split each word into its constituent parts, and for natural language generation (NLG). An NLG system generates natural language text from structured data, information, or knowledge, such as data in spreadsheets. A simple way of realizing that is to use templates where the software slots in the values given by the data. This is not possible for isiZulu, because the sentence constituents are context-dependent, an idea illustrated in Figure 1[6].
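To give an idea of how little machinery the English case needs, here is a minimal regex-based pluraliser in the spirit of such syntax-only rules (in Python rather than Perl); the rules and examples are an illustrative subset of my own devising, not Conway’s actual rule set, which has many more rules plus exception lists:

```python
import re

# Ordered (pattern, replacement) rules; the first full match wins.
# Illustrative subset only -- e.g., 'human' would wrongly become 'humen'
# here, which is exactly why real rule sets carry exception lists.
PLURAL_RULES = [
    (r'(.*[ml])ouse', r'\1ice'),        # mouse -> mice, louse -> lice
    (r'(.*)man',      r'\1men'),        # woman -> women
    (r'(.*[^aeiou])y', r'\1ies'),       # city -> cities (but boy -> boys)
    (r'(.*(?:s|x|z|ch|sh))', r'\1es'),  # box -> boxes, church -> churches
    (r'(.*)',         r'\1s'),          # default: just append -s
]

def pluralise_en(noun: str) -> str:
    """Pluralise an English noun using ordered regular-expression rules."""
    for pattern, repl in PLURAL_RULES:
        m = re.fullmatch(pattern, noun)
        if m:
            return m.expand(repl)
    return noun
```

Note that the rules consult only the spelling of the word, never its meaning: precisely the assumption that breaks down for isiZulu.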

Figure 1. Illustration of a template for the ‘all-some’ axiom type of a logical theory (structured knowledge) and some values that are slotted in, such as Professors, resp. oSolwazi, and eat, resp. adla and zidla; ‘nc’ denotes the noun class of the noun, which governs agreement across related words in a sentence. The four sample sentences in English and isiZulu represent the same information.
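To see why naive slot-filling breaks down, consider a toy fragment of this agreement in code. The two subject concords are the ones behind adla and zidla in Figure 1; the sample nouns, and the reduction of the full concord system to a two-entry dictionary, are merely illustrative:

```python
# A naive template 'NOUN VERB ...' cannot treat the verb slot as a fixed
# string in isiZulu: the verb stem (here -dla, 'eat') must first receive
# a subject concord that is governed by the noun class (nc) of the noun
# it agrees with. Two-entry illustration; a grammar engine covers all
# noun classes, plus the other agreement markers in the sentence.
SUBJECT_CONCORD = {
    6: 'a',    # ama- nouns, e.g., amadoda 'men'  -> adla
    10: 'zi',  # izin- nouns, e.g., izinja 'dogs' -> zidla
}

def fill_template(noun: str, nc: int, verb_stem: str) -> str:
    """Fill the verb slot of the template with the agreement prefix."""
    return f"{noun} {SUBJECT_CONCORD[nc]}{verb_stem}"
```

So the value slotted into the verb position depends on the value slotted into the noun position, which is what makes the template-only approach unworkable.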

Therefore, a grammar engine is needed to generate even the most basic sentences correctly. The core aspects of the workflow in the grammar engine [summary] are presented schematically in Figure 2[7], which is being extended with more precise details of the verbs as a context-free grammar [summary][8]. Such NLG could contribute to, e.g., automatically generating patient discharge notes in one’s own language, text-based weather forecasts, or online language learning exercises.

Figure 2. The isiZulu grammar engine for knowledge-to-text consists conceptually of three components: the verbalisation patterns with their algorithms to generate natural language for a selection of axiom types, a way of representing the knowledge in a structured manner, and the linking of the two to realize the generation of the sentences on-the-fly. It has been implemented in Python and Owlready.


Data-driven approaches that use lots of text

The rule-based approach is known to be resource-intensive. Therefore, and in combination with the recent Big Data hype, data-driven approaches that use lots of text are on the rise: they offer the hope of achieving more with less effort, without even having to learn the language, and of easier bootstrapping of tools for related languages. This can work, provided one has a lot of good quality text (a corpus). Corpora are being developed, such as the isiZulu National Corpus[9], and the recently established South African Centre for Digital Language Resources (SADiLaR) aims to pool the resources. We investigated the effects of a corpus on the quality of an isiZulu spellchecker [summary], which showed that training the statistics-driven language model on old texts like the Bible does not transfer well to modern-day texts such as news items, nor vice versa[10]. The spellchecker has about 90% accuracy in single-word error detection and it seems to contribute to the intellectualisation[11] of isiZulu [summary][12]. Its algorithms use character trigrams and the probabilities of their occurrence in the corpus to compute the probability that a word is spelled correctly, as illustrated in Figure 3, rather than a dictionary-based approach, which is impractical for agglutinating languages. The algorithms were reused for isiXhosa simply by feeding them a small isiXhosa corpus: they achieved about 80% accuracy already, even without optimisations.

Figure 3. Illustration of the underlying approach of the isiZulu spellchecker
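As an illustration of that underlying approach, here is a minimal sketch of trigram-based scoring; the actual spellchecker’s smoothing, thresholds, and corpus handling differ[10], and the cut-off and floor values below are arbitrary choices for the sketch:

```python
from collections import Counter
from math import log

def trigrams(word: str):
    """Character trigrams of a word, padded so the edges count too."""
    w = f"^{word}$"
    return [w[i:i + 3] for i in range(len(w) - 2)]

class TrigramSpellchecker:
    """Flag a word as likely misspelled when its character trigrams are
    rare in the training corpus, i.e., when the average log-probability
    of its trigrams falls below a cut-off. No dictionary is needed,
    which is the point for agglutinating languages."""

    FLOOR = 1e-9  # pseudo-probability for trigrams unseen in the corpus

    def __init__(self, corpus_words, cutoff=log(1e-6)):
        counts = Counter(t for w in corpus_words for t in trigrams(w))
        total = sum(counts.values())
        self.prob = {t: c / total for t, c in counts.items()}
        self.cutoff = cutoff

    def avg_logprob(self, word: str) -> float:
        ts = trigrams(word)
        return sum(log(self.prob.get(t, self.FLOOR)) for t in ts) / len(ts)

    def looks_correct(self, word: str) -> bool:
        return self.avg_logprob(word) >= self.cutoff
```

Trained on a word list, looks_correct accepts words whose trigrams resemble the corpus and rejects gibberish; and retraining the same class on an isiXhosa word list is all the ‘porting’ the sketch needs, mirroring the reuse described above.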

Data-driven approaches are also pursued in information retrieval to, e.g., develop search engines for isiZulu and isiXhosa[13]. Algorithms for data-driven machine translation (MT), on the other hand, learn patterns, such as concordial agreement like izi- zi- (see Figure 1), from parallel sentences in both languages, and can easily be misled by out-of-domain training data. In one of our experiments, where the MT system learned from software localization texts, an isiXhosa sentence in the context of health care, Le nto ayiqhelekanga kodwa ngokwenene iyenzeka ‘This is not very common, but certainly happens.’, came out as ‘The file is not valid but cannot be deleted.’, which is just wrong. We are currently creating a domain-specific parallel corpus to improve the MT quality that, it is hoped, will eventually replace the aforementioned Mobile Translate MD system. It remains to be seen whether such a data-driven MT or an NLG approach, or a combination thereof, may eventually further alleviate the language barriers in healthcare.


Because of the ubiquity of ICTs in South African society, HLTs for the indigenous languages have become a necessity, be it for human-human or human-computer interaction. Profit-driven multinationals such as Google, Facebook, and Microsoft are already putting resources into the development of HLTs for African languages. Languages, and the identities and cultures intertwined with them, are a national resource, however; hence the need for more research and for the creation of a substantial public good of a wide range of HLTs, to assist people in the use of their language in the digital age and to contribute to effective communication in society.

[1] Levin, M.E. Language as a barrier to care for Xhosa-speaking patients at a South African paediatric teaching hospital. S Afr Med J. 2006 Oct; 96 (10): 1076-9.

[2] Hussey, N. The Language Barrier: The overlooked challenge to equitable health care. SAHR, 2012/13, 189-195.

[3] Byamugisha, J., Keet, C.M., Khumalo, L. Pluralising Nouns in isiZulu and Related Languages. 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing’16). A. Gelbukh (Ed.). Springer LNCS vol 9623. April 3-9, 2016, Konya, Turkey.

[4] Conway, D.M. An algorithmic approach to English pluralization. In: Salzenberg, C. (ed.) Proceedings of the Second Annual Perl Conference. O’Reilly, San Jose, USA, 17-20 August 1998.

[5] Pretorius, L. & Bosch, S.E. Enabling computer interaction in the indigenous languages of South Africa: The central role of computational morphology. ACM Interactions, 56 (March + April 2003).

[6] Keet, C.M., Khumalo, L. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, 2017, 51(1): 131-157.

[7] Keet, C.M. Xakaza, M., Khumalo, L. Verbalising OWL ontologies in isiZulu with Python. The Semantic Web: ESWC 2017 Satellite Events, Blomqvist, E et al. (eds.). Springer LNCS vol 10577, 59-64.

[8] Keet, C.M., Khumalo, L. Grammar rules for the isiZulu complex verb. Southern African Linguistics and Applied Language Studies, 2017, 35(2): 183-200.

[9] Khumalo, L. Advances in Developing Corpora in African Languages. Kuwala, 2015, 1(2): 21-30.

[10] Ndaba, B., Suleman, H., Keet, C.M., Khumalo, L. The effects of a corpus on isiZulu spellcheckers based on N-grams. In IST-Africa.2016. (May 11-13, 2016). IIMC, Durban, South Africa, 2016, 1-10.

[11] Finlayson, R, Madiba, M. The intellectualization of the indigenous languages of South Africa: Challenges and prospects. Current Issues in Language Planning, 2002, 3(1): 40-61.

[12] Keet, C.M., Khumalo, L. Evaluation of the effects of a spellchecker on the intellectualization of isiZulu. Alternation, 2017, 24(2): 75-97.

[13] Malumba, N., Moukangwe, K., Suleman, H. AfriWeb: A Web Search Engine for a Marginalized Language. Proceedings of 2015 Asian Digital Library Conference, Seoul, South Korea, 9-12 December 2015.


Moral responsibility in the Computing era (SEP entry)

The Stanford Encyclopedia of Philosophy intermittently gets new entries that have to do with computing, like the one on the philosophy of computer science about which I blogged before, the ethics of, among others, Internet research, and now Computing and Moral Responsibility by Merel Noorman [1]. The remainder of this post is about the latter entry, which was added on July 18, 2012. Overall, the entry is fine, but I had expected more from it, which may well be because the ‘computing and moral responsibility’ topic needs some more work to mature; maybe it will then give me the answers I was hoping to find already.

Computing—be this the hardware, firmware, software, or IT themes—interferes with the general notion of moral responsibility and hence affects every ICT user at least to some extent; the computer scientists, programmers, etc. who develop the artifacts may themselves be morally responsible, and perhaps even the produced artifacts, too. This area of philosophical inquiry deals with questions such as “Who is accountable when electronic records are lost or when they contain errors? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomous can or should humans still be held responsible for the behavior of these technologies?”. To this end, the entry has three main sections, covering moral responsibility, the question whether computers can be moral agents, and the notion of (and the need for) rethinking the concept of moral responsibility.

First, it reiterates the general stuff about moral responsibility without the computing dimension, like that it has to do with the actions of humans and their consequences: “generally speaking, a person or group of people is morally responsible when their voluntary actions have morally significant outcomes that would make it appropriate to praise or blame them”, where the SEP entry dwells primarily on the blaming. Philosophers roughly agree that the following three conditions have to be met for someone to be morally responsible (copied from the entry):

1. There should be a causal connection between the person and the outcome of actions. A person is usually only held responsible if she had some control over the outcome of events.

2. The subject has to have knowledge of and be able to consider the possible consequences of her actions. We tend to excuse someone from blame if they could not have known that their actions would lead to a harmful event.

3. The subject has to be able to freely choose to act in a certain way. That is, it does not make sense to hold someone responsible for a harmful event if her actions were completely determined by outside forces.

But how are these to be applied? A few case examples of the difficulty of applying them in practice are given; e.g., the malfunctioning Therac-25 radiation machine (three people died from radiation overdoses, primarily due to issues with the software); the Aegis software system that in 1988 misidentified an Iranian civilian aircraft as an attacking military aircraft, upon which the US military decided to shoot it down (contrary to two other systems that had identified it correctly), killing all 290 passengers on board; the software to manage remote-controlled drones; and perhaps even the ‘filter bubble’. Who is to blame, if at all? These examples, and others I can easily think of, are vastly different scenarios, but they have not been identified, categorized, and treated as such. If we do that, then perhaps at least some general patterns can emerge, and even rules regarding moral responsibility in the context of computing. Here’s my initial list of different kinds of cases:

  1. The hardware/software was intended for purpose X but is used for purpose Y, with X not being inherently harmful, whereas Y is; e.g., the technology of an internet filter for preventing kids from accessing adult-material sites is used to make a blacklist of sites that do not support government policy and subsequently the users vote for harmful policies, or, a simpler one: using mobile phones to detonate bombs.
  2. The hardware/software is designed for malicious intents; ranging from so-called cyber warfare (e.g., certain computer viruses, denial-of-service attacks) to computing for physical war to developing and using shadow-accounting software for tax evasion.
  3. The hardware/software has errors (‘bugs’):
    1. The specification was wrong with respect to the intentionally understated or mis-formulated intentions, and the error is simply a knock-on effect;
    2. The specification was correct, but a part of the subject domain is intentionally wrongly represented (e.g., the decision tree may be correctly implemented given the wrong representation of the subject domain);
    3. The specification was correct, the subject domain represented correctly, but there’s a conceptual error in the algorithm (e.g., the decision tree was built wrongly);
    4. The program code is scruffy and doesn’t do what the algorithm says it is supposed to do;
  4. The software is correct, but has the rules implemented as alethic or hard constraints versus deontic or soft constraints (not being allowed to manually override a default rule), effectively replacing human-bureaucrats with software-bureaucrats;
  5. Bad interface design that makes the software difficult to use, resulting in wrong use and/or overlooking of essential features;
  6. No or insufficient training of the users in how to use the hardware/software;
  7. Insufficient maintenance of the IT system that causes the system to malfunction;
  8. Overconfidence in the reliability of the hardware/software;
    1. The correctness of the software, pretending that it always gives the right answer when it may not; e.g., assuming that the pattern matching algorithm for fingerprint matching is 100% reliable when it is actually only, say, 85%;
    2. Assuming (extreme) high availability, when no extreme high availability system is in place; e.g., relying solely on electronic health records in a remote area whereas the system may be down right when it is crucial to access it in the hospital information system.
  9. Overconfidence in the information provided by or through the software; this is partially like 8-i, or the first example in item 1; e.g., willfully believing that everything published on the Internet is true despite the so-called ‘information warfare’ regarding the spreading of disinformation.

Where the moral responsibility lies can be vastly different depending on the case, and even within a case it may require further analysis. For instance (and my opinions follow, not what is written in the SEP entry), regarding maintenance: a database for electronic health records outgrows its prospective size, or the new version of the RDBMS requires more hardware resources than the server has, with as consequence that querying the database becomes too slow in a critical case (say, checking whether patient A is allergic to medicine B that needs to be administered immediately). Perhaps the system designer should have foreseen this, or perhaps management didn’t sign off on the purchase of a new server; but I think that the question of where the moral responsibility lies can be answered. For mission-critical software, formal methods can be used, and if, as engineer, you didn’t use them and something goes wrong, then you are to blame. One cannot be held responsible for a misunderstanding, but when the domain expert says X about the subject domain and you have some political conviction that makes you prefer Y and build that into the software, and that then results in something harmful, then you can be held morally responsible (item 3-ii). On the human vs. software bureaucrat (item 4), the blame can be narrowed down when things go wrong: was it the engineer who didn’t bother with the possibility of exceptions, was there no technological solution for it at the time of development (and was that knowingly ignored), or was it the client who was happy to willfully avoid such pesky individual exceptions to the rule? Or, another example, as the SEP entry questions (an example of item 1): can one hold the mobile phone companies responsible for having designed cell phones that can also be used to detonate bombs? In my opinion: no. Just in case you want to look for guidance, or even answers, in the SEP entry regarding such kinds of questions and/or cases: don’t bother, there are none.

More generally, the SEP entry mentions two problems for attributing blame and responsibility: the so-called problem of ‘many hands’ and the problem of physical and temporal distance. The former concerns the issue that many people are involved in developing the software, training the users, etc., and it is difficult to identify the individual, or even the group of individuals, who ultimately did the thing that caused the harmful effect. This is indeed a problem, especially when the computing hardware or software is complex and developed by hundreds or even thousands of people. The latter concerns the problem that distance can blur the causal connection between action and event, which “can reduce the sense of responsibility”. But, in my opinion, just because someone doesn’t reflect much on her actions and may be willfully narrow-minded in (not) accepting that, yes, indeed, those people celebrating a wedding in a tent in far-away Afghanistan are (well, were) humans, too, does not absolve her from the responsibility—neither the hardware developer, nor the software developer, nor the one who pushed the button—as distance does not reduce responsibility. One could argue it is only the one who pushed the button who made the judgment error, but the drone/fighter jet/etc. computer hardware and software are made for harmful purposes in the first place. Their purpose is to do harm to other entities—be this bombing humans or, say, a water purification plant such that the residents have no clean water—and all developers involved know this very well; hence, one is morally responsible from day one that one is involved in its development and/or use.

I’ll skip the entry’s section on computers as agents (AI software, robots), and whether they can be held morally responsible, just responsible, or merely accountable, or none of them, except for the final remark of that section, credited to Bruno Latour (emphasis mine):

[Latour] suggests that in all forms of human action there are three forms of agency at work: 1) the agency of the human performing the action; 2) the agency of the designer who helped shape the mediating role of the artifacts and 3) the artifact mediating human action. The agency of artifacts is inextricably linked to the agency of its designers and users, but it cannot be reduced to either of them. For him, then, a subject that acts or makes moral decisions is a composite of human and technological components. Moral agency is not merely located in a human being, but in a complex blend of humans and technologies.

Given the issues with assigning moral responsibility with respect to computing, some philosophers ponder doing away with it and replacing it with a better framework. This is the topic of the third section of the SEP entry, which relies substantially on Gotterbarn’s work. He notes that computing is ethically not a neutral practice, and that the “design and use of technological artifacts is a moral activity” (because the choice of one design and implementation over another does have consequences). Moreover, and more interestingly, according to the SEP entry he introduces the notions of negative responsibility and positive responsibility. The former “places the focus on that which exempts one from blame and liability”, whereas the latter “focuses on what ought to be done” and entails that one ought to “strive to minimize foreseeable undesirable events”. Computing professionals, according to Gotterbarn, should adopt the notion of positive responsibility. Later on in the section, there’s a clue that there’s some way to go before achieving that. Accepting accountability is more rudimentary than taking moral responsibility, or at least a first step toward it. Nissenbaum (paraphrased in the SEP entry) has identified four barriers to accountability in society (at least back in 1997, when she wrote it): the above-mentioned problem of many hands, the acceptance of ‘bugs’ as an inherent element of large software applications, using the computer as scapegoat, and claiming ownership without accepting liability (read any software license if you doubt the latter). Perhaps these need to be addressed before moving on to moral responsibility, or does one reinforce the other? Dijkstra vents his irritation in one of his writings about software ‘bugs’—the cute euphemism dating back to the ‘50s—and instead proposes to use one of its correct terms: they are errors.
Perhaps users should not be lenient with errors, which might compel developers to deliver a better/error-free product; and/or we have to instill in students more of that positive responsibility and reduce their tolerance for errors. And/or what about rewriting the license agreements a bit, like accepting responsibility provided the software is used in one of the prescribed and tested ways? We already had that when I was working for Eurologic more than 10 years ago: the storage enclosure was supposed to work in certain ways and was tested in a variety of configurations, and that is what we signed off on for our customers. If it was faulty in one of the tested system configurations after all, then that was our problem, and we’d incur the associated costs to fix it. To some extent, the same held with our suppliers. Indeed, for software, that is slightly harder, but one could include in the license something along the lines of ‘X works on a clean machine and when common other packages w, y, and z are installed, but we can’t guarantee it when you’ve downloaded weird stuff from the Internet’; not perfect, but it would be a step in the right direction. Anyone have better ideas?

Last, the closing sentence is a useful observation, effectively stretching the standard notion of moral responsibility thanks to computing (emphasis added): “[it] is, thus, not only about how the actions of a person or a group of people affect others in a morally significant way; it is also about how their actions are shaped by technology.” But, as said, the details are yet to be thought through and worked out into general guidelines that can be applied.


[1] Merel Noorman. (forthcoming in 2012). Computing and Moral Responsibility. Stanford Encyclopedia of Philosophy (Fall 2012 Edition), Zalta, E.N. (ed.).  Stable URL: http://plato.stanford.edu/archives/fall2012/entries/computing-responsibility/.

Reports on Digital Inclusion and divide

The Mail & Guardian (SA weekly) reported on a survey about “digital inclusion”/digital divide the other day, with the title “India’s digital divide worst among Brics”. It appeared to be based on a survey from risk analysis firm MapleCroft and their “Digital Inclusion Index” (DII).

Searching for the original survey and related news articles, the first three pages of Google’s results were news articles with pretty much the same title and content (except for one, where the Swedes say they are doing well). As it turns out, the low ranking of India is the first sentence of MapleCroft’s own news item about the DII. A lot more data is described there, and taken together it not only can be interpreted in various ways, but also raises more questions than it answers.

186 countries were surveyed, with the Netherlands at number 186 (highest DII) and Niger at number 1 (lowest DII). India turned out to have a DII rank of 39 and is therewith in the “extreme risk” category; China ranks 103, Brazil 110, and Russia 134, which are relatively a lot better and in the “medium risk” category, though China and “to a lesser extent Russia” are moving in the ‘wrong’ direction (limited internet freedom). To tease a little: instead of ‘India is the worst’ regarding the digital divide, one can also reformulate it such that India is important enough to be a full BRICS member [even though it has/irrespective of] a low DII. The place of the new “S” in BRICS—South Africa—is not even mentioned in the Mail & Guardian article, but MapleCroft has put it in the “high risk” category (see figure here, about halfway down the page).

According to MapleCroft, “Sub-Saharan Africa is by far the worst performing region for digital inclusion with 29 of the 39 countries rated ‘extreme risk’ in the index.”. Summarizing the figure, Africa and South-East Asia are mostly in the high or extreme risk categories; Latin America, Eastern Europe, and North Asia are in the medium or high risk categories; and the US, Canada, Western Europe, Japan, and Australia are in the low risk category. One of my fellow members at Informatici Senza Frontiere (Alessandra Cattani, who did her thesis on the digital divide) provided me with the information that internet access in Italy is less than 40%, yet Italy is also in the low risk category according to the DII.

At the bottom of MapleCroft’s page, there is a paragraph rambling about the position of Tunisia, Egypt, and Libya in the ranking (81, 66, and 77, respectively) and about how “Internet and mobile phone technologies played a central role in motivating and coordinating the uprisings”. A third of Tunisians use the internet and 16% are on facebook, whereas only about 5% of Egyptians and 3% of Libyans use facebook; all three countries are in the “high risk” category. This data can be ‘explained’ in any direction, even that facebook access was so low that it can hardly have contributed to motivating the uprisings (as opposed to other factors, such as USAid and neoliberal policies in Egypt).

So, what exactly did MapleCroft measure? They used 10 indicators, being: “numbers of mobile cellular and broadband subscriptions; fixed telephone lines; households with a PC and television; internet users and secure internet servers; internet bandwidth; secondary education enrolment; and adult literacy”.

Considering fixed telephone lines is a bit of a joke in sparsely populated areas, though, because it is utterly unprofitable for telecoms to lay the cables, so countries with low population density and a geographically more evenly distributed population are at a disadvantage in the DII. (And are all telephone lines and TVs digital nowadays?) Mobile phone use is relatively high in Africa: not just having one and using it to call family and friends, but also using it, among others, to handle electronic health records, disaster management, banking, and school-student communications; and the number of internet users has increased by some 2350% over the past 10 years (OneWorld news item, in Dutch). Even I could use mobile phone banking from the moment I opened my account here in SA, and they were surprised I did not know how to do that (even after about 6.5 years in Italy, I still had to ‘wait a little longer’ for Italian internet banking—they do not offer mobile phone banking). Then there are the ATMs here that offer services that would fall under ‘online banking’ in many a European country. But MapleCroft has not considered the type and intensity of usage, or the inventiveness of people in enhancing one technology as a way to ‘counterbalance’ the ‘lack’ of another.

Regarding bandwidth, fibre optic cables for fast internet access are not evenly distributed around the globe (picture), and even when they pass close by, some countries are prevented from plugging into the fast lines (most notably Cuba—the lines are owned by US companies who are prevented from doing business with Cuba due to the blockade).

The last two indicators used to compute the DII may, to some, come as a surprise, but should not: one thing is to have the equipment, a whole different story is to be literate enough to read and comprehend the information, and yet another is to have developed sufficient critical thinking to be able to separate the wheat from the chaff in the data deluge on the internet. India has an adult literacy of some 63%, compared to 90% in Brazil, 100% in Russia, 94% in China, and 89% in South Africa (data from UNICEF). Secondary education enrollment is trickier; here UNICEF at least is more detailed, because it distinguishes between enrollment and attendance (graduation and tertiary education are covered by neither).

Then there’s digital inclusion, versus a digital divide. Both the bottom and the top echelon are “included”, according to MapleCroft, the former just with an extreme risk and the latter with a low risk of falling behind. It certainly has a friendlier tone to it than considering the divide it has created between people and the consequences that follow from it, both economic and social.

Take the underlying social divide: who has access? For instance, if there is one PC in the household, who uses it? Recollecting my even younger years, the PC access pecking order was Father > Mother (practically skipped) > Brother > Sister (me, youngest, female), which obviously has changed over the years for both my brother and me. There are other parameters to consider here, such as occupation and level of higher education, and several countries have whole groups of people that are at a relative (dis)advantage due to socio-economic, political, ethnic, disability, etc. factors. However, it is a separate line of inquiry to determine to what extent this affects the inclusion or exacerbates the divide. MapleCroft did not include it in the DII.

And then there is the time dimension. The DII diagram is a snapshot (I do not know the measurement date), but comparison along a time axis may reveal trends. So will percentages. Take, for instance, internet users. Worldmapper has two beautiful figures as topically scaled maps (density-equalized maps) for 1990 and 2002 data, which I showed earlier: the US shrank relatively while Asia, Eastern Europe, Latin America, and Africa grew. No doubt they also grew a lot over the past 8 years.

Hence, overall, the coarse-grained ranking of the DII as such does not say much, and it raises more questions than it answers. Aside from serving an underlying political agenda, the real news value of the DII is rather limited.

Scientist vs. Engineer: still, again, even more so now, or not

The “World view” article in this week’s Nature amplifies an attack on scientists, focusing on a recurring debate about—perceived by some as a fracture between—science and engineering. Colin Macilwain tries to cast the debate in terms of the financial hardship and the hard choices that have to be made in allocating the diminishing funding for universities’ research [1]. Regarding the funding, the argument goes that the bang for your buck is higher when you bet on engineering rather than the sciences: much science, it is claimed, does not materialize into increased wealth anyway (‘wealth’ in this context, I presume, being measured in money, profits, etc.), whereas engineering does, so (myopic) government policy should favour funding engineering over the sciences. The UK’s Royal Academy of Engineering made an official statement in that direction (more politely than my previous phrasing), and Macilwain, after some deliberations, closes with:

By casting a stone at their rivals, UK engineers have, at least, demanded better. They’ve also started a scrap between disciplines that will grow uglier as the spending cuts begin.

This is a disservice to the overall debate, both on the spending cuts and on the scientist “vs.” engineer question. It is like bringing the recurring, lamentable poor-on-poor violence into the realm of academia.

Luigi Foschini, a scientist at the INAF Osservatorio Astronomico di Brera, has already written a useful blog post on the “two cultures” issue [2] in response to Macilwain’s article: the dichotomy is wrong, and it is beneficial when a researcher knows about both science and engineering. He closes with the proposal that

[w]e have to make a Second Renaissance, with men and women able to develop an integrated culture, not rejecting any part because not in their backyard. Someone replies that today this is too complex, because the culture is too vast to be handled by individuals. This is not true. […]. The main obstacles are of social origin

Taking into account the ‘detours’ I made during my education and comments I remember over the years in research, I tend to concur that the obstacles are of social origin. Fair enough, not many people have done as many and as diverse degrees as I have, principally because they think they do not have the time or money (which is, essentially, a resource-allocation decision—e.g., I paid for some of the studies myself instead of, say, buying a fancy car). But it is not impossible to do both, as Foschini, I, and multiple other researchers can attest. Moreover, having been indoctrinated in more than one paradigm really does have its advantages over mono-discipline training (more about that in a separate post some time later).

The other issue I have with Macilwain’s article is that it pits one group of researchers against another, en passant swallowing and propagating divide-and-conquer tactics and thereby feeding infighting within academia. But in the end, casting stones will leave everyone mutilated—even if you think you are in the position to pride yourself on casting the first stone. And, as the saying goes, one may be hoist with one’s own petard.

A more constructive step to resolve the debate on the spending cuts was made last week with the open letter to cut military R&D, not science funding, and, more generally, to cut the obscene budgets for war and destruction. The world does not need more nukes, ‘smart’ bombs, chemical weapons and the like, and significantly reducing the size of offence armies, so as to at least end the perpetual infliction of ‘collateral damage’ and the occupation of foreign countries, will keep one’s home safer for longer. Another place where there is, on average, a lot of money that can, in theory at least, be redistributed more fairly is the growing pile of assets of the rich, by and large the baby boomers. Put differently, the trend of resource concentration within a certain dominant age cohort (and their generational egoism) should be reversed so that the resources are distributed to the benefit of society at large, the latter obviously including science and engineering research. That redistributive taxation is not in vogue anymore in the USA and most of Europe does not mean it is impossible.

Indeed, on the one hand, investment in research in the sciences and engineering will neither bring instant gratification nor lift the West out of the recession by tomorrow morning. On the other hand, the bank bailouts did not do the trick of bringing the economy back into the zone of profit and increased employment, the initial élan for green technologies as the magic bullet to pull us all out of the economic crisis did not quite materialize either, and the military-industrial complex destroys more than it contributes toward a healthy, sustainable economy anyway. So one might as well give science and engineering a chance—after all, in tandem, they have a proven track record of benefiting society.

The scientist ‘vs.’ engineer relation should not be a versus but a both-and. (Ab)using the economic hardship as an excuse to pit one against the other is to the detriment of both in the long run, and I am tempted to state that any academic worth his or her education should (have) come to the insight not to fall into this trap. If you have not, then you might want to learn a bit more so as to peek over that disciplinary wall, punch a hole in it, or take a step or two to walk through a door that a colleague might just be holding open for you.


[1] Colin Macilwain (2010) Scientists vs engineers: this time it’s financial. Nature, 467, 885.

[2] Luigi Foschini. Scientists vs Engineers or another version of “The Two Cultures”. The Event Horizon blog, October 21, 2010.

ICT, Africa, peace, and gender

Just in case you thought the terms in the title rather eclectic, or even mutually exclusive: you would be wrong. ICT4Peace is a well-known combination, and likewise for other organisations and events, such as the ICT for peace symposium in the Netherlands that I wrote about earlier. ICT & development activities, e.g., by Informatici Senza Frontiere, and ICT & Africa (or here or here, among many sites) are also well known. There is even more material on ICT & gender. But what, then, about the combination of them all?

Shastry Njeru sees links between them and many possibilities to put ICT to good use in Africa to enhance peaceful societies and post-conflict reconstruction in which women play a pivotal role [1]. Not much has been realized yet; so, if you are ever short on research or implementation topics, Njeru’s paper will undoubtedly provide you with more than you can handle.

So what, then, can ICT be used for in peacebuilding, in Africa, by women? One topic that features prominently in Njeru’s paper is communication among women: to share experiences, exchange information, build communities, keep in contact, and have “discussion in virtual spaces, even when physical, real world meetings are impossible on account of geographical distance or political sensitivities”, using Skype, blogs, and other Web 2.0 tools such as Flickr and podcasts, Internet access in their own language, and voice- and video-to-text hardware and software to record oral histories. A more general suggestion, i.e., not necessarily related to only women or only Africa, is that “ICT for peacebuilding should form the repository for documents, press releases and other information related to the peace process”.

Some examples of what has been achieved already: the use of mobile phone networks in Zambia to advocate women’s rights, Internet access for women entrepreneurs in the textile industry in Douala, Cameroon, and ICT and mobile phone businesses used as instruments of change by rural women in various ways in Uganda [1], including the Ugandan CD-ROM project [2].

Njeru thinks everything can be done already with existing technologies, provided they are used more creatively and provided there are policies, programmes, and funds to overcome the social, political, and economic hurdles to realising gendered ICT for peace in Africa. For the hardware, maybe yes, but surely not for the software.

Regarding the hardware, mobile phone usage is growing fast (some reasons why), and Samsung, Sharp, and Sanyo have already jumped on board with solar-panel-powered mobile phones to address the lack of a reliable energy supply. The Eee PC, the One Laptop per Child project, and the like are nothing new either, nor are the Palm Pilots used for OpenMRS’s electronic health records in rural areas in, among others, Kenya. But this is not my area of expertise, so I will leave the final [yes/no] on whether extant hardware suffices to the hardware developers.

Regarding software, developing a repository for the documents, press releases, etc. is doable with current software as well, but a usable repository requires insight into how the interfaces should be designed to best suit the intended users and into how the data should be searched; thus, overall, it may not simply be a case of deploying software, but also involve developing new applications. Internet access, including those Web 2.0 applications, in one’s own language requires localization of the software and a good strategy for coordinating and maintaining such software. This is very well doable, but it is not already lying on the shelf waiting to be deployed.

More challenging will be figuring out the best way to manage all the multimedia: photos, video reports, logged Skype meetings, and so forth. If one does not annotate them, they are bound to end up in a ‘write-only’ data silo. However, those reports should not be (nor have been) made merely to save them; one should also be able to find, retrieve, and use the information contained in them. A quick-and-dirty tagging system or somewhat more sophisticated wisdom-of-the-crowds tagging methods might work in the short term, but not in the long run, still leaving those inadequately annotated multimedia pieces to gather dust. An obvious direction for a solution is to create the annotation mechanism and develop an ontology of conflict & peacebuilding, develop a software system to put the two together, develop applications to access the properly annotated material, and train the annotators. This can easily take up the time and resources of an EU FP7 Integrated Project.
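As a very rough illustration of the difference between free-text tagging and ontology-based annotation, the following sketch restricts annotations to terms from a small controlled vocabulary. The vocabulary, identifiers, and functions are all invented for illustration; a real system would use a proper conflict-&-peacebuilding ontology and an RDF/OWL toolchain rather than a Python dictionary:

```python
# Toy stand-in for a conflict-&-peacebuilding ontology: every class
# and term below is invented purely for illustration.
ONTOLOGY = {
    "PeaceProcess": ["Negotiation", "Ceasefire", "Reconciliation"],
    "Actor": ["WomenGroup", "NGO", "Government"],
}
VALID_TERMS = {t for terms in ONTOLOGY.values() for t in terms}

annotations = {}  # media id -> set of ontology terms

def annotate(media_id, *terms):
    """Attach ontology terms to a media item; reject free-text tags."""
    unknown = set(terms) - VALID_TERMS
    if unknown:
        raise ValueError(f"not in ontology: {unknown}")
    annotations.setdefault(media_id, set()).update(terms)

def find(term):
    """Retrieve media ids by ontology term, not by string matching."""
    return sorted(m for m, ts in annotations.items() if term in ts)

annotate("video-017", "Ceasefire", "WomenGroup")
annotate("photo-003", "Reconciliation", "WomenGroup")
print(find("WomenGroup"))  # → ['photo-003', 'video-017']
```

Rejecting terms outside the vocabulary is what keeps the annotations queryable years later, which is precisely what ad hoc tagging fails to guarantee.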

Undoubtedly, observation of current practices, their limitations, and a subsequent requirements analysis will bring to the fore more creative opportunities for using ICT in a peacebuilding setting targeting women as the mostly untapped prime user base. A quick search on ICT jobs in Africa or peacebuilding (in the UN system and its affiliated organizations, and the NGO industry), to see whether the existing structures invest in this area, did not show anything other than jobs at their respective headquarters, such as website development, network administration, or ICT team leader. Maybe upper management does not realise the potential, or it is seen merely as an afterthought? Or maybe more grassroots initiatives have to be set up and be successful before organisations come on board and devote resources to it? Or perhaps companies and venture capital should be more daring and give it a try—mobile phone companies already make a profit and some ‘philanthropy’ does well for a company’s image anyway—and there is always the option of taking away some money from the military-industrial complex.

Whose responsibility would it be (if any) to take the lead (if necessary) in such endeavours? Either way, given that investment in green technologies can be positioned as a way out of the recession, so can ICT for peace(building) aimed at women, be they in Africa or on other continents where people suffer from conflict or are in the process of reconciliation and peacebuilding. One just has to divert the focus from ICT for destruction, fear-moderation, and the like to one of ICT for constructive engagement, aiming at inclusive technologies and applications that facilitate the development of societies and empower people.


[1] Shastry Njeru. (2009). Information and Communication Technology (ICT), Gender, and Peacebuilding in Africa: A Case of Missed Connections. Peace & Conflict Review, 3(2), 32-40.

[2] Huyer S and Sikoska T. (2003). Overcoming the Gender Digital Divide: Understanding the ICTs and their potential for the Empowerment of Women. United Nations International Research and Training Institute for the Advancement of Women (UN-INSTRAW), Instraw Research Paper Series No. 1., 36p.

The VIP session at Informatica 2009

In addition to the many keynote speeches, scientific presentations, and panels, Informática 2009 in Cuba also had a two-hour high-level panel on the conference’s theme, “National ICT policies for development and sovereignty”, which took place yesterday morning. In the presence of some 1000 attendees and the secretary general of the International Telecommunication Union (ITU), Dr. Hamadoun Touré, the ministers of informatics/ICT/telecommunications of Cuba, Iran, South Africa, Saudi Arabia, Venezuela, and Vietnam and the national telecoms heads of Bulgaria, Nicaragua, and Russia each held a speech. UPDATE (28-2-2009): see also the transcript of the speech of Ramiro Valdés Menéndez, Commander of the Revolution and Minister of Informatics and Communications of Cuba.

While each VIP presented country-specific particularities and emphases, there were five recurring topics across the presentations.

  1. Security: the need for (a) combating cybercrime together internationally, and (b) reducing the vulnerability of a country’s access to the Internet and decreasing the dependence on foreign companies and the political whims and policies of certain other countries, i.e., increasing the sovereignty of the nation’s network and internet-services infrastructure, e.g., by launching one’s own satellite [Vietnam] or laying fibre-optic cables [South Africa, Cuba, Venezuela, Jamaica].
  2. Infrastructure, or rather the lack thereof: in addition to the aforementioned security and foreign dominance, the liberalization of the ’90s took its toll, because companies do not like to invest in remote areas that do not generate sufficient profits, thereby excluding the many underprivileged peoples. Governments of several countries now take up that task themselves, thanks to changes in the political colour and type of government.
  3. Responsible use of ICT: instead of consumerism around the latest toys, the programmes aim at ICT as a tool to inform and educate citizens and for “civil defense” w.r.t. the detection of natural disasters and mitigation of the damage (e.g., inundation calculations, evacuation plans, smooth communications, climate change research).
  4. Trust: well, the lack thereof, both in general and w.r.t. ICT policies, which was tied to the general climate of distrust amid the ongoing economic crisis. How to rebuild it was not particularly elaborated on.
  5. ICT for (socio-)economic recovery: in contrast to, say, Berlusconi’s policies, these VIPs do see the need for investment in this sector, both regarding human capital development and new technologies–and not just talk the talk but also walk the walk.

The country-specific themes included, among others, South Africa’s move from analog to digital with the set-top boxes and Cuba’s focus on knowledge management and open source projects.

I will add links, quotes, and photos later, as the next scientific session is about to start (more about that when I’m back from Cuba in about 2 weeks).

UPDATE (5-3-2009): eventually, a few photos:

overview of the VIP panel during Informática 2009

From left to right: delegates from Nicaragua (empty seat in the picture), Russia, Minister Ivy Matsepe-Casaburri (South Africa), Jorge Luis Perdomo Di-Lella (president of the organisation of Informática 2009), Hamadoun Touré (secretary general of the ITU), Commander and Minister Ramiro Valdés Menéndez (Cuba), Minister Socorro Hernández (Venezuela), Minister Le Doan Hop (Vietnam), Minister Muhammad Jameel bin Ahmed Mulla (Saudi Arabia), the Minister of Iran, Plamen Vatchkov (empty seat in the picture)

photo of the article on the front page of the Granma, about the opening of Informatica 2009

During the lunch breaks, students from UCI (Universidad de las Ciencias Informáticas) took care of the cultural programme, with dance and music.