ICT, Africa, peace, and gender

In case you thought that the terms in the title are rather eclectic, or even mutually exclusive: you would be wrong. ICT4Peace is a well-known combination, as are other organisations and events, such as the ICT for peace symposium in the Netherlands that I wrote about earlier. ICT & development activities, e.g., by Informatici Senza Frontiere, and ICT & Africa are also well-known areas, and there is even more material on ICT & gender. But what, then, about the combination of them?

Shastry Njeru sees links between them and many possibilities to put ICT to good use in Africa to enhance peaceful societies and post-conflict reconstruction in which women play a pivotal role [1]. Not that much has been realised yet; so, if you are ever short on research or implementation topics, Njeru’s paper will undoubtedly provide you with more than you can handle.

So what, then, can ICT be used for in peacebuilding, in Africa, by women? One topic that features prominently in Njeru’s paper is communication among women: sharing experiences, exchanging information, building communities, keeping in contact, and having “discussion in virtual spaces, even when physical, real world meetings are impossible on account of geographical distance or political sensitivities”, using Skype, blogs, and other Web 2.0 tools such as Flickr and podcasts, Internet access in their own language, and voice- and video-to-text hardware and software to record oral histories. A more general suggestion, i.e., not necessarily related to only women or only Africa, is that “ICT for peacebuilding should form the repository for documents, press releases and other information related to the peace process”.

Some examples of what has been achieved already are: the use of mobile phone networks in Zambia to advocate women’s rights, Internet access for women entrepreneurs in the textile industry in Douala, Cameroon, and the use of ICT and mobile phone businesses as instruments of change by rural women in Uganda in various ways [1], including the Ugandan CD-ROM project [2].

Njeru thinks that everything can be done with existing technologies already, provided they are used more creatively and that there are policies, programmes, and funds to overcome the social, political, and economic hurdles to realise gendered ICT for peace in Africa. For hardware, maybe yes, but surely not for software.

Regarding the hardware, mobile phone usage is growing fast (some reasons why), and Samsung, Sharp, and Sanyo have already jumped on board with solar panel-fuelled mobile phones to solve the problem of (a lack of reliable) energy supply. The Eee PC, the One Laptop per Child project, and the like are nothing new either, nor are the Palm Pilots that are used for OpenMRS’s electronic health records in rural areas in, among others, Kenya. But this is not my area of expertise, so I will leave it to the hardware developers for the final [yes/no] on the question of whether extant hardware suffices.

Regarding software, developing a repository for the documents, press releases, etc. is doable with current software as well, but a usable repository requires insight into how the interfaces have to be designed so that they suit the intended users best and into how the data should be searched; overall, then, it may not simply be a case of deploying existing software, but may also involve developing new applications. Internet access, including those Web 2.0 applications, in one’s own language requires localisation of the software and a good strategy for coordinating and maintaining such software. This is very well doable, but it is not already lying on the shelf waiting to be deployed.

More challenging will be figuring out how best to manage all the multimedia: photos, video reports, logged Skype meetings, and so forth. If one does not annotate them, they are bound to end up in a ‘write-only’ data silo. Those reports, however, should not be (nor have been) made merely to be saved; one should also be able to find, retrieve, and use the information contained in them. A quick-and-dirty tagging system, or a somewhat more sophisticated wisdom-of-the-crowds tagging method, might work in the short term, but not in the long run, thereby still leaving the inadequately annotated multimedia to gather dust. An obvious direction for a solution is to create an annotation mechanism and an ontology about conflict & peacebuilding, develop a software system to put the two together, develop applications to access the properly annotated material, and train the annotators. This easily can take up the time and resources of an EU FP7 Integrated Project.
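
To make that a little more concrete, here is a minimal sketch, in Python with the rdflib library, of what ontology-based annotation of such multimedia could look like; the ontology namespace, class and property names, and file names are made up for illustration only, not taken from an existing peacebuilding ontology.

```python
# A minimal sketch of ontology-based annotation of multimedia, using the rdflib library.
# The ontology URI, class and property names, and media files are hypothetical illustrations.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

PB = Namespace("http://example.org/peacebuilding-ontology#")   # hypothetical ontology
MEDIA = Namespace("http://example.org/archive/media/")         # hypothetical media archive

g = Graph()
g.bind("pb", PB)

# Annotate a video report and a photo with concepts from the ontology.
g.add((MEDIA["interview-042.ogv"], RDF.type, PB.OralHistoryRecording))
g.add((MEDIA["interview-042.ogv"], PB.aboutTopic, PB.PostConflictReconstruction))
g.add((MEDIA["interview-042.ogv"], PB.recordedIn, Literal("Gulu, Uganda")))
g.add((MEDIA["photo-117.jpg"], RDF.type, PB.Photograph))
g.add((MEDIA["photo-117.jpg"], PB.aboutTopic, PB.WomensCooperative))

# Later, retrieve everything annotated with a given topic instead of digging through a data silo.
results = g.query("""
    PREFIX pb: <http://example.org/peacebuilding-ontology#>
    SELECT ?item WHERE { ?item pb:aboutTopic pb:PostConflictReconstruction . }
""")
for row in results:
    print(row.item)
```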

Undoubtedly, observation of current practices, their limitations, and subsequent requirements analysis will bring to the fore more creative opportunities for using ICT in a peacebuilding setting that targets women as the, mostly untapped, prime user base. A quick search on ICT jobs in Africa or in peacebuilding (in the UN system and its affiliated organisations, and in the NGO industry), to see if the existing structures invest in this area, did not show anything other than jobs at their respective headquarters, such as website development, network administration, or ICT group team leader. Maybe upper management does not realise the potential, or it is seen merely as an afterthought? Or maybe more grassroots initiatives have to be set up and be successful before organisations come on board and devote resources to it? Or perhaps companies and venture capital should be more daring and give it a try—mobile phone companies already make a profit and some ‘philanthropy’ does well for a company’s image anyway—and there is always the option to take away some money from the military-industrial complex.

Whose responsibility would it be (if anyone’s) to take the lead (if necessary) in such endeavours? Either way, given that investment in green technologies can be positioned as a way out of the recession, then so can investment in ICT for peace(building) aimed at women, be they in Africa or on other continents where people suffer from conflicts or are in the process of reconciliation and peacebuilding. One just has to divert the focus from ICT for destruction, fear-mongering, and the like to one of ICT for constructive engagement, aiming at inclusive technologies and those applications that facilitate the development of societies and empower people.

References

[1] Shastry Njeru. (2009). Information and Communication Technology (ICT), Gender, and Peacebuilding in Africa: A Case of Missed Connections. Peace & Conflict Review, 3(2), 32-40.

[2] Huyer, S. and Sikoska, T. (2003). Overcoming the Gender Digital Divide: Understanding the ICTs and their potential for the Empowerment of Women. United Nations International Research and Training Institute for the Advancement of Women (UN-INSTRAW), INSTRAW Research Paper Series No. 1, 36p.


Easy widget for keeping track of visited countries

Following up on my whining last month about not being able to find a suitable and easy Web 2.0 widget to record the countries I have visited, I stumbled upon one the other day that comes reasonably close!

Douwe Osinga, who works at Google, made an interactive applet for selecting the countries visited (for the USA and India also the states), and the generated code can then be copied into your home page, blog, or Facebook. Updating the generated figure can be done by pasting the previously generated HTML back into the appropriate box, clicking on the new country/ies, and then pasting the new code back into your home page, blog, or Facebook. And no login hurdles etc. have to be overcome.

Thus, it is not entirely interactive and cross-linked and all that, but it will do fine—and most certainly better than the lame Paint job I did last month. So, here goes the updated picture (not including the holiday that I would like to take now), where at the bottom you will find the standard link to create your own map: (map updated on 30-6-’12 and again on 20-1-2015 [if the picture doesn’t load anymore, click here])


visited 39 states (17.3%)
Create your own visited map of The World or Triposo world travel guide for Android

72010 SemWebTech lecture 12: Social aspects and recap part 2 of the course

You might ask yourself why we should even bother with social aspects in a technologies course. Out there in the field, however, SWT are applied by people with different backgrounds and specialties, and they are relatively new technologies that play out in an inter-, multi-, or transdisciplinary environment, which brings with it some learning curves. If you end up working in this area, then it is wise to have some notion of human dynamics in addition to the theoretical and technological details, and of how the two are intertwined. Some of the hurdles that may seem ‘merely’ dynamics of human interaction can very well turn out to be scratching the surface of problems that might be solved with extensions or modifications to the technologies, or that may even motivate new theoretical research.

Good and Wilkinson’s paper [1] provides a non-technical introduction to Semantic Web topics, such as LSIDs, RDF, ontologies, and services. They consider what problems these technologies solve (i.e., the sensible reasons to adopt them), and what the hurdles are, both with respect to the extant tools & technologies and with respect to the (humans working for some of the) leading biological data providers that appear reluctant to take up the technologies. There are obviously people who have taken the approach of “let’s try and see what comes out of the experimentation”, whereas others are more reserved and take the approach of “let’s see what happens, and then maybe we’ll try”. If there are not enough people of the former type, then the latter ones obviously will never try.

Another dimension of the social aspects is described in [2], which is a write-up of Goble’s presentation about the Montagues and the Capulets at the SOFG’04 meeting. It argues that there are, mostly, three different types of people within the SWLS (Semantic Web for the Life Sciences) arena (it may just as well be applicable to another subject domain where people experiment with SWT, e.g., public administration): the AI researchers, the philosophers, and the IT-savvy domain experts. They each have their own motivations and goals, which, at times, clash, but with conversation, respect, understanding, compromise, and collaboration, one can, and will, achieve the realisation of theory and ideas in useful applications.

The second part of the lecture will be devoted to a recap of the material of the past 11 lectures (the recap of the first part of the SWT course will be on 19-1).

References

[1] Good BM and Wilkinson MD. The Life Science Semantic Web is Full of Creeps! Briefings in Bioinformatics, 2006 7(3):275-286.

[2] Carole Goble and Chris Wroe. The Montagues and the Capulets. Comparative and Functional Genomics, 5(8):623-632, 2004. doi:10.1002/cfg.442

Note: reference 1 is mandatory reading, 2 is optional.

Lecture notes: none

Course website

72010 SemWebTech lecture 11: BioRDF and Workflows

After having considered the background of the combination of ontologies, the Semantic Web, and ‘something bio’, and some challenges and successes, in the previous three lectures, we shall take a look at more technologies that are applied in the life sciences and that use SWT to a greater or lesser extent: in particular, RDF and scientific workflows. The former has the flavour of “let’s experiment with the new technologies”, whereas the latter is more like “where can we add SWT to the system and make things easier?”.

BioRDF

The problems of data integration were not always solved in a satisfactory manner with the ‘old’ technologies, but perhaps SWT can solve them; or so goes the idea. The past three years have seen several experiments to test if SWT can live up to that challenge. To see where things are heading, let us recollect the data integration strategies covered in lecture 8, which can be chosen with the extant technologies as well as the newer ones of the Semantic Web: (i) physical schema mappings with Global As View (GAV), Local As View (LAV), or GLAV, (ii) conceptual model-based data integration, (iii) data federation, (iv) data warehouses, (v) data marts, (vi) services-mediated integration, (vii) peer-to-peer data integration, and (viii) ontology-based data integration, being (i) or (ii) (possibly in conjunction with the others) through an ontology, or linked data by means of an ontology.

Early experiments focused on RDF-izing ‘legacy’ data, such as RDBMSs, Excel sheets, HTML pages, etc., and making one large triple store out of it, i.e., an RDF warehouse [1,2], using tools such as D2RQ and Sesame (renamed to OpenRDF) as triple store (other triple stores are, e.g., Virtuoso and AllegroGraph, used by [3]). The Bio2RDF experiment took over 20 freely available data sources and converted them with multiple JSP programs into a total of about 163 million triples in a Sesame triple store, added a myBio2RDF personalisation step, and used extant applications to present the data to the users. The warehousing strategy, however, has some well-known drawbacks even in a non-Semantic Web setting. So, following the earlier gradual development of data integration strategies, the time had come to experiment with data federation, RDF-style [3], where the authors note at the end that perhaps the next step—services—may yield interesting results as well. You also may want to have a look at the winners’ solutions to the yearly Billion Triple Challenge and other Semantic Web challenges (all submissions, each with a paper describing the system and a demo, are filed under the ‘former challenges’ menu).
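
As a toy illustration of the RDF-izing step (not Bio2RDF’s actual JSP-based conversion pipeline), the following Python sketch with rdflib converts a few rows of a hypothetical CSV export into triples; in a production setting the result would be loaded into a triple store such as Sesame, Virtuoso, or AllegroGraph.

```python
# Toy sketch of RDF-izing a legacy CSV export into triples with rdflib (not Bio2RDF's actual pipeline).
# The namespace, property names, and data rows are made up for illustration.
import csv, io
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/bio/")   # hypothetical namespace

legacy_csv = io.StringIO(
    "id,name,organism\n"
    "P001,Hexokinase,Homo sapiens\n"
    "P002,Insulin,Homo sapiens\n"
)

g = Graph()
for row in csv.DictReader(legacy_csv):
    subject = EX[row["id"]]
    g.add((subject, RDF.type, EX.Protein))
    g.add((subject, RDFS.label, Literal(row["name"])))
    g.add((subject, EX.organism, Literal(row["organism"])))

# Inspect the generated triples; a production setting would load them into a triple store instead.
print(g.serialize(format="turtle"))
```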

One of the problems that SWT and its W3C standards aimed to solve was uniform data representation, which can be done well with RDF. Another was locating an entity and identifying it, which can be done with URIs. An emerging problem now is that for a single entity in reality there are many “semantically equivalent” URIs [1,3]; e.g., Hexokinase had three different URIs, one in the GO, one in UniProt, and one in the BioPathways (and to harmonise them, Bio2RDF added their own URI and linked it to the others using owl:sameAs). More general than just the URI issue is the observation made by the HCLS IG’s Linking Open Drug Data group, which was a well-known hurdle in earlier non-SWT data integration efforts as well: “A significant challenge … is the strong prevalence of terminology conflicts, synonyms, and homonyms. These problems are not addressed by simply making data sets available on the Web using RDF as common syntax but require deeper semantic integration.” and “For … applications that rely on expressive querying or automated reasoning deeper integration is essential” [4]. In parallel with the request for “more community practices on publishing term and schema mappings” [4], the experimentation with RDF-oriented data integration continues.
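
To illustrate the owl:sameAs mechanism mentioned above, here is a small rdflib sketch; the URIs are made-up stand-ins, not the actual GO, UniProt, or Bio2RDF identifiers.

```python
# Sketch of linking 'semantically equivalent' URIs with owl:sameAs (made-up URIs, not the real identifiers).
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

hk_bio2rdf = URIRef("http://example.org/bio2rdf/hexokinase")   # stand-ins for the Bio2RDF,
hk_go      = URIRef("http://example.org/go/0004396")           # GO, and UniProt identifiers
hk_uniprot = URIRef("http://example.org/uniprot/P19367")       # of Hexokinase

g = Graph()
g.add((hk_bio2rdf, OWL.sameAs, hk_go))
g.add((hk_bio2rdf, OWL.sameAs, hk_uniprot))

# Collect the asserted aliases; a reasoner that understands owl:sameAs could also
# merge the statements made about any of them.
for _, _, alias in g.triples((hk_bio2rdf, OWL.sameAs, None)):
    print(alias)
```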

Scientific Workflows

You may have come across Business Process Modelling and workflows in government and industry; scientific workflows are an extension to that (see its background and motivation). In addition to general requirements, such as service composition and reuse of workflow design, scalability, and data provenance, in practice, it turns out that such a scientific workflow system must have the ability to handle multiple databases and a range of analysis tools with corresponding interfaces to a diverse range of computational environments, deal with explicit representation of knowledge at different stages, customization of the interface for each researcher, and auditability and repeatability of the workflow.

To cut a long story short (in the writing here, not in the lecture on 11-1): where can we plug SWT into scientific workflows? One can, for instance, use RDF as common data format for linking and integration and SPARQL for querying that data, OWL ontologies for the representation of the knowledge across the workflow (at least the domain knowledge and the workflow knowledge), rules to orchestrate the service execution, and services (e.g., WSDL, OWL-S) to discover useful scripts that can perform a task in the workflow.
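
As a small sketch of the “RDF as common data format plus SPARQL for querying” part of that list, the following hypothetical example (the wf: vocabulary is invented for illustration) shows workflow steps writing their outputs into a shared RDF graph, which a downstream step then queries with SPARQL.

```python
# Hypothetical sketch: workflow steps publish their outputs as RDF in a shared graph,
# and a downstream step uses SPARQL to pick up exactly the data items it needs.
# The wf: vocabulary and the data are made up for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

WF = Namespace("http://example.org/workflow#")

g = Graph()
# Output of a hypothetical sequence-alignment service step:
g.add((WF.result1, RDF.type, WF.SequenceAlignment))
g.add((WF.result1, WF.eValue, Literal(1e-30)))
# Output of a hypothetical annotation step:
g.add((WF.result2, RDF.type, WF.GOAnnotation))
g.add((WF.result2, WF.annotates, WF.result1))

# A later step queries the shared graph for alignments with a sufficiently small e-value.
q = """
PREFIX wf: <http://example.org/workflow#>
SELECT ?r WHERE { ?r a wf:SequenceAlignment ; wf:eValue ?e . FILTER(?e < 1e-10) }
"""
for row in g.query(q):
    print(row.r)
```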

This still leaves the choice of what to do with the provenance, which may be considered a component of the broader notion of trust. Recollecting the Semantic Web layer cake from lecture 1: trust sits above the SPARQL, OWL, and RIF pieces. Currently there is no W3C standard for the trust layer, yet users need it. Scientific workflow systems, such as Kepler and Taverna, have invented their own ways of managing it. For instance, Taverna uses experiment-, workflow-, and knowledge-provenance models represented using RDF(S) & OWL, and RDF for the individual provenance graphs of a particular workflow [5,6]. The area of scientific workflows, provenance, and trust is lively, with workshops and, e.g., the provenance challenges; at the time of writing this post, it may still be too early to identify an established solution (to, say, have interoperability across workflow systems and their components to weave a web of provenance), be it an SWT one or another.
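
What recording such provenance in RDF could look like is sketched below with a made-up vocabulary; Taverna’s actual provenance models are richer and differ in their details.

```python
# Minimal sketch of recording workflow provenance as RDF (made-up vocabulary, not Taverna's actual models).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

P = Namespace("http://example.org/provenance#")   # hypothetical provenance vocabulary

g = Graph()
run = P["run-001"]
g.add((run, P.executedWorkflow, P["blast-annotate-v3"]))
g.add((run, P.startedAt, Literal("2011-01-11T10:15:00", datatype=XSD.dateTime)))
g.add((P["output-42"], P.generatedBy, run))
g.add((P["output-42"], P.derivedFrom, P["uniprot-dump-2010-12"]))

# Such a graph lets one answer questions like: which run produced this result, and from which input?
q = """
PREFIX p: <http://example.org/provenance#>
SELECT ?run ?input WHERE {
  <http://example.org/provenance#output-42> p:generatedBy ?run ;
                                            p:derivedFrom ?input .
}
"""
for row in g.query(q):
    print(row.run, row.input)
```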

Probably, there will not be enough time during the lecture to also cover Semantic Web Services. In case you are curious about how one can efficiently search through the thousands of web services, and about their use in working systems (i.e., application-oriented papers, not the theory behind it), you may want to have a look at [7,8] (the latter is lighter on the bio-component than the former). The W3C activities on web services have standards, working groups, and an interest group.

References

[1] Belleau F, Nolin MA, Tourigny N, Rigault P, Morissette J. Bio2RDF: Towards a mashup to build bioinformatics knowledge systems. Journal of Biomedical Informatics, 2008, 41(5):706-716. Online interface: Bio2RDF

[2] Ruttenberg A, Clark T, Bug W, Samwald M, Bodenreider O, Chen H, Doherty D, Forsberg K, Gao Y, Kashyap V, Kinoshita J, Luciano J, Scott Marshall M, Ogbuji C, Rees J, Stephens S, Wong GT, Elizabeth Wu, Zaccagnini D, Hongsermeier T, Neumann E, Herman I, Cheung KH. Advancing translational research with the Semantic Web, BMC Bioinformatics, 8, 2007.

[3] Kei-Hoi Cheung, H Robert Frost, M Scott Marshall, Eric Prud’hommeaux, Matthias Samwald, Jun Zhao, and Adrian Paschke. A journey to Semantic Web query federation in the life sciences. BMC Bioinformatics 2009, 10(Suppl 10):S10

[4] Anja Jentzsch, Bo Andersson, Oktie Hassanzadeh, Susie Stephens, Christian Bizer. Enabling Tailored Therapeutics with Linked Data. LDOW2009, April 20, 2009, Madrid, Spain.

[5] Tom Oinn, Matthew Addis, Justin Ferris, Darren Marvin, Martin Senger, Mark Greenwood, Tim Carver, Kevin Glover, Matthew R. Pocock, Anil Wipat and Peter Li. (2004). Taverna: a tool for the composition and enactment of bioinformatics workflows. Bioinformatics 20 (17): 3045-3055. The Taverna website

[6] Carole Goble et al. Knowledge Discovery for Biology with Taverna. In: Semantic Web: Revolutionizing Knowledge Discovery in the Life Sciences. 2007, pp. 355-395.

[7] Michael DiBernardo, Rachel Pottinger, and Mark Wilkinson. (2008). Semi-automatic web service composition for the life sciences using the BioMoby semantic web framework. Journal of Biomedical Informatics, 41(5): 837-847.

[8] Sahoo, S.S., Sheth, A., Hunter, B., and York, W.S. SEMbrowser–semantic biological web services registry. In: Semantic Web: Revolutionizing Knowledge Discovery in the Life Sciences, Baker, C.J.O., Cheung, K.-H. (eds), Springer: New York, 2007, pp. 317-340.

Note: references 1 and (5 or 6) are mandatory reading, (2 or 3) was mandatory for an earlier lecture, and 4, 7, and 8 are optional.

Lecture notes: lecture 11 – BioRDF and scientific workflows

Course website