A strike against the ‘realism-based approach’ to ontology development

The ontology engineering course starting this Monday at the Knowledge Representation and Reasoning group at Meraka commences with the question What is an ontology? In addition to assessing definitions, it touches upon long-standing disagreements concerning whether ontologies are about representing reality, our conceptualization of entities in reality, or some conceptualization that does not necessarily subscribe to the existence of reality. The “representation of reality school” is advocated in ontology engineering most prominently by Barry Smith and colleagues and their foundational ontology BFO; the “conceptualization of entities in reality school” by various people and research groups, such as the LOA headed by Nicola Guarino and their DOLCE foundational ontology; whereas the “conceptualization regardless of reality school” can be (but not necessarily is) encountered in organisations developing, e.g., medical ontologies that do not subscribe to evidence-based medicine to decide what goes in the ontology and how (but instead base it on, say, the outcome of power plays between big pharma and health insurance companies).

Due to the limited time and scope of this and previous courses on ontology engineering I taught, I mention[ed] only succinctly that those differences exist (e.g., pp10-11 of the UH slides) and briefly illustrate[d] some of the aspects of the debate and their possible consequences for practical aspects of ontology engineering. This information is largely based on a few papers and the consequences extracted from them, the examples they describe and that I have encountered, and the discussions that took place at the various meetings, workshops, conferences, and summer schools in which I participated. But there was no nice, accessible paper that describes the debate—or even part of it—more precisely and is readable also by ontologists who are not philosophers. Until last week, that is. The Applied Ontology journal published a paper by Gary Merrill, entitled Ontological realism: Methodology or misdirection? [1], that critically assesses the ontological realism advocated by Barry Smith and his colleague Werner Ceusters. Considering its relevance to ontology engineering, the article has been made freely available, and in the announcement of the journal issue, its editors in chief (Nicola Guarino and Mark Musen) mentioned that Smith and Ceusters are busy preparing a response to Merrill’s paper, which will be published in a subsequent issue of Applied Ontology. Merrill, in turn, promised to respond to this rebuttal.

But for now, there are 30 pages of assessment of the merits of, and problems with, the philosophical underpinnings of the “realism-based approach” that is used in particular in the realm of ontology engineering within the OBO Foundry project and its large set of ontologies, BFO, and the Relation Ontology. The abstract gives an idea of the answer to the question in the paper’s title:

… The conclusion reached is that while Smith’s and Ceusters’ criticisms of prior practice in the treatment of ontologies and terminologies in medical informatics are often both perceptive and well founded, and while at least some of their own proposals demonstrate obvious merit and promise, none of this either follows from or requires the brand of realism that they propose.

The paper’s contents back this up with analysis, arguments, examples, and bolder statements than the abstract suggests.
For anyone involved in ontology development and interested in the debate—even if you think you’re tired of it—I recommend reading the paper, and at least following how the debate unfolds in the responses and rebuttals.

My opinion? Well, I have one, of course, but this post is an addendum to the general course page of MOWS’10, hence I try to refrain from adding too much bias to the course material.

UPDATE (27-7-2010): On whales and apples, and on ontology and reality: you might enjoy also “Moby Dick: an exercise in ontology”, written by Lorne A. Smith.

References

[1] Gary H. Merrill. Ontological realism: Methodology or misdirection? Applied Ontology, 5 (2010) 79–108.


Failure of the experiment for the SemWebTech course

At the beginning of the SWT course, I had the illusion that we could use the blog as another aspect of the course and, more importantly, that students (and other interested people) would feel free to leave comments and links to pages and other related blogs and blog posts they had encountered. It did not really happen, though, so as an experiment formulated as such, it failed miserably.

But I can scrape together some data demonstrating that it was not all for naught. I have received several offline comments from colleagues who thought it useful, from non-SWT-course students who used it as a means of distance education, or who kindly pointed me to updates and extensions of various topics—but the plural of anecdote is not data. So here are some figures.

34 students had enrolled in Moodle, of whom some 10-15 attended class initially, dwindling to 4-8 as midterms of other courses and the holiday season interfered with their study schedule (and perhaps my teaching skills or the topics of the course); 12 students did a mini-project for the lab within the deadline for this exam session, 12 registered for the exam, and 11 showed up to actually do the exam. FUB strives for a 1:6 lecturer:student ratio, so with the SWT course (as well as most other MSc courses) we are at the good end of that.

The aggregated data for explicit blog post accesses (i.e., not counting those who read a post through the home page) and slide downloads on 17-2-2010, as sampled while invigilating the SWT exam, are as follows: the average number of visits per SWT course blog post is 112, with OWL, top-down and bottom-up ontology development, and part-whole relations well above the average; the average number of slide downloads is 41, with OWL and top-down and bottom-up again above average. At the moment, one can only speculate why.

Clearly, many more people have accessed the pages and the slides than can be accounted for by the students alone, even if one assumes they accessed each blog post, say, twice and entertained themselves with downloading both the normal slides and the same ones in hand-out format. The content of the slides was not the only material covered during the lectures and labs, but maybe it has been, is, or will be of use to other people as well. People interested in ontology engineering topics more generally, especially regarding course development and course content, will find Ontolog’s Ontology Summit’s current virtual panel sessions on “Creating the ontologists of the future” worthwhile to consult.

Finally, will I go through the trouble of writing blog posts for another course I may have to teach? Probably not.

72010 SemWebTech lecture 12: Social aspects and recap part 2 of the course

You might ask yourself why we should even bother with social aspects in a technologies course. Out there in the field, however, SWT are applied by people with different backgrounds and specialties, and they are relatively new technologies operating in an inter/multi/transdisciplinary environment, which brings some learning curves with it. If you end up working in this area, then it is wise to have some notion of human dynamics in addition to the theoretical and technological details, and of how the two are intertwined. Some of the hurdles that may seem ‘merely’ dynamics of human interaction can very well turn out to be scratching the surface of problems that might be solved with extensions or modifications to the technologies, or may even motivate new theoretical research.

Good and Wilkinson’s paper [1] provides a non-technical introduction to Semantic Web topics, such as LSID, RDF, ontologies, and services. They consider what problems these technologies solve (i.e., the sensible reasons to adopt them), and what the hurdles are, both with respect to the extant tools & technologies and the (humans working for some of the) leading biological data providers that appear reluctant to take up the technologies. There are obviously people who have taken the approach of “let’s try and see what comes out of the experimentation”, whereas others are more reserved and take the approach of “let’s see what happens, and then maybe we’ll try”. If there are not enough people of the former type, then the latter will obviously never try.

Another dimension of the social aspects is described in [2], which is a write-up of Goble’s presentation about the Montagues and Capulets at the SOFG’04 meeting. It argues that there are, mostly, three different types of people within the SWLS arena (it may just as well be applicable to another subject domain were it to experiment with SWT, e.g., public administration): the AI researchers, the philosophers, and the IT-savvy domain experts. They each have their own motivations and goals, which, at times, clash, but with conversation, respect, understanding, compromise, and collaboration, one can achieve the realisation of theory and ideas in useful applications.

The second part of the lecture will be devoted to a recap of the material of the past 11 lectures (the recap of the first part of the SWT course will be on 19-1).

References

[1] Good BM and Wilkinson MD. The Life Science Semantic Web is Full of Creeps! Briefings in Bioinformatics, 2006 7(3):275-286.

[2] Carole Goble and Chris Wroe. The Montagues and the Capulets. Comparative and Functional Genomics, 5(8):623-632, 2004. doi:10.1002/cfg.442

Note: reference 1 is mandatory reading, 2 is optional.

Lecture notes: none

Course website

72010 SemWebTech lecture 11: BioRDF and Workflows

After considering the background of the combination of ontologies, the Semantic Web, and ‘something bio’, and some challenges and successes in the previous three lectures, we shall take a look at more technologies that are applied in the life sciences and that use SWT to a greater or lesser extent. In particular, RDF and scientific workflows will be covered. The former has the flavour of “let’s experiment with the new technologies”, whereas the latter is more like “where can we add SWT to the system and make things easier?”.

BioRDF

The problems of data integration were not always solved satisfactorily with the ‘old’ technologies, but perhaps SWT can solve them; or so goes the idea. The past three years have seen several experiments to test whether SWT can live up to that challenge. To see where things are heading, let us recollect the data integration strategies covered in lecture 8, which can be chosen with the extant technologies as well as the newer ones of the Semantic Web: (i) physical schema mappings with Global As View (GAV), Local As View (LAV), or GLAV; (ii) conceptual model-based data integration; (iii) data federation; (iv) data warehouses; (v) data marts; (vi) services-mediated integration; (vii) peer-to-peer data integration; and (viii) ontology-based data integration, being (i) or (ii) (possibly in conjunction with the others) through an ontology, or linked data by means of an ontology.

Early experiments focused on RDF-izing ‘legacy’ data, such as RDBMSs, Excel sheets, HTML pages, etc., and making one large triplestore out of it, i.e., an RDF warehouse [1,2], using tools such as D2RQ and the Sesame triple store (renamed to OpenRDF); other triple stores are, e.g., Virtuoso and AllegroGraph, the latter used by [3]. The Bio2RDF experiment took over 20 freely available data sources and converted them with multiple JSP programs into a total of about 163 million triples in a Sesame triplestore, added a myBio2RDF personalization step, and used extant applications to present the data to the users. The warehousing strategy, however, has some well-known drawbacks even in a non-Semantic Web setting. So, following the earlier gradual development of data integration strategies, the time had come to experiment with data federation, RDF-style [3], where the authors note at the end that perhaps the next step—services—may yield interesting results as well. You also may want to have a look at the winners’ solutions to the yearly Billion Triple Challenge and other Semantic Web challenges (all submissions, each with a paper describing the system and a demo, are filed under the ‘former challenges’ menu).
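As a toy illustration of the RDF-izing step, the following Python sketch turns rows of a hypothetical CSV export into subject-predicate-object triples. The namespaces, column names, and data are invented for illustration; a real conversion (e.g., with D2RQ) would of course emit proper RDF syntax with typed literals and well-chosen URIs.

```python
import csv
import io

# A hypothetical 'legacy' table, as might be exported from an RDBMS or spreadsheet.
legacy_csv = """id,name,organism
P1,Hexokinase,Homo sapiens
P2,Trypsin,Bos taurus
"""

BASE = "http://example.org/protein/"  # made-up instance namespace
PRED = "http://example.org/vocab/"    # made-up predicate namespace

def rdfize(csv_text):
    """Turn each non-key cell of each row into one (subject, predicate, object) triple."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = BASE + row["id"]
        for column, value in row.items():
            if column != "id":
                triples.append((subject, PRED + column, value))
    return triples

triples = rdfize(legacy_csv)
for t in triples:
    print(t)
```

Each row yields as many triples as it has non-key columns, which is why even modest databases quickly add up to millions of triples in a warehouse.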

One of the problems that SWT and its W3C standards aimed to solve was uniform data representation, which can be done well with RDF. Another was locating and identifying an entity, which can be done with URIs. An emerging problem now is that for a single entity in reality, there are many “semantically equivalent” URIs [1,3]; e.g., hexokinase had three different URIs: one in the GO, one in UniProt, and one in the BioPathways (and to harmonise them, Bio2RDF added their own and linked it to the others using owl:sameAs). More general than the URI issue alone is the observation made by the HCLS IG’s Linking Open Drug Data group, which was a well-known hurdle in earlier non-SWT data integration efforts: “A significant challenge … is the strong prevalence of terminology conflicts, synonyms, and homonyms. These problems are not addressed by simply making data sets available on the Web using RDF as common syntax but require deeper semantic integration.” and “For … applications that rely on expressive querying or automated reasoning deeper integration is essential” [4]. In parallel with the request for “more community practices on publishing term and schema mappings” [4], the experimentation with RDF-oriented data integration continues.

Scientific Workflows

You may have come across Business Process Modelling and workflows in government and industry; scientific workflows are an extension of that (see its background and motivation). In addition to general requirements, such as service composition, reuse of workflow design, scalability, and data provenance, in practice it turns out that such a scientific workflow system must be able to handle multiple databases and a range of analysis tools, with corresponding interfaces to a diverse range of computational environments; deal with explicit representation of knowledge at different stages; allow customization of the interface for each researcher; and support auditability and repeatability of the workflow.

To cut a long story short (in the writing here, not in the lecture on 11-1): where can we plug SWT into scientific workflows? One can, for instance, use RDF as common data format for linking and integration and SPARQL for querying that data, OWL ontologies for the representation of the knowledge across the workflow (at least the domain knowledge and the workflow knowledge), rules to orchestrate the service execution, and services (e.g., WSDL, OWL-S) to discover useful scripts that can perform a task in the workflow.
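As a minimal sketch of the querying ingredient mentioned above, the following Python snippet matches a single SPARQL-like triple pattern against a toy set of workflow-related triples. All names are invented for illustration; a real system would run SPARQL over an RDF store and join multiple patterns.

```python
# A toy triple store describing a two-step workflow; names are illustrative.
triples = [
    ("wf:step1", "uses", "db:uniprot"),
    ("wf:step1", "produces", "data:alignments"),
    ("wf:step2", "uses", "data:alignments"),
]

def match(pattern, store):
    """Match one triple pattern against the store; '?x'-style terms are
    variables. Returns a list of variable bindings, as a SPARQL engine
    would for a single basic graph pattern."""
    results = []
    for triple in store:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break  # constant mismatch: try the next triple
        else:
            results.append(binding)
    return results

# 'Which step consumes the alignments?'
print(match(("?step", "uses", "data:alignments"), triples))
```

Joining several such patterns on shared variables is what lets one traverse a workflow's provenance graph, e.g., from a result back to the data sources it was derived from.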

This still leaves the choice of what to do with provenance, which may be considered a component of the broader notion of trust. Recollecting the Semantic Web layer cake from lecture 1: trust sits above the SPARQL, OWL, and RIF pieces. Currently, there is no W3C standard for the trust layer, yet users need it. Scientific workflow systems, such as Kepler and Taverna, invented their own ways of managing it. For instance, Taverna uses experiment-, workflow-, and knowledge-provenance models represented using RDF(S) & OWL, and RDF for the individual provenance graphs of a particular workflow [5,6]. The area of scientific workflows, provenance, and trust is lively with workshops and, e.g., the provenance challenges; at the time of writing this post, it may still be too early to identify an established solution (to, say, have interoperability across workflow systems and their components to weave a web of provenance), be it an SWT one or another.

Probably, there will not be enough time during the lecture to also cover Semantic Web Services. In case you are curious how one can efficiently search for the thousands of web services and their use in working systems (i.e., application-oriented papers, not the theory behind it), you may want to have a look at [7, 8] (the latter is lighter on the bio-component than the former). The W3C activities on web services have standards, working groups, and an interest group.

References

[1] Belleau F, Nolin MA, Tourigny N, Rigault P, Morissette J. Bio2RDF: Towards A Mashup To Build Bioinformatics Knowledge System. Journal of Biomedical Informatics, 2008, 41(5):706-16. online interface: bio2RDF

[2] Ruttenberg A, Clark T, Bug W, Samwald M, Bodenreider O, Chen H, Doherty D, Forsberg K, Gao Y, Kashyap V, Kinoshita J, Luciano J, Scott Marshall M, Ogbuji C, Rees J, Stephens S, Wong GT, Elizabeth Wu, Zaccagnini D, Hongsermeier T, Neumann E, Herman I, Cheung KH. Advancing translational research with the Semantic Web, BMC Bioinformatics, 8, 2007.

[3] Kei-Hoi Cheung, H Robert Frost, M Scott Marshall, Eric Prud’hommeaux, Matthias Samwald, Jun Zhao, and Adrian Paschke. A journey to Semantic Web query federation in the life sciences. BMC Bioinformatics 2009, 10(Suppl 10):S10

[4] Anja Jentzsch, Bo Andersson, Oktie Hassanzadeh, Susie Stephens, Christian Bizer. Enabling Tailored Therapeutics with Linked Data. LDOW2009, April 20, 2009, Madrid, Spain.

[5] Tom Oinn, Matthew Addis, Justin Ferris, Darren Marvin, Martin Senger, Mark Greenwood, Tim Carver, Kevin Glover, Matthew R. Pocock, Anil Wipat and Peter Li. (2004). Taverna: a tool for the composition and enactment of bioinformatics workflows. Bioinformatics 20 (17): 3045-3055. The Taverna website

[6] Carole Goble et al. Knowledge Discovery for biology with Taverna. In: Semantic Web: Revolutionizing knowledge discovery in the life sciences. 2007, pp355-395.

[7] Michael DiBernardo, Rachel Pottinger, and Mark Wilkinson. (2008). Semi-automatic web service composition for the life sciences using the BioMoby semantic web framework. Journal of Biomedical Informatics, 41(5): 837-847.

[8] Sahoo, S.S., Sheth, A., Hunter, B., and York, W.S. SEMbrowser: semantic biological web services registry. In: Semantic Web: revolutionizing knowledge discovery in the life sciences, Baker, C.J.O., Cheung, K.-H. (eds), Springer: New York, 2007, pp 317-340.

Note: references 1 and (5 or 6) are mandatory reading, (2 or 3) was mandatory for an earlier lecture, and 4, 7, and 8 are optional.

Lecture notes: lecture 11 – BioRDF and scientific workflows

Course website

72010 SemWebTech lecture 10: SWLS and text processing and ontologies

There is a lot to be said about how Ontology, ontologies, and natural language interact from a philosophical perspective up to the point that different commitments lead to different features and, moreover, limitations of a (Semantic Web) application. In this lecture on 22 Dec, however, we shall focus on the interaction of NLP and ontologies within a bio-domain from an engineering perspective.

During the bottom-up ontology development and methodologies lectures, it was already mentioned that natural language processing (NLP) can be useful for ontology development. In addition, NLP can be used as a component in an ontology-driven information system and an NLP application can be enhanced with an ontology. Which approaches and tools suit best depends on the goal (and background) of its developers and prospective users, ontological commitment, and available resources.

Summarising the possibilities for “something natural language text” and ontologies or ontology-like artifacts, we can:

  • Use ontologies to improve NLP: to enhance precision and recall of queries (including enhancing dialogue systems [1]), to sort results of an information retrieval query to the digital library (e.g. GoPubMed [2]), or to navigate literature (which amounts to linked data [3]).
  • Use NLP to develop ontologies (TBox): mainly to search for candidate terms and relations, which is part of the suite of techniques called ‘ontology learning’ [4].
  • Use NLP to populate ontologies (ABox): e.g., document retrieval enhanced by lexicalised ontologies and biomedical text mining [5].
  • Use it for natural language generation (NLG) from a formal language: this can be done using a template-based approach that works quite well for English but much less so for grammatically more structured languages such as Italian [6], or with a full-fledged grammar engine as with Attempto Controlled English and bi-directional mappings (see [7] for a discussion).
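To give a flavour of the term-extraction side of ontology learning, here is a crude Python sketch that counts frequent word bigrams in a toy snippet of text as candidate terms. Real approaches use POS tagging, statistical measures such as C-value, and much larger corpora; the example text and stopword list here are invented.

```python
import re
from collections import Counter

# A toy 'corpus'; in ontology learning one would process real domain literature.
corpus = """Low-density lipoprotein binds to the LDL receptor.
The LDL receptor mediates endocytosis of low-density lipoprotein."""

def candidate_terms(text):
    """Extract frequent word bigrams as candidate terms: a crude stand-in
    for the statistical term extraction used in ontology learning."""
    words = re.findall(r"[a-z][a-z-]+", text.lower())
    stop = {"the", "to", "of", "and"}  # tiny illustrative stopword list
    bigrams = [
        f"{a} {b}" for a, b in zip(words, words[1:])
        if a not in stop and b not in stop
    ]
    return Counter(bigrams).most_common()

print(candidate_terms(corpus))
```

Even this naive count surfaces "ldl receptor" and "low-density lipoprotein" as the most frequent candidates, illustrating why multi-word term handling matters so much in the biomedical domain.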

Intuitively, one may be led to think that simply taking the generic NLP or NLG tools will do fine for the bio(medical) domain too. Applications do indeed use those techniques and tools—Paul Buitelaar’s slides have examples and many references to NLP tools—but, generally, they do not suffice to obtain ‘acceptable’ results. Domain-specific peculiarities are many and wide-ranging; for instance, dealing with the variations of terms (scientific name, variant, common misspellings) and the grounding step (linking a term to an entity in a biological database) in the ontology-NLP preparation and instance classification side [5], characterizing the question in a question answering system correctly [1], and finding ways to deal with the rather long strings that denote a biological entity or concept or universal [4]. Handling such peculiarities can actually yield better overall results than in generic or other domain-specific uses of NLP tools, but it requires extra manual preparatory work and a basic understanding of the subject domain and its applications.

References

[1] K. Vila, A. Ferrández. Developing an Ontology for Improving Question Answering in the Agricultural Domain. In: Proceedings of MTSR’09. Springer CCIS 46, 245-256.

[2] Heiko Dietze, Dimitra Alexopoulou, Michael R. Alvers, Liliana Barrio-Alvers, Bill Andreopoulos, Andreas Doms, Joerg Hakenberg, Jan Moennich, Conrad Plake, Andreas Reischuck, Loic Royer, Thomas Waechter, Matthias Zschunke, and Michael Schroeder. GoPubMed: Exploring PubMed with Ontological Background Knowledge. In Stephen A. Krawetz, editor, Bioinformatics for Systems Biology. Humana Press, 2008.

[3] Allen H. Renear and Carole L. Palmer. Strategic Reading, Ontologies, and the Future of Scientific Publishing. Science 325 (5942), 828. [DOI: 10.1126/science.1157784] (but see also some comments on the paper)

[4] Dimitra Alexopoulou, Thomas Waechter, Laura Pickersgill, Cecilia Eyre, and Michael Schroeder. Terminologies for text-mining: an experiment in the lipoprotein metabolism domain. BMC Bioinformatics, 9(Suppl4):S2, 2008

[5] Witte, R. Kappler, T. And Baker, C.J.O. Ontology design for biomedical text mining. In: Semantic Web: revolutionizing knowledge discovery in the life sciences, Baker, C.J.O., Cheung, H. (eds), Springer: New York, 2007, pp 281-313.

[6] M. Jarrar, C.M. Keet, and P. Dongilli. Multilingual verbalization of ORM conceptual models and axiomatized ontologies. STARLab Technical Report, Vrije Universiteit Brussels, Belgium. February 2006.

[7] R. Schwitter, K. Kaljurand, A. Cregan, C. Dolbear, G. Hart. A comparison of three controlled natural languages for OWL 1.1. Proc. of OWLED 2008 DC.

Note: references 4 and 5 are mandatory reading, and 1-3 and 6 are optional (recommended for the EMLCT students).

Lecture notes: lecture 10 – Text processing

Course website

72010 SemWebTech lecture 9: Successes and challenges for ontologies in the life sciences

To be able to talk about successes and challenges of SWT for health care and life sciences (or any other subject domain), we first need to establish when something can be deemed a success, when a challenge, and when an outright failure. Such measures can be devised in an absolute sense (compare technology x with an SWT one: does it outperform on measure y?) and in a relative sense (to whom is technology x deemed successful?). Given these considerations, we shall take a closer look at several attempts: two successes and a few challenges in representation and reasoning. What were the problems and how were they solved, and what are the remaining problems and can they be resolved?

As success stories we take the experiments by Wolstencroft and coauthors on classifying protein phosphatases [1] and by Calvanese et al. on graphical, web-based, ontology-based data access applied to horizontal gene transfer data [2]. They each focus on different ontology languages and reasoning services to solve different problems. What they have in common is that there is an interaction between the ontology and the instances (and that each took a considerable amount of work by people with different specialties): the former focuses on classifying instances, the latter on querying instances. In addition, modest results of biological significance have been obtained with the classification of the protein phosphatases, whereas with the ontology-based data analysis we are tantalizingly close.

The challenges for SWT in general, and for HCLS in particular, are quite diverse: some concern the SWT proper, while others are considered by its designers—and the W3C core activities on standardization—to be outside their responsibility, but still need to be done. Currently, for the software aspects, the onus is on software developers and industry to pick up the proof-of-concept and working-prototype tools that have come out of academia and to bring them to the industry-grade quality that widespread adoption of SWT requires. Although this aspect should not be ignored, we shall focus on the language and reasoning limitations during the lecture.

In addition to the language and corresponding reasoning limitations covered in the lectures on OWL, there are language “limitations” discussed and illustrated at length in various papers, with the most recent take in [3], where it might well be that the extensions presented in lectures 6 and 7 (parts, time, uncertainty, and vagueness) can ameliorate or perhaps even solve the problem. Some of the issues outlined by Schulz and coauthors are ‘mere’ modelling pitfalls, whereas others are real challenges that can be approximated to a greater or lesser extent. We shall look at several representation issues that go beyond the earlier examples of SNOMED CT’s “brain concussion without loss of consciousness”; e.g., how would you represent in an ontology that in most, but not all, cases hepatitis has fever as a symptom, how would you formalize the defined concept “Drug abuse prevention”, and (provided you are convinced it should be represented in an ontology) that the worldwide prevalence of diabetes mellitus is 2.8%?

Concerning challenges for automated reasoning, we shall look at two of the nine identified required reasoning scenarios [4], namely “model checking (violation)” and “finding gaps in an ontology and discovering new relations”, thereby reiterating that it is the life scientists’ high-level, goal-driven approach and desire to use OWL ontologies with reasoning services to, ultimately, discover novel information about nature. You might find it of interest to read about the feedback received from the SWT developers upon presenting [4] here: some requirements have been met in the meantime, and new useful reasoning services were presented.

References

[1] Wolstencroft, K., Stevens, R., Haarslev, V. Applying OWL reasoning to genomic data. In: Semantic Web: revolutionizing knowledge discovery in the life sciences, Baker, C.J.O., Cheung, H. (eds), Springer: New York, 2007, 225-248.

[2] Calvanese, D., Keet, C.M., Nutt, W., Rodriguez-Muro, M., Stefanoni, G. Web-based Graphical Querying of Databases through an Ontology: the WONDER System. ACM Symposium on Applied Computing (ACM SAC’10), March 22-26 2010, Sierre, Switzerland.

[3] Stefan Schulz, Holger Stenzhorn, Martin Boeker and Barry Smith. Strengths and Limitations of Formal Ontologies in the Biomedical Domain. Electronic Journal of Communication, Information and Innovation in Health (Special Issue on Ontologies, Semantic Web and Health), 2009.

[4] Keet, C.M., Roos, M. and Marshall, M.S. A survey of requirements for automated reasoning services for bio-ontologies in OWL. Third international Workshop OWL: Experiences and Directions (OWLED 2007), 6-7 June 2007, Innsbruck, Austria. CEUR-WS Vol-258.

[5] Ruttenberg A, Clark T, Bug W, Samwald M, Bodenreider O, Chen H, Doherty D, Forsberg K, Gao Y, Kashyap V, Kinoshita J, Luciano J, Scott Marshall M, Ogbuji C, Rees J, Stephens S, Wong GT, Elizabeth Wu, Zaccagnini D, Hongsermeier T, Neumann E, Herman I, Cheung KH. Advancing translational research with the Semantic Web, BMC Bioinformatics, 8, 2007.

p.s.: the first part of the lecture on 21-12 will be devoted to the remaining part of last week’s lecture; that is, a few discussion questions about [5] that are mentioned in the slides of the previous lecture.

Note: references 1 and 3 are mandatory reading, 2 and 4 recommended to read, and 5 was mandatory for the previous lecture.

Lecture notes: lecture 9 – Successes and challenges for ontologies

Course website

72010 SemWebTech lecture 8: SWT for HCLS background and data integration

After the ontology languages and general aspects of ontology engineering, we now delve into one specific application area: SWT for health care and life sciences. Its frontrunners in bioinformatics adopted some of the Semantic Web ideas even before Berners-Lee, Hendler, and Lassila wrote their Scientific American paper in 2001, even though they did not formulate their needs and intentions in the same terminology: they did want shared, controlled vocabularies with a common syntax, to facilitate data integration—or at least interoperability—across Web-accessible databases; a common space for identifiers; a dynamic, changing system; a way to organize and query incomplete biological knowledge; and, albeit not stated explicitly, it all still needed to be highly scalable [1].

Bioinformaticians and domain experts in genomics had already organized themselves in the Gene Ontology Consortium, which was set up officially in 1998 to realize a solution for these requirements. The results exceeded anyone’s expectations, for a range of reasons. Many tools for the Gene Ontology (GO) and its common KR format, .obo, have been developed, and other research groups adopted the approach to develop controlled vocabularies, either by extending the GO, e.g., with rice traits, or by adding their own subject domain, such as zebrafish anatomy and mouse developmental stages. This proliferation, as well as the OWL development and standardization process going on at about the same time, pushed the goal posts further: new expectations were put on the GO and its siblings and on their tools, and the proliferation had become a bit too unwieldy to keep a good overview of what was going on and how those ontologies would be put together. Put differently, some people noticed the inferencing possibilities to be gained from moving from obo to OWL, and others thought that some coordination among all those obo bio-ontologies would be advantageous, given that post-hoc integration of ontologies of related and overlapping subject domains is not easy. Thus came into being the OBO Foundry, to solve such issues by proposing a methodology for coordinated evolution of ontologies to support biomedical data integration [2].

People in related disciplines, such as ecology, have taken on board the experiences of these very early adopters, and decided instead to join after the OWL standardization. They, however, were not only motivated by data(base) integration. Referring to Madin et al’s paper [3] again, I highlight three points they made: “terminological ambiguity slows scientific progress, leads to redundant research efforts, and ultimately impedes advances towards a unified foundation for ecological science”, i.e., identification of some serious problems they have in ecological research; “Formal ontologies provide a mechanism to address the drawbacks of terminological ambiguity in ecology”, i.e., what they expect ontologies will solve for them (disambiguation); and “and fill an important gap in the management of ecological data by facilitating powerful data discovery based on rigorously defined, scientifically meaningful terms”, i.e., for what purpose they want to use ontologies and any associated computation (discovery). That is, ontologies not as one of many possible tools in the engineering/infrastructure toolbox, but as a required part of a method in the scientific investigation that aims to discover new information and knowledge about nature (i.e., in answering the who, what, where, when, and how of things being the way they are in nature).

What has all this to do with actual Semantic Web technologies? On the one hand, there are multiple data integration approaches and tools that have been, and are being, tried out by the domain experts, bioinformaticians, and interdisciplinary-minded computer scientists [4], and, on the other hand, there are the W3C Semantic Web standards XML, RDF(S), SPARQL, and OWL. Some use these standards to achieve data integration, some do not. Since this is a Semantic Web course, we shall take a look at two efforts that (try to) do, which came forth from the activities of the W3C’s Health Care and Life Sciences Interest Group. More precisely, we take a closer look at a paper written about 3 years ago [5] that reports on a case study to get those Semantic Web technologies to work in order to achieve data integration and a range of other things. There is also a more recent paper from the HCLS IG [6], where they aimed not only at linking of data but also at querying of distributed data, using a mixture of RDF triple stores and SKOS. Both papers reveal their understanding of the purposes of SWT and, moreover, what their goals are, their experimentation with various technologies to achieve them, and where there is still some work to do. There are notable achievements described in these, and related, papers, but the sought-after “killer app” is yet to be announced.
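To make the data integration idea concrete, here is a toy sketch (in Python, with made-up gene and disease data, and plain tuples standing in for RDF triples) of what merging triples from two sources and running a SPARQL-style conjunctive query over the merged result boils down to:

```python
# Toy illustration of RDF-style data integration: two sources export
# their records as triples, and a shared identifier ('BRCA1') lets a
# conjunctive query span both sources. All data are made up.

genes = {("BRCA1", "locatedOn", "Chromosome17"),
         ("TP53", "locatedOn", "Chromosome17")}
diseases = {("BRCA1", "associatedWith", "BreastCancer")}

# Graph merge = set union of triples.
merged = genes | diseases

# SPARQL-like conjunctive query: ?g locatedOn ?c . ?g associatedWith ?d
answers = [(g, c, d)
           for (g, p1, c) in merged if p1 == "locatedOn"
           for (g2, p2, d) in merged if p2 == "associatedWith" and g2 == g]
print(answers)  # [('BRCA1', 'Chromosome17', 'BreastCancer')]
```

Real efforts such as those described in [5,6] of course use RDF stores, shared URIs, and SPARQL engines rather than Python sets, but the join-over-shared-identifiers principle is the same.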

The lecture will cover a ‘historical’ overview and what more recent ontology-adopters focus on, the very basics of data integration approaches that motivated the development of ontologies, and we shall analyse some technological issues and challenges mentioned in [5] concerning Semantic Web (or not) technologies.

References:

[1] The Gene Ontology Consortium. Gene ontology: tool for the unification of biology. Nature Genetics, May 2000;25(1):25-9.

[2] Barry Smith, Michael Ashburner, Cornelius Rosse, Jonathan Bard, William Bug, Werner Ceusters, Louis J. Goldberg, Karen Eilbeck, Amelia Ireland, Christopher J Mungall, The OBI Consortium, Neocles Leontis, Philippe Rocca-Serra, Alan Ruttenberg, Susanna-Assunta Sansone, Richard H Scheuermann, Nigam Shah, Patricia L. Whetzel, Suzanna Lewis. The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nature Biotechnology 25, 1251-1255 (2007).

[3] Joshua S. Madin, Shawn Bowers, Mark P. Schildhauer and Matthew B. Jones. (2008). Advancing ecological research with ontologies. Trends in Ecology & Evolution, 23(3): 159-168.

[4] Erhard Rahm. Data Integration in Bioinformatics and Life Sciences. EDBT Summer School, Bolzano, Sep. 2007.

[5] Ruttenberg A, Clark T, Bug W, Samwald M, Bodenreider O, Chen H, Doherty D, Forsberg K, Gao Y, Kashyap V, Kinoshita J, Luciano J, Scott Marshall M, Ogbuji C, Rees J, Stephens S, Wong GT, Elizabeth Wu, Zaccagnini D, Hongsermeier T, Neumann E, Herman I, Cheung KH. Advancing translational research with the Semantic Web, BMC Bioinformatics, 8, 2007.

[6] Kei-Hoi Cheung, H Robert Frost, M Scott Marshall, Eric Prud’hommeaux, Matthias Samwald, Jun Zhao, and Adrian Paschke. A journey to Semantic Web query federation in the life sciences. BMC Bioinformatics 2009, 10(Suppl 10):S10

Note: references 1, 2, and (5 or 6) are mandatory reading; 3 and 4 are recommended.

Lecture notes: lecture 8 – SWLS background and data integration

Course website

72010 SemWebTech lecture 7: Dealing with uncertainty and vagueness

The third advanced ontology engineering topic concerns how to cope with uncertainty and vagueness in ontology languages and their reasoners—and what we can gain from all the extra effort.

For instance, consider information retrieval: to which degree is a web site, a page, a text passage, an image, or a video segment relevant to the information need and an acceptable answer to what the user was searching for? In the context of ontology alignment, one would want to know (automatically) to which degree the focal concepts of two or more ontologies represent the same thing, or are sufficiently overlapping. In an electronic health record system, one may want to classify patients based on their symptoms, such as throwing up often, having high blood pressure, and yellowish eye colour. How can software agents do the negotiation for your holiday travel plans that are specified imprecisely, like “I am looking for a package holiday of preferably less than 1000 euro, but really no more than 1150 euro, for about 12 days in a warm country”?

The main problem to solve, then, is what and how to incorporate such vague or uncertain knowledge in OWL and its reasoners. To clarify these two terms upfront:

  • Uncertainty: statements are true or false, but due to lack of knowledge we can only estimate to which probability / possibility / necessity degree they are true or false;
  • Vagueness: statements involve concepts for which there is no exact definition (such as tall, small, close, far, cheap, expensive), which are then true to some degree, taken from a truth space.
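To illustrate what it means for a vague statement to be true to some degree: a minimal sketch of a fuzzy membership function for ‘acceptable price’, using the cutoffs from the holiday example above (the linear decline between the preferred and the maximum price is my assumption; other membership shapes are equally possible):

```python
def acceptable_price(price: float) -> float:
    """Membership degree in [0,1] for the vague concept 'acceptable
    price': preferably < 1000 euro, but really no more than 1150."""
    if price <= 1000:
        return 1.0          # fully acceptable
    if price >= 1150:
        return 0.0          # not acceptable at all
    return (1150 - price) / 150  # assumed linear decline in between

print(acceptable_price(950))   # 1.0
print(acceptable_price(1075))  # 0.5
print(acceptable_price(1200))  # 0.0
```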

The two principal approaches regarding uncertainty and the semantic web are probabilistic and possibilistic languages, ontologies, and reasoning services, where the former way of dealing with uncertainty receives a lot more attention than the latter. The two principal approaches regarding vagueness and the semantic web are fuzzy and rough extensions, where fuzzy receives more attention compared to the rough approach. The lecture will cover all four approaches to a greater (probabilistic, fuzzy) and lesser (possibilistic, rough) extent.

None of the extant languages and automated reasoners that can cope with vague or uncertain knowledge have made it into ‘mainstream’ Semantic Web tools yet. There was a W3C incubator group on uncertainty, but it remained at that. This has not stopped research in this area; on the contrary. There are two principal strands in these endeavours: one that extends DL languages and their reasoners, such as Pronto, which combines the Pellet reasoner with a probabilistic extension, and FuzzyDL, which is a reasoner for fuzzy SHIF(D); and another that uses different techniques underneath OWL, such as Bayesian networks and constraint programming-based reasoning for probabilistic ontologies (e.g., PR-OWL), and Mixed Integer Linear Programming for fuzzy ontologies. Within the former approach, one can make a further distinction between extensions of tableaux algorithms and rewritings to a non-uncertain/non-vague standard OWL language so that one of the generic DL reasoners can be used. For each of these branches, there are differences as to which aspects of probabilistic/possibilistic/fuzzy/rough are actually included—just like we saw in the previous lecture about temporal logics.
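As a small taste of what a fuzzy DL reasoner has to compute: the degree of a conjunction of two fuzzy statements depends on the chosen t-norm. The following sketch shows the three standard t-norms on two assumed membership degrees (the degrees 0.8 and 0.6 are made up for illustration):

```python
# The three standard t-norms used for fuzzy conjunction.
def t_goedel(a, b):       return min(a, b)           # Gödel
def t_product(a, b):      return a * b               # product
def t_lukasiewicz(a, b):  return max(0.0, a + b - 1.0)  # Łukasiewicz

# Assumed degrees: Expensive(h) = 0.8, Far(h) = 0.6 for some holiday h.
a, b = 0.8, 0.6

# Degree of 'Expensive AND Far' differs per t-norm:
print(t_goedel(a, b))       # 0.6
print(t_product(a, b))      # ~0.48
print(t_lukasiewicz(a, b))  # ~0.4
```

Which t-norm is appropriate is a modelling decision, and different fuzzy DLs fix different choices; this is one of the “differences as to which aspects are actually included” mentioned above.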

We shall not cover all such permutations in the lecture, but instead focus on general aspects of the languages and tools. A good introductory overview can be found in [1] (which also has a very long list of references to start delving into the topics [you may skip the DLP section]).  Depending on your background education and the degree programme you are studying now, you may find the more technical overview [2] of interest as well. To get an idea of one of the more recent results on rough DL-based ontologies, you might want to glance over [3]. Last, I assume you have a basic knowledge of probability theory and fuzzy sets; if there are many people who do not, I will adjust the lecture somewhat, but you are warmly advised to look it up before the lecture if you do not know about it (even if it is only the respective Wikipedia entry here and here).

References

[1] Umberto Straccia. Managing Uncertainty and Vagueness in Description Logics, Logic Programs and Description Logic Programs. In Reasoning Web, 4th International Summer School, 2008.
[2] Thomas Lukasiewicz and Umberto Straccia. 2008. Managing Uncertainty and Vagueness in Description Logics for the Semantic Web. Journal of Web Semantics, 6:291-308.
[3] Jiang, Y., Wang, J., Tang, S., and Xiao, B. 2009. Reasoning with rough description logics: An approximate concepts approach. Information Sciences, 179:600-612.

Note: reference 1 or 2 is mandatory reading; 3 is optional.

Lecture notes: lecture 7 – Uncertainty and vagueness

Course website

72010 SemWebTech lecture 6: Parts and temporal aspects

The previous three lectures covered the core topics in ontology engineering. There are many ontology engineering topics that zoom in on one specific aspect of the whole endeavour, such as modularization, the semantic desktop, ontology integration, combining data mining and clustering with ontologies, and controlled natural language interfaces to OWL. In the next two lectures on Dec 1 and Dec 14, we will look at three such advanced topics in modelling and language and tool development, namely the (ever-recurring) issues with part-whole relations, temporalization and its workarounds, and languages and tools for dealing with vagueness and uncertainty.

Part-whole relations

On the one hand, there is a SemWeb best practices document about part-whole relations and further confusion among OWL developers [1, 2] that was mentioned in a previous lecture. On the other hand, part-whole relations are deemed essential by the most active adopters of ontologies—i.e., bio- and medical scientists—while their full potential is yet to be discovered by, among others, manufacturing. A few obvious examples are how to represent plant or animal anatomy, geographic information data, and components of devices. And then there is the need to reason over them. When we can deduce which part of the device is broken, then only that part has to be replaced instead of the whole it is part of (saving a company money). One may want to deduce that when I have an injury in my ankle, I have an injury in my limb, but not deduce that if you have an amputation of your toe, you also have an amputation of your foot that the toe is (well, was) part of. If a toddler swallowed a Lego brick, it is spatially contained in his stomach, but one does not deduce it is structurally part of his stomach (normally it will leave the body unchanged through the usual channel). This toddler-with-Lego-brick example gives a clue why, from an ontological perspective, equation 23 in [2] is incorrect.
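The ankle and Lego-brick examples come down to keeping distinct part-whole relations separate, so that transitivity applies within a relation but not across relations. A toy sketch (relation and entity names are illustrative, loosely in the spirit of the taxonomy of [3]):

```python
from itertools import product

def transitive_closure(pairs):
    """Naive fixpoint computation of the transitive closure of a relation."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(closure, repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

# Two distinct relations: structural parthood vs spatial containment.
part_of = {("Toe", "Foot"), ("Foot", "Leg"), ("Leg", "Body")}
contained_in = {("LegoBrick", "Stomach")}

parts = transitive_closure(part_of)
print(("Toe", "Body") in parts)            # True: parthood chains compose
print(("LegoBrick", "Stomach") in parts)   # False: containment != parthood
```

A DL reasoner does the analogous work declaratively when part-of is declared transitive but containment and parthood are kept as separate object properties.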

To shed light on part-whole relations and sort out such modelling problems, we will look first at mereology (the Ontology take on part-whole relations), and to a lesser extent meronymy (from linguistics), and subsequently structure the different terms that are perceived to have something to do with part-whole relations into a taxonomy of part-whole relations [3]. This, in turn, is to be put to use, be it with manual or software-supported guidelines, to choose the most appropriate part-whole relation for the problem, and subsequently to make sure that it is indeed represented correctly in an ontology. The latter can be done by availing of the so-called RBox Reasoning Service [3]. All this will not solve each modelling problem of part-whole relations, but at least provide you with a sound basis.

Temporal knowledge representation and reasoning

Compared to part-whole relations, there are fewer loud and vocal requests for including a temporal dimension in OWL, even though it is needed. For instance, you can check the annotations in the OWL files of BFO and DOLCE (or, more conveniently, search for “time” in the pdf), where they mention temporality that cannot be represented in OWL; SNOMED CT’s concepts like “Biopsy, planned” and “Concussion with loss of consciousness for less than one hour”, where the loss of consciousness still can be before or after the concussion; a business rule like ‘RentalCar must be returned before Deposit is reimbursed’; the symptom HairLoss during the treatment Chemotherapy; or that Butterfly is a transformation of Caterpillar.

Unfortunately, there is no single (computational) solution to address all these examples at once. Thus far, it is a bit of a patchwork, with, among many aspects, Allen’s interval algebra (qualitative temporal relations, such as before, during, etc.), Linear Temporal Logic (LTL), Computation Tree Logic (CTL, with branching time), and a W3C Working Draft of a time ontology.
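As an impression of the qualitative side of this patchwork, a few of Allen’s thirteen interval relations can be written as simple predicates over intervals represented as (start, end) pairs. The intervals below are made up, echoing the concussion example above:

```python
# Three of Allen's interval relations over (start, end) pairs.
def before(i, j):    return i[1] < j[0]
def during(i, j):    return j[0] < i[0] and i[1] < j[1]
def overlaps(i, j):  return i[0] < j[0] < i[1] < j[1]

concussion = (0, 5)    # made-up time points
unconscious = (2, 4)

print(during(unconscious, concussion))   # True: loss of consciousness
                                         # during the concussion
print(before(concussion, unconscious))   # False
```

The full algebra then reasons with disjunctions of such relations and their compositions, which is exactly where the computational difficulty comes in.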

If one assumes that recent advances in temporal Description Logics may have the highest chance of making it into a temporal OWL (tOWL)—although there are no proof-of-concept temporal DL modelling tools or reasoners yet—then the following is ‘on offer’. A very expressive (undecidable) DL language is DLRus (with the until and since operators), which has already been used for temporal conceptual data modelling [4] and for representing essential and immutable parts and wholes [5]. A much simpler language is TDL-Lite [6], which is a member of the DL-Lite family of DL languages of which one is the basis for OWL 2 QL; but these first results are theoretical, hence no “lite tOWL” yet. It is already known that EL++ (the basis for OWL 2 EL) does not keep its nice computational properties when extended with LTL, and results for EL++ with CTL are not out yet. If you are really interested in the topic, you may want to have a look at a recent survey [7] or take a broader scope with any of the four chapters in [8] (which cover temporal KR&R, situation calculus, event calculus, and temporal action logics), and several people at the KRDB Research Centre work on temporal knowledge representation & reasoning. Depending on the remaining time during the lecture, more or less about time and temporal ontologies will pass in review.

References

[1] I. Horrocks, O. Kutz, and U. Sattler. The Even More Irresistible SROIQ. In Proc. of the 10th International Conference of Knowledge Representation and Reasoning (KR-2006), Lake District UK, 2006.

[2] B. Cuenca Grau, I. Horrocks, B. Motik, B. Parsia, P. Patel-Schneider, and U. Sattler. OWL 2: The next step for OWL. Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 6(4):309-322, 2008

[3] Keet, C.M. and Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, IOS Press, 2008, 3(1-2): 91-110.

[4] Alessandro Artale, Christine Parent, and Stefano Spaccapietra. Evolving objects in temporal information systems. Annals of Mathematics and Artificial Intelligence (AMAI), 50:5-38, 2007, Springer.

[5] Artale, A., Guarino, N., and Keet, C.M. Formalising temporal constraints on part-whole relations. 11th International Conference on Principles of Knowledge Representation and Reasoning (KR’08). Gerhard Brewka, Jerome Lang (Eds.) AAAI Press, pp 673-683. Sydney, Australia, September 16-19, 2008

[6] Alessandro Artale, Roman Kontchakov, Carsten Lutz, Frank Wolter and Michael Zakharyaschev. Temporalising Tractable Description Logics. Proc. of the 14th International Symposium on Temporal Representation and Reasoning (TIME-07), Alicante, June 2007.

[7] Carsten Lutz, Frank Wolter, and Michael Zakharyaschev.  Temporal Description Logics: A Survey. In  Proceedings of the Fifteenth International Symposium on Temporal Representation and Reasoning. IEEE Computer Society Press, 2008.

[8] Frank van Harmelen, Vladimir Lifschitz and Bruce Porter (Eds.). Handbook of Knowledge Representation. Elsevier, 2008, 1034p. (also available from the uni library)

Note: reference 3 is mandatory reading, 4 optional reading, 2 was mandatory and 1 recommended for an earlier lecture, and 5-8 are optional.

Lecture notes: lecture 6 – Parts and temporal issues

Course webpage

72010 SemWebTech lecture 5: Methods and Methodologies

The previous two lectures have given you a basic idea about the two principal approaches for starting to develop an ontology—top-down and bottom-up—but they do not constitute an encompassing methodology to develop ontologies. In fact, there is no proper, up-to-date, comprehensive methodology for ontology development like there is for conceptual model development (e.g., [1]) or the ‘waterfall’ versus ‘agile’ software development methodologies. There are, however, many methods and, among others, the W3C’s Semantic Web best practices, which to a greater or lesser extent can form part of a comprehensive ontology development methodology.

As a first step towards methodologies, one that gives a general scope, we will look at a range of parameters that affect ontology development in one way or another [2]. There are four influential factors for enhancing the efficiency and effectiveness of developing ontologies, which have to do with the purpose(s) of the ontology; what to reuse from existing ontologies and ontology-like artifacts and how to reuse them; the types of approaches for bottom-up ontology development from other legacy sources; and the interaction with the choice of representation language and reasoning services.

Second, methods that help the ontologist in certain tasks of the ontology engineering process include, but are not limited to, assisting the modelling itself, how to integrate ontologies, and supporting software tools. We will take a closer look at OntoClean [3], which contributes to modelling taxonomies. One might ask oneself: who cares? After all, we have the reasoner to classify our taxonomy anyway, right? Indeed, but that works only if you have declared many properties for the classes, which is not always the case, and the reasoner sorts out the logical issues, but not the ontological issues. OntoClean uses several notions from philosophy, such as rigidity, identity criteria, and unity [4, 5] to provide modelling guidelines. For instance, anti-rigid properties cannot subsume rigid properties; e.g., if we have, say, both Student (anti-rigid) and Person (rigid) in our ontology, the former is subsumed by the latter, and not the other way around. The lecture will go into some detail of OntoClean.
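The rigidity constraint can be illustrated with a small check over a toy taxonomy (the class names and their rigid/anti-rigid taggings are, of course, just illustrative, and real OntoClean covers more meta-properties than rigidity alone):

```python
# Toy check of one OntoClean constraint: an anti-rigid class
# may not subsume a rigid one.
rigid = {"Person", "Cat"}           # instances are necessarily so
anti_rigid = {"Student", "Employee"}  # instances can stop being so

# (subclass, superclass) pairs as declared in some taxonomy
subsumptions = [("Student", "Person"),    # fine: rigid above anti-rigid
                ("Person", "Employee")]   # violation: anti-rigid above rigid

violations = [(sub, sup) for (sub, sup) in subsumptions
              if sub in rigid and sup in anti_rigid]
print(violations)  # [('Person', 'Employee')]
```

Note that a standard DL reasoner would not flag the second axiom: it is logically consistent, just ontologically wrong, which is precisely the gap OntoClean addresses.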

If, on the other hand, you do have a rich ontology and not mostly a bare taxonomy, ‘debugging’ by availing of an automated reasoner is useful, in particular with larger ontologies and ontologies represented in an expressive ontology language. Such ‘debugging’ goes under terms like glass box reasoning [6], justification [7], explanation [8], and pinpointing errors. While they are useful topics, we will spend comparatively little time on them, because they require some more knowledge of Description Logics and their (mostly tableaux-based) reasoning algorithms that will be introduced only in the 2nd semester (mainly intended for the EMCL students). Those techniques use the automated reasoner to at least locate modelling errors and explain in the most succinct way why this is so, instead of just returning a bunch of inconsistent classes; proposing possible fixes is yet a step further (one such reasoning service will be presented in lecture 6 on Dec. 1).

Aside from parameters, methods, and tools, there are only a few methodologies, and even those are coarse-grained: they do not (yet) contain all the permutations at each step, i.e., what and how to do each step, given the recent developments. A comparatively comprehensive one is Methontology [10], which has been applied to various subject domains (e.g., chemicals, the legal domain [9,11]) since its development in the late 1990s. While some practicalities have been superseded by new [12] and even newer languages and tools, some of the core aspects still hold. The five main steps are: specification, conceptualization (with intermediate representations, such as in text or diagrams, like with ORM [1] and as pursued by the modelling wiki MOKI that was developed during the APOSDLE project for work-integrated learning), formalization, implementation, and maintenance. Then there are various supporting tasks, such as documentation and version control.

Last, but not least, there are many tools around that help you with one method or another. WebODE aims to support Methontology, the NeOn toolkit aims to support distributed development of ontologies, RacerPlus serves sophisticated querying, Protégé-PROMPT supports ontology integration (there are many other plug-ins for Protégé), SWOOGLE searches across ontologies, OntoClean can be used with Protégé, and so on and so forth. For much longer listings of tools, see the list of semantic web development tools, the plethora of ontology reasoners and editors, and the range of semantic wiki engines and their features for collaborative ontology development. Finding the right tool for the problem at hand (if it exists) is a skill of its own, and a necessary one for arriving at a feasible solution. From a technologies viewpoint, the more you know about the goals, features, strengths, and weaknesses of available tools (and have the creativity to develop new ones, if needed), the higher the likelihood that you bring a potential solution of a problem to successful completion.

References

[1] Halpin, T., Morgan, T.: Information modeling and relational databases. 2nd edn. Morgan Kaufmann (2008)

[2] Keet, C.M. Ontology design parameters for aligning agri-informatics with the Semantic Web. 3rd International Conference on Metadata and Semantics (MTSR’09) — Special Track on Agriculture, Food & Environment, Oct 1-2 2009 Milan, Italy. F. Sartori, M.A. Sicilia, and N. Manouselis (Eds.), Springer CCIS 46, 239-244.

[3] Guarino, N. and Welty, C. An Overview of OntoClean. in S. Staab, R. Studer (eds.), Handbook on Ontologies, Springer Verlag 2004, pp. 151-172

[4] Guarino, N., Welty, C.: A formal ontology of properties. In: Dieng, R., Corby, O. (eds.) EKAW 2000. LNAI, vol. 1937, pp. 97–112. Springer, Heidelberg (2000)

[5] Guarino, N., Welty, C.: Identity, unity, and individuality: towards a formal toolkit for ontological analysis. In: Proc. of ECAI 2000. IOS Press, Amsterdam (2000)

[6] Parsia, B., Sirin, E., Kalyanpur, A. Debugging OWL ontologies. World Wide Web Conference (WWW 2005). May 10-14, 2005, Chiba, Japan.

[7] M. Horridge, B. Parsia, and U. Sattler. Laconic and Precise Justifications in OWL. In Proc. of the 7th International Semantic Web Conference (ISWC 2008), Vol. 5318 of LNCS, Springer, 2008.

[8] Alexander Borgida, Diego Calvanese, and Mariano Rodriguez-Muro. Explanation in the DL-Lite family of description logics. In Proc. of the 7th Int. Conf. on Ontologies, DataBases, and Applications of Semantics (ODBASE 2008), LNCS vol 5332, 1440-1457. Springer, 2008.

[9] Fernandez, M.; Gomez-Perez, A. Pazos, A.; Pazos, J. Building a Chemical Ontology using METHONTOLOGY and the Ontology Design Environment. IEEE Expert: Special Issue on Uses of Ontologies, January/February 1999, 37-46.

[10] Gomez-Perez, A.; Fernandez-Lopez, M.; Corcho, O. Ontological Engineering. Springer Verlag London Ltd. 2004.

[11] Oscar Corcho, Mariano Fernández-López, Asunción Gómez-Pérez, Angel López-Cima. Building legal ontologies with METHONTOLOGY and WebODE. Law and the Semantic Web 2005. Springer LNAI 3369, 142-157.

[12] Corcho, O., Fernandez-Lopez, M. and Gomez-Perez, A. (2003). Methodologies, tools and languages for building ontologies. Where is their meeting point?. Data & Knowledge Engineering 46(1): 41-64.

Note: references 2, 3, and 9 are mandatory reading, 6, 7, and 10 recommended, and 1, 4, 5, 8, 11, and 12 are optional.

Lecture notes: lecture 5 – Methodologies

Course webpage