## Archive for the ‘Description Logics’ Category

### Logical and ontological reasoning services?

The SubProS and ProChainS compatibility services for OWL ontologies, which check for good and ‘safe’ OWL object property expressions [5], may be considered ontological reasoning services by some, but according to others, they are/ought to be plain logical reasoning services. I discussed this issue with Alessandro Artale back in 2007 when we came up with the RBox Compatibility service [1]—which, in the end, we called an ontological reasoning service—and it came up again during EKAW’12 and the Ontologies and Conceptual Modelling Workshop (OCM) in Pretoria in November. Moreover, in all three settings, the conversation was generalized to the following questions:

1. Is there a difference between a logical and an ontological reasoning service (be that ‘onto’-logical or ‘extra’-logical)? If so,
2. Why, and what, then, is an ontological reasoning service?
3. Are there any that can serve at least as a prototypical example of an ontological reasoning service?

There’s still no conclusive answer to any of these questions. So, I present here some data and arguments that I had and that I’ve heard so far, and I invite you to have your say on the matter. I will first introduce a few notions, terms, tools, and implicit assumptions informally, and then list the three positions and their arguments that I am aware of.

Some aspects about standard, non-standard, and ontological reasoning services

Let me first introduce a few ideas informally. Within Description Logics and the Semantic Web, a distinction is made between so-called ‘standard’ and ‘non-standard’ reasoning services. The standard reasoning services—which most of the DL-based reasoners support—are subsumption reasoning, satisfiability, consistency of the knowledge base, instance checking, and instance retrieval (see, e.g., [2,3] for explanations). Non-standard reasoning services include, e.g., glass-box reasoning and computing the least common subsumer; they are typically designed with the aim of facilitating ontology development, and tend to have their own plugin or extension to an existing reasoner. What these standard and non-standard reasoning services have in common is that they all focus on the logical theory (a subset of first-order predicate logic) only.

Take, on the other hand, OntoClean [4], which assigns meta-properties (such as rigidity and unity) to classes and then, according to some rules involving those meta-properties, computes the class taxonomy. Those meta-properties are borrowed from Ontology in philosophy, and the rules do not use the standard way of computing subsumption (where every instance of the subclass is also an instance of its superclass and, thus, practically, the subclass has more features, or has the same features but with more constrained values/ranges). Moreover, OntoClean helps to distinguish between alternative logical formalisations of some piece of knowledge so as to choose the one that is better with respect to the reality we want to represent; e.g., why it is better to have the class Apple that has as quality a colour green, versus the option of a class GreenObject that has as shape apple-shaped. This being the case, OntoClean may be considered an ontological reasoning service. My SubProS and ProChainS [5] put constraints on OWL object property expressions so as to have safe and good hierarchies of object properties and property chains, based on the same notion of class subsumption but applied to role inclusion axioms: the OWL object sub-property (relationship, DL role) must be more constrained than its super-property, and the two reasoning services check whether that holds. But some of the flawed object property expressions do not cause a logical inconsistency (merely an undesirable deduction), so one might argue that the compatibility services are ontological.

The arguments so far

The descriptions in the previous paragraph contain implicit assumptions about the logical vs ontological reasoning, which I will spell out here. They are a synthesis from mine as well as other people’s voiced opinions about it (the other people being, among others and in alphabetical order, Alessandro Artale, Arina Britz, Giovanni Casini, Enrico Franconi, Aldo Gangemi, Chiara Ghidini, Tommie Meyer, Valentina Presutti, and Michael Uschold). It goes without saying they are my renderings of the arguments, and sometimes I state the things a little more bluntly to make the point.

1. If it is not entailed by the (standard, DL/other logic) reasoning service, then it is something ontological.

Logic is not about the study of the truth, but about the relationship between the truth of one statement and that of another. Effectively, it doesn’t matter what terms you have in the theory’s vocabulary—be this simply A, B, C, etc. or an attempt to represent Apple, Banana, Citrus, etc. conformant to what those entities are in reality—as it uses truth assignments and the usual rules of inference. If you want some reasoning that helps to make a distinction between a good and a bad formalisation of what you aim to represent (where both theories are consistent), then that’s not the logician’s business but instead is relegated to the domain of whatever it is that ontologists get excited about. A counter-argument raised to that was that the early logicians were, in fact, concerned with finding a way to formalize reality in the best way; hence, not only the syntax and semantics of the logic language, but also the semantics/meaning of the subject domain. A practical counter-example is that both Glimm et al [6] and Welty [7] managed to ‘hack’ OntoClean into OWL and use standard DL reasoners for it to obtain the desired inferences, so, presumably, even OntoClean cannot be considered an ontological reasoning service after all?

2. Something ‘meta’ like OntoClean can/might be considered really ontological, but SubProS and ProChainS are ‘extra-logical’ and can be embedded like the extra-logical understanding of class subsumption, so they are logical reasoning services (for it is the analogue to class subsumption but then for role inclusion axioms).

This argument has to do with the notion of a ‘standard way’ versus an ‘alternative approach’ to compute something, and the idea of having borrowed something from Ontology recently versus from mathematics and Aristotle somewhat longer ago. (Note: the notion of subsumption in computing was still being discussed in the 1980s, where the debate got settled in what is now considered the established understanding of class subsumption.) We can simply apply the underlying principles for class-subclass to one for relationships (/object properties/roles). DL/OWL reasoners and the standard view assume that the role box/object property expressions are correct and are used merely to compute the class taxonomy. But why should I assume the role box is fine, even when I know this is not always the case? And why do I have to put up with a classification of some class elsewhere in the taxonomy (or be inconsistent) when the real mistake is in the role box, not the class expression? Put differently, some distinction seems to have been drawn between ‘meta’ (second order?), ‘extra’ to indicate the assumptions built into the algorithms/procedures, and ‘other, regular’ like satisfiability checking that we have for all logical theories. Another argument raised was that the ‘meta’ stuff has to do with second-order logics, for which there are no good (read: sound and complete) reasoners.

3. Essentially, everything is logical, and services like OntoClean, SubProS, and ProChainS can be represented formally with some clearly and precisely defined inference rules, so there is no ontological reasoning; there are only logical reasoning services.

This argument made me think of the “logic is everywhere” mug I still have (a goodie from the ICCL 2005 summer school in Dresden). More seriously, though, this argument raises some old philosophical debates about whether everything can indeed be formalized and, provided it can, whether any logic is fine and computation doesn’t matter. Further, it conflates the distinction, if any, between plain logical entailment, the notion of undesirable deductions (e.g., that a CarChassis is-a Perdurant [some kind of a process]), and the modeling choices and preferences (recall the apple with a colour vs. the green object that has an apple-shape). But maybe that conflation is fine and there is no real distinction (if so: why?).

In my paper [5] and in the two presentations of it, I had stressed that SubProS and ProChainS were ontological reasoning services, because before that, I had tried but failed to convince logicians of the Type-I position that there’s something useful to those compatibility services and that they ought to be computed (currently, they are mostly not computed by the standard reasoners). Type-II adherents were plentiful at EKAW’12 and some at the OCM workshop. I encountered the most vocal Type-III adherent (mathematician) at the OCM workshop. Then there were the indecisive ones and people who switched and/or became indecisive. At the moment of writing this, I still lean toward Type-II, but I’m open to better arguments.

References

[1] Keet, C.M., Artale, A.: Representing and reasoning over a taxonomy of part-whole relations. Applied Ontology, 2008, 3(1-2), 91–110.

[2] F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider (Eds). The Description Logic Handbook. Cambridge University Press, 2009.

[3] Pascal Hitzler, Markus Krötzsch, Sebastian Rudolph. Foundations of Semantic Web Technologies. Chapman & Hall/CRC, 2009.

[4] Guarino, N. and Welty, C. An Overview of OntoClean. In S. Staab, R. Studer (eds.), Handbook on Ontologies, Springer Verlag 2009, pp. 201-220.

[5] Keet, C.M. Detecting and Revising Flaws in OWL Object Property Expressions. Proc. of EKAW’12. Springer LNAI vol 7603, pp. 252-266.

[6] Birte Glimm, Sebastian Rudolph, and Johanna Völker. Integrated metamodeling and diagnosis in OWL 2. In Peter F. Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Z. Pan, Ian Horrocks, and Birte Glimm, editors, Proceedings of the 9th International Semantic Web Conference, volume 6496 of LNCS, pages 257-272. Springer, November 2010.

[7] Chris Welty. OntOWLclean: cleaning OWL ontologies with OWL. In B. Bennett and C. Fellbaum, editors, Proceedings of Formal Ontologies in Information Systems (FOIS’06), pages 347-359. IOS Press, 2006.

### Lecture notes for the ontologies and knowledge bases course

The regular reader may recollect earlier posts about the ontology engineering courses I have taught at FUB, UH, UCI, Meraka, and UKZN. Each one had some sort of syllabus or series of blog posts with some introductory notes. I’ve put them together and extended them significantly now for the current installment of the Ontologies and Knowledge Bases Honours module (COMP718) at UKZN, and they are bound and printed into lecture notes for the enrolled students. These lecture notes are now online and I will add accompanying slides on the module’s webpage as we go along in the semester.

Given that the target audience is computer science students in their 4th year (honours), the notes are of an introductory nature. There are essentially three blocks: logic foundations, ontology engineering, and advanced topics. The logic foundations contain a recap of FOL, the basics of Description Logics with ALC, all the DL-based OWL species, and some automated reasoning. The ontology engineering block covers top-down and bottom-up ontology development, and methods and methodologies, with top-down ontology development including mainly foundational ontologies and part-whole relations, and bottom-up the various approaches to extract knowledge from ‘legacy’ representations, such as from databases and thesauri. The advanced topics are balanced in two directions: one is toward ontology-based data access applications (i.e., an ontology-driven information system) and the other one has more theory, with temporal ontologies.

Each chapter has a section with recommended/required reading and a set of exercises.

Unsurprisingly, the lecture notes have been written under time constraints and therefore the level of relative completeness of sections varies slightly. Suggestions and corrections are welcome!

### Book chapter on conceptual data modelling for biological data

My invited book chapter, entitled “Ontology-driven formal conceptual data modeling for biological data analysis” [1], recently got accepted for publication in the Biological Knowledge Discovery Handbook: Preprocessing, Mining and Postprocessing of Biological Data, edited by Mourad Elloumi and Albert Y. Zomaya, and is scheduled for printing by Wiley in early 2012.

All this started off with my BSc(Hons) in IT & Computing thesis back in 2003 and my first paper about the trials and tribulations of conceptual data modelling for bio-databases [2] (which is definitely not well-written, but has some valid points and has been cited a bit). In the meantime, much progress has been made on the topic, and I’ve learned, researched, and published a few things about it, too. So, what is the chapter about?

The main aspect is the ‘conceptual data modelling’ with EER, ORM, and UML Class Diagrams, i.e., concerning implementation-independent representations of the data to be managed for a specific application (hence, not ontologies for application-independence).

The adjective ‘formal’ is to point out that the conceptual modeling is not just about drawing boxes, roundtangles, and lines with some adornments, but that there is a formal, logic-based foundation. This is achieved with the formally defined CMcom conceptual data modeling language, which captures the greatest common denominator of ORM, EER, and UML Class Diagrams. CMcom has, on the one hand, a mapping to the Description Logic language DLRifd and, on the other hand, mappings to the icons in the diagrammatic languages. The nice aspect of this is that, at least in theory and to some extent in practice as well, one can subject it to automated reasoning to check consistency of the classes and of the whole conceptual data model, and derive implicit constraints (an example), or use it in ontology-based data access (an example and some slides on ‘COMODA’ [COnceptual MOdel-based Data Access], tailored to ORM and the horizontal gene transfer database as example).

Then there is the ‘ontology-driven’ component: Ontology and ontologies can aid in conceptual data modeling by providing solutions to recurring modeling problems, an ontology can be used to generate several conceptual data models, and one can integrate (a section of) an ontology into a conceptual data model that is subsequently converted into data in database tables.

Last, but not least, it focuses on ‘biological data analysis’. A non-(biologist or bioinformatician) might be inclined to say that this should not matter, but it does. Biological information is not as trivial as the typical database design toy examples like “Student is enrolled in Course”; one has to dig deeper and figure out how to represent, e.g., catalysis, pathway information, or an ecological niche. Moreover, it requires an answer to the question ‘which language features are essential for the conceptual data modeling language?’ and, if a feature isn’t included yet, how do we get it in? Some such important features are n-aries (n>2) and the temporal dimension. The paper includes a proposal for more precisely representing catalysis, informed by ontology (mainly thanks to making the distinction between the role and its bearer), and shows how certain temporal information can be captured, which is illustrated by enhancing the model for SARS viral infection, among other examples.

The paper is not online yet, but I did put together some slides for the presentation at MAIS’11 reported on earlier, which might serve as a sneak preview of the 25-page book chapter, or you can contact me for the CRC.

References

[1] Keet, C.M. Ontology-driven formal conceptual data modeling for biological data analysis. In: Biological Knowledge Discovery Handbook: Preprocessing, Mining and Postprocessing of Biological Data. Mourad Elloumi and Albert Y. Zomaya (Eds.). Wiley (in print).

[2] Keet, C.M. Biological data and conceptual modelling methods. Journal of Conceptual Modeling, Issue 29, October 2003. http://www.inconcept.com/jcm.

### The rough ontology language rOWL and basic rough subsumption reasoning

Following the feasibility assessments on marrying Rough Sets with Description Logic languages last year [1,2], which I blogged about before, I looked into ‘squeezing’ the very basic aspects of rough sets into OWL 2 DL. The resulting language is called rOWL, and it is described in a paper [3] accepted at SAICSIT’11—the South African CS and IT conference (which thus also gives me the opportunity to meet the SA research community in CS and IT).

DLs are not just about investigating decidable languages but, perhaps more importantly, also about reasoning over the logical theories. The obvious addition to the basic crisp automated reasoning services is to add the roughness component, somehow. There are various ways to do that. Crisp subsumption (and definite and possible satisfiability) of rough concepts has been defined by Jiang and co-authors [4], and there was a presentation at DL 2011 about a paraconsistent rough DL [5]. I have added the notion of rough subsumption.

There are two principal cases to consider (the “$\wr$” before the OWL class name denotes it is a rough class):

• If $\wr C \sqsubseteq \wr D$ is asserted in the ontology, what can be said about the subsumption relations among their respective approximations?
• Given a subsumption between any of the lower and upper approximations of C and D, then can one deduce $\wr C \sqsubseteq \wr D$?

Addressing this raises questions: because being rough or not depends entirely on the chosen properties for C together with the available data, should these two cases be solved only at the TBox level or necessarily include the ABox for it to make sense? And should that be under the assumption of standard instantiation and instance checking, or in the presence of a novel DL notion of rough instantiation and rough instance checking?
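To make the lower and upper approximations that the two cases quantify over concrete, here is a toy illustration in plain Python. This is ordinary rough sets over a finite universe, not the rOWL semantics; the blocks and the set are made-up examples.

```python
# Toy rough-set approximations over a finite universe (illustration only;
# plain rough sets, not the rOWL semantics). The indiscernibility
# (equivalence) relation is given as a partition of the universe into blocks.

def lower_approximation(blocks, x_set):
    """Union of blocks wholly inside x_set: the objects definitely in it."""
    return set().union(*(b for b in blocks if b <= x_set))

def upper_approximation(blocks, x_set):
    """Union of blocks overlapping x_set: the objects possibly in it."""
    return set().union(*(b for b in blocks if b & x_set))

# Objects that agree on the chosen properties land in the same block.
blocks = [{"a", "b"}, {"c"}, {"d", "e"}]
x_set = {"a", "b", "d"}   # the extension we try to capture as a concept

lower = lower_approximation(blocks, x_set)
upper = upper_approximation(blocks, x_set)
print(sorted(lower))   # ['a', 'b']
print(sorted(upper))   # ['a', 'b', 'd', 'e']
print(lower < upper)   # True: non-empty boundary, so the concept is rough here
```

Note how roughness indeed depends entirely on the chosen properties (the blocks) together with the data: with finer blocks, the boundary can vanish and the concept becomes crisp.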

These questions are answered in the second part of the paper Rough Subsumption Reasoning with rOWL [3]. In an attempt to make the proofs more readable and because the presence of instances is intuitively tied to the matter, the proofs are done by counterexample, which is relatively ‘easy’ to grasp. But maybe I should have obfuscated it with another proof technique to make the results look more profound.

Last, but not least: just in case you thought there is little motivation to bother with rough ontologies: the hypothesis testing and experimentation described in [2] still holds, and a small example is added to [3].

The succinct paper abstract is as follows:

There are various recent efforts to broaden applications of ontologies with vague knowledge, motivated in particular by applications of bio(medical)-ontologies, as well as to enhance rough set information systems with a knowledge representation layer by giving more attention to the intension of a rough set. This requires not only representation of vague knowledge but, moreover, reasoning over it to make it interesting for both ontology engineering and rough set information systems. We propose a minor extension to OWL 2 DL, called rOWL, and define the novel notions of rough subsumption reasoning and classification for rough concepts and their approximations.

I’ll continue looking into the topic, and more is in the pipeline w.r.t. the logic aspects of rough ontologies (in collaboration with Arina Britz).

References

[1] C. M. Keet. On the feasibility of description logic knowledge bases with rough concepts and vague instances. Proceedings of the 23rd International Workshop on Description Logics (DL’10), CEUR-WS, pages 314-324, 2010. 4-7 May 2010, Waterloo, Canada.

[2] C. M. Keet. Ontology engineering with rough concepts and instances. P. Cimiano and H. Pinto, editors, 17th International Conference on Knowledge Engineering and Knowledge Management (EKAW’10), volume 6317 of LNCS, pages 507-517. Springer, 2010. 11-15 October 2010, Lisbon, Portugal.

[3] C.M. Keet. Rough Subsumption Reasoning with rOWL. SAICSIT Annual Research Conference 2011 (SAICSIT’11), Cape Town, South Africa, October 3-5, 2011. ACM Conference Proceedings. (accepted).

[4] Y. Jiang, J. Wang, S. Tang, and B. Xiao. Reasoning with rough description logics: An approximate concepts approach. Information Sciences, 179:600-612, 2009.

[5] H. Viana, J. Alcantara, and A.T. Martins. Paraconsistent rough description logic. Proceedings of the 24th International Workshop on Description Logics (DL’11), 2011. Barcelona, Spain, July 13-16, 2011.

### Nontransitive vs. intransitive direct part-whole relations in OWL

Confusing is-a with part-of is known to be a common mistake of novice ontology developers. Each time I taught the ontology engineering course, I included a session of 1-2 hours to explain some basic aspects of part-whole relations and, lo and behold, none of the participants made that mistake in the labs or mini-projects! One awkward thing did pop up there and on other occasions, though, which had to do with modelling direct parthood—something that, to say the least, does not go well at the moment, for a plethora of reasons. Inclusion of direct parthood is not without philosophical quarrels, and the more I think of it, the more I dislike the relation, but somehow the issue appears often in the context of part-whole relations in ontologies. The observed underlying modelling issue—representing intransitivity versus nontransitivity—holds for any OWL object property anyway, so I will proceed with the general case, with an example about giraffes.

Preliminaries

First of all, to clarify the terms in the post’s title: INtransitive means that for all x, y, z, if Rxy and Ryz then Rxz does not hold; formally $\forall x, y, z (R(x,y) \land R(y,z) \rightarrow \neg R(x,z))$, and an option to state this in a Description Logic is to use role chaining: $R \circ R \sqsubseteq \neg R$. NONtransitive means that we cannot say either way whether the property is transitive or intransitive, i.e., it may be transitive in some cases but not in others. Direct parthood is to be understood as follows: if some part x is a direct part of a y, then x is a part of y and there is no other object z such that x is a part of z and z is a part of y; formally, $\forall x,y (dpo(x, y) \equiv partof(x,y) \land \neg \exists z (partof(x,z) \land partof(z,y)))$. Whether direct parthood is in- or non-transitive is beside the point at this stage, so let us look now at what happens with it in an OWL ontology when one tries to model it one way or another.
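For a finite relation, these three notions can be checked mechanically, which may help to see the difference. A small Python sketch (the toy relations over leaves, twigs, branches, and trees are mine, for illustration):

```python
# Classify a finite binary relation as transitive, intransitive, or
# nontransitive (neither), following the definitions above.

def composed_pairs(rel):
    """All (x, z) such that R(x, y) and R(y, z) hold for some y."""
    return {(x, z) for (x, y1) in rel for (y2, z) in rel if y1 == y2}

def classify(rel):
    comp = composed_pairs(rel)
    if comp <= rel:            # every composed pair is in R: transitive
        return "transitive"
    if comp.isdisjoint(rel):   # no composed pair is in R: intransitive
        return "intransitive"
    return "nontransitive"     # some composed pairs are in R, some are not

part_of = {("leaf", "twig"), ("twig", "branch"), ("branch", "tree"),
           ("leaf", "branch"), ("leaf", "tree"), ("twig", "tree")}
direct_part = {("leaf", "twig"), ("twig", "branch"), ("branch", "tree")}
mixed = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}   # (b, d) missing

print(classify(part_of))       # transitive
print(classify(direct_part))   # intransitive
print(classify(mixed))         # nontransitive
```

The `mixed` case is exactly the situation OWL leaves you in when the “transitive” checkbox is unchecked: nothing forces either behaviour.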

The OWL ontology and the reasoner

Given that I used the African Wildlife Ontology as a tutorial ontology earlier and the theme appeals to people, I will use it again here. Depending on what we do with the direct parthood relation in the ontology, Giraffe is, or is not, classified automatically as a subclass of Herbivore. Herbivore is a defined class, equivalent to, in Protégé 4.1 notation, (eats only plant) or (eats only (is-part-of some plant)), and Giraffe is a subclass of both Animal and eats only (leaf or Twig). Leaves are part of a twig, twigs of a branch, and branches of a tree that in turn is a subclass of plant. The is-part-of is, correctly according to mereology, included in the ontology as being transitive. Instead of all the is-part-of and is-proper-part-of between plant parts and plants in the AfricanWildlifeOntology1.owl, we model them using direct-part. AfricanWildlifeOntology4a.owl has direct-part as sister object property to is-part-of, AfricanWildlifeOntology4b.owl has it as sub-object property of is-part-of, and neither ontology has any “characteristics” (relational properties) checked for direct-part. Before running the reasoner to classify the taxonomy, what do you think will happen with our Giraffe in both cases?

In AfricanWildlifeOntology4a.owl, Giraffe is still a mere direct subclass of Animal, whereas with AfricanWildlifeOntology4b.owl, we do obtain the (desired) deduction that Giraffe is a Herbivore. That is, we obtain different results depending on where we put the uncharacterized direct-part object property in the RBox. Why is this so?

By not clicking the checkbox “transitive”, an object property is nontransitive, but not intransitive. In fact, we cannot represent explicitly that an object property is intransitive in OWL (see the OWL guide and related documents). If we put the object property at the top level (or, as in Protégé 4.1, as an immediate subproperty of topObjectProperty), then we obtain the behaviour as if the property were intransitive (and therefore Giraffe is not classified as a subclass of Herbivore). However, the direct-part property is really nontransitive in the ontology. When direct-part is put as a subproperty of is-part-of, every direct-part assertion is also an is-part-of assertion, and the transitivity of is-part-of then kicks in, so Giraffe is classified as a Herbivore (because now leaf and Twig are part of plant thanks to that transitivity).

Obviously, it holds for any OWL/OWL 2 object property that one cannot assert intransitivity explicitly, that a subproperty’s assertions propagate to its superproperties (so the superproperty’s characteristics, such as transitivity, apply to them there), and that this kind of behaviour of nontransitive object properties depends on where you place them in the RBox—whether you like it or not.
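The effect of the two RBox placements can be mimicked outside OWL with a plain transitive closure (a hypothetical sketch, with class names as in the wildlife ontology): when direct-part is a subproperty of is-part-of, its assertions feed the transitive property; as a sister property, they do not.

```python
# Sketch of why RBox placement matters: if direct-part is a subproperty of
# the transitive is-part-of, every direct-part assertion is also an
# is-part-of assertion, and transitivity then derives the longer chains.

def transitive_closure(rel):
    """Smallest transitive relation containing rel."""
    closure = set(rel)
    while True:
        new = {(x, z) for (x, y1) in closure for (y2, z) in closure
               if y1 == y2 and (x, z) not in closure}
        if not new:
            return closure
        closure |= new

direct_part = {("leaf", "twig"), ("twig", "branch"), ("branch", "tree")}

# Case (a): direct-part as a sister of is-part-of -- nothing propagates.
is_part_of_a = transitive_closure(set())         # no is-part-of assertions
# Case (b): direct-part as a subproperty of is-part-of.
is_part_of_b = transitive_closure(direct_part)   # assertions propagate up

print(("leaf", "tree") in is_part_of_a)   # False: no Herbivore classification
print(("leaf", "tree") in is_part_of_b)   # True: leaf is part of the tree (a plant)
```

In case (b) the reasoner can satisfy “eats only (is-part-of some plant)” for the giraffe’s leaves and twigs, hence the Herbivore classification; in case (a) it cannot.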

How to go forward?

Direct parthood is called isComponentOf in the componency ontology design pattern, where it is a subproperty of isPartOf. Its inverse is called haspart_directly in the W3C best practices document on Simple Part-Whole relations [1], where it is a subproperty of the transitive haspart. The componency.owl file notes that isComponentOf is a “hasPart relation without transitivity”, the ODP page’s “intent” of the pattern is that it is intended to “represent (non-transitively) that objects either are proper parts of other objects, or have proper parts”, and the W3C best practices note that, unlike mereological parthood, it is “not transitive”. Hence, if you include either one in your OWL ontology, you will not obtain the intended behaviour. Therefore, I do not recommend using either suggestion.

Setting aside the W3C’s best practices motivation for inclusion of haspart_directly—easier querying for immediate parts, but for the ontology purist this ought not to be the motivation for its inclusion—it is worth digging a little deeper into the semantics of the direct parthood. Maybe a modeller actually wants to represent collections with their members, like each Fleet has as direct parts more than one Ship, or constitution of objects, like clay is directly part of some vase? In both cases, however, we deal with meronymic part-whole relations, not mereological ones (see [2] and references therein); hence, they should not be subsumed by the mereological part-of relation anyway. They can be modelled as sister properties of the part-of relation and have the intended nontransitive behaviour as in, e.g., the pwrelations.owl ontology with a taxonomy of part-whole relations (that can be imported into the wildlife ontology).

Alternatively, there is always the option to choose a sufficiently expressive non-OWL language to represent the direct parthood and the rest of the subject domain and use one of the many first/second order theorem provers.

References

[1] Alan Rector and Chris Welty. Simple Part-Whole relations in OWL ontologies. W3C Editor’s draft, 11 August 2005.

[2] C. Maria Keet and Alessandro Artale. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2): 91-110.

### Automating approximations in DL-Lite ontologies

As the avid keet blog reader or attendee of one of my ontology engineering courses may remember, I politely aired my frustration about the case where one has an OWL 2 DL ontology that needs to be ‘slimmed down’ to a DL-Lite (roughly: OWL 2 QL) one to make it usable for Ontology-Based Data Access (OBDA)—already since the experiment with the ontology/OBDA for disabilities [1]. This is a difficult and time-consuming exercise to do manually, especially when one has to go back and forth between the slimmed and the expressive version of the ontology. Back in 2008, the difficulties were due both to a flaky Protégé 4.0-alpha and to a mere syntactic approximation. Finally, things have improved and a preliminary semantic approximation is available [2] (and recently presented at AIMSA’10), which was developed by my colleagues at the KRDB Research Centre.

Well, ok, only some aspects of the sound and complete approximations are addressed (more precisely: chains of existential role restrictions), and for DL-LiteA only, but they have been implemented already. The implementations are available in three forms: a Java API, a command-line application suitable for batch approximations, and a plug-in for Protégé 4.0. Note, though, that the approximation algorithm is exponential, so with a large ontology it might take some time to simplify the expressive ontology. I did not test this myself yet, however, so if you have any comments or suggestions, please contact the authors of [2] directly. More is in the pipeline, and I am looking forward to more of such results—sure, this is with some self-interest: it will ease not only transparent, coordinated ontology management and development of ontology-driven information systems, but also facilitate implementation scenarios for rough ontologies [3].
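For flavour, the crude syntactic approximation mentioned above can be sketched as a filter over axiom shapes. This is an entirely hypothetical toy representation—my own made-up tuples and shape tags, not the semantic algorithm of [2]—but it shows why the syntactic approach is unsatisfying: axioms that don’t fit the profile are simply dropped, entailments and all.

```python
# Hypothetical toy sketch of a purely *syntactic* approximation (the naive
# approach contrasted with the semantic one of [2]): keep only axioms whose
# right-hand-side shape is allowed in an OWL 2 QL-like profile, drop the rest.
# Axioms are (lhs, rhs_shape) tuples; the shape tags are invented for this sketch.

ALLOWED_RHS = {"atomic", "exists", "neg_atomic"}   # QL-ish superclass shapes

def syntactic_approximation(axioms):
    kept = [ax for ax in axioms if ax[1] in ALLOWED_RHS]
    dropped = [ax for ax in axioms if ax[1] not in ALLOWED_RHS]
    return kept, dropped

ontology = [
    ("Giraffe", "atomic"),     # Giraffe subclass-of Animal
    ("Giraffe", "exists"),     # Giraffe subclass-of (eats some Thing)
    ("Herbivore", "forall"),   # universal restriction: not in OWL 2 QL
    ("Lion", "min_card"),      # min cardinality >= 2: not in OWL 2 QL
]

kept, dropped = syntactic_approximation(ontology)
print(len(kept), len(dropped))   # 2 2
```

A semantic approximation instead asks which profile-expressible axioms are *entailed* by the full ontology, so it can preserve consequences of the dropped axioms rather than losing them outright.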

References

[1] Keet, C.M., Alberts, R., Gerber, A., Chimamiwa, G. Enhancing web portals with Ontology-Based Data Access: the case study of South Africa’s Accessibility Portal for people with disabilities. Fifth International Workshop OWL: Experiences and Directions (OWLED’08). 26-27 Oct. 2008, Karlsruhe, Germany.

[2] Elena Botoeva, Diego Calvanese, and Mariano Rodriguez-Muro. Expressive Approximations in DL-Lite Ontologies. Proc. of the 14th Int. Conf. on Artificial Intelligence: Methodology, Systems, Applications (AIMSA’10). Sept 8-10, 2010, Varna, Bulgaria.

[3] Keet, C.M. Ontology engineering with rough concepts and instances. 17th International Conference on Knowledge Engineering and Knowledge Management (EKAW’10). 11-15 October 2010, Lisbon, Portugal. Springer LNAI 6317, 507-517.

### From the Description Logics Workshop 2010, Waterloo

The 23rd International Workshop on Description Logics was held from 4-7 May at the University of Waterloo, in Canada. The full proceedings are online as one large pdf and as individual files for each paper, which contain the papers of the 29 oral presentations (including mine) and 14 posters. Unsurprisingly, the following brief report contains only a selection of the very latest research outcomes in the DL arena that passed in review over the past 3 days.

Keynotes

Ian Horrocks’ keynote was about his quest for the “holy grail” and the lessons learned along the way. That is, he started his research with the problems of the GRAIL language and the too-slow classification of the GALEN terminology. With much persistence and desire to solve the problems, eventually his FaCT reasoner managed to get the classification of the GALEN core down from 24 hours to 400 seconds. The next steps were to extend the language and introduce optimizations to improve the performance (whereby careful study of typical inputs was crucial for successful optimization)—in an ongoing virtuous spiral. Moving on in the timeline, the Semantic Web is, according to Horrocks, akin to a “grand challenge” and “killer app” for DLs. Closing the presentation: OWL 2 DL finally contains all the features that GRAIL has (in particular role chaining), but the reasoners were still unable to classify GALEN (until Kazakov’s recent approach with consequence-driven reasoning reduced it to < 10 seconds). So, while most papers that Horrocks wrote are not particularly written for (nor particularly readable according to) bio- and biomedical ontologists, they might find it nice to know that the base motivation comes from trying to solve the problems they brought in.

The keynote by Phokion Kolaitis was purely database-oriented and focused on schema mappings in the context of database integration (comprising the data federation and translation approaches) and schema evolution, a line of research originally motivated by the experiences obtained with the CLIO project. During the talk, the emphasis was on the composition and inverse operators and, for the former, on the consequences of chaining different kinds of mappings (e.g., GAV + GAV, GAV + GLAV).
Unfortunately, I missed the keynote by Roberto Sebastiani due to the fuzzy notion of “nearby within walking distance” between the accommodation and the conference venue on the rather large and spacious campus.

Papers

The papers were grouped into sessions about theory, extensions, ontology, reasoning, EL, systems, querying, DL-Lite, OWL, and modules.

Extensions included, among others, the complexity of temporal description logics in relation to temporal conceptual modelling and tractable reasoning (i.e., temporal extensions to the DL-Lite family that are the basis for the OWL 2 QL profile) [1], presented by Alessandro Artale. Other extensions, such as fuzzy, rough, and probabilistic ones, came up in other sessions: for instance, Thomas Scharrenbach presented the use of a probabilistic DL (that is, with the option to represent defaults) for repairing TBoxes [2], Anni-Yasmin Turhan presented the approximate least common subsumer [3], and my paper was in the ontologies session. My paper was about the feasibility of DL knowledge bases with rough concepts or vague instances [4] (yes, or and not and, because there are both theoretical and practical limitations to having rough DL knowledge bases in their full glory, even when taking into account only the basic aspects of rough sets). The upside is that several research lines on DL languages & tools for the interaction between ontologies and data (and the interest shown by reasoner developers, such as Volker Haarslev of RacerPro, in the experimentation), as well as other avenues such as semantic scientific workflows, will be very useful to improve the situation, so that the combination of ontologies and data can be put to better use for hypothesis testing, to advance science at a faster pace.
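For readers unfamiliar with rough sets: the basic machinery amounts to approximating a vague concept from below and above with respect to an indiscernibility partition of the domain. A minimal Python sketch of just that core idea (toy data, not from the paper):

```python
# Rough-set approximations of a vague concept X over a domain partitioned
# into indiscernibility classes (objects in one class cannot be told apart).

def lower_approximation(partition, x):
    """Objects certainly in X: union of classes wholly contained in X."""
    result = set()
    for cls in partition:
        if cls <= x:
            result |= cls
    return result

def upper_approximation(partition, x):
    """Objects possibly in X: union of classes that intersect X."""
    result = set()
    for cls in partition:
        if cls & x:
            result |= cls
    return result

# Toy example: 'd' and 'e' are indiscernible, but only 'd' is a known instance.
partition = [{'a', 'b'}, {'c'}, {'d', 'e'}]
x = {'a', 'b', 'd'}
print(lower_approximation(partition, x))  # certain members: {'a', 'b'}
print(upper_approximation(partition, x))  # possible members: {'a', 'b', 'd', 'e'}
```

The difference between the two approximations is the boundary region, and representing that faithfully alongside TBox and ABox reasoning is precisely where the theoretical and practical limitations discussed in the paper arise.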

Mariano Rodriguez presented a new case study of Ontology-Based Data Access in industry [5], which considers additional features of the system, such as dealing with incompleteness of the data and integrity constraints, and addressing performance issues by assessing the query structure better. Performance optimization was also a motivation for the query answering for expressive DLs by creating “islands” in the ABox [6] presented by Ralf Moeller, and for developing a scalable reasoner for OWL 2 EL and RL using Java and database technologies (MySQL), called OREL [7], presented by Sebastian Rudolph.

Two papers dealt with (ultimately) helping the modeller figure out, when there is an inconsistency, why this is so. One paper dealt with the complexity of pinpointing in the tractable DL-Lite family [8] (which is not great, as many a modeller who used Protégé 4.0-alpha will have noticed), presented by Rafael Peñaloza, and the other (presented by Matthew Horridge) was about masking the “irrelevant” parts of the justification so as to keep the explanation as short as possible [9]. Another requested feature is dealing with updates to the ontology, for which several strategies are possible; one such approach for DL-Lite ontologies [10] was presented by Dmitriy Zheleznyakov. Modularization and extraction of sections of an ontology is also a well-known request, and an empirical study on how well the algorithms work was presented jointly by Chiara Del Vescovo and Thomas Schneider: full automated modularization does not look good from a practical perspective, whereas computing only some modules will be more feasible [11]. This is still fine, I think, because, generally, full modularization is not what modellers are after anyway; they only want one or a few subsections extracted from the larger ontology. (In addition, one could use granularity to modularise a large ontology, rather than being guided solely by the syntactical features of the ontology.)
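The intuition behind computing such a justification can be sketched as a naive shrinking loop: drop an axiom and keep the removal if the entailment still holds. A toy Python illustration, with transitive subclass edges standing in for a real DL entailment check (all names invented):

```python
# Naive extraction of a single justification: shrink a set of subclass axioms
# to a minimal subset that still entails sub ⊑ sup. A real system calls a DL
# reasoner; here 'entails' is just reachability over subclass edges.

def entails(axioms, sub, sup):
    """Does the axiom set entail sub ⊑ sup? (graph reachability)"""
    frontier, seen = {sub}, set()
    while frontier:
        c = frontier.pop()
        if c == sup:
            return True
        seen.add(c)
        frontier |= {b for (a, b) in axioms if a == c and b not in seen}
    return False

def justification(axioms, sub, sup):
    """One minimal subset of axioms preserving the entailment."""
    just = set(axioms)
    for ax in list(just):
        if entails(just - {ax}, sub, sup):
            just.discard(ax)  # axiom not needed for the entailment: drop it
    return just

axioms = {('Dog', 'Mammal'), ('Mammal', 'Animal'),
          ('Cat', 'Mammal'), ('Dog', 'Pet')}
print(justification(axioms, 'Dog', 'Animal'))
# → {('Dog', 'Mammal'), ('Mammal', 'Animal')}
```

Real pinpointing is much harder than this sketch suggests (hence the complexity results in [8]), and the masking work in [9] goes further still by trimming irrelevant parts inside the remaining axioms.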

That’s it for this year’s DL workshop. DL’11 will be held in Barcelona (colocated with IJCAI’11).

References

[1] Alessandro Artale, Roman Kontchakov, Vladislav Ryzhikov and Michael Zakharyaschev. Temporal Conceptual Modelling with DL-Lite. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp9-19.
[2] Thomas Scharrenbach, Rolf Grütter, Bettina Waldvogel and Abraham Bernstein. Structure preserving TBox repair using defaults. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp384-395.
[3] Anni-Yasmin Turhan and Rafael Peñaloza. Role-depth Bounded Least Common Subsumers by Completion for EL- and prob-EL-TBoxes. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp255-266.
[4] C. Maria Keet. On the feasibility of Description Logic knowledge bases with rough concepts and vague instances. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp314-324.
[5] Domenico Fabio Savo, Domenico Lembo, Maurizio Lenzerini, Antonella Poggi, Mariano Rodriguez-Muro, Vittorio Romagnoli, Marco Ruzzi and Gabriele Stella. Mastro at Work: Experiences on Ontology-Based Data Access. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp20-31.
[6] Sebastian Wandelt and Ralf Moeller. Distributed Island-based Query Answering for Expressive Ontologies. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp185-196.
[7] Markus Krotzsch, Anees Mehdi and Sebastian Rudolph. Orel: Database-Driven Reasoning for OWL 2 Profiles. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp114-124.
[8] Rafael Peñaloza and Baris Sertkaya. Complexity of Axiom Pinpointing in the DL-Lite Family. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp173-184.
[9] Matthew Horridge, Bijan Parsia and Ulrike Sattler. Justification Masking in OWL. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp32-42.
[10] Dmitriy Zheleznyakov, Diego Calvanese, Evgeny Kharlamov and Werner Nutt. Updating TBoxes in DL-Lite. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp102-113.
[11] Chiara Del Vescovo, Bijan Parsia, Ulrike Sattler and Thomas Schneider. The modular structure of an ontology: an empirical study. Proc. of DL’10, 4-7 May 2010, Waterloo, Canada. pp232-243.

### The WONDER system for ontology browsing and graphical query formulation

Did you ever not want to bother knowing how the data is stored in a database, but simply want to know what kind of things are stored in it at, say, the conceptual or ontological layer of knowledge? And did you ever not want to bother writing queries in SQL or SPARQL, but instead have a graphical point-and-click interface with which you compose a query at that layer of knowledge, while the system automatically generates the SQL/SPARQL query for you, in the correct syntax? And all that not with a downloaded desktop application, but in a web browser?

Our domain experts in genetics as well as in healthcare informatics, at least, wanted that. We have now designed and implemented it [1], and enthusiastically named it Web ONtology mediateD Extraction of Relational data (WONDER). Moreover, we have a working system for the use case of the 4GB horizontal gene transfer database [2] and its corresponding ‘application ontology’. (pdf)

Subscribers to this blog might remember I mentioned that we were working towards this goal, using Ontology-Based Data Access tools to access a database through an ontology, and learning from (and elaborating on) its preliminary case studies [3]. In short, we added a usability extension to the OBDA implementations so that not only savvy Semantic Web engineers can use it, but also (actually, mainly) the domain experts who want to get information from their database(s). By building upon the OBDA framework [4], we avail of its solid formal foundations; that is, WONDER is not merely a software application: there is a logic-based representation behind both the graphics in the ontology browser and the query pane.

In addition, WONDER is scalable because the ontology language (roughly: OWL 2 QL) is ‘simple’. Yes, we had to drop a few things from the original ORM conceptual model, but they have—at least for our case study—no effect on querying the data. The ‘difficult’ constraints are (and generally: should be anyway) implemented in the database, so there will be no instances violating the constraints we had to drop. Trade-offs, indeed, but now one can use an ontology to access a large database over the Web and retrieve the results quickly.

For instance, take the query “For the Firmicutes, retrieve the organisms and their genes that have a GCtotal contents higher than 60”, which is for various reasons not possible through the current web interface of the source database.

Fig.1 shows the ontology pane with three relevant elements selected. (click on the figures to enlarge)

Fig.1. WONDER's ontology pane with three elements selected

Fig.2 shows the constrained adder, where I’m adding that the GCValue has to be > 60.

Fig. 2. WONDER's constraint adder, where I’m adding that the GCValue has to be > 60

Fig.3 shows the query ready for execution: the attributes with a green border are those that will appear in the query answer (I could have selected all, if I wanted to). In the menu bar on the right you can see I have customized the names of the attributes, so that the columns in the results pane will have a query-relevant name in your preferred language (not necessary to do), as well as the automatically generated query.

Fig.3. WONDER's query pane, where the query is ready for execution

Fig.4 shows a section of the first page of results and Fig.5 a section of the second page; the “Family” column that has all the Firmicutes (out of about 500 organisms in the database) gives you the whole section of the species tree, because that is how the taxonomy information is stored in the database (refining the database is a separate topic). Alternatively, I could have selected the organism Name from the ontology browser (see Fig.1), de-selected the taxonomic classification in the query pane, and included the Name of the organism in the query answer, so as to have only the species name and not all the taxonomic information; in this case, I wanted all that taxonomy information. The genes are the relevant selection (made with the other constraints) out of the roughly 2 million genes stored in the database.

Fig.4. Section of the results, the first page

Fig.5. Section of the results, the second page
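To give an impression of what ‘the system generates the query for you’ amounts to, here is a hypothetical Python sketch of the translation step from a point-and-click selection to a query string; the table and column names are invented for illustration, and the actual WONDER system uses logic-based OBDA mappings over the HGT-DB schema rather than direct string assembly:

```python
# Hypothetical sketch: compile a point-and-click selection (table, attributes,
# constraints) into a SQL string; all table and column names are invented.

def build_query(table, attributes, constraints):
    select = f"SELECT {', '.join(attributes)} FROM {table}"
    where = ' AND '.join(f"{col} {op} {val!r}" for col, op, val in constraints)
    return f"{select} WHERE {where}" if where else select

# 'Firmicutes organisms and their genes with GC content above 60'
q = build_query('organism_gene', ['organism', 'gene', 'gc_total'],
                [('family', 'LIKE', '%Firmicutes%'), ('gc_total', '>', 60)])
print(q)
# → SELECT organism, gene, gc_total FROM organism_gene WHERE family LIKE '%Firmicutes%' AND gc_total > 60
```

The point of the OBDA layer is that the user never sees any of this: the selection is made against the ontology, and the mappings take care of producing a syntactically correct query over the actual schema.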

There is also a constraint manager for the AND, OR, NOT and nesting. For instance, for the query “Give me the names of the organisms of which the abbreviation starts with a b, but not being a Bacillus, and the prediction and KEGG code of those organisms’ genes that are putatively either horizontally transferred or highly expressed” (Fig.6), we have the constraint manager as shown in Fig.7.

Fig.6. Graphical and textual representation of the second query

Fig.7. Constraint manager for the second query
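Conceptually, such a constraint manager maintains a boolean tree over the individual constraints. A minimal, hypothetical Python sketch of the nesting and its rendering to a textual condition (the constraint strings are invented for illustration):

```python
# Hypothetical sketch of a constraint manager's boolean tree: atoms combined
# with AND/OR/NOT, with nesting, rendered to a textual condition.

def render(node):
    kind = node[0]
    if kind == 'atom':
        return node[1]
    if kind == 'NOT':
        return f"NOT ({render(node[1])})"
    # 'AND' / 'OR' over the remaining children
    return '(' + f' {kind} '.join(render(child) for child in node[1:]) + ')'

# Second example query: abbreviation starts with 'b', not a Bacillus, and
# putatively horizontally transferred or highly expressed (names invented).
tree = ('AND',
        ('atom', "abbrev LIKE 'b%'"),
        ('NOT', ('atom', "name LIKE 'Bacillus%'")),
        ('OR', ('atom', 'prediction = HGT'), ('atom', 'expression = high')))
print(render(tree))
# → (abbrev LIKE 'b%' AND NOT (name LIKE 'Bacillus%') AND (prediction = HGT OR expression = high))
```

The graphical constraint manager then only has to offer tree-editing operations (group, ungroup, negate), and rendering to the target query language comes for free.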

You can also save and load queries when you’re logged in, and download the results set in any case.

For those who want to play with it: feel free to drop me a line and I will send you the URL. (The reason for not linking the URL here is that the current URL is still for the beta version, whereas the operational one is expected to have a more stable URL soon.)

Last, but not least, the “we” I used in the previous sentences is not some ‘standard writing in the plural’: several people were involved in various ways in realizing the WONDER system. In alphabetical order, they are: Diego Calvanese, Marijke Keet, Werner Nutt, Mariano Rodriguez-Muro, and Giorgio Stefanoni, all at FUB. I also want to thank our domain experts of the case study (with whom we’re writing a bio-oriented paper): Santi Garcia-Vallvé (of the Evolutionary Genomics Group, ‘Rovira i Virgili’ University, Tarragona, Spain) and Mark van Passel (of the Laboratory for Microbiology, Wageningen University and Research Centre, the Netherlands).

References

[1] Calvanese, D., Keet, C.M., Nutt, W., Rodriguez-Muro, M., Stefanoni, G. Web-based Graphical Querying of Databases through an Ontology: the WONDER System. ACM Symposium on Applied Computing (ACM SAC’10), March 22-26 2010, Sierre, Switzerland.

[2] Garcia-Vallve, S, Guzman, E., Montero, MA. and Romeu, A. 2003. HGT-DB: a database of putative horizontally transferred genes in prokaryotic complete genomes. Nucleic Acids Research 31: 187-189.

[3] R. Alberts, D. Calvanese, G. De Giacomo, A. Gerber, M. Horridge, A. Kaplunova, C. M. Keet, D. Lembo, M. Lenzerini, M. Milicic, R. Moeller, M. Rodríguez-Muro, R. Rosati, U. Sattler, B. Suntisrivaraporn, G. Stefanoni, A.-Y. Turhan, S. Wandelt, M. Wessel. Analysis of Test Results on Usage Scenarios. Deliverable TONES-D27 v1.0, Oct. 10 2008.

[4] Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, Antonella Poggi, Mariano Rodriguez-Muro, and Riccardo Rosati. Ontologies and databases: The DL-Lite approach. In Sergio Tessaris and Enrico Franconi, editors, Semantic Technologies for Information Systems – 5th Int. Reasoning Web Summer School (RW 2009), volume 5689 of Lecture Notes in Computer Science, pages 255-356. Springer, 2009.

### Object-Role Modeling and Description Logics for conceptual modelling

Object-Role Modeling (ORM) is a so-called “true” conceptual modelling language in the sense that it is independent of the application scenario, and it has been mapped into both UML class diagrams and ER [1]. That is, ORM and its successor ORM2 can be used in the conceptual analysis stage for database development, application software development, requirements engineering, business rules, and other areas [1-5]. If we can reason over such ORM conceptual data models, then we can guarantee that the model (i.e., the first order logic theory) is satisfiable and consistent, so that the corresponding application based on it behaves correctly with respect to its specification (I summarised a more comprehensive argumentation and examples earlier). And, from the push side: it widens the scope of scenarios in which to use automated reasoners.
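To make this concrete, one typical inference such a reasoner draws is that a class subsumed by two disjoint classes can never have instances, i.e., it is unsatisfiable. A toy Python sketch of that single check (a real setup hands the full theory to a DL reasoner; the class names here are invented):

```python
# Minimal sketch of one check a conceptual-model reasoner performs: a class
# subsumed (transitively) by two declared-disjoint classes is unsatisfiable.

def superclasses(subs, c):
    """c plus all its (transitive) superclasses, given subclass pairs."""
    result, frontier = {c}, {c}
    while frontier:
        x = frontier.pop()
        for (a, b) in subs:
            if a == x and b not in result:
                result.add(b)
                frontier.add(b)
    return result

def unsatisfiable(subs, disjoint):
    """Classes whose superclasses hit both sides of a disjointness axiom."""
    classes = {c for pair in subs for c in pair}
    return {c for c in classes
            if any(d1 in superclasses(subs, c) and d2 in superclasses(subs, c)
                   for (d1, d2) in disjoint)}

subs = {('Student', 'Person'), ('Company', 'Organisation'),
        ('StudentCompany', 'Student'), ('StudentCompany', 'Company')}
disjoint = {('Person', 'Organisation')}
print(unsatisfiable(subs, disjoint))  # → {'StudentCompany'}
```

A modelling error like this is easy to overlook in a large diagram, which is exactly why delegating such checks to an automated reasoner pays off.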

Various strategies and technologies are being developed to reason over conceptual data models, meeting the same or slightly different requirements and aims. An important first distinction concerns the underlying assumption: either modellers keep total freedom to model whatever they deem necessary, and constraints are put afterwards on which parts can be used for reasoning (or slow performance is accepted), or the language is constrained a priori to a subset of first order logic so as to achieve better performance and a guarantee that the reasoner terminates. The former approach is taken by Queralt and Teniente [6], who use a dependency graph of the constraints in a UML Class Diagram + OCL together with first order logic (FOL) theorem provers. The latter approach is taken by [7-15], who experiment with different techniques. For instance, Smaragdakis et al. and Kaneiwa et al. [7-8] use special-purpose reasoners for ORM and UML Class Diagrams, Cabot et al. and Cadoli et al. [9-10] encode a subset of UML class diagrams as a Constraint Satisfaction Problem, and [11-16] use a Description Logic (DL) framework for UML Class Diagrams, ER, EER, and ORM.

Perhaps not surprisingly, I also took the DL approach to this topic, on which I started working in 2006. I put the first version of the correspondence between ORM and the DL language DLRifd online on arXiv in February 2007, and the discussion of the fundamental transformation problems was published at DL’07 [15]. Admittedly, that technical report won’t ever win a beauty prize for its layout or concern for readability. In the meantime, I have corrected the typos, improved the readability, proved correctness of the encoding, and updated the related research with recent works. On the latter: it also contains a discussion of a later, similar attempt by others and the many errors in it. On the bright side, addressing those errors helps explain the languages and trade-offs better (there are advantages to using a DL language to represent an ORM diagram, but also disadvantages). This new version (0702089v2), entitled “Mapping the Object-Role Modeling language ORM2 into Description Logic language DLRifd” [17], is now also online at arXiv.

As appetizer, here’s the abstract:

In recent years, several efforts have been made to enhance conceptual data modelling with automated reasoning to improve the model’s quality and derive implicit information. One approach to achieve this in implementations is to constrain the language. Advances in Description Logics can help in choosing the right language so as to have the greatest expressiveness yet remain within the decidable fragment of first order logic, to realise a workable implementation with good performance using DL reasoners. The best-fit DL language appears to be the ExpTime-complete DLRifd. To illustrate trade-offs and highlight features of the modelling languages, we present a precise transformation of the mappable features of the very expressive (undecidable) ORM/ORM2 conceptual data modelling languages to exactly DLRifd. Although not all ORM2 features can be mapped, this is an interesting fragment, because it has been shown that DLRifd can also encode UML Class Diagrams and EER, and it can therefore foster interoperation between conceptual data models and research into ontological aspects of the modelling languages.
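As a flavour of what such a transformation looks like, here is a hedged sketch in common $\mathcal{DLR}$ notation (illustrative only, not quoted from the paper): an ORM binary fact type $R$ over object types $A$ and $B$, with a mandatory role and a uniqueness constraint on $A$’s role, can be encoded along the lines of:

```latex
R \sqsubseteq (\$1 : A) \sqcap (\$2 : B)   % role typing of the fact type
A \sqsubseteq \exists[\$1]R                % mandatory: every A plays role 1 of R
A \sqsubseteq\ \leq 1\,[\$1]R              % uniqueness: each A plays role 1 at most once
```

The id and fd constructs that distinguish DLRifd from plain DLR then come into play for ORM’s internal and external uniqueness constraints spanning multiple roles.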

And well, for those of you who might be disappointed that not all ORM features can be mapped: computers have their limitations and people have a limited amount of time and patience. To achieve ‘scalability’ of reasoning over initially large theories represented in a very expressive language, modularisation of the conceptual models and ontologies is one of the lines of research. But it is a separate topic and not quite close to implementation just yet.

References

[1] Halpin, T.: Information Modeling and Relational Databases. San Francisco: Morgan Kaufmann Publishers (2001)

[2] Balsters, H., Carver, A., Halpin, T., Morgan, T.: Modeling dynamic rules in ORM. In: OTM Workshops 2006. Proc. of ORM’06. Volume 4278 of LNCS., Springer (2006) 1201-1210

[3] Evans, K.: Requirements engineering with ORM. In: OTM Workshops 2005. Proc. of ORM’05. Volume 3762 of LNCS., Springer (2005) 646-655

[4] Halpin, T., Morgan, T.: Information modeling and relational databases. 2nd edn. Morgan Kaufmann (2008)

[5] Pepels, B., Plasmeijer, R.: Generating applications from object role models. In: OTM Workshops 2005. Proc. of ORM’05. Volume 3762 of LNCS., Springer (2005) 656-665

[6] Queralt, A., Teniente, E.: Decidable reasoning in UML schemas with constraints. In: Proc. of CAiSE’08. Volume 5074 of LNCS., Springer (2008) 281-295

[7] Smaragdakis, Y., Csallner, C., Subramanian, R.: Scalable automatic test data generation from modeling diagrams. In: Proc. of ASE’07. (2007) 4-13

[8] Kaneiwa, K., Satoh, K.: Consistency checking algorithms for restricted UML class diagrams. In: Proc. of FoIKS ’06, Springer Verlag (2006)

[9] Cabot, J., Clariso, R., Riera, D.: Verification of UML/OCL class diagrams using constraint programming. In: Proc. of MoDeVVA 2008. (2008)

[10] Cadoli, M., Calvanese, D., De Giacomo, G., Mancini, T.: Finite model reasoning on UML class diagrams via constraint programming. In: Proc. of AI*IA 2007. Volume 4733 of LNAI., Springer (2007) 36-47

[11] Calvanese, D., De Giacomo, G., Lenzerini, M.: On the decidability of query containment under constraints. In: Proc. of PODS’98. (1998) 149-158

[12] Artale, A., Calvanese, D., Kontchakov, R., Ryzhikov, V., Zakharyaschev, M.: Reasoning over extended ER models. In: Proc. of ER’07. Volume 4801 of LNCS., Springer (2007) 277-292

[13] Jarrar, M.: Towards automated reasoning on ORM schemes–mapping ORM into the DLRifd Description Logic. In: ER’07. Volume 4801 of LNCS. (2007) 181-197

[14] Franconi, E., Ng, G.: The ICOM tool for intelligent conceptual modelling. In: Proc. of KRDB’00, Berlin, Germany (2000)

[15] Keet, C.M.: Prospects for and issues with mapping the Object-Role Modeling language into DLRifd. In: Proc. of DL’07. Volume 250 of CEUR-WS. (2007) 331-338

[16] Berardi, D., Calvanese, D., De Giacomo, G.: Reasoning on UML class diagrams. Artificial Intelligence 168(1-2) (2005) 70-118

[17] Keet, C.M. Mapping the Object-Role Modeling language ORM2 into Description Logic language DLRifd. KRDB Research Centre, Free University of Bozen-Bolzano, Italy. 22 April 2009. arXiv:cs.LO/0702089v2.

### Live from ISWC 2008 in Karlsruhe

It is already the last day of ISWC’08, which had some really good papers, comments from the attendees during the sessions, and ample ambience for networking. I will discuss the keynote speeches first, then mention a few research papers, and close with a few general remarks.

Ramesh Jain gave a good keynote speech on semantic multimedia searches, or rather the lack thereof, and on how to bridge the semantic gap between mere images and the meaning we attribute to them, so that we can find the right multimedia in the sea of images, video, etc. This may be done through what he denoted as the “Event Web”, as multimedia items are ‘snapshots’ of larger events that give context, and meaning, to those items. In addition to extant ontologies such as LSCOM, he is developing an ontology for events so as to better annotate the items and, consequently, obtain better search results. John Giannandrea’s keynote on Freebase, on the other hand, can indeed be summarized by the Babbage quote he gave: “errors using inadequate data are much less than those using no data at all”. Obviously, the wisdom of the crowds and domain expert input for building knowledge bases is a laudable idea and has achieved remarkable successes toward the proverbial “80%”, but it is the remaining “20%” that is the hard part: taking it from a ‘web 2’ to a ‘web 3’ version, with semantic searches (cf. string matching) that retrieve the right set of answers instead of a sea of links, software agent collaboration to plan your trip based on your requirements, and so forth. To take an entertaining example from another knowledge base, SNOMED CT, which is adopted in several countries: while Stefan Schulz and I were searching for suspended concepts and relations (suspended sensu [1]), we came across a congenital absence of one tooth that is a subtype (is a in SNOMED CT) of congenital absence of mouth, of jaw, and of alimentary tract… never mind that acquired absence is a body structure, or the concoction of previous known suicide attempt that throws together temporal, epistemic, and intentional notions into one concept.
The third keynote speech was by Stefan Decker from DERI, rather provokingly about “how to save the Semantic Web?”. Based on an analysis of the successes of physics, he identified five points: (i) an appealing unified message, (ii) credibility, (iii) concerted lobbying efforts, (iv) potentially transformational power, and (v) a doable agenda for successes. His answers for AI in general and the Semantic Web in particular are, respectively: yes-?-yes-yes-no and no-?-yes-yes-yes. In addition, his vision for the Semantic Web is to aim for a network of knowledge and collaborative problem solving, recalling that the Semantic Web is, ultimately, for humans. However, as part of the latter point he dismissed (well, ridiculed in a not so entertaining way) the required theoretical foundations, which annoyed quite a few people in the audience. During the break afterwards, one of them put forward that it is precisely because of its theoretical foundations that physics continues to do well. After all, building tools on quicksand, compared to fundaments on solid ground, is not sustainable in the long run. Surely, the human and engineering components should, will, and gradually already do receive more attention, as the topics of the papers attest, be it here or at ESWC and in emerging workshops; e.g., there was a session on user interfaces and one on semantic social networks. On the other hand, is the “semantic desktop” that Decker proposes really a sexy, “appealing unified message”? Surely we can, and do, do more, be it facilitating bioscientists in their research from an end-user perspective, streamlining public administration, or opening up and enhancing e-learning, to name just three sub-areas.

Of the presented papers, several were more detailed or improved versions of earlier works, such as the one about testing with the probabilistic reasoner Pronto using P-$\mathcal{SHIQ}(D)$ (see here), approximating RCC in OWL [4], and the details of how IBM managed to make SHER, its scalable reasoner for expressive ontologies (represented in the $\mathcal{SHIN}$ DL language) [3], of which earlier work had been presented or discussed during OWLED’07, with this year’s introduction of Anatomylens as a real application. SHER achieves scalability via summarization of the ABox and filtering. The RCC & OWL paper [4] seeks to solve the problem of performing spatio-thematic queries by approximating RCC8 (the full-blown version cannot be fully represented in OWL) and using that for consistency checking w.r.t. assertions in the ABox.

Putting data types in an ontology is problematic from a formal ontology (and, eventually, database and ontology interoperation) perspective, but many developers seem to want to have them (treating an ontology as if it were a formal conceptual data model), and better than is currently possible in OWL. For those who want more of it: your requests have been heard. With the data types in OWL 2 you will be able to state, e.g., $\geq_5 \wedge \leq_{10}$, and to name data ranges; moreover, it redefines the XSD numeric data types, adds rdf:text and date/time, and there will be a data type checker [5].
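The underlying facet idea is simple: a data range is a conjunction of facet restrictions, and checking a literal is just evaluating each facet. A hypothetical Python sketch of the principle (not the OWL 2 datatype system itself; the facet names follow the XSD convention):

```python
# Hypothetical sketch of an OWL 2-style datatype restriction: a named data
# range as a conjunction of facet restrictions, checked per literal value.

FACETS = {
    'minInclusive': lambda v, bound: v >= bound,
    'maxInclusive': lambda v, bound: v <= bound,
}

def in_range(value, restriction):
    """Is the value in the data range? All facets must be satisfied."""
    return all(FACETS[facet](value, bound)
               for facet, bound in restriction.items())

five_to_ten = {'minInclusive': 5, 'maxInclusive': 10}  # the 'at least 5 and at most 10' example
print(in_range(7, five_to_ten))   # → True
print(in_range(12, five_to_ten))  # → False
```

A real data type checker additionally has to handle the lexical-to-value mapping of each XSD type and the interactions between ranges, which is where the design work in [5] comes in.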
A nice feature that even I have used, during development of the ADOLENA ontology, is the explanation of deductions (originally in SWOOP, and later also in, e.g., Protégé 4, where, after classifying, one clicks on the “?” that appears next to the inconsistent and inferred classes). More precisely, Matthew Horridge presented the work on laconic and precise justifications [2], which has been nominated for the best paper award. Their work enhances the way explanations are computed and what information in the justifications is needed so as to give only the minimal required information for repair; put differently: it goes toward minimizing the haystack in which to find the needle to fix your ontology.

Several presentations are, or will be, made available on Video lectures.

Last, some indication of where the semantic web still has to go, with just a tiny practical example: the conference site called for tagging blog posts with iswc2008 or ISWC 2008, and if you click their link to do a Google blog search you are supposed to get a long list. But you do not. In fact, their predefined Google blog search searches on iswc2008 or “iswc 2008”, which does not work when neither appears in the text; I had used ISWC’08 in two posts, and that particular permutation of semantically the same thing was not in the pre-defined search term. Even after changing it on 28-10-2008, it still has not been recognized, whereas a non-blog web search does return lots of hits. Not that I want to insist on having my two seconds of fame on the ISWC website as one of the results, but something like that simply should work by now, or ought to… I will add both their desired tags this time, and let’s see what happens. UPDATE: the tagging worked, so it seems there are just few bloggers who bother with the manual tagging…

Overall, it was an entertaining and very interesting conference, with—from a research perspective—both encouraging results and plenty of topics for further research.

[1] Artale, A., Guarino, N., and Keet, C.M. Formalising temporal constraints on part-whole relations. 11th International Conference on Principles of Knowledge Representation and Reasoning (KR’08). Gerhard Brewka, Jerome Lang (Eds.) AAAI Press. Sydney, Australia, September 16-19, 2008.
[2] M. Horridge, B. Parsia, U. Sattler. Laconic and Precise Justifications in OWL. Proc. Of ISWC’08, 28-30 Oct. 2008, Karlsruhe, Germany.
[3] Julian Dolby, Achille Fokoue, Aditya Kalyanpur, Li Ma, Edith Schonberg, Kavitha Srinivas, and Xingzhi Sun. Scalable Conjunctive Query Evaluation Over Large and Expressive Knowledge Bases. Proc. Of ISWC’08, 28-30 Oct. 2008, Karlsruhe, Germany.
[4] Rolf Grütter, Thomas Scharrenbach, and Bettina Bauer-Messmer. Improving an RCC-Derived Geospatial Approximation by OWL Axioms. Proc. Of ISWC’08, 28-30 Oct. 2008, Karlsruhe, Germany.
[5] Boris Motik and Ian Horrocks. OWL datatypes: design and implementation. Proc. Of ISWC’08, 28-30 Oct. 2008, Karlsruhe, Germany.