Forum for AI Research 2015, Cape Town

In 10 days' time, the (CAIR-driven) Forum for Artificial Intelligence Research 2015 (FAIR'15) Workshop will be held at UCT in Cape Town, South Africa, from March 30 to April 2. There are still some spaces available; registration is free, but please do register (for catering purposes). What will you get for this 'bargain price'? A lot of food for the mind!

FAIR’15 follows the same format as the previous 7 editions that went under various acronyms since 2008 (among others, MOWS, MOSS, MAIS, FAIR), with a mini-course, a tutorial, and postgraduate student presentations. This edition has the following on offer.

Ulrike Sattler (University of Manchester, UK) will present a mini-course on automated reasoners in the mornings. She will go into the details of what really happens when you click the "start reasoner" menu option, what lies behind Protégé's "?" that explains the deductions, and which factors influence the reasoner's performance.

David Toman (University of Waterloo, Canada) will present a 2-hour tutorial on using knowledge representation and reasoning (logic) for query optimization in relational databases and ontology-based data access (i.e., advanced aspects of database systems implementation).

Further, there are several sessions with postgraduate student presentations. Among others, Catherine Chavula will talk about new results (cf. [1]) in multilingual ontologies, Zubeida Khan will talk about foundational ontology interchangeability (details in [2]), and Nasubo Ongoma (who very recently graduated MSc cum laude!) will present her thesis on logic-based temporal conceptual data modeling (including material from [3]). Gavin Rens will talk about probabilistic belief change, Kody Moodley on defeasible reasoning for description logics, Henriette Harmse about scenario testing with OWL, and Nishal Morar on taxonomic classification.

Aurona Gerber will give an overview of Data Science at CSIR, and for some more variety in the programme, I’ll talk about the stuff ontology [4]. Check the programme for all titles of the presentations and the abstracts of the mini-course and tutorial.

An important aim of FAIR is networking among people in Southern Africa, and sharing and discussing informally our research in (predominantly) KR&R and related areas—so if the above topics sound interesting, or made you curious, or you would like to meet a potential MSc/PhD supervisor, you're welcome to join (note: some basic knowledge of logics will be needed to follow the talks, though). If you have any questions, please don't hesitate to contact one of the organisers, Arina Britz or me.

References

[1] Chavula, C., Keet, C.M. Is Lemon Sufficient for Building Multilingual Ontologies for Bantu Languages? 11th OWL: Experiences and Directions Workshop (OWLED’14). Keet, C.M., Tamma, V. (Eds.). Riva del Garda, Italy, Oct 17-18, 2014. CEUR-WS vol. 1265, 61-72.

[2] Khan, Z.C., Keet, C.M. Feasibility of automated foundational ontology interchangeability. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW'14). K. Janowicz et al. (Eds.). 24-28 Nov, 2014, Linköping, Sweden. Springer LNAI vol. 8876, 225-237.

[3] Keet, C.M., Ongoma, E.A.N. Temporal Attributes: their Status and Subsumption. Asia-Pacific Conference on Conceptual Modelling (APCCM’15). Koehler, H., Saeki, M. (Eds.), Conferences in Research and Practice in Information Technology (CRPIT), Vol. 165. 27-30 January, 2015, Sydney, Australia.

[4] Keet, C.M. A core ontology of macroscopic stuff. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW'14). K. Janowicz et al. (Eds.). 24-28 Nov, 2014, Linköping, Sweden. Springer LNAI vol. 8876, 209-224.


Dabbling into evaluating reasoners with the DMOP ontology

The Data Mining OPtimization ontology (DMOP) is a highly axiomatised ontology that uses almost all features of OWL 2 DL, and its domain entities are linked to DOLCE, using all four main 'branches' of DOLCE. Some details are described in last year's OWLED'13 paper [1] and a blog post. We did observe 'slow' reasoner performance when classifying the ontology, however: between 10 and 20 minutes, varying across versions and machines. The Ontology Reasoner Evaluation workshop (ORE'14, part of the Vienna Summer of Logic) was a nice motivation to try to figure out what's going on, and some initial results are described briefly in the short (6-page) paper [2], co-authored with Claudia d'Amato, Agnieszka Lawrynowicz, and Zubeida Khan.

Those results are definitely what can be called interesting, even though we're still at the level of dabbling into it from a reasoner user-centric viewpoint and, notably, from a modeller-centric viewpoint. The latter is what made us pose questions like "what effect does using feature x have on the performance of the reasoner?". No one knew, except for the informal feedback I received at DL 2010 on [3] that reasoning with data types slows things down, and likewise when the cardinalities are high. That's not an issue with DMOP, though.

So, the first thing we did was to determine a baseline on a good laptop—your average modeller doesn't have an HPC cluster readily at hand—and in an ontology development environment, which is where the reasoner is typically accessed from. It took some 9 minutes to classify the ontology (machine specs and further details are in the paper).

The next steps were to analyse one specific modeling construct (inverses) and the effect that DOLCE has on the overall performance.

The reason why we chose the representation of inverses is that in OWL 2 DL (cf. OWL DL), one can use ObjectInverseOf(OP) to use the inverse of an object property, instead of extending the ontology's vocabulary and using InverseObjectProperties(OPE1 OPE2) to relate the property and its inverse. For instance, to use the inverse of the property addresses in an axiom, one used to have to introduce a new property, addressed by, declare it inverse to addresses, and then use that in the axiom, whereas in OWL 2 DL one can use ObjectInverseOf(addresses) in the axiom directly (in Protégé, the syntax is inverse(addresses)). That change cut the time to compute the class hierarchy by at least a third (and by about half for the baseline). Why? We don't know. Other features used in DMOP, such as punning and property chains, were harder to remove and are heavily used, so we didn't test those.
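As a minimal sketch of the difference in functional-style syntax (the prefix and class names are invented for illustration; they are not DMOP's actual vocabulary), the 'old' way extends the vocabulary with a named inverse and relates the two properties explicitly:

    Prefix(:=<http://example.org/demo#>)
    Ontology(<http://example.org/demo>
      Declaration(Class(:Algorithm))
      Declaration(Class(:Issue))
      Declaration(ObjectProperty(:addresses))
      Declaration(ObjectProperty(:addressedBy))
      InverseObjectProperties(:addressedBy :addresses)
      SubClassOf(:Issue ObjectSomeValuesFrom(:addressedBy :Algorithm))
    )

whereas with OWL 2 DL's anonymous inverse the last axiom can be written directly against addresses, so that :addressedBy and the InverseObjectProperties axiom are not needed at all:

    SubClassOf(:Issue ObjectSomeValuesFrom(ObjectInverseOf(:addresses) :Algorithm))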

The other one, removing DOLCE, is a bit tricky. But to give away the end result upfront: that made it 10 times faster! The 'tricky' part has to do with the notion of 'linking to a foundational ontology' (deserving of its own blog post). For DMOP, we had not imported DOLCE but merged it, and we did not merge everything from DOLCE and its ExtendedDnS, but only what was deemed relevant: in numbers, 43 classes, 78 object properties, and 593 axioms. To make matters worse—from an evaluation viewpoint, that is—we had reused three DOLCE object properties heavily, so we kept those three DOLCE properties in the evaluation file, as we suspected that removing them would have affected the deductions too much and interfered with the DOLCE-or-not question (one could also argue that those three properties can be considered an integral part of DMOP). So, it was not a simple case of 'remove the import statement and run the reasoner again', but of 'remove almost everything with a DOLCE URI manually and then run the reasoner again'.
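To see why that matters, compare the two set-ups as a sketch (the IRIs and entity names here are indicative only, not the exact ones used in DMOP). Had DOLCE been imported, it would amount to a single statement in the ontology header, and removing that one line removes all of DOLCE in one go:

    Import(<http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl>)

With a merge, the selected DOLCE classes, properties, and axioms live in DMOP's own file alongside the domain entities (assuming a dolce: prefix declared for the DOLCE namespace), e.g.:

    Declaration(Class(dolce:endurant))
    Declaration(Class(dolce:perdurant))
    SubClassOf(:DM-Operation dolce:perdurant)

so each such axiom has to be tracked down and removed (or deliberately kept) one by one.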

Because computation was so 'slow', we wondered whether maybe cleverly modularizing DMOP could be the way to go, in case someone wants to use only a part of DMOP. We got as far as trying to modularize the ontology, which already was not trivial because DMOP and DOLCE are both highly axiomatised and have few, if any, relatively isolated sections amenable to modularization. Moreover, what it did show is that such automated modularization (when it was possible) only affects the number of classes and the number of axioms, not the properties and individuals. So, the generated modules are stuck with properties and individuals that are not used in, or not relevant for, that module. We did not fix that manually. Also, putting the modules back together did not return the original version we started out with: 225 of the 4584 axioms were missing.

If this wasn't enough already, the DMOP with/without DOLCE test was performed with several reasoners, out of curiosity, and they gave different output. FaCT++ and MORe produced a "Reasoner Died" message. My ontology engineering students know that, according to DOLCE, death is an achievement, but I guess that the reasoners' developers would deem otherwise. Pellet and TrOWL inferred inconsistent classes; HermiT did not. Pellet's hiccup had to do with datatypes and should not have occurred (see the paper for details). TrOWL fished out a modeling issue from all of those 4584 axioms (see p. 5 of the paper), of the flavour described in [4] (thank you), but with the standard semantics of OWL—i.e., not caring at all about the real semantics of object property hierarchies—it should not have derived an inconsistent class.

Overall, it feels like having opened up a can of worms, which is exciting.

References

[1] Keet, C.M., Lawrynowicz, A., d’Amato, C., Hilario, M. Modeling issues and choices in the Data Mining OPtimisation Ontology. 8th Workshop on OWL: Experiences and Directions (OWLED’13), 26-27 May 2013, Montpellier, France. CEUR-WS vol 1080.

[2] Keet, C.M., d’Amato, C., Khan, Z.C., Lawrynowicz, A. Exploring Reasoning with the DMOP Ontology. 3rd Workshop on Ontology Reasoner Evaluation (ORE’14). July 13, 2014, Vienna, Austria. CEUR-WS vol (accepted).

[3] Keet, C.M. On the feasibility of Description Logic knowledge bases with rough concepts and vague instances. 23rd International Workshop on Description Logics (DL’10), 4-7 May 2010, Waterloo, Canada.

[4] Keet, C.M. Detecting and revising flaws in OWL object property expressions. Proc. of EKAW'12. Springer LNAI vol. 7603, 252-266.

CFP for WS on Logics and reasoning for conceptual models (LRCM’13)

From the ‘advertising department’ of promoting events I co-organise: here’s the Call for Papers for the LRCM’13 workshop.

================================================================
First Workshop on Logics and Reasoning for Conceptual Models (LRCM 2013)
14th of December 2013, Stellenbosch, South Africa
http://www.cair.za.net/LRCM2013/
co-located with the 19th International Conference on Logic for Programming,
Artificial Intelligence and Reasoning (LPAR-19), Stellenbosch, South Africa
==============================================================

There is an increase in complexity of information systems due to, among others, company mergers with information system integration, upscaling of scientific collaborations, e-government etc., which push the necessity for good quality information systems. An information system’s quality is largely determined in the conceptual modeling stage, and avoiding or fixing errors of the conceptual model saves resources during design, implementation, and maintenance. The size and high expressivity of conceptual models represented in languages such as EER, UML, and ORM require a logic-based approach in the representation of information and adoption of automated reasoning techniques to assist in the development of good quality conceptual models. The theory to achieve this is still in its infancy, however, with only a limited set of theories and tools that address subtopics in this area. This workshop aims at bringing together researchers working on the logic foundations of conceptual data modeling languages and the reasoning techniques that are being developed so as to discuss the latest results in the area.

**** Topics ****

Topics of interest include, but are not limited to:
– Logics for temporal and spatial conceptual models and BPM
– Deontic logics for SBVR
– Other logic-based extensions to standard conceptual modeling languages
– Unifying formalisms for conceptual schemas
– Decidable reasoning over conceptual models
– Dealing with finite and infinite satisfiability of a conceptual model
– Reasoning over UML state and behaviour diagrams
– Reasoning techniques for EER/UML/ORM
– Interaction between ontology languages and conceptual data modeling languages
– Tools for logic-based modeling and reasoning over conceptual models
– Experience reports on logic-based modelling and reasoning over conceptual models

To this end, we solicit mainly theoretical contributions with regular talks and implementation/system demonstrations and some modeling experience reports to facilitate cross-fertilization between theory and praxis. Selection of presentations is based on peer-review of submitted papers by at least 2 reviewers, with a separation between theory and implementation & experience-type of papers.

**** Submissions ****

We welcome submissions in LNCS style in the following two formats for oral presentation:
– Extended abstracts of maximum 2 pages;
– Research papers of maximum 10 pages.
Both can be submitted in pdf format via the EasyChair website at https://www.easychair.org/conferences/?conf=lrcm13

**** Important dates ****

Submission of papers/abstracts: 14 October 2013
Notification of acceptance:     14 November 2013
Camera-ready copies:            2 December 2013
Workshop:                       14 December 2013

**** Organisation ****

Maria Keet, University of KwaZulu-Natal, South Africa, keet@ukzn.ac.za
Diego Calvanese, Free University of Bozen-Bolzano, Italy, calvanese@inf.unibz.it
Szymon Klarman, CAIR, UKZN / CSIR-Meraka Institute, South Africa, szymon.klarman@gmail.com
Arina Britz, CAIR, UKZN / CSIR-Meraka Institute, South Africa, abritz@csir.co.za

**** Programme Committee ****

Diego Calvanese, Free University of Bozen-Bolzano, Italy
Szymon Klarman, CAIR, UKZN / CSIR-Meraka Institute, South Africa
Maria Keet, University of KwaZulu-Natal, South Africa
Marco Montali, Free University of Bozen-Bolzano, Italy
Mira Balaban, Ben-Gurion University of the Negev, Israel
Meghyn Bienvenu, CNRS and Universite Paris-Sud, France
Terry Halpin, INTI International University, Malaysia
Anna Queralt, Barcelona Supercomputing Center, Spain
Vladislav Ryzhikov, Free University of Bozen-Bolzano, Italy
Till Mossakowski, University of Bremen, Germany
Alessandro Artale, Free University of Bozen-Bolzano, Italy
Giovanni Casini, CAIR, UKZN / CSIR-Meraka Institute, South Africa
Pablo Fillottrani, Universidad Nacional del Sur, Argentina
Chiara Ghidini, Fondazione Bruno Kessler, Italy
Roman Kontchakov, Birkbeck, University of London, United Kingdom
Oliver Kutz, University of Bremen, Germany
Tommie Meyer, CAIR, UKZN / CSIR-Meraka Institute, South Africa
David Toman, University of Waterloo, Canada

Logical and ontological reasoning services?

The SubProS and ProChainS compatibility services for OWL ontologies, which check for good and 'safe' OWL object property expressions [5], may be considered ontological reasoning services by some, but according to others, they are/ought to be plain logical reasoning services. I discussed this issue with Alessandro Artale back in 2007 when we came up with the RBox Compatibility service [1]—which, in the end, we called an ontological reasoning service—and it came up again during EKAW'12 and the Ontologies and Conceptual Modelling Workshop (OCM) in Pretoria in November. Moreover, in all three settings, the conversation was generalized to the following questions:

  1. Is there a difference between a logical and an ontological reasoning service (be that ‘onto’-logical or ‘extra’-logical)? If so,
    a. Why, and what, then, is an ontological reasoning service?
    b. Are there any that can serve at least as a prototypical example of an ontological reasoning service?

There's still no conclusive answer to either of the questions. So, I present here some data and arguments that I had and that I've heard so far, and I invite you to have your say on the matter. I will first introduce a few notions, terms, tools, and implicit assumptions informally, and then list the three positions and their arguments that I am aware of.

Some aspects about standard, non-standard, and ontological reasoning services

Let me first introduce a few ideas informally. Within Description Logics and the Semantic Web, a distinction is made between so-called 'standard' and 'non-standard' reasoning services. The standard reasoning services—which most of the DL-based reasoners support—are subsumption reasoning, satisfiability, consistency of the knowledge base, instance checking, and instance retrieval (see, e.g., [2,3] for explanations). Non-standard reasoning services include, e.g., glass-box reasoning and computing the least common subsumer; they are typically designed with the aim of facilitating ontology development, and tend to have their own plugin for, or extension to, an existing reasoner. What these standard and non-standard reasoning services have in common is that they all focus only on the logical theory (a subset of first-order predicate logic).
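As a small, contrived illustration of the standard services (the vocabulary is invented for the example; any of the usual DL reasoners should give the same answers), consider the following fragment:

    Declaration(Class(:Person))
    Declaration(Class(:Plant))
    Declaration(Class(:Dairy))
    Declaration(Class(:Vegan))
    Declaration(Class(:Vegetarian))
    Declaration(Class(:Oddity))
    Declaration(ObjectProperty(:eats))
    Declaration(NamedIndividual(:alex))
    DisjointClasses(:Plant :Dairy)
    EquivalentClasses(:Vegan ObjectIntersectionOf(:Person ObjectAllValuesFrom(:eats :Plant)))
    EquivalentClasses(:Vegetarian ObjectIntersectionOf(:Person ObjectAllValuesFrom(:eats ObjectUnionOf(:Plant :Dairy))))
    SubClassOf(:Oddity ObjectIntersectionOf(:Plant :Dairy))
    ClassAssertion(:Vegan :alex)

A reasoner will derive SubClassOf(:Vegan :Vegetarian) (subsumption reasoning), that :Oddity cannot have any instances (class satisfiability), that the knowledge base as a whole is nevertheless consistent, that :alex is an instance of :Person (instance checking), and it will return :alex when asked for all instances of :Person (instance retrieval).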

Take, on the other hand, OntoClean [4], which assigns meta-properties (such as rigidity and unity) to classes and then, according to some rules involving those meta-properties, computes the class taxonomy. Those meta-properties are borrowed from Ontology in philosophy, and the rules do not use the standard way of computing subsumption (where every instance of the subclass is also an instance of its superclass and, thus, practically, the subclass has more features, or the same features but with more constrained values/ranges). Moreover, OntoClean helps to distinguish between alternative logical formalisations of some piece of knowledge so as to choose the one that is better with respect to the reality we want to represent; e.g., why it is better to have the class Apple that has as quality a colour green, versus the option of a class GreenObject that has shape apple-shaped. This being the case, OntoClean may be considered an ontological reasoning service. My SubProS and ProChainS [5] put constraints on OWL object property expressions so as to have safe and good hierarchies of object properties and property chains, based on the same notion as class subsumption but applied to role inclusion axioms: the OWL sub-property (relationship, DL role) must be more constrained than its super-property, and the two reasoning services check whether that holds. But some of the flawed object property expressions do not cause a logical inconsistency (merely an undesirable deduction), so one might argue that the compatibility services are ontological.
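A contrived sketch of the kind of flaw SubProS is meant to catch (the vocabulary is made up for the example): the sub-property's domain and range should be subsumed by those of its super-property, and when they are not, a standard reasoner quietly produces an undesirable, yet perfectly consistent, classification.

    Declaration(Class(:PhysicalObject))
    Declaration(Class(:Organisation))
    Declaration(Class(:Person))
    Declaration(Class(:SportsClub))
    Declaration(ObjectProperty(:hasPart))
    Declaration(ObjectProperty(:hasMember))
    ObjectPropertyDomain(:hasPart :PhysicalObject)
    ObjectPropertyRange(:hasPart :PhysicalObject)
    ObjectPropertyDomain(:hasMember :Organisation)
    ObjectPropertyRange(:hasMember :Person)
    SubObjectPropertyOf(:hasMember :hasPart)
    SubClassOf(:SportsClub ObjectIntersectionOf(:Organisation ObjectSomeValuesFrom(:hasMember :Person)))

The domain of :hasMember (:Organisation) is not a subclass of the domain of :hasPart (:PhysicalObject), so the reasoner infers SubClassOf(:SportsClub :PhysicalObject): an undesirable deduction, but no inconsistency—unless :Organisation and :PhysicalObject are also declared disjoint, in which case :SportsClub becomes unsatisfiable even though the actual mistake is in the property hierarchy.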

The arguments so far

The descriptions in the previous paragraph contain implicit assumptions about logical vs. ontological reasoning, which I will spell out here. They are a synthesis of my own and other people's voiced opinions about it (the other people being, among others and in alphabetical order, Alessandro Artale, Arina Britz, Giovanni Casini, Enrico Franconi, Aldo Gangemi, Chiara Ghidini, Tommie Meyer, Valentina Presutti, and Michael Uschold). It goes without saying that they are my renderings of the arguments, and sometimes I state things a little more bluntly to make the point.

1. If it is not entailed by the (standard, DL/other logic) reasoning service, then it is something ontological.

Logic is not about the study of the truth, but about the relationship between the truth of one statement and that of another. Effectively, it doesn't matter what terms you have in the theory's vocabulary—be this simply A, B, C, etc., or an attempt to represent Apple, Banana, Citrus, etc. conformant to what those entities are in reality—as it uses truth assignments and the usual rules of inference. If you want some reasoning that helps make a distinction between a good and a bad formalisation of what you aim to represent (where both theories are consistent), then that's not the logician's business but instead is relegated to the domain of whatever it is that ontologists get excited about. A counter-argument raised to that was that the early logicians were, in fact, concerned with finding a way to formalize reality in the best way; hence, not only the syntax and semantics of the logic language, but also the semantics/meaning of the subject domain. A practical counter-example is that both Glimm et al. [6] and Welty [7] managed to 'hack' OntoClean into OWL and use standard DL reasoners for it to obtain the desired inferences, so, presumably, even OntoClean cannot be considered an ontological reasoning service after all?

2. Something 'meta' like OntoClean can/might be considered really ontological, but SubProS and ProChainS are 'extra-logical' and can be embedded like the extra-logical understanding of class subsumption, so they are logical reasoning services (for they are the analogue of class subsumption, but then for role inclusion axioms).

This argument has to do with the notion of a 'standard way' versus an 'alternative approach' to compute something, and the idea of having borrowed something from Ontology recently versus from mathematics and Aristotle somewhat longer ago. (Note: the notion of subsumption in computing was still being debated in the 1980s, and that debate got settled into what is now considered the established understanding of class subsumption.) We can simply apply the underlying principles for class-subclass relations to relationships (/object properties/roles). DL/OWL reasoners and the standard view assume that the role box/object property expressions are correct and are merely used to compute the class taxonomy. But why should I assume the role box is fine, even when I know this is not always the case? And why do I have to put up with a classification of some class elsewhere in the taxonomy (or an inconsistency) when the real mistake is in the role box, not in the class expression? Put differently, some distinction seems to have been drawn between 'meta' (second order?), 'extra' to indicate the assumptions built into the algorithms/procedures, and 'other, regular' like satisfiability checking that we have for all logical theories. Another argument raised was that the 'meta' stuff has to do with second-order logics, for which there are no good (read: sound and complete) reasoners.

3. Essentially, everything is logical, and services like OntoClean, SubProS, and ProChainS can be represented formally with some clearly and precisely defined inferencing rules, so there is no ontological reasoning; there are only logical reasoning services.

This argument made me think of the "logic is everywhere" mug I still have (a goodie from the ICCL 2005 summer school in Dresden). More seriously, though, this argument touches on some old philosophical debates about whether everything can indeed be formalized, and it presumes that any logic is fine and that computation doesn't matter. Further, it conflates the distinction, if any, between plain logical entailment, the notion of undesirable deductions (e.g., that a CarChassis is-a Perdurant [some kind of process]), and the modeling choices and preferences (recall the apple with a colour vs. the green object that has an apple shape). But maybe that conflation is fine and there is no real distinction (if so: why?).
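To make that last distinction concrete (a made-up fragment; the class and property names are invented, and both alternatives are perfectly satisfiable), the apple-with-a-colour option

    Declaration(Class(:Apple))
    Declaration(Class(:Green))
    Declaration(ObjectProperty(:hasColour))
    SubClassOf(:Apple ObjectSomeValuesFrom(:hasColour :Green))

and the green-object-with-an-apple-shape option

    Declaration(Class(:GreenObject))
    Declaration(Class(:AppleShape))
    Declaration(ObjectProperty(:hasShape))
    SubClassOf(:GreenObject ObjectSomeValuesFrom(:hasShape :AppleShape))

will both pass all the standard reasoning services without complaint; the argument that the first is the better representation of reality comes from ontological analysis (apples are objects that bear a colour quality), not from entailment.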

In my paper [5] and in the two presentations of it, I had stressed that SubProS and ProChainS were ontological reasoning services, because before that, I had tried but failed to convince logicians of the Type-I persuasion that there's something useful to those compatibility services and that they ought to be computed (currently, they are mostly not computed by the standard reasoners). Type-II adherents were plentiful at EKAW'12, and there were some at the OCM workshop. I encountered the most vocal Type-III adherent (a mathematician) at the OCM workshop. Then there were the indecisive ones and people who switched and/or became indecisive. At the moment of writing this, I still lean toward Type-II, but I'm open to better arguments.

References

[1] Keet, C.M., Artale, A.: Representing and reasoning over a taxonomy of part-whole relations. Applied Ontology, 2008, 3(1-2), 91–110.

[2] F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider (Eds). The Description Logics Handbook. Cambridge University Press, 2009.

[3] Pascal Hitzler, Markus Kroetzsch, Sebastian Rudolph. Foundations of Semantic Web Technologies. Chapman & Hall/CRC, 2009.

[4] Guarino, N. and Welty, C. An Overview of OntoClean. In S. Staab, R. Studer (eds.), Handbook on Ontologies, Springer Verlag 2009, pp. 201-220.

[5] Keet, C.M. Detecting and Revising Flaws in OWL Object Property Expressions. Proc. of EKAW'12. Springer LNAI vol. 7603, pp. 252-266.

[6] Birte Glimm, Sebastian Rudolph, and Johanna Volker. Integrated metamodeling and diagnosis in OWL 2. In Peter F. Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Z. Pan, Ian Horrocks, and Birte Glimm, editors, Proceedings of the 9th International Semantic Web Conference, volume 6496 of LNCS, pages 257-272. Springer, November 2010.

[7] Chris Welty. OntOWLclean: cleaning OWL ontologies with OWL. In B. Bennett and C. Fellbaum, editors, Proceedings of Formal Ontology in Information Systems (FOIS'06), pages 347-359. IOS Press, 2006.