Our ESWC17 demos: TDDonto2 and an OWL verbaliser for isiZulu

Besides the full paper on heterogeneous alignments for the 14th Extended Semantic Web Conference (ESWC’17) that will take place next week in Portoroz, Slovenia, we also managed to squeeze out two demo papers. You may already know of TDDonto2, with Kieren Davies and Agnieszka Lawrynowicz, which was discussed in an earlier post that has been updated with a tutorial video. It now has a demo paper as well [1], which describes the rationale and a few scenarios. The other demo, with Musa Xakaza and Langa Khumalo, is brand new, but the regular reader might have seen it coming: we finally managed to link the verbalisation patterns for certain Description Logic axiom types [2,3] to their counterparts in OWL ontologies. The tool takes as input an OWL ontology with an isiZulu vocabulary and the verbalisation algorithms, and out come the isiZulu sentences, be this in plain text for further processing or in a GUI for inspection by a domain expert [4]. There is a basic demo-screencast to show it’s all working.

The overall architecture may be of interest, for it deviates from most OWL verbalisers. It is shown in the following figure:

For instance, we use the Python-based OWL library Owlready, rather than a Java-based app, for Python is rather popular in NLP and the verbalisation algorithms may be used elsewhere as well. We made more such decisions with the aim of making whatever we did as multi-purpose usable as possible, like the list of nouns with their noun classes (surprisingly, and annoyingly, no such list is readily available yet; isizulu.net probably has it somewhere, but inaccessibly so), verb roots, and exceptions in pluralisation. (Problems with integrating the verbaliser with, say, Protégé will be interesting to discuss during the demo session!)
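For those who want to peek under the hood, the entry point of such a pipeline would look roughly as follows. This is a minimal sketch only, assuming Owlready2 (the current incarnation of Owlready) and a made-up ontology file name, not the demo’s actual code:

from owlready2 import get_ontology

# Load an ontology whose vocabulary is in isiZulu (hypothetical file name)
onto = get_ontology("file:///path/to/izulu-onto.owl").load()

# Walk the subclass axioms that feed the verbalisation algorithms
for cls in onto.classes():
    for superclass in cls.is_a:
        print(cls.name, "subClassOf", superclass)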

The text-based output doesn’t look as nice as the GUI, so I will show here only the latter, which is adorned with some annotations to illustrate that those verbalisation algorithms in the background are far from trivial templates:

For instance, while in English the universal quantification is always ‘Each’ or ‘All’ regardless of the named class quantified over, in isiZulu it depends on the noun class of the noun that is the name of the OWL class. In the figure above, izingwe ‘leopards’ is in noun class 10, so the ‘Each/All’ is Zonke; amavazi ‘vases’ is in noun class 6, so ‘Each/All’ then becomes Onke; and abantu ‘people’/’humans’ is in noun class 2, making it Bonke. There are 17 noun classes. They also determine the subject concords (SC, akin to conjugation) for the verbs, with zi- for noun class 10, a- for noun class 6, and ba- for noun class 2, to name a few. How this all works is described in [2,3]. We’ve implemented all those algorithms and integrated the pluraliser [5] to make it work. The source files are already available to check and play with; you can do so and ask us about them during the ESWC17 demo session, and/or have a look at the related outputs of the NRF-funded project Grammar Engine for Nguni natural language interfaces (GeNi).
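To give a flavour of the lookups involved, here is a toy fragment; the actual algorithms in [2,3] cover all 17 noun classes, the phonological conditioning, and the exceptions:

# Noun class determines the form of ‘Each/All’ and the subject concord (SC);
# only the three noun classes mentioned above are included in this toy table.
AGREEMENT = {   # noun class: (universal quantifier, subject concord)
    2:  ("Bonke", "ba-"),
    6:  ("Onke",  "a-"),
    10: ("Zonke", "zi-"),
}

def each_all(noun, noun_class):
    quantifier, _sc = AGREEMENT[noun_class]
    return quantifier + " " + noun

print(each_all("izingwe", 10))  # Zonke izingwe (‘all leopards’)
print(each_all("abantu", 2))    # Bonke abantu (‘all people’)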

 

References

[1] Davies, K., Keet, C.M., Lawrynowicz, A. TDDonto2: A Test-Driven Development Plugin for arbitrary TBox and ABox axioms. Extended Semantic Web Conference (ESWC’17), Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017. (demo paper)

[2] Keet, C.M., Khumalo, L. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, 2017, 51:131-157.

[3] Keet, C.M., Khumalo, L. On the verbalization patterns of part-whole relations in isiZulu. 9th International Natural Language Generation conference (INLG’16), 5-8 September, 2016, Edinburgh, UK. Association for Computational Linguistics, 174-183.

[4] Keet, C.M., Xakaza, M., Khumalo, L. Verbalising OWL ontologies in isiZulu with Python. 14th Extended Semantic Web Conference (ESWC’17). Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017. (demo paper)

[5] Byamugisha, J., Keet, C.M., Khumalo, L. Pluralising Nouns in isiZulu and Related Languages. 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing’16), Springer LNCS. April 3-9, 2016, Konya, Turkey.

New OWL files for the (extended) taxonomy of part-whole relations

Once upon a time (surely >6 years ago) I made an OWL file of the taxonomy of part-whole relations [1], which contains several parthood relations and a few meronymic-only ones that in natural language are considered ‘part’ but are not so according to mereology (like participation and membership). Some of these relations were defined with a specific domain and range that was a DOLCE category (it could just as well have been, say, GFO). Looking at it recently, I noticed it was actually a bit scruffy (but I’ll leave it here nonetheless), and more has happened in this area over the years. So, it was time for an update on contents and on design.

For the record on how it’s done and to serve, perhaps, as a comparison exercise on modelling, here’s what I did. First of all, I started over, so as to properly type the relations to DOLCE categories, with the DOLCE IRIs rather than duplicating them as DOLCE-category-with-my-IRI. As DOLCE is way too big and slows down reasoning, I made a module of DOLCE, called DOLCEmini, mainly by removing the irrelevant object properties, though re-adding the SOB, APO, and NAPO that are in D18 but not in DOLCE-lite from DLP3791. This reduced the file from DOLCE-lite’s 534 axioms, 37 classes, and 70 object properties, in SHI, to DOLCEmini’s 388 axioms, 40 classes, and 43 object properties, also in SHI, and I changed the ontology IRI to where DOLCEmini will be put online.
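That kind of pruning can be scripted. A minimal sketch with Owlready2, where the ontology IRI location and, in particular, the selection of properties to drop are made-up stand-ins for what is described above:

from owlready2 import get_ontology, destroy_entity

# Load DOLCE-lite (IRI as commonly published; adjust if it has moved)
dolce = get_ontology("http://www.loa-cnr.it/ontologies/DOLCE-Lite.owl").load()

# Hypothetical selection of object properties deemed irrelevant here
IRRELEVANT = {"immediate-relation", "immediate-relation-i"}

for prop in list(dolce.object_properties()):
    if prop.name in IRRELEVANT:
        destroy_entity(prop)  # removes the property and its axioms

dolce.save(file="DOLCEmini.owl", format="rdfxml")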

Then I created a new ontology, PW.owl, imported DOLCEmini, and added the taxonomy of part-whole relations from [1] right under owl:topObjectProperty, with domain and range axioms using the DOLCE categories as in the definitions, under part-whole. This was then extended with the respective inverses under whole-part, all the relevant proper part versions of them (with inverses), transitivity added for all (as the reasoner isn’t doing it [2]), annotations added, and then aligned to some DOLCE properties with equivalences. This brings it to 524 axioms and 79 object properties.
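The core of that property hierarchy, stripped of the DOLCE-typed domains and ranges and the annotations, amounts to declarations of the following kind; a fragment only, again in Owlready2 and with a made-up IRI:

from owlready2 import get_ontology, ObjectProperty, TransitiveProperty

pw = get_ontology("http://example.org/PW.owl")  # made-up IRI
with pw:
    class partOf(ObjectProperty, TransitiveProperty): pass
    class hasPart(ObjectProperty, TransitiveProperty):
        inverse_property = partOf
    class properPartOf(partOf, TransitiveProperty): pass   # proper parthood
    class hasProperPart(hasPart, TransitiveProperty):
        inverse_property = properPartOf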

I deprecated subquantityOf (annotated with ‘deprecated’ and subsumed by a new property ‘deprecated’). Several new stuff relations and their inverses (such as portions) were added and annotated. This brings the PW ontology to 574 axioms (356 logical axioms) and 92 object properties (effectively, for part-whole relations: 92 – 40 from DOLCE – 3 for deprecated = 49).

We had made an extension with mereotopology [3] (that file wasn’t great either, though it did the job nevertheless [4]), but it is one that not everybody may want to put up with, so a separate file was created, PWMT. PWMT imports PW (and thus also DOLCEmini) and was extended with the main mereotopological relations from [3], with the relevant annotations added. I skipped property disjointness axioms, because they don’t go well with transitivity, which I assumed to be more important. This makes PWMT one of 605 (380 logical) axioms and 103 object properties, with, effectively, for parts: 103 – 40 from DOLCE – 3 for deprecated – 1 for connection = 59 object properties.

That’s a lot of part-whole relations, but fear not. The ‘Foundational Ontology and Reasoner enhanced axiomatiZAtion’ (FORZA) method, which incorporates the Guided ENtity reuse and class Expression geneRATOR (GENERATOR) [4], describes a usable approach for how that can work out well, and it has a tool for the earlier version of the OWL file. FORZA uses an optional decision diagram for the DOLCE categories as well as the automated reasoner, so that it can select and propose to you those relations that, if used in an axiom, are guaranteed not to lead to an inconsistency that would be due to the object property hierarchy or its domain and range axioms. (I’ll write more about it in the next post.)

Ah well, even if the OWL files are not used, it was still a useful exercise in design, and at least I’ll have a sample case for next year’s ontology engineering course on ‘before’ and ‘after’ regarding questionable implementation and (relatively) good implementation, without needing to resort to criticising other OWL files… (hey, even the good and widely used ontologies have a bunch of pitfalls, whose number is not statistically significantly different from ontologies made by novices [5]).

 

References

[1] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.

[2] Keet, C.M. Detecting and Revising Flaws in OWL Object Property Expressions. 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW’12), Oct 8-12, Galway, Ireland. Springer, LNAI 7603, 252-266.

[3] Keet, C.M., Fernandez-Reyes, F.C., Morales-Gonzalez, A. Representing mereotopological relations in OWL ontologies with OntoPartS. 9th Extended Semantic Web Conference (ESWC’12), Simperl et al. (eds.), 27-31 May 2012, Heraklion, Crete, Greece. Springer, LNCS 7295, 240-254.

[4] Keet, C.M., Khan, M.T., Ghidini, C. Ontology Authoring with FORZA. 22nd ACM International Conference on Information and Knowledge Management (CIKM’13). ACM proceedings, pp569-578. Oct. 27 – Nov. 1, 2013, San Francisco, USA.

[5] Keet, C.M., Suarez-Figueroa, M.C., Poveda-Villalon, M. Pitfalls in Ontologies and TIPS to Prevent Them. In: Knowledge Discovery, Knowledge Engineering and Knowledge Management: IC3K 2013 Selected papers. Fred, A., Dietz, J.L.G., Liu, K., Filipe, J. (Eds.). Springer, CCIS 454, pp. 115-131. 2015.

An exhaustive OWL species classifier

Students enrolled in my ontology engineering course have to do a “mini-project” on a particular topic, chosen from a list of topics, such as ontology quality, verbalisations, or language features, and it may be theoretical or software development-oriented. In terms of papers, the most impressive result was OntoPartS, which resulted in an ESWC2012 paper with the two postgraduate students [1], but quite a few other useful results have come out of it as well over the past 7 years that I have been teaching it in one form or another. This year’s top project in terms of understanding the theory, creativity to do something with it that hasn’t been done before, and working software using Semantic Web technologies was the “OWL Classifier” by Aashiq Parker, Brian Mc George, and Muhummad Patel.

The OWL classifier classifies an OWL ontology into its ‘species’, which can be any of the 8 specified in the standards, i.e., the 3 OWL 1 ones and the 5 OWL 2 ones. It also gives information on the DL ‘alphabet soup’—which axioms use which language feature with which letter, and an explanation of the letters—and it reports which axioms are the ones that violate a particular species. An example is shown in the following screenshot, with an exercise ontology on phone points:

(Screenshot: the OWL Classifier on the phone points exercise ontology.)

The students’ motivation to develop it was that they had to learn about DLs and the OWL species, but Protégé 4.x and 5.x don’t tell you the species, and the interfaces have only a basic, generic explanation for the DL expressivity. I concur. And it has gotten worse with Protégé 5.0: if an ontology is outside OWL 2 DL, it still shows the ‘old’ DL expressivity plus an easy-to-overlook tiny red triangle in the top-right corner once the reasoner is invoked (using HermiT 1.3.8), or a cryptic “internal reasoner error” message (Pellet), whereas with Protégé 4.x you at least got a pop-up box complaining about the ‘non-simple role…’ issues. Compare that with neat feedback like this:

(Screenshot: the OWL Classifier’s feedback on the species and the violating axioms.)

It is also very ‘sensitive’—more so than Protégé alone. Any remote ontology imports have to be available at the location specified with the IRI. Violations due to wrong datatype usage are a known issue with the OWL Reasoner Evaluation set of ontologies, and we bumped into them with the TDD testing as well. The tool doesn’t accept the invalid ones (wrong datatypes: one can select any XML datatype in Protégé, but the OWL standard doesn’t support them all). In addition, a language such as OWL 2 QL has further restrictions on the types of datatypes allowed. (It is also not trivial to figure out manually whether some ontology is suitable for OBDA or not.) So I tried one from the Ontop website’s examples, presumably in OWL 2 QL:

(Screenshot: the fishdelish example ontology from the Ontop website in the OWL Classifier.)

Strictly speaking, it isn’t in OWL 2 QL! The OWL 2 QL profile does have xsd:integer as datatype [2], but not xsd:int, as, and I quote the standard, “the intersection of the value spaces of any set of these datatypes [including xsd:integer but not xsd:int, mk] is either empty or infinite, which is necessary to obtain the desired computational properties”. [UPDATE 24-6, thanks to Martin Rezk:] The main toolset for OWL 2 QL, Ontop, actually does support xsd:int and a few other datatypes beyond the standard (e.g., also float and boolean). There is similar syntax fun to be had with the pizza ontology: the original one is indeed in OWL DL, but if you open the file in Protégé 5 and save it, it is not in OWL DL anymore but in OWL 2 DL, for the save operation snuck in an owl#NamedIndividual. Click on the thumbnails below to see the before-and-after in the OWL classifier. This is not an increase in expressiveness—both are in SHOIN—just syntax and tooling.

(Thumbnails: the pizza ontology before saving (OWL DL) and after saving in Protégé 5 (OWL 2 DL), as reported by the OWL Classifier.)

The OWL Classifier can thus classify both OWL 1 and OWL 2 ontologies, which it does through a careful orchestration of two OWL APIs: v1.4.3 was the last one to support OWL 1 species checking, whereas for the OWL 2 ontologies, the latest version is used (v4.2.3). The jar file and the source code are freely available on github for anyone to use and to take further. Turning it into a Protégé plugin very likely will make at least next year’s ontology engineering students happy. Comments, questions, and suggestions are welcome!
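As a closing aside on those datatypes: a quick first check of which datatypes an ontology uses takes only a few lines of script. A rough sketch with rdflib, where the file name is hypothetical and the allow-list is only a partial excerpt of the OWL 2 QL datatypes listed in the Profiles specification [2]:

from rdflib import Graph, Literal
from rdflib.namespace import RDFS, XSD

g = Graph().parse("fishdelish.owl", format="xml")  # hypothetical file name

# Partial allow-list from the OWL 2 QL profile [2]; note xsd:int is NOT in it
QL_OK = {XSD.integer, XSD.decimal, XSD.string, XSD.dateTime, XSD.anyURI}

# Collect XSD datatypes used as property ranges and as literal datatypes
used = {o for o in g.objects(None, RDFS.range) if str(o).startswith(str(XSD))}
used |= {o.datatype for _, _, o in g if isinstance(o, Literal) and o.datatype}

for dt in sorted(used - QL_OK):
    print("not in the (partial) OWL 2 QL datatype list:", dt)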

 

References

[1] Keet, C.M., Fernandez-Reyes, F.C., Morales-Gonzalez, A. Representing mereotopological relations in OWL ontologies with OntoPartS. 9th Extended Semantic Web Conference (ESWC’12), Simperl et al. (eds.), 27-31 May 2012, Heraklion, Crete, Greece. Springer, LNCS 7295, 240-254.

[2] Boris Motik, Bernardo Cuenca Grau, Ian Horrocks, Zhe Wu, Achille Fokoue, Carsten Lutz, eds. OWL 2 Web Ontology Language: Profiles. W3C Recommendation, 11 December 2012 (2nd ed.).

Ontology authoring with a Test-Driven Development approach

Ontology development has its processes and procedures—conducting a domain analysis, the implementation, maintenance, and so on—which have been developed since the mid-1990s. These high-level, information systems-like methodologies don’t tell you what knowledge to add to the ontology and how, however; i.e., for the ontology authoring stage it is ‘just do it’, without the how. There are, perhaps surprisingly, few methods for that; notably, FORZA uses domain and range constraints and the reasoner to propose a suitable part-whole relation [1], and advocatus diaboli zooms in on disjointness constraints among classes [2]. In a way, they all use what can be considered as tests on the ontology before adding an axiom. This smells of notions that are well known in software engineering: unit tests, test-driven development (TDD), and Agile, with the latter two relying on different methodologies cf. the earlier ones (waterfall, iterative, and similar).

Some of those software engineering approaches have been adjusted and adopted for ontology engineering; e.g., the Agile-inspired OntoMaven that uses the standard reasoning services as tests [3], eXtreme Design with ODPs [4] that have been prepared previously, and Vrandecic and Gangemi’s early exploration of possibilities for unit tests [5]. Except for Warrender & Lord’s TDD tests for subsumption [6], they are all test-last approaches (design, author, test), rather than test-first approaches (test, author, test). The test-first approach is called test-driven development in software engineering [7], which has recently been ported to conceptual modelling as well [8]. TDD is a step up from an ‘add something and let’s see what the reasoner says’ stance, because one has to think and check upfront before doing. (Competency questions can help with that, but they don’t say how to add the knowledge.) The question that arises, then, is what such a TDD approach would look like for ontology development. Some of the more detailed questions to be answered are (from [9]):

  • Given the TDD procedure for programming in software engineering, what does that mean for ontologies during ontology authoring?
  • TDD uses mock objects for incomplete parts of the code, and mainly for methods; is there a parallel to it in ontology development, or can that aspect of TDD be ignored?
  • In what way, and where, (if at all) can this be integrated as a methodological step in existing ontology engineering methodologies?

We—Agnieszka Lawrynowicz from Poznan University of Technology in Poland and I—answer these questions in our paper that was recently accepted at the 13th Extended Semantic Web Conference (ESWC’16), to be held in May 2016 in Crete, Greece: Test-Driven Development of Ontologies [9]. In short: we specified tests for the OWL 2 DL language features and basic types of axioms one can add, implemented it as a Protégé plug-in, and tested it on performance with 67 ontologies (result: great). The tool and test data can be downloaded from Agnieszka’s ARISTOTELES project page.

Now slightly less brief than that one-liner. The tests—like for class subsumption, domain and range, or a property chain—can be specified in two principal ways, called TBox tests and ABox tests. The TBox tests rely solely on knowledge represented in the TBox, whereas for the ABox tests, mock individuals are explicitly added for a particular TBox axiom. Take, for instance, the simple existential quantification, whose tests are shown below, where the TBox query is denoted in SPARQL-OWL notation.

(Figures: the TBox test Teq and the ABox test Teq′ for the simple existential quantification; see [9] for the definitions.)

For the implementation, there is (1) a ‘wrapping’ component that includes creating the mock entities, checking the condition (line 2 of the TBox test and line 4 of the ABox test in [9]), returning the TDD test result, and cleaning up afterward; and (2) the interaction with the ontology that does the actual testing, such as querying the ontology and class and instance classification. There are several options to realise component (2) of the TBox TDD tests. The query can be sent to, e.g., OWL-BGP [10], which uses SPARQL-OWL and HermiT to return the answer (line 1), but one also could use just the OWL API and send it to the automated reasoner, among other options.
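To make the idea concrete: the following is not TDDonto’s code, but a rough Python analogue of a TBox-style TDD test for the simple existential quantification, taking the ‘OWL API + automated reasoner’ route with Owlready2 and its bundled HermiT; all vocabulary is made up, and the named helper class plays the role of a mock entity that the wrapping component would clean up afterward:

from owlready2 import *

onto = get_ontology("http://example.org/uni.owl")  # made-up example
with onto:
    class Course(Thing): pass
    class Professor(Thing): pass
    class teaches(Professor >> Course): pass
    # Named stand-in for the class expression under test, so that plain
    # classification can decide the subsumption:
    class TeachesSomeCourse(Thing):
        equivalent_to = [teaches.some(Course)]

def test_passes():
    sync_reasoner(debug=0)  # classify with the bundled HermiT (needs Java)
    return TeachesSomeCourse in Professor.ancestors()

print(test_passes())  # False: the axiom is not in the ontology yet
with onto:
    Professor.is_a.append(teaches.some(Course))  # author the knowledge
print(test_passes())  # True: the TDD test now passes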

Because OWL-BGP and the other options didn’t cater for the tests with OWL’s object properties, such as a property chain (so a full implementation would require extending current tools), we decided to first examine the performance of the different options for (2) for those tests that could be implemented with current tools, so as to get an idea of which approach would be the best to extend, rather than gambling on one, implementing all, and going on with user testing. This TDD tool got the unimaginative name TDDonto and can be installed as a Protégé plugin. We tested the performance with 67 ontologies from the TONES ontology repository. The outcome of that is that the TBox-based SPARQL-OWL approach is faster than the ABox TDD tests (except for class disjointness; see figure below), and the OWL API + reasoner option for the TBox TDD tests is faster still in general. These differences are bigger with larger ontologies (see the paper for details).

Test computation times per test type (ABox versus TBox-based SPARQL-OWL) and per the kind of the tested axiom (source: [9]).

Finally, can this TDD simply be ‘plugged into’ one of the existing methodologies? No. As with TDD for software engineering, it has its own sequence of steps. An initial sketch is shown in the figure below. It outlines only the default scenario, where the knowledge to be added wasn’t there already and adding it doesn’t result in conflicts.

Sketch of a possible ontology lifecycle that focuses on TDD, and the steps of the TDD procedure summarised in key terms (source: [9]).

The “select scenario” has to do with what gets fed into the TDD tests, and therewith also where and how TDD could be used. We specified three of them: either (a) the knowledge engineer provides an axiom, (b) a domain expert fills in some template (e.g., the ‘all-some’ one) and the software generates the axiom for the domain ontology (e.g., Professor ⊑ ∃ teaches.Course; see the sketch below), or (c) someone wrote a competency question that is either manually or automatically converted into an axiom. The “refactoring” could include a step for removing a property from a subclass when it is added to its superclass. The “regression testing” considers previous tests and what to do when any conflicts may have arisen (which may need an interaction with step 5).
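For scenario (b), such template-based generation could be as simple as the following hypothetical helper, with Owlready2 again and reusing the vocabulary of the sketch above:

# Hypothetical ‘all-some’ template instantiation for scenario (b)
def add_all_some(onto, sub_class, prop, filler):
    """Assert sub_class ⊑ ∃ prop.filler in onto."""
    with onto:
        sub_class.is_a.append(prop.some(filler))

add_all_some(onto, Professor, teaches, Course)  # Professor ⊑ ∃ teaches.Course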

There is quite a bit of work yet to be done on TDD for ontologies, but at least there is now a first comprehensive basis to work from. Both Agnieszka and I plan to go to ESWC’16, so I hope to see you there. If you want more details, or want to read the tests with a nicer layout than how they are presented in the ESWC16 paper, then have a look at the extended version [11], contact us, or leave a comment below.

 

References

[1] Keet, C.M., Khan, M.T., Ghidini, C. Ontology Authoring with FORZA. 22nd ACM International Conference on Information and Knowledge Management (CIKM’13). ACM proceedings. Oct. 27 – Nov. 1, 2013, San Francisco, USA. pp569-578.

[2] Ferre, S. and Rudolph, S. (2012). Advocatus diaboli – exploratory enrichment of ontologies with negative constraints. In ten Teije, A. et al., editors, 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW’12), volume 7603 of LNAI, pages 42-56. Springer. Oct 8-12, Galway, Ireland

[3] Paschke, A., Schaefermeier, R. Aspect OntoMaven – aspect-oriented ontology development and configuration with OntoMaven. Tech. Rep. 1507.00212v1, Free University of Berlin (July 2015)

[4] Blomqvist, E., Sepour, A.S., Presutti, V. Ontology testing — methodology and tool. In: Proc. of EKAW’12. LNAI, vol. 7603, pp. 216-226. Springer (2012)

[5] Vrandecic, D., Gangemi, A. Unit tests for ontologies. In: OTM workshops 2006. LNCS, vol. 4278, pp. 1012-1020. Springer (2006)

[6] Warrender, J.D., Lord, P. How, What and Why to test an ontology. Technical Report 1505.04112, Newcastle University (2015), http://arxiv.org/abs/1505.04112

[7] Beck, K.: Test-Driven Development: by example. Addison-Wesley, Boston, MA (2004)

[8] Tort, A., Olive, A., Sancho, M.R. An approach to test-driven development of conceptual schemas. Data & Knowledge Engineering 70, 1088-1111 (2011)

[9] Keet, C.M., Lawrynowicz, A. Test-Driven Development of Ontologies. 13th Extended Semantic Web Conference (ESWC’16). Springer LNCS. 29 May – 2 June, 2016, Crete, Greece. (in print)

[10] Kollia, I., Glimm, B., Horrocks, I. SPARQL Query Answering over OWL Ontologies. In: Proc. of ESWC’11. LNCS, vol. 6643, pp. 382-396. Springer (2011)

[11] Keet, C.M., Lawrynowicz, A. Test-Driven Development of Ontologies (extended version). Technical Report, Arxiv.org http://arxiv.org/abs/1512.06211. Dec 19, 2015.

Reblogging 2013: Release of the (beta version of the) foundational ontology library ROMULUS

From the “10 years of keetblog – reblogging: 2013”: regarding research, it was also not easy to choose. Here, then, are the first results of Zubeida Khan’s MSc thesis on the foundational ontology library ROMULUS, this being the first post of several on the theoretical and methodological aspects of it (KCAP’13 poster, KEOD’13 paper, MEDI’13 paper, book chapter, EKAW’14 paper) and on her winning the award for best Masters from the CSIR. The journal paper on ROMULUS was published just last week in the Journal on Data Semantics, in a special issue edited by Alfredo Cuzzocrea.

Release of the (beta version of the) foundational ontology library ROMULUS; April 4

—————-

With the increase in ontology development and networked ontologies, both good ontology development and ontology matching for ontology linking and integration are becoming a more pressing issue. Many contributions have been proposed in these areas. One of the ideas to tackle both—supposedly in one fell swoop—is the use of a foundational ontology. A foundational ontology aims to (i) serve as a building block in ontology development by providing the developer with guidance on how to model the entities in a domain, and (ii) serve as a common top level when integrating different domain ontologies, so that one can identify which entities are equivalent according to their classification in the foundational ontology. Over the years, several foundational ontologies have been developed, such as DOLCE, BFO, GFO, SUMO, and YAMATO, which have been used in domain ontology development. The problem that has arisen now is how to link domain ontologies that are mapped to different foundational ontologies.

To be able to do this in a structured fashion, the foundational ontologies have to be matched somehow, and ideally there has to be some software support for this. As early as 2003, this issue was foreseen already and the idea of a “WonderWeb Foundational Ontologies Library” (WFOL) proposed, so that—in the ideal case—different domain ontologies can commit to different but systematically related (modules of) foundational ontologies [1]. However, the WFOL remained just an idea, because it was not clear how to align those foundational ontologies and, at the time, most foundational ontologies were still under active development, OWL was yet to be standardised, and there was scant stable software infrastructure. Within the Semantic Web setting, solving the implementation issues is within reach yet not realised, and the alignment is still to be carried out systematically (beyond the few partial comparisons in the literature).

We’re trying to solve these theoretical and practical shortcomings through the creation of the first such online library of machine-processable, aligned and merged foundational ontologies: the Repository of Ontologies for MULtiple USes, ROMULUS. This version contains alignments, mappings, and merged ontologies for DOLCE, BFO, and GFO, and some modularised versions thereof, as a start. It also has a section on logical inconsistencies; i.e., entities that were aligned manually and/or automatically and seemed to refer to the same thing—e.g., a mathematical set, a temporal region—actually turned out not to be the same (at least from a logical viewpoint), due to other ‘interfering’ axioms in the ontologies. What one should be doing with those is a separate issue, but at least it is now clear where the matching problems really are, down to the nitty-gritty entity level.

We performed a small experiment on the evaluation of the mappings (thanks to participants from DERI, Net2 funds, and Aidan Hogan), and we would like to have more feedback on the alignments and mappings. It is one thing that we, or some alignment tool, aligned two entities; another that asserting an equivalence ends up logically consistent (hence, mapped) or inconsistent; and yet another what you think of the alignments, especially the ontology engineers among you. You can participate in the evaluation: you will get a small set of a few alignments at a time, and then you decide whether you agree, partially agree, or disagree with each, are unsure about it, or skip it if you have no clue.

Finally, ROMULUS also has a range of other features, such as ontology selection, a high-level comparison, browsing the ontologies through WebProtégé, a verbalisation of the axioms, and metadata. It is the first online library of machine-processable, modularised, aligned, and merged foundational ontologies around. A poster/demo paper [2] was accepted at the Seventh International Conference on Knowledge Capture (K-CAP’13), and papers describing details have been submitted and are in the pipeline. In the meantime, if you have comments and/or suggestions, feel free to contact Zubeida or me.

References

[1] Masolo, C., Borgo, S., Gangemi, A., Guarino, N., Oltramari, A. Ontology library. WonderWeb Deliverable D18 (ver. 1.0, 31-12-2003). (2003) http://wonderweb.semanticweb.org.

[2] Khan, Z., Keet, C.M. Toward semantic interoperability with aligned foundational ontologies in ROMULUS. Seventh International Conference on Knowledge Capture (K-CAP’13), ACM proceedings. 23-26 June 2013, Banff, Canada. (accepted as poster & demo with short paper)

OBDA/I Example in the Digital Humanities: food in the Roman Empire

A new installment of the Ontology Engineering module is about to start for the computer science honours students who selected it, so, in preparation, I was looking around for new examples of what ontologies and Semantic Web technologies can do for you, and that are at least somewhat concrete. One of those examples has an accompanying paper that is about to be published (can it be more recent than that?), which is on the production and distribution of food in the Roman Empire [1]. Although perhaps not many people here in South Africa might care about what happened in the Mediterranean basin some 2000 years ago, it is a good showcase of what one perhaps also could do here with historical and archaeological information (e.g., an inter-university SA project on digital humanities started off a few months ago, and several academics and students at UCT contribute to the Bleek and Lloyd Archive of |xam (San) cultural heritage, among others). And the paper is (relatively) very readable also for the non-expert.

 

So, what is it about? Food was stored in pots (more precisely: amphorae) that had engravings on them with text about who, what, where, etc., and a lot of that has been investigated, documented, and stored in multiple resources, such as in databases. None of the resources covers all the data points, but to advance research and understanding about it and about food trading systems in general, it has to be combined somehow and made easily accessible to the domain experts. That is, essentially, it is an instance of a data access and integration problem.

There are a couple of principal approaches to address that, usually done by an Extract-Transform-Load of each separate resource into one database or digital library, and then putting a web-based front-end on top of it. There are many shortcomings to that solution, such as having to repeat the ETL procedure upon updates in the source database, a single point of control, and the typically canned (i.e., fixed) queries of the interface. A more recent approach, of which the technologies are finally maturing, is Ontology-Based Data Access (OBDA) and Ontology-Based Data Integration (OBDI). I say ‘finally’ here, as I can still remember very well the predecessors we struggled with some 7-8 years ago [2,3] (informally here, here, and here), and ‘maturing’, as the software has become more stable, has more features, and some of the things we had to do manually back then have been automated now. The general idea of OBDA/I applied to the Roman Empire food system is shown in the figure below.

OBDA in the EPnet system (Source: [1])

There are the data sources, which are federated (one ‘middle layer’, though still at the implementation level). The federated interface has mapping assertions to elements in the ontology. The user then can use the terms of the ontology (classes and their relations and attributes) to query the data, without having to know how the data are stored and without having to write page-long SQL queries. For instance, a query “retrieve inscriptions on amphorae found in the city of ‘Mainz’ containing the text ‘PNN’” would use just the terms in the ontology, say, Inscription, Amphora, City, found in, and inscribed on, plus any value constraint added (like the PNN), and the OBDA/I system takes care of the rest.
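Spelled out, that request could look like the following SPARQL query, posed here from Python with SPARQLWrapper; the endpoint URL and the vocabulary names are hypothetical stand-ins, not the actual EPnet ones:

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/epnet/sparql")  # made-up endpoint
sparql.setQuery("""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX : <http://example.org/epnet#>
SELECT ?insc WHERE {
  ?insc a :Inscription ; :inscribedOn ?amph ; :hasText ?txt .
  ?amph a :Amphora ; :foundIn ?city .
  ?city a :City ; rdfs:label "Mainz" .
  FILTER (CONTAINS(?txt, "PNN"))
}""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["insc"]["value"])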

Interestingly, the authors of [1]—admittedly, three of them are former colleagues from Bolzano—used the same approach to setting up the ontology component as we did for [3]. While we will use the Protégé ontology development environment in the OE module, it is not the best modelling tool with which to overcome the knowledge acquisition bottleneck. The authors modelled together with the domain experts in the much more intuitive ORM language and the NORMA tool, and first represented whatever needed to be represented. This also included reuse of relevant related ontologies and non-ontology material, and modularising it for better knowledge management, thereby reducing cognitive overload. A subset of the resultant ontology was then translated into the Web Ontology Language OWL (more precisely: OWL 2 QL, a tractable profile of OWL 2 DL), which is what is actually used in the OBDA system. We did that manually back then; now this can be done automatically (yay!).

Skipping here over the OBDI part and considering it done, the main third step in setting up an OBDA system is to link the data to the elements in the ontology. This is done in the mapping layer, essentially in the form “TermInTheOntology <- SQLqueryOverTheSource”. Abstracting from the current syntax of the OBDA system and simplifying the query for readability (see the real one in the paper), an example would thus have the following make-up to retrieve all amphorae of the Dressel 1 type, named Dressel1Amphora in the ontology, from all the data sources in the system:

Dressel1Amphora <-
    SELECT ic.id
    FROM ic JOIN at ON at.carrier = ic.id
    WHERE at.type = 'DR1'

Or some such SQL query (typically larger than this one). This takes a bit of time to do, but it has to be done only once, for these mappings are stored in a separate mapping file.

The domain expert, then, when wanting to know about the Dressel 1 amphorae in the system, has to ask only ‘retrieve all Dressel1 amphorae’, rather than creating the SQL query, thus being oblivious of which tables and columns are involved in obtaining the answer, and of the fact that some data entry person at some point had mysteriously decided not to use ‘Dressel1’ but his own abbreviation ‘DR1’.

The actual ‘retrieve all Dressel1 amphorae’ is then a SPARQL query over the ontology, e.g.,

SELECT ?x WHERE { ?x rdf:type :Dressel1Amphora . }

which is surely shorter and therefore easier to handle for the domain expert than the SQL one. The OBDA system (-ontop-) takes this query and reasons over the ontology to see whether the query can be answered directly by it without consulting the data, or else can be rewritten given the other knowledge in the ontology (it can; see example 5 in the paper). The outcome of that process then consults the relevant mappings. From those, the whole SQL query is constructed, which is sent to the (federated) data source(s), which process the query as any relational database management system does, and the data are returned to the user interface.

 

It is, perhaps, still unpleasant that domain experts have to put up with another query language, SPARQL, as the paper notes as well. Some efforts have gone into sorting out that ‘last mile’, such as using a (controlled) natural language to pose the query or reusing that original ORM diagram in some way, but more needs to be done. (We tried the latter in [3]; that proof-of-concept worked with a neutered version of ORM, and we have screenshots and videos to prove it, but in working on extensions and improvements, a new student uploaded buggy code onto the production server, so that online version doesn’t work anymore; we didn’t roll back and reinstall an older version, with me having moved to South Africa and the original student-developer, Giorgio Stefanoni, away studying for his MSc.)

 

Note to OE students: This is by no means all there is to OBDA/I, but hopefully it has given you a bit of an idea. Read at least sections 1-3 of paper [1], and if you want to do an OBDA mini-project, then read also the rest of the paper and then Chapter 8 of the OE lecture notes, which discusses in a bit more detail the motivations for OBDA and the theory behind it.

 

References

[1] Calvanese, D., Liuzzo, P., Mosca, A., Remesal, J., Rezk, M., Rull, G. Ontology-Based Data Integration in EPNet: Production and Distribution of Food During the Roman Empire. Engineering Applications of Artificial Intelligence, 2016. To appear.

[2] Keet, C.M., Alberts, R., Gerber, A., Chimamiwa, G. Enhancing web portals with Ontology-Based Data Access: the case study of South Africa’s Accessibility Portal for people with disabilities. Fifth International Workshop OWL: Experiences and Directions (OWLED 2008), 26-27 Oct. 2008, Karlsruhe, Germany.

[3] Calvanese, D., Keet, C.M., Nutt, W., Rodriguez-Muro, M., Stefanoni, G. Web-based Graphical Querying of Databases through an Ontology: the WONDER System. ACM Symposium on Applied Computing (ACM SAC 2010), March 22-26 2010, Sierre, Switzerland. ACM Proceedings, pp1389-1396.

Applied Ontology article on a framework for ontology modularity

I intended to post this writeup to coincide with the official publication of the 10th anniversary edition of the Applied Ontology journal, due to go in print this month, but I have less patience than I thought. The reason for this is that my PhD student, Zubeida Khan, and I got a paper accepted for the anniversary edition, which is online in the issue preprint as An empirically-based framework for ontology modularity [1]. It was one of those I’m-not-sure-but-let’s-be-bold-and-submit-anyway papers, with Zubeida as main author. It is her first ISI-indexed journal article, and congrats to that! (Article bean counting is [somewhat/very] important in academia in South Africa.) UPDATE (22-12): from the editorial by Guarino & Musen: it was one of the 2 papers accepted out of the 7 submitted, and IOS Press has awarded a prize for it.

So, what is the paper about? As the blog post’s title suggests: ontology modules. The first part is a highly structured and comprehensive literature review to figure out what all the parameters for ontology modularisation are, which properties modules have, and so on. The second part takes a turn to an experimental approach, where almost 200 ontology modules are classified according to those parameters. Both seek to answer questions like: “What are the use-cases, techniques, types, and annotation features that exist for modules? How do module types differ with respect to certain use-cases? Which techniques can we use to create modules of a certain type? Which techniques result in modules with certain annotation features?” Answers to those can be found in Section 6, with the very short version of the answer to the first question below (explanations in the paper):

  • use-cases: maintenance, reasoning, validation, processing, comprehension, collaborative efforts, and reuse.
  • techniques: graph partitioning, modularity maximisation, hierarchical clustering, locality-based modularity, query-based modularity, semantic-based abstraction, a priori modularity, and manual modularity.
  • types: ODPs, subject domain-based, isolation branch, locality, privacy, domain coverage, ontology matching, optimal reasoning, axiom abstraction, entity type, high-level abstraction, weighted, expressiveness sub-language, and expressiveness feature modules.
  • annotation features: seed signature, information removal, abstraction (breadth and depth), refinement, stand-alone, source ontology, proper subset, imports, overlapping, mutual exclusion, union equivalence, partitioning, inter-module interaction, and pre-assigned number of modules.

This, then, feeds into the third part of the paper, which puts the two previous ones together into a framework for ontology modularity. It links the various dimensions (groups of parameters) based on the dependencies that surfaced from the literature review and the analysis of modules, and it proposes how to proceed in some modularisation scenario. That is, it answers the other three main questions. Those are easier to show in a figure, imo, like the following one on the dependencies between module types and the techniques to create them:

Dependencies between type of module and technique to create the module (Source: [1])

With it, you’d be able to answer a case like “Given that we wish to create an ontology module with a certain purpose or use-case in mind, which modularity type of module could this result in?”, and, continuing from that, which technique to use and which properties such a module will have.

Space limitations meant the paper could have only one illustrative example, with the Symptom Ontology, but the general idea hopefully will be clear nevertheless. At the moment, before the official print, the article is still behind a paywall if you have neither institutional nor IAOA access, but feel free to contact either Zubeida or me for a copy.

References

[1] Khan, Z.C., Keet, C.M. An empirically-based framework for ontology modularity. Applied Ontology, 2015, in print (25p). DOI: 10.3233/AO-150151.