Aligning different relations: the case of part-whole relations—LDK2017

Despite the best intentions, I did not get around to writing a post on the paper that I presented last week at the First International Conference on Language, Data and Knowledge 2017, 19-20 June, Galway, Ireland, and now Paul Groth has ‘beaten’ me to it by writing a nice conference report. On the bright side, it is an opportunity to say upfront that I really enjoyed the conference and look forward to the next edition in 2019. The ESWC’17 organisers might be slightly disappointed that there was no special track on the multilingual semantic web after all, but I did get the distinct impression that the LDK17 authors might just all have gambled on LDK17—an opportunity to binge two days on all things natural language & Semantic Web—rather than on one track at an overpriced conference (despite the allure of it being A-rated).

So, what was my paper about that could have been submitted to either? I ended up struggling with—and solving—an issue of aligning OWL object properties that were not simple 1:1 mappings, in a similar scope as our ESWC17 paper (introduced here) [4], but with a few more complications. The complications were due to the different conceptualisations of part-whole relations, and to the requirement to figure out what to do with an object property (relation, relationship) that does not have a neat, single label, and therewith fits neither the common OWL modelling paradigm nor the recently agreed-upon ontolex-lemon model for linguistic annotations.

The start of all this sounded nice and doable: we need to generate natural language for healthcare, using, e.g., SNOMED CT, in local languages in South Africa, focussing on the largest one, being isiZulu. Medical terminologies are riddled with part-whole relations, so we sought to address that one (simple existentials already having been solved), availing of a standard list of part-whole relations (e.g. [1]). That turned out to be a non-trivial exercise, but doable eventually [2]. What wasn’t addressed in [2] was that some ‘common’ part-whole relations, such as membership and containment, weren’t like that in isiZulu, at all. Moreover, it wasn’t just a language issue, but ontological as well. The LDK17 paper “Representing and aligning similar relations: parts and wholes in isiZulu vs English” [3] describes this in some detail.

Here’s a (simplified) list of (assumed to be) common part-whole relations, which takes into account both transitivity differences and domain and range:

Now here’s the one based on the isiZulu language and some ontological analysis of that:

That is: there are both generalisations—some distinctions are not made—and specialisations—some distinctions are made here but not elsewhere. For instance, ‘musician is part of some orchestra’ and ‘heart is part of some human’ (or vice versa) are all described in the same way (ingxenye ‘part of’ and SC+CONJ for ‘has part’ [more about that below]). Yet there is a difference between an individual (e.g., a voter) participating in some process and a collective (e.g., the electorate) participating in a process, or vice versa. The paper describes this more precisely, going into some detail regarding the differences in categories of domain and range and the consequences for the transitivity of mereological parthood.
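To make the domain/range and transitivity point a bit more concrete, here is a minimal sketch in Python with Owlready2 of how such distinctions typically end up in OWL; the class and property names are illustrative placeholders of my own, not the vocabulary of [1] or [3].

```python
from owlready2 import get_ontology, Thing, ObjectProperty, TransitiveProperty

onto = get_ontology("http://example.org/pw-demo.owl")  # placeholder IRI
with onto:
    class Process(Thing): pass
    class PhysicalObject(Thing): pass
    class Collective(Thing): pass

    # The individual-vs-collective distinction is carried by the domain,
    # not by the relation's label:
    class participatesIn(ObjectProperty):          # an individual in a process
        domain = [PhysicalObject]
        range  = [Process]

    class collectivelyParticipatesIn(ObjectProperty):  # a collective in a process
        domain = [Collective]
        range  = [Process]

    # Mereological (proper) parthood is transitive; membership of a
    # collective is, deliberately, not declared transitive.
    class properPartOf(ObjectProperty, TransitiveProperty): pass

    class memberOf(ObjectProperty):
        domain = [PhysicalObject]
        range  = [Collective]
```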

The other ‘odd thing’—cf. current multilingual Semantic Web assumptions and technologies, that is—is that while the conceptualisation of ‘has part’ exists, it does not have a single label as in English (or in several other languages, such as the Dutch heeft als deel), but is dependent on the noun classes of the nouns of the classes that play the part and the whole in the relation. It combines the subject concord (~conjugation) of the noun class of the noun that plays the whole with a conjunction that is phonologically conditioned based on the first letter of the noun that plays the part; with verbalisation in the plural and three phonological cases, there are 18 possible strings all denoting ‘has part’. This could still be sorted out in a language with inverses, provided the part-of direction has a name, like ingxenye. This is not the case for containment, however. Instead of the relation (object property) having a name—be this a verb like ‘contained in’ or some noun phrase—it is the noun that plays the whole (the container, if you will) that gets modified. For instance, imvilophu ‘envelope’ becomes emvilophini, denoting ‘contained in the envelope’, or, for individuals and locations, the city iTheku ‘Durban’ becomes eThekwini, meaning ‘located in Durban’ (no typo—there’s some phonological conditioning I’m brushing over). While I have gotten used to such constructions, it generated some surprise among several attendees that one can have notions, concepts, views on or interpretations or descriptions of reality that exist but do not have even one single string of text to refer to them consistently, regardless of the context in which they are used.
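To give an impression of what such a verbalisation function looks like, here is a minimal sketch in Python. It is a simplification (singular only) and the lookup tables are partial and illustrative; the full set of subject concords, the plural forms, and the precise phonological conditioning are specified in [2,3].

```python
# Sketch: generate the isiZulu 'has part' string as SC + CONJ + part noun.
# The subject concords below are an illustrative, partial table (assumption).
SUBJECT_CONCORD = {1: 'u', 2: 'ba', 7: 'si', 9: 'i', 10: 'zi'}  # noun class -> SC

def conj_plus_part(part_noun: str) -> str:
    """Phonologically conditioned conjunction na- coalescing with the
    initial vowel of the noun that plays the part (three cases)."""
    coalesce = {'a': 'na', 'i': 'ne', 'u': 'no'}
    if part_noun[0] in coalesce:
        return coalesce[part_noun[0]] + part_noun[1:]   # vowel coalescence
    return 'na' + part_noun                             # fallback (assumption)

def has_part(whole_noun: str, whole_noun_class: int, part_noun: str) -> str:
    """Verbalise '<whole> has part <part>' (simplified, singular only),
    e.g. has_part('umuntu', 1, 'ingalo') -> 'umuntu unengalo' (illustrative)."""
    sc = SUBJECT_CONCORD[whole_noun_class]
    return f"{whole_noun} {sc}{conj_plus_part(part_noun)}"
```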

The naming issue was solved by adding some arbitrary string as the ‘name’ of the object property, and relating that name to the function that verbalises that specific part-whole relation. The former issue, i.e., that the part-whole relations do not all coincide, required a bit more work with ontology alignment patterns: one correspondence pattern from the ODP catalogue was extended and a new one was introduced (see the paper for the formal details), within the same broad framework of formalisation as proposed in [4].
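A minimal sketch of that coupling, with hypothetical IRIs of my own: the object property’s ‘name’ is just an identifier, and the grammatical knowledge lives in the verbalisation function it is linked to, rather than in a label.

```python
from typing import Callable, Dict

def has_part_sc_conj(whole_noun: str, whole_noun_class: int, part_noun: str) -> str:
    """Stand-in for the SC+CONJ verbaliser sketched above."""
    raise NotImplementedError

# Hypothetical registry: arbitrary property IRI -> verbalisation function.
VERBALISERS: Dict[str, Callable[[str, int, str], str]] = {
    "http://example.org/zupw#hasPart_SC_CONJ": has_part_sc_conj,
    # "http://example.org/zupw#ingxenye": part_of_verbaliser,  # named 'part of'
}

def verbalise(prop_iri: str, whole_noun: str, whole_noun_class: int, part_noun: str) -> str:
    return VERBALISERS[prop_iri](whole_noun, whole_noun_class, part_noun)
```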

All this was then implemented and aligned, and verified not to result in unsatisfiable classes or object properties, or an inconsistent ontology (files). This also works in the isiZulu verbalisation tool we demo-ed at ESWC17 (described in the previous post) [5], all as part of the NRF-funded GeNI project.
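For that verification step, a check along these lines suffices—a sketch with Owlready2 and its bundled HermiT reasoner; the file name is a placeholder for the actual alignment files.

```python
from owlready2 import get_ontology, sync_reasoner, default_world

# Placeholder path; point it at the aligned ontology file(s).
onto = get_ontology("file:///path/to/aligned-partwhole-zu-en.owl").load()
with onto:
    # Runs HermiT; raises OwlReadyInconsistentOntologyError if the ontology is inconsistent.
    sync_reasoner()

# Classes inferred to be equivalent to owl:Nothing, i.e., unsatisfiable:
print("Unsatisfiable classes:", list(default_world.inconsistent_classes()))
```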

Now, ideally, I already would have had the time to read the papers I flagged in my LDK17 conference notes with “check paper”. I haven’t yet due to end-of-semester tasks. So, on the basis of just a positive-seeming presentation, here are a few that are on the top of my list to check out first, for quite different reasons:

  • Interaction between natural language reading capabilities and math education, focusing on language production (i.e., ‘can you talk about it?’) [6], mainly because math education in South Africa faces a lot of problems. It also generated a lively discussion in the Q&A session.
  • The OnLiT ontology for linguistic terminology [7] and LLODifying linguistic glosses [8] (one of the two also won the best paper award).
  • Deep text generation, for it looks at addressing skewed or limited data to learn from [9], which is an issue we face when trying to do some NLP with most South African languages.

 

References

[1] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.

[2] Keet, C.M., Khumalo, L. On the verbalization patterns of part-whole relations in isiZulu. 9th International Natural Language Generation conference (INLG’16), September 5-8, 2016, Edinburgh, UK. ACL.

[3] Keet, C.M. Representing and aligning similar relations: parts and wholes in isiZulu vs English. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 58-73.

[4] Fillottrani, P.R., Keet, C.M. Patterns for Heterogeneous TBox Mappings to Bridge Different Modelling Decisions. 14th Extended Semantic Web Conference (ESWC’17). Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017.

[5] Keet, C.M., Xakaza, M., Khumalo, L. Verbalising OWL ontologies in isiZulu with Python. 14th Extended Semantic Web Conference (ESWC’17). Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017. (demo paper)

[6] Crossley, S., Kostyuk, V. Letting the genie out of the lamp: using natural language processing tools to predict math performance. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 330-342.

[7] Klimek, B., McCrae, J.P., Lehmann, C., Chiarcos, C., Hellmann, S. OnLiT: an ontology for linguistic terminology. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 42-57.

[8] Chiarcos, C., Ionov, M., Rind-Pawlowski, M., Fäth, C., Wichers Schreur, J., Nevskaya, I. LLODifying linguistic glosses. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 89-103.

[9] Dethlefs N., Turner A. Deep Text Generation — Using Hierarchical Decomposition to Mitigate the Effect of Rare Data Points. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 290-298.


On that “shared” conceptualization and other definitions of an ontology

It’s a topic that never failed to generate a discussion on all 10 instalments of the ontology engineering course I taught from BSc(hons) up to participants studying toward or already having a PhD: those pesky definitions of what an ontology is. To top it off, like I didn’t know, I also got a snarky reviewer’s comment about it on my Stuff ontology paper [1]:

A comment that might be superficial but I cannot help: since an ontology is usually (in Borst’s terms) assumed to be a ‘shared’ conceptualization, I find a little surprising for such a complex model to have been designed by a sole author. While I acknowledge the huge amount of literature carefully analyzed, it still seems that the concrete modeling decisions eventually relied on the background of a single ontologist

Is that bad? Does that make the Stuff Ontology a ‘nontology’? And, by the by, what about all those loner philosophers who write single-author papers on ontology; should that whole field be discarded because most of the ontology insights were “shared” only from paper submission and publication?

Anyway, let’s start from the beginning. There’s the much-criticized definition of an ontology from Gruber that, it seems, only novices keep quoting (to my irritation, indeed):

An ontology is a specification of a conceptualization. [2]

If you wonder why quite a bit has been written about it: try to answer what “specification” really means and how it is specified, and what exactly a “conceptualization” is. The real fun starts with Borst et al.’s [3] and then Studer et al.’s [4] refinement of Gruber’s version, which the reviewer quoted above alluded to:

An ontology is a formal, explicit specification of a shared conceptualization. [4]

At least there’s the “formal” (be it in the sense of logic or formal ontology), and “explicit”, so something is being made explicit and precise. But “shared”? Shared with whom? How? Is a logical theory that not one but two people write down then an ontology? Or one person develops an ontology and then emails it to a few colleagues or puts it online in, say, the open BioPortal ontology repository. Does that count as “shared” then? Or is it only “shared” if at least one other person agrees with it as is (all reviewers of the Stuff Ontology did, btw), or perhaps with (most or all of) the ‘conceptualization’ of it, while a few axioms would need a bit of tweaking and cleaning up? Do you need at least a group of people to develop an ontology, and if so, how large should that group be, and should that group consist of independent sub-groups that adopt the ontology (and if so, how many endorsers)? Is a lightweight low-hanging-fruit ontology that is used by a large company a real or successful ontology, but a highly axiomatised ontology with high tangledness that is used by a specialist organization not? And even if you canvass and get a large group and/or organization to buy into that formal, explicit specification, what if they are all wrong about the reality it is supposed to represent? Does it still count as an ontology no matter how wrong the conceptualization is, just because it’s formal, explicit, and shared? Is a tailor-made module of, say, the DOLCE ontology not also an ontology, even if the module was made by one person and made available in an online repository like ROMULUS?

Perhaps one shouldn’t start top-down, but bottom-up: take some things and decide (who?) whether they are ontologies or not. Case one: the taxonomy of part-whole relations is a mini-ontology, and although at the start it was only ‘shared’ with my co-author and published in the Applied Ontology journal [5], it has been used by quite a few researchers for various (and unintended) purposes afterward, notably in NLP (e.g., [6]). An ontology? If so, since when? Case two: Noy et al. converted the representation of the NCI thesaurus into OWL DL [7]. Does changing the serialisation of a multi-authored thesaurus from one format into another make it an ontology? (More on that below.) Case three: a group of five people try to represent the subject domain of, say, breast cancer, but the result is replete with mistakes, both regarding the reality it ought to represent and unintended modelling errors (such as confusing is-a with part-of). Is it still an ontology, albeit a bad one?

It gets more muddled when the representation language is thrown in (as with case 2 above). What if the ontology turns out to be unsatisfiable? From a logic viewpoint, it is then not a theory (a theory being a consistent set of sentences), but if it’s formal, explicit, and shared, is it acceptable that those people who developed the artefact simply have an inconsistent conceptualization and that it still counts as an ontology?

Horrocks et al. [8] simplify the whole thing by eliminating the ‘shared’ aspect:

an ontology being equivalent to a Description Logic knowledge base. [8]

However, this generates a set of questions and problems of its own that are also problematic in practice. For instance: 1) does transforming a UML Class Diagram into OWL ‘magically’ make it an ontology? (answer: no); 2) is the NCI Thesaurus converted to OWL an ontology? (answer: no); or 3) if you used, say, Common Logic to represent it, could it then not be an ontology because it’s not formalised in a Description Logic? (answer: it sure can be one).

There are more attempts to give a definition or a description, notably by Nicola Guarino in [9] (a key paper in the field):

An ontology is a logical theory accounting for the intended meaning of a formal vocabulary, i.e. its ontological commitment to a particular conceptualization of the world. The intended models of a logical language using such a vocabulary are constrained by its ontological commitment. An ontology indirectly reflects this commitment (and the underlying conceptualization) by approximating these intended models. [9]

That’s a mouthful, but at least there’s no “shared” in there, either. And, finally, among the many definitions in [10], here’s Barry Smith and co-authors’ take on it:

An ONTOLOGY is a representational artifact, comprising a taxonomy as proper part, whose representational units are intended to designate some combination of universals, defined classes, and certain relations between them. [10]

And again, no “shared” in this definition either. Of course, also with Smith’s definition there are things one can debate about and pose against Guarino’s definition, like the “universals” vs. “conceptualization” etc., but that’s a story for another time.

So, to sum up: there is the problem of how to interpret “shared”, which makes it untenable as a defining feature, and one may just as well pick a definition of an ontology from a widely cited paper that doesn’t include it.

That said, all this doesn’t help my students to grapple with the notion of ‘an ontology’. Examples help, and it would be good if someone, or, say, the International Association for Ontology and its Applications (IAOA), would have a list of “exemplar ontologies” sooner rather than later. (Yes, I have a list, but it still needs to be annotated better.) Another aspect that helps in explaining it comes from Guarino’s slides on going “from the logical to the ontological level” and on good and bad ontologies. The first screenshot (taken from my slides—easier to find) shows there’s “something more” to an ontology than just the logic, with a hint of the reasons why (note to my students: more about that later in the course). The second screenshot shows that, yes, we can have the good, the bad, and the ugly: the yellow oval denotes the intended models (what it should be), and the other ovals denote the various approximations that one may have tried to represent in an ontology. For instance, representing ‘each human has exactly one brain’ is more precise (“good”) than stating ‘each human has at least one brain’ (“less good”) or not saying anything at all about it in an ontology of human anatomy (“bad”), and even “worse” would be if that ontology were to state ‘each human has exactly two tails’.

[Screenshots: ‘from the logical to the ontological level’, and the good/bad/ugly approximations of the intended models]
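As a toy illustration of those approximations in OWL (not taken from any actual anatomy ontology), the ‘good’ and ‘less good’ axioms differ only in the cardinality restriction used; a minimal Owlready2 sketch with placeholder names:

```python
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/anatomy-demo.owl")  # placeholder IRI
with onto:
    class Brain(Thing): pass
    class Human(Thing): pass
    class hasPart(ObjectProperty): pass

    # Closer to the intended models: each human has exactly one brain.
    Human.is_a.append(hasPart.exactly(1, Brain))
    # Weaker approximation ('less good'): at least one brain.
    # Human.is_a.append(hasPart.min(1, Brain))
```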

Maybe we can’t do better than ‘intuition’ or a rather unwieldy explanation. If this were a local installation of WordPress, I’d have added a poll on the definitions and the subjectivity of the shared-ness factor (though knowing well that science isn’t governed as a democracy). In lieu of that: comments, preferences for one definition or the other, or any better suggestions for definitions are most welcome! (The next instalment of my Ontology Engineering course will start in a few weeks’ time.)

 

References

[1] Keet, C.M. A core ontology of macroscopic stuff. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW’14). K. Janowicz et al. (Eds.). 24-28 Nov, 2014, Linkoping, Sweden. Springer LNAI vol. 8876, 209-224.

[2] Gruber, T. R. A translation approach to portable ontology specifications. Knowledge Acquisition, 1993, 5(2):199-220.

[3] Borst, W.N., Akkermans, J.M. Engineering Ontologies. International Journal of Human-Computer Studies, 1997, 46(2-3):365-406.

[4] Studer, R., Benjamins, R., and Fensel, D. Knowledge engineering: Principles and methods. Data & Knowledge Engineering, 1998, 25(1-2):161-198.

[5] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.

[6] Tandon, N., Hariman, C., Urbani, J., Rohrbach, A., Rohrbach, M., Weikum, G.: Commonsense in parts: Mining part-whole relations from the web and image tags. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI’16). pp. 243-250. AAAI Press (2016)

[7] Noy, N.F., de Coronado, S., Solbrig, H., Fragoso, G., Hartel, F.W., Musen, M. Representing the NCI Thesaurus in OWL DL: Modeling tools help modeling languages. Applied Ontology, 2008, 3(3):173-190.

[8] Horrocks, I., Patel-Schneider, P. F., and van Harmelen, F. From SHIQ and RDF to OWL: The making of a web ontology language. Journal of Web Semantics, 2003, 1(1):7.

[9] Guarino, N. (1998). Formal ontology and information systems. In Guarino, N., editor, Proceedings of Formal Ontology in Information Systems (FOIS’98), Frontiers in Artificial intelligence and Applications, pages 3-15. Amsterdam: IOS Press.

[10] Smith, B., Kusnierczyk, W., Schober, D., Ceusters, W. Towards a Reference Terminology for Ontology Research and Development in the Biomedical Domain. KR-MED 2006 “Biomedical Ontology in Action”. November 8, 2006, Baltimore, Maryland, USA.

More stuff: relating stuffs and amounts of stuff to their parts and portions

With all the protests going on in South Africa, writing this post is going to be a moment of detachment from it (well, I’m trying), for it concerns foundational aspects of ontologies with respect to “stuff”. Stuff is the philosophers’ funny term for those kinds of things that cannot be counted, or can be counted only in quantities, and are in natural language generally referred to by mass nouns. For instance, water, gold, mayonnaise, oil, and wine are kinds of stuff, yet one can talk of individual objects of them only in quantities, like a glass of wine, a spoonful of mayonnaise, and a litre of oil. It is one thing to be able to say which types of stuff there are [1]; it is another matter how they relate to each other. The latter is described in the paper recently accepted at the 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW’16), entitled “Relating some stuff to other stuff” [2].

Is something like that even relevant, when students are protesting for free education, among other demands? Yes. At the end of the day, it is part and parcel of a healthy environment to live in. For instance, one should be able to realise traceability in food and medicine supply chains, to foster good practices and check compliance in production processes and supply chains, so that you will not buy food that makes you ill or medicines that are fake [3,4]. Such production processes and product logistics deal with ‘stuffs’ and their portions and parts that get separated and put together to make the final product. Current implementations have only underspecified ‘links’ (if any), which do not let one infer automatically what (or who) the culprit is. Existing theoretical accounts from philosophy and in domain ontologies are incomplete, so they wouldn’t help you further either. The research described in the paper solves this issue.

Seven relations for portions and stuff-parts were identified, which have a temporal dimension where needed. For instance, the upper half of the wine in your wine glass is a portion of the whole amount of wine in the glass, yet that amount of wine was a portion of the amount of wine in the bottle when you opened it, and it has as part some amount of alcohol. (Some readers may not find this example nice, it being about alcohol, but the Western Cape, where Cape Town is situated, is the wine region of the country.) The relations are structured in a little hierarchy, as informally depicted in the figure below.


Section of the basic taxonomy of part-whole relations of [5] (less relevant and irrelevant sections in grey or suppressed), extended with the stuff relations and their position in the hierarchy.

Their formal definitions are included in the paper.

Another aspect of the solution is that it distinguishes between 1) the extensional and the intensional level—like between ‘an amount of wine’ and ‘wine’—because different constraints apply (it follows, among other things, that the latter can be instantiated whereas the former cannot), and 2) the amount of stuff and the (repeatable) quantity, as one can have 1 kg of many things.
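A toy illustration of that two-fold distinction, much simplified and with placeholder names rather than the actual Stuff Ontology (or OM) vocabulary:

```python
from owlready2 import get_ontology, Thing, DataProperty, FunctionalProperty

onto = get_ontology("http://example.org/stuff-amount-demo.owl")  # placeholder IRI
with onto:
    class Wine(Thing): pass                        # the stuff kind 'wine' (intensional)
    class hasQuantityInKg(DataProperty, FunctionalProperty):
        range = [float]                            # the repeatable quantity value

    glassful = Wine("amountOfWineInMyGlass")       # an amount of wine (extensional, an instance)
    glassful.hasQuantityInKg = 0.15                # many different amounts can weigh 0.15 kg
```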

Just theory isn’t good enough, though, for one would want to use it in some way to indeed get those benefits of traceability in the supply chains. After considering the implementation options (see the paper for details), I settled on an extension to the Stuff Ontology core ontology that now also imports a special-purpose module, OMmini, of the Ontology of Units of Measure (see also the Stuff Ontology page). The latter sounds easier than it turned out to be in practice, but that’s a topic for a different post. The module is there, and the links between OMmin.owl and stuff.owl have been declared.

Although the implementation is atemporal in the end, it is still possible to do some automated reasoning for traceability. This is mainly done by availing of property chains to approximate the relevant temporal aspects. For instance, with scatteredPortionOf ∘ portionOf ⊑ scatteredPortionOf, one can infer that the scattered portion of wine in my glass—which was a portion of bottle #1234 of organic Pinotage, which in turn was a portion of an amount of wine contained in cask #3, with wine from wine farm X of Stellar Winery from the 2015 harvest—is a scattered portion of that amount of matter (the wine in that cask). Or take the (high-level) pharmaceutical supply chain from [4]: a portion (on a ‘pallet’) of the quantity of medicine produced by the manufacturer goes to the warehouse, of which a portion (in a ‘case’) goes to the distribution centre. From there, a portion ends up on the dispensing shelf, and someone buys it. Tracing any customer’s portion of medicine—i.e., regardless of the actual instance—can then be inferred with the chain scatteredPortionOf ∘ scatteredPortionOf ∘ scatteredPortionOf ⊑ scatteredPortionOf.
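A minimal sketch of the first chain, assuming Owlready2 with its bundled HermiT reasoner; the IRI and the individuals are toy placeholders of my own, not the actual content of stuff.owl.

```python
from owlready2 import (get_ontology, Thing, ObjectProperty,
                       PropertyChain, sync_reasoner)

onto = get_ontology("http://example.org/stuff-chain-demo.owl")  # placeholder IRI
with onto:
    class AmountOfMatter(Thing): pass
    class portionOf(ObjectProperty): pass
    class scatteredPortionOf(ObjectProperty): pass
    # scatteredPortionOf o portionOf -> scatteredPortionOf
    scatteredPortionOf.property_chain.append(
        PropertyChain([scatteredPortionOf, portionOf]))

    wine_in_glass  = AmountOfMatter("wineInMyGlass")
    wine_in_bottle = AmountOfMatter("wineInBottle1234")
    wine_in_cask   = AmountOfMatter("wineInCask3")
    wine_in_glass.scatteredPortionOf = [wine_in_bottle]
    wine_in_bottle.portionOf = [wine_in_cask]

sync_reasoner(infer_property_values=True)  # HermiT applies the property chain
print(wine_in_glass.scatteredPortionOf)    # now also includes wineInCask3
```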

Sure, the research presented hasn’t solved everything yet, but at least software developers now have a (better) way to automate traceability in supply chains. It also allows one to be more fine-grained in the analysis of where a culprit may be, so that there are fewer cases of needless scares. For instance, we know that when there’s an outbreak of Salmonella, we only have to trace where the batch of egg yolk went (typically into the tiramisu served in homes for the elderly), where it came from (which farm), and what it got mixed with in the production process, while the amount of egg white on your lemon meringue would still be safe to eat even when it came from the same batch that had at least one infected egg.

I’ll be presenting the paper at EKAW’16 in November in Bologna, Italy, and hope to see you there! It’s not a good time of the year w.r.t. weather, but that’s counterbalanced by the beauty of the buildings and art works, and the actual venue room is in one of the historical buildings of the oldest university of Europe.

 

References

[1] Keet, C.M. A core ontology of macroscopic stuff. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW’14). K. Janowicz et al. (Eds.). 24-28 Nov, 2014, Linkoping, Sweden. Springer LNAI vol. 8876, 209-224.

[2] Keet, C.M. Relating some stuff to other stuff. 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW’16). Springer LNAI, 19-23 November 2016, Bologna, Italy. (accepted)

[3] Donnelly, K.A.M. A short communication – meta data and semantics the industry interface: what does the food industry think are necessary elements for exchange? In: Proc. of Metadata and Semantics Research (MTSR’10). Springer CCIS vol. 108, 131-136.

[4] Solanki, M., Brewster, C. OntoPedigree: Modelling pedigrees for traceability in supply chains. Semantic Web Journal, 2016, 7(5), 483-491.

[5] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.