Figuring out the verbalisation of temporal constraints in ontologies and conceptual models

Temporal conceptual models, ontologies, and their logics are nothing new, but that sort of information and knowledge representation still doesn’t gain a lot of traction (cf. say, formal methods for verification). This is in no small part because modelling temporal information is not easy. Several conceptual modelling languages do have various temporal extensions, but most modellers don’t even use all of the default language features yet [1]. How could one at least reduce the barrier to adoption of temporal logics and modelling languages? The two principal approaches are visualisation with a diagrammatic language and rendering it in a (pseudo-)natural language. One of my postgraduate students looked at the former, trying to figure out what would be the best icons and such, which showed there was still a steep learning curve [2]. Before examining whether that could be optimised, I wondered whether the natural language option might be promising. The problem was that no-one had yet tried to determine what the natural language counterparts of the temporal constraints were supposed to be, let alone whether they would be ‘adequate’ or the ‘best’ way of rendering the temporal constraints in tolerable natural language sentences. I wanted to know that badly enough that I tried to find out.

Given that using templates is a tried-and-tested, relatively successful approach for atemporal conceptual models and ontologies (e.g., for ORM, the ACE system), it makes sense to do something similar, but then for some temporal extension. As temporal conceptual modelling language I used one that has a Description Logics foundation (DLRUS [3,4]), for that links easily to ontologies as well, added a few known temporal constraints (like for relationships/DL roles, mandatory), and removed others (some didn’t seem all that interesting), which still resulted in 34 constraints. For each one, I tried to devise more and less reasonable templates, resulting in 101 templates overall. Those templates were evaluated on semantics and preference by three temporal logic experts and five ‘mixed experts’ (experts in natural language generation, logic, or modelling). This resulted in a final set of preferred templates to verbalise the temporal constraints. The remainder of this post first describes a bit about the templates and then the results I think are most interesting.

Templates

The basic idea of a template—in the context of the verbalisation of conceptual models and ontologies—is to have some natural language for the constraint where then the vocabulary gets slotted in at runtime. Take, for instance, simple named class subsumption in an ontology, C \sqsubseteq D, for which one could define a template “Each [C] is a(n) [D]”, so that with some axiom Manager \sqsubseteq Employee, it would generate the sentence “Each Manager is an Employee”. One also could have devised the template “All [C] are [D]” and then it would have generated “All Managers are Employees”. The choice between the two templates in this case is just taste, for in both cases, the semantics is the same. More complex axioms are not always that straightforward. For instance, for the axiom type C \sqsubseteq \exists R.D, would “Each [C] [R] some [D]” be good enough, or would perhaps “Each [C] must [R] at least one [D]” be better? E.g., “Each Professor teaches some Course” vs “Each Professor must teach at least one Course”.
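To make that slot-filling mechanism concrete, here is a minimal sketch in Python; the function names and the crude article/plural handling are illustrative assumptions, not the actual implementation of any existing verbaliser.

```python
# Minimal sketch of template-based verbalisation: the template text is fixed
# per axiom type and the vocabulary is slotted in at runtime.
def article(noun: str) -> str:
    """Crude English heuristic: 'an' before a vowel, 'a' otherwise."""
    return "an" if noun[0].lower() in "aeiou" else "a"

def verbalise_subsumption(sub: str, sup: str, style: str = "each") -> str:
    """Render a named-class subsumption C sqsubseteq D with one of two candidate templates."""
    if style == "each":
        return f"Each {sub} is {article(sup)} {sup}."
    return f"All {sub}s are {sup}s."  # naive pluralisation, for illustration only

print(verbalise_subsumption("Manager", "Employee"))         # Each Manager is an Employee.
print(verbalise_subsumption("Manager", "Employee", "all"))  # All Managers are Employees.
```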

The same can be done for the temporal constraints. To get there, I did a bit of a linguistic detour that informed the template design (described in the paper [5]). Let us take as a first example the temporal class constraint, which has the semantics o \in C^{\mathcal{I}(t)} \rightarrow \exists t' \neq t. o \notin C^{\mathcal{I}(t')}; for instance, UndergraduateStudent (assuming they graduate and end up as alumni or as drop-outs, and weren’t undergrads from birth):

  1. If an object is an instance of entity type [C], then there is some time where it is not a(n) [C].
  2. [C] is an entity type whose objects are, for some time in their existence, not instances of [C].
  3. [C] is an entity type of which each object is not a(n) [C] for some time during its existence.
  4. All instances of entity type [C] are not a(n) [C] for some time.
  5. Each [C] is not a(n) [C] for some time.
  6. Each [C] is for some time not a(n) [C].

Which one(s) do you think capture the semantics, and which one(s) do you prefer? (With UndergraduateStudent slotted in, template 5, for instance, reads “Each UndergraduateStudent is not an UndergraduateStudent for some time”.)

A more elaborate constraint for relationships is ‘dynamic extension for relationships, past, mandatory’, which is formalised as \langle o , o' \rangle \in \mbox{{\sc RDexM}-}_{R_1,R_2}^{\mathcal{I}(t)} \rightarrow (\langle o , o' \rangle \in{\tt R_1}^{\mathcal{I}(t)} \rightarrow \exists t'<t. \langle o , o' \rangle \in \mbox{{\sc RDex}}_{R_1,R_2}^{\mathcal{I}(t')}), where \langle o , o' \rangle \in \mbox{{\sc RDex}}_{R_1,R_2}^{\mathcal{I}(t)} \rightarrow ( \langle o , o' \rangle \in{\tt R_1}^{\mathcal{I}(t)} \rightarrow \exists t'>t. \langle o , o' \rangle \in {\tt R_2}^{\mathcal{I}(t')}); e.g., every passenger who boards a flight must have checked in for that flight. Two options could be:

  1. Each ..C_1.. ..R_1.. ..C_2.. was preceded by ..C_1.. ..R_2.. ..C_2.. some time earlier.
  2. Each ..C_1.. ..R_1.. ..C_2.. must be preceded by ..C_1.. ..R_2.. ..C_2.. .

I’m not saying they are all correct; they were some of the options given, which the participants could choose from and comment on. The full list of constraints and template options is available in the supplementary material, which also contains a file where you can fill in your own answers, see what the (anonymised) participants said, and check the final list of ‘best’ templates.

Results

The main aggregate quantitative results are shown in the following table.

Many observations can be made from the data (see the paper for details). Some of the salient aspects are that there was low inter-annotator agreement among the experts, even though they know each other (temporal logics is a small community), and that the ‘mixed group’ deemed many sentences correct that the experts deemed wrong in the sense of not properly capturing the semantics of the constraint. Put differently, it looks like the mixed experts, as a group, did not fully grasp some of the subtle distinctions in the temporal constraints.

With respect to the templates, the preferred ones don’t follow the structure of the logic, but are, in a way, a separate rendering, or: there’s no neat 1:1 mapping between axiom type and template structure. That said, that doesn’t mean that they always chose the shortest template: the experts definitely did not, while the mixed experts leaned a bit toward preferring templates with fewer words even though they were surely not always the semantically correct option.

It may not look good that the experts preferred different templates, but in a follow-up interview with one of the experts, the expert noted that it was not really a problem “for there is the logic that does have the precise meaning anyway” and thus “resolves any confusion that may arise from using slightly different terminology”. The temporal logic expert does have a point from the expert’s view, fair enough, but that pretty much defeats my aim with the experiment. Asking more non-experts may not be a good strategy either, for they are, on average, too lenient.

So, for now, we do have a set of, relatively, ‘best’ templates to verbalise temporal constraints in temporal conceptual models and ontologies. The next step is to compare that with the diagrammatic representation. This we did [6], and I’ll describe those results informally in a next post.

I’ll present more details at the upcoming CREOL: Contextual Representation of Events and Objects in Language Workshop that is part of the Joint Ontology Workshops 2017, which will be held next week (21-23 September) in Bolzano, Italy. As the KRDB group at FUB in Bolzano has a few temporal logic experts, I’m looking forward to the discussions! Also, I’d be happy if you would be willing to fill in the spreadsheet with your preferences (before looking at the answers given by the participants!), and send them to me.

 

References

[1] Keet, C.M., Fillottrani, P.R. An analysis and characterisation of publicly available conceptual models. 34th International Conference on Conceptual Modeling (ER’15). Johannesson, P., Lee, M.L. Liddle, S.W., Opdahl, A.L., Pastor López, O. (Eds.). Springer LNCS vol 9381, 585-593. 19-22 Oct, Stockholm, Sweden.

[2] T. Shunmugam. Adoption of a visual model for temporal database representation. M. IT thesis, Department of Computer Science, University of Cape Town, South Africa, 2016.

[3] A. Artale, E. Franconi, F. Wolter, and M. Zakharyaschev. A temporal description logic for reasoning about conceptual schemas and queries. In S. Flesca, S. Greco, N. Leone, and G. Ianni, editors, Proceedings of the 8th Joint European Conference on Logics in Artificial Intelligence (JELIA-02), volume 2424 of LNAI, pages 98-110. Springer Verlag, 2002.

[4] A. Artale, C. Parent, and S. Spaccapietra. Evolving objects in temporal information systems. Annals of Mathematics and Artificial Intelligence, 50(1-2):5-38, 2007.

[5] Keet, C.M. Natural language template selection for temporal constraints. CREOL: Contextual Representation of Events and Objects in Language, Joint Ontology Workshops 2017, 21-23 September 2017, Bolzano, Italy. CEUR-WS Vol. (in print).

[6] Keet, C.M., Berman, S. Determining the preferred representation of temporal constraints in conceptual models. 36th International Conference on Conceptual Modeling (ER’17). Springer LNCS. 6-9 Nov 2017, Valencia, Spain. (in print)


Round 2 of the search engine, browser, and language bias mini-experiment

Exactly a year ago I did a mini-experiment to see whether search engine bias exists in South Africa as well. It did. The notable case was that Google in English on Safari on the Mac (GES) showed results for ‘politically interesting searches’ that contained less information and leaned to the right side of the political spectrum in a way that raised cause for concern, compared to Google in isiZulu in Firefox (GiF) and Bing in English in Firefox (BEF). I repeated the experiment in the exact same way, with some of the same queries and a few new ones that take into account current affairs; the only difference being that I used my Internet connection at home rather than at work. The same problem still exists, sometimes quite dramatically. As recommendation, then: don’t use Google in English on Safari on the Mac unless you want to be in an “anti-government Democratic Alliance as centre-of-the-world” bubble.

To back it all up, I took screenshots again, in the order, from left to right, GiF, GES, BEF, so you can check for yourself what users with different configurations see on the first page of the search results. The set of clearly different/biased results is listed first.

  • “EFF”, which in South Africa is a left populist opposition party, and internationally the abbreviation of the electronic frontier foundation:

    “EFF” search

    GiF lists it as political party; GES in relation to the DA first and then as political party; BEF as political party and electronic frontier foundation.

  • “jacob zuma”, the current president of the country:

    “jacob zuma” search

    GiF first has a google ad to oust zuma, then general info and news; GES has a google ad to oust zuma and a comment by JZ’s son blaming the whites (probably fuelling racial divisiveness), then general info and news; BEF has general info and news.

  • “ANC”, currently the largest political party nationally and in power:

    “ANC” search

    GiF has first a link to the ANC site, one news item, and for the rest contact info; GES has first ‘bad press’ for the ANC as top stories, then twitter, then the ANC website; BEF lists first the ANC site, then news and info.

  • “Manana”, the Higher Education deputy minister who faces allegations of mistreatment of female staff members in his department:

    “Manana” search

    GiF has news about the accusations; GES has negative news about the ANC women’s league and DA actions; BEF shows info about Manana and mixes it up with the Spanish mañana.

  • The autocomplete function when typing “ANC” was somewhat surprising:

    exploring the autocomplete on “ANC”

    GiF also associates it with ‘eff news’ and ‘zuma’; GES doesn’t have ‘eff news’ to suggest, so autocomplete also seems to be determined by the client-side configuration; BEF has all sorts of things.

  • “white monopoly capital” (long story):

    “white monopoly capital” search

    GiF shows general info and news; GES also shows general info and news, but with that inciting blaming-the-whites news item; BEF shows general info and news as well, but differently ordered from Google’s results.

  • “DA”, which in South Africa is the abbreviation of the Democratic Alliance opposition party (capitalist, for the rich):

    “DA” search

    GiF lists the DA website and some news; GES shows news on DA action and opinion, then the DA website; BEF lists the DA site, some general info, and disambiguation.

  • “motion of no confidence”, which was held last week against Jacob Zuma (the motion failed, but not by a large margin):

    “motion of no confidence” search

    GiF has again that Google ad for the organization to oust Zuma, then info and mostly news (with 1 international news site [Al Jazeera]); GES has info, then SA opinion pieces rather than news; BEF has news and info.

  • “FeesMustFall”, which was one of the tags of the student protests in 2015 and 2016 (for free higher education):

    “FeesMustFall” search

    GiF has general info and news; GES shows first two ads to join the campaign, then general info and news; BEF has info and news. So, this seems flipped cf. last year.

Then the set of searches of which the results are roughly the same. I had expected this for “Law on cookies in South Africa” and “Socialism”, for they were about the same last year as well. I wasn’t sure about “women’s month” (this month, August), given its history; there are slight differences, but not much. The interesting one, perhaps, was that “state capture gupta” also showed similar results across the three configurations, all of them pointing to pages that treat it as fact and offering at least some detailed background reading on it.

“Law on cookies in South Africa” search

“Socialism” search

“women’s month” search

 

 

“state capture gupta” search

Finally, last year the mini-experiment was motivated by lecture preparations for the “Social Issues and Professional Practice” block of CSC1016S that I’m scheduled to teach in the upcoming semester (if there won’t be protests, that is). As compared to last year, now I can also add a note on the Algorithmic Transparency and Accountability statement from the ACM, in addition to the ‘filter bubble’ and ‘search engine manipulation’ items. Maybe I should cook up an exercise for the students so we can get data rather than still being in the realm of anecdotes with my 20 searches and three configurations. If you did the same with a different configuration, please let me know.

A grammar of the isiZulu verb (present tense)

If you have read any of the blog posts on (automated) natural language generation for isiZulu, then you’ll probably agree with me that isiZulu verbs are non-trivial. True, verbs in other languages are most likely not as easy as in English, or Afrikaans for that matter (e.g., they made irregular verbs regular), but there are many little ‘bits and pieces’ ‘glued’ onto the verb root that make it semantically a ‘heavy’ element in a sentence. For instance:

  • Aba-shana ba-ya-zi-theng-is-el-an-a                izimpahla
  • children    2.SC-Pres-8.OC-buy.VR-C-A-R-FV    8.clothes
  • ‘The children are selling the clothes to each other’

The ba is the subject concord (~conjugation) that matches the noun class (here: 2) of the noun that plays the subject in the sentence (abashana), ya denotes a continuous action (‘are doing something’ in the present), zi is the object concord for the noun class (8) of the noun that plays the object in the sentence (izimpahla), theng is the verb root, then comes the CARP extension, with is the causative (turning ‘buy’ into ‘sell’), el the applicative, and an the reciprocal, which take care of the ‘to each other’, and then finally the final vowel a.
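As a small illustration of how those morphemes compose into the surface form, here is a sketch that simply concatenates the slots from the gloss above (the dictionary is illustrative only, not a general morphological analyser):

```python
# Compose the example verb from the morphemes identified in the gloss above.
morphemes = {
    "SC":    "ba",     # subject concord, noun class 2 (abashana)
    "T/A":   "ya",     # present tense, continuous action
    "OC":    "zi",     # object concord, noun class 8 (izimpahla)
    "VR":    "theng",  # verb root 'buy'
    "CAUS":  "is",     # causative: turns 'buy' into 'sell'
    "APPL":  "el",     # applicative
    "RECIP": "an",     # reciprocal: 'to each other'
    "FV":    "a",      # final vowel
}
print("".join(morphemes.values()))  # bayazithengiselana
```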

More precisely, the general basic structure of the verb is as follows:

where NEG is the negative; SC the subject concord; T/A denotes tense/aspect; MOD the mood; OC the object concord; Verb Rad the verb radical; C the causative; A the applicative; R the reciprocal; and P the passive. For instance, if the children were not selling the clothes to each other, then instead of the SC, there would be the NEG SC in that position, making the verb abayazithengiselana.

To make sense of all this in a way that would be amenable to computation, we—my co-author Langa Khumalo and I—specified the grammar of the complex verb for the present tense in a CFG, using an incremental process of development. To the best of our (and the reviewers’) knowledge, the outcome of the lengthy exercise is (1) the first comprehensive and precisely formulated documentation of the grammar rules for the isiZulu verb in the present tense, (2) all together in one place (cf. fragments sprinkled around in different papers, Wikipedia, and outdated literature (Doke in 1927 and 1935)), and (3) something that goes well beyond handling just one of the CARP extensions, among others. The figure below summarises those rules, which are explained in detail in the forthcoming paper “Grammar rules for the isiZulu complex verb”, which will be published in Southern African Linguistics and Applied Language Studies [1] (finally in print, yay!).

It is one thing to write these rules down on paper, and another to verify whether they’re actually doing what they’re supposed to be doing. Instead of fallible and laborious manual checking, we put them in JFLAP (for lack of a better alternative at the time; discussed in the paper) and tested the CFG both on generation and on recognition. The tests went reasonably well, and they helped fix a rule during the testing phase.
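If you want to play with the idea yourself, a toy fragment of such a CFG can be tested on generation and recognition with NLTK in a few lines. To be clear: the rules below are a drastically simplified sketch of my own, covering only the positive present-tense slots of the example above, not the rule set published in the paper.

```python
# Toy CFG fragment for a positive present-tense verb, drastically simplified
# (no NEG, mood, passive, long/short form, or phonological conditioning).
from nltk import CFG, ChartParser
from nltk.parse.generate import generate

toy_grammar = CFG.fromstring("""
VERB  -> SC TA OC VROOT CAUS APPL RECIP FV
SC    -> 'ba' | 'zi' | 'a'
TA    -> 'ya'
OC    -> 'zi' | 'ba' | 'yi'
VROOT -> 'theng' | 'fund'
CAUS  -> 'is'
APPL  -> 'el'
RECIP -> 'an'
FV    -> 'a'
""")

# Generation: enumerate a few strings the toy grammar produces.
for tokens in generate(toy_grammar, n=3):
    print("".join(tokens))

# Recognition: check that the (pre-segmented) example verb is accepted.
parser = ChartParser(toy_grammar)
segmented = ['ba', 'ya', 'zi', 'theng', 'is', 'el', 'an', 'a']
print(len(list(parser.parse(segmented))) > 0)  # True
```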

Because the CFG doesn’t take into account phonological conditioning for the vowels, it generates strings that are not in the language. Such phonological conditioning is considered a post-processing step and was beyond the scope of elucidating and specifying the rules themselves. There are other causes of overgeneration that we did not get around to addressing, for various reasons: there are rules that go across the verb root, which are simple to represent in coding-style notation (see paper) but not so much in a CFG, and there are rules for different types of verbs, but there’s no available resource that lists which verb roots are intransitive, which are monosyllabic, and so on. We have started with scoping rules and solving issues for the latter, and we do have a subset of phonological conditioning rules; so, to be continued… For now, though, we have completed at least one of the milestones.

Last, but not least, in case you wonder what the use of all this is besides satisfying one’s linguistic curiosity and investigating and documenting an under-resourced language: natural language generation for intelligent user interfaces in localised software, spellcheckers, and grammar checkers, among others.

 

References

[1] Keet, C.M., Khumalo, L. Grammar rules for the isiZulu complex verb. Southern African Linguistics and Applied Language Studies, (in print). Submitted version (the rules are the same as in the final version)

Aligning different relations: the case of part-whole relations—LDK2017

Despite the best intentions, I did not get around to writing a post on the paper that I presented last week at the First International Conference on Language, Data and Knowledge 2017, 19-20 June, Galway, Ireland, and now Paul Groth also ‘beat’ me to it by writing a nice conference report of it. On the bright side, it is an opportunity to say upfront that I really enjoyed the conference and look forward to the next edition in 2019. The ESWC’17 organisers might be slightly disappointed that there was no special track on the multilingual semantic web after all, but I did get the distinct impression that the LDK17 authors might just all have gambled on LDK17—an opportunity to binge two days on all things natural language & Semantic Web—rather than on one track at an overpriced conference (despite the allure of it being A-rated).

So, what was my paper about that could have been submitted to either? I ended up struggling with—and solving—an issue with aligning OWL object properties that were not simple 1:1 mappings, in a similar scope as our ESWC17 paper (introduced here) [4], but with more complications. The complications were due to the different conceptualisations of part-whole relations, and to the requirement to solve what to do with an object property (relation, relationship) that does not have a neat, single label, and that therewith fits neither the common OWL modelling paradigm nor the recently agreed-upon ontolex-lemon model for linguistic annotations.

The start of all this sounded nice and doable: we need to generate natural language for healthcare, using, e.g., SNOMED CT, in local languages in South Africa, focussing on the largest one, being isiZulu. Medical terminologies are riddled with part-whole relations, so we sought to address that one (simple existentials already having been solved), availing of a standard list of part-whole relations (e.g. [1]). That turned out to be a non-trivial exercise, but doable eventually [2]. What wasn’t addressed in [2] was that some ‘common’ part-whole relations, such as membership and containment, weren’t like that in isiZulu, at all. Moreover, it wasn’t just a language issue, but ontological as well. The LDK17 paper “Representing and aligning similar relations: parts and wholes in isiZulu vs English” [3] describes this in some detail.

Here’s a (simplified) list of (assumed to be) common part-whole relations, which takes into account both transitivity differences and domain and range:

Now here’s the one based on the isiZulu language and some ontological analysis of that:

That is: there are both generalisations—some distinctions are not being made—and specialisations—some distinctions are made here but not elsewhere. For instance, ‘musician is part of some orchestra’ and ‘heart is part of some human’ (or vv.) is all done and described in the same way (ingxenye ‘part of’ and SC+CONJ for ‘has part’ [more about that below]). Yet, there is a difference between an individual (e.g., a voter) participating in some process and a collective (e.g., the electorate) participating in a process, or vv. The paper describes this more precisely, going into some detail regarding the differences in categories of domain and range and into the consequences on transitivity of mereological parthood.

The other ‘odd thing’—cf. current multilingual Semantic Web assumptions and technologies, that is—is that while the conceptualisation of ‘has part’ exists, it does not have a single label as in English (or in several other languages, such as heeft als deel), but it depends on the noun classes of the nouns of the classes that play the part and the whole in the relation. It combines the subject concord (~conjugation) of the noun class of the noun that plays the whole with a conjunction that is phonologically conditioned based on the first letter of the noun that plays the part; with verbalisation in the plural and three phonological cases, there are 18 possible strings all denoting ‘has part’. This still could be sorted out in a language with inverses, provided the part-of direction has a name, like the ingxenye. This is not the case for containment, however. Instead of the relation (object property) having a name—be this a verb like ‘contained in’ or some noun phrase—it is the noun that plays the whole (the container, if you will) that gets modified. For instance, imvilophu ‘envelope’ becomes emvilophini denoting ‘contained in the envelope’, or, for individuals and locations, the city iTheku ‘Durban’ becomes eThekwini meaning ‘located in Durban’ (no typo—there’s some phonological conditioning I’m brushing over). While I have gotten used to such constructions, it generated some surprise among several attendees that one can have notions, concepts, views on or interpretations or descriptions of reality, that exist but do not have even one single string of text to refer to them, regardless of the context in which they are used.
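To give a flavour of that SC+CONJ construction in code: the sketch below glues the whole's subject concord to the conjunction na-, coalescing it with the initial vowel of the part noun (a+a gives a, a+i gives e, a+u gives o). It is a simplification of my own for illustration; the actual verbalisation algorithms, with all the cases, are in [2].

```python
# Hedged sketch of the SC+CONJ 'has part' construction described above.
def has_part(whole_sc: str, part_noun: str) -> str:
    coalesce = {'a': 'a', 'i': 'e', 'u': 'o'}  # na- + initial vowel of the part noun
    v = part_noun[0].lower()
    return whole_sc + 'n' + coalesce.get(v, v) + part_noun[1:]

# whole in noun class 9 (SC 'i-') with part noun 'amagumbi' (rooms/chambers):
print(has_part('i', 'amagumbi'))    # inamagumbi
# whole in noun class 2 (SC 'ba-') with part noun 'izimpahla' (clothes):
print(has_part('ba', 'izimpahla'))  # banezimpahla
```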

The naming issue was solved by adding some arbitrary string as ‘name’ of the object property, and relating that to the function that verbalises that specific part-whole relation. The former issue, i.e., not all the same part-whole relations, required a bit more work, using ontology pattern alignments, by extending one correspondence pattern from the ODP catalogue and introducing a new one (see paper for the formal details), using the same broad framework of formalisation as proposed in [4].

All this was then implemented and aligned, and verified to not result in some unsatisfiable classes, object properties, or inconsistency (files). This also works in the isiZulu verbalisation tool we demo-ed at ESWC17 (described in the previous post) [5], all as part of the NRF-funded GeNI project.

Now, ideally, I already would have had the time to read the papers I flagged in my LDK17 conference notes with “check paper”. I haven’t yet due to end-of-semester tasks. So, on the basis of just a positive-seeming presentation, here are a few that are on the top of my list to check out first, for quite different reasons:

  • Interaction between natural language reading capabilities and math education, focusing on language production (i.e., ‘can you talk about it?’) [6], mainly because math education in South Africa faces a lot of problems. It also generated a lively discussion in the Q&A session.
  • The OnLiT ontology for linguistic terminology [7] and the LLODifying of linguistic glosses [8] (one of the two won the best paper award).
  • Deep text generation, for it was looking at trying to address skewed or limited data to learn from [9], which is an issue we face when trying to do some NLP with most South African languages.

 

References

[1] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.

[2] Keet, C.M., Khumalo, L. On the verbalization patterns of part-whole relations in isiZulu. 9th International Natural Language Generation conference (INLG’16), September 5-8, 2016, Edinburgh, UK. ACL.

[3] Keet, C.M. Representing and aligning similar relations: parts and wholes in isiZulu vs English. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 58-73.

[4] Fillottrani, P.R., Keet, C.M. Patterns for Heterogeneous TBox Mappings to Bridge Different Modelling Decisions. 14th Extended Semantic Web Conference (ESWC’17). Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017.

[5] Keet, C.M. Xakaza, M., Khumalo, L. Verbalising OWL ontologies in isiZulu with Python. 14th Extended Semantic Web Conference (ESWC’17). Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017. (demo paper)

[6] Crossley, S., Kostyuk, V. Letting the genie out of the lamp: using natural language processing tools to predict math performance. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 330-342.

[7] Klimek, B., McCrae, J.P., Lehmann, C., Chiarcos, C., Hellmann, S. OnLiT: an ontology for linguistic terminology. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 42-57.

[8] Chiarcos, C., Ionov, M. Rind-Pawlowski, M., Fäth, C., Wichers Schreur, J., Nevskaya. I. LLODifying linguistic glosses. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 89-103.

[9] Dethlefs N., Turner A. Deep Text Generation — Using Hierarchical Decomposition to Mitigate the Effect of Rare Data Points. In: Gracia J., Bond F., McCrae J., Buitelaar P., Chiarcos C., Hellmann S. (eds) Language, Data, and Knowledge LDK 2017. Springer LNAI vol 10318, 290-298.

Our ESWC17 demos: TDDonto2 and an OWL verbaliser for isiZulu

Besides the full paper on heterogeneous alignments for the 14th Extended Semantic Web Conference (ESWC’17) that will take place next week in Portoroz, Slovenia, we also managed to squeeze out two demo papers. You may already know of TDDonto2 with Kieren Davies and Agnieszka Lawrynowicz, which was discussed in an earlier post that has been updated with a tutorial video. It now has a demo paper as well [1], which describes the rationale and a few scenarios. The other demo, with Musa Xakaza and Langa Khumalo, is new-new, but the regular reader might have seen it coming: we finally managed to link the verbalisation patterns for certain Description Logic axiom types [2,3] to those in OWL ontologies. The tool takes as input an ontology (with its vocabulary in isiZulu) and the verbalisation algorithms, and out come the isiZulu sentences, be this in plain text for further processing or in a GUI for inspection by a domain expert [4]. There is a basic demo-screencast to show it’s all working.

The overall architecture may be of interest, for it deviates from most OWL verbalisers. It is shown in the following figure:

For instance, we use the Python-based OWL API Owlready, rather than a Java-based app, for Python is rather popular in NLP and the verbalisation algorithms may be used elsewhere as well. We made more such decisions, with the aim of making whatever we did as reusable for multiple purposes as possible, like the list of nouns with noun classes (surprisingly, and annoyingly, there is no such readily available list yet, though isizulu.net probably has it somewhere, but inaccessibly), verb roots, and exceptions in pluralisation. (Problems with integrating the verbaliser with, say, Protégé will be interesting to discuss during the demo session!)
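To give an impression of the ontology-access side of such a pipeline, here is a minimal sketch using owlready2 (the successor of the Owlready library mentioned above); the verbalise() stub stands in for the actual isiZulu algorithms, and the file path is hypothetical.

```python
# Sketch: load an ontology and hand named-class subsumptions to a verbaliser.
from owlready2 import get_ontology, ThingClass

def verbalise(sub_name: str, sup_name: str) -> str:
    # placeholder for the real algorithms [2,3], which pick the quantifier,
    # copula, etc. based on the noun class of the class's name
    return f"... {sub_name} ... {sup_name}"

onto = get_ontology("file:///path/to/ontology.owl").load()  # hypothetical path

for cls in onto.classes():
    for parent in cls.is_a:
        if isinstance(parent, ThingClass):  # named classes only, skip restrictions
            print(verbalise(cls.name, parent.name))
```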

The text-based output doesn’t look as nice as the GUI interface, so I will show here only the GUI interface, which is adorned with some annotations to illustrate that those verbalisation algorithms in the background are far from trivial templates:

For instance, while in English the universal quantification is always ‘Each’ or ‘All’ regardless of the named class quantified over, in isiZulu it depends on the noun class of the noun that is the name of the OWL class. For instance, in the figure above, izingwe ‘leopards’ is in noun class 10, so the ‘Each/All’ is Zonke, amavazi ‘vases’ is in noun class 6, so ‘Each/All’ then becomes Onke, and abantu ‘people’/’humans’ is in noun class 2, making it Bonke. There are 17 noun classes. They also determine the subject concords (SC, akin to conjugation) for the verbs, with zi- for noun class 10, a- for noun class 6, and ba- for noun class 2, to name a few. How this all works is described in [2,3]. We’ve implemented all those algorithms and integrated the pluraliser [5] into it to make it work. The source files are already available to check and play with; you can do so and ask us about it during the ESWC17 demo session, and/or also have a look at the related outputs of the NRF-funded project Grammar Engine for Nguni natural language interfaces (GeNi).
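A stripped-down sketch of that noun-class-driven choice, using only the three classes mentioned above (the real implementation covers all 17 noun classes and much more morphology; see [2,3]):

```python
# Quantifier concord and subject concord per noun class, for the three classes
# used in the example above; illustrative only.
QUANT_AND_SC = {
    2:  ("Bonke", "ba"),   # e.g. abantu 'people'
    6:  ("Onke",  "a"),    # e.g. amavazi 'vases'
    10: ("Zonke", "zi"),   # e.g. izingwe 'leopards'
}

def universal_quantifier(noun: str, noun_class: int) -> str:
    quant, _sc = QUANT_AND_SC[noun_class]
    return f"{quant} {noun} ..."   # the verb phrase, built with the SC, would follow

print(universal_quantifier("izingwe", 10))  # Zonke izingwe ...
```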

 

References

[1] Davies, K. Keet, C.M., Lawrynowicz, A. TDDonto2: A Test-Driven Development Plugin for arbitrary TBox and ABox axioms. Extended Semantic Web Conference (ESWC’17), Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017. (demo paper)

[2] Keet, C.M., Khumalo, L. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, 2017, 51:131-157.

[3] Keet, C.M., Khumalo, L. On the verbalization patterns of part-whole relations in isiZulu. 9th International Natural Language Generation conference (INLG’16), 5-8 September, 2016, Edinburgh, UK. Association for Computational Linguistics, 174-183.

[4] Keet, C.M. Xakaza, M., Khumalo, L. Verbalising OWL ontologies in isiZulu with Python. 14th Extended Semantic Web Conference (ESWC’17). Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017. (demo paper)

[5] Byamugisha, J., Keet, C.M., Khumalo, L. Pluralising Nouns in isiZulu and Related Languages. 17th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing’16), Springer LNCS. April 3-9, 2016, Konya, Turkey.

On heterogeneous mappings between ontologies

Representing information and knowledge often can be done in different ways even when the same representation language is used. In some cases, one way of representing it is always better than another—or: the other option is sub-optimal or plain wrong—but in other cases the distinction is not all that clear-cut. For instance, whether to represent ‘Employee’ as a subclass of ‘Person’ or as something that inheres in ‘Person’. Now, if two ontologies (or conceptual models) represent it differently but they have to be aligned, then how to find such different modelling patterns and how to align them? And, taking a step back: which alternate modelling patterns are there, and why those? We sought to answer these questions; the outcome will be presented at the 14th Extended Semantic Web Conference (ESWC’17), which will take place later this month in Portoroz, Slovenia, and appears in the proceedings [1].

Setting aside the formal stuff in this blog post, let’s first have a look at some of those different modelling patterns. At its core, there are 1) modelling practices in ontologies vs conceptual models and 2) foundational [or: top-level, or upper] ontology guidance vs being more compact in representing the knowledge. The generalisations of the following handwavy examples are described in more detail in the paper, but for this blog post, it hopefully will do as a teaser of the six formalised patterns. Take, e.g., the following examples that are all variations on the same theme: to-reify-or-not-to-reify, where the example in B is further dressed up with content from a foundational ontology:

Indeed, in the examples, what is shown on the left-hand side does not have the exact same information content as what is shown on the right-hand side, but the underlying conceptualization is pretty much the same. The models on the right-hand side are more precise, for one has the opportunity to specify additional constraints, like stating that a particular marriage is between two persons (so, no group marriages allowed). Whether one always needs such more precise constraints is a separate matter.
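As a rough, handwavy rendering of that marriage example in DL-style notation (my own sketch, not the exact patterns formalised in the paper): option A keeps a plain binary relation, with \exists marriedTo.\top \sqsubseteq Person and \exists marriedTo^-.\top \sqsubseteq Person, whereas option B reifies it into a class whose instances each involve exactly two persons, Marriage \sqsubseteq =2 involves.Person, which is where the extra precision comes from.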

Then there’s the Employee example mentioned in this post’s introduction with two alternate ways of representing it:

That is, a modeller chooses between representing the role an object performs/has as a subclass of that object or in a separate hierarchy of roles. Foundational ontologies take the latter option, domain ontologies the former.

These examples are instantiations of small modelling patterns (of which there may be more than the six formalised in the paper). To devise mappings between them, one ends up with alignments in such a way that they are between two patterns, rather than 1:1 mappings. To get there, we had to take some preliminary steps on how to represent it all formally, such as specifying the language for a pattern and defining an ontology pattern alignment. This allowed us to formalise the patterns and devise that formal specification of the heterogeneous alignments.

That outcome, in turn, feeds into the alignment pattern search and checking algorithms. The algorithms show that it is feasible to find those patterns automatically, which then can propose possible alignments to the modeller, and that, upon aligning, one can check whether that’s done correctly. For instance, take the following two ontologies graphically represented in an (extended, enhanced) ICOM tool:

Two inter-ontology assertions have been made, pointed out with the two yellow arrows; i.e., ‘Tennis’ is a subclass of ‘Tournament’ and ‘TennisPlayer’ is a subclass of ‘Athlete’. The pattern search algorithm then will try to find instantiations for the small modelling patterns for alignment. Once something is found—in this case, pattern A fits—it will check whether all conditions for the alignment can be satisfied, and if so, it will propose a possible alignment, which is shown in the following illustrative figure:

Of interest here is, perhaps, the ‘new’ object property being proposed, indicated with the yellow arrow, which amounts to an equivalence to the partOf+Match+played. (That threesome can’t be mapped as equivalent to ‘participated’ due to differences in domain and range axioms, and drawing three subsumption lines from ‘participated’ to ‘part of’, ‘Match’, and ‘played’ is awkward.) The algorithms’ output thus reduces the alignment into a final question to the modeller along the lines of “are you ok with the alignment between the purple elements in the two diagrams?”, which the modeller then can accept or reject. Please refer to the paper for further details.

The principles presented could possibly be used also for refactoring of an ontology, like in TDD [2] or when ‘preparing’ an ontology to align to a foundational ontology. More results on this topic are in the pipeline, and if you want to know now already, we can have a chat at ESWC.

References

[1] Fillottrani, P.R., Keet, C.M. Patterns for Heterogeneous TBox Mappings to Bridge Different Modelling Decisions. 14th Extended Semantic Web Conference (ESWC’17). Springer LNCS. Portoroz, Slovenia, May 28 – June 2, 2017. (in print)

[2] Keet, C.M., Lawrynowicz, A. Test-Driven Development of Ontologies. In: Proceedings of the 13th Extended Semantic Web Conference (ESWC’16). Springer LNCS 9678, 642-657. 29 May – 2 June, 2016, Crete, Greece.

On that “shared” conceptualization and other definitions of an ontology

It’s a topic that never failed to generate a discussion in all 10 instalments of the ontology engineering course I taught, from BSc(hons) level up to participants studying toward or already having a PhD: those pesky definitions of what an ontology is. To top it off, as if I didn’t know, I also got a snarky reviewer’s comment about it on my Stuff Ontology paper [1]:

A comment that might be superficial but I cannot help: since an ontology is usually (in Borst’s terms) assumed to be a ‘shared’ conceptualization, I find a little surprising for such a complex model to have been designed by a sole author. While I acknowledge the huge amount of literature carefully analyzed, it still seems that the concrete modeling decisions eventually relied on the background of a single ontologist

Is that bad? Does that make the Stuff Ontology a ‘nontology’? And, by the by, what about all those loner philosophers who write single-author papers on ontology; should that whole field be discarded because most of the ontology insights were “shared” only from paper submission and publication?

Anyway, let’s start from the beginning. There’s the much-criticized definition of an ontology from Gruber that, it seems, only novices seem to keep quoting (to my irritation, indeed):

An ontology is a specification of a conceptualization. [2]

If you wonder why quite a bit has been written about it: try to answer what “specification” really means and how it is specified, and what exactly a “conceptualization” is. The real fun starts with Borst et al.’s [3] and then Studer et al.’s [4] refinement of Gruber’s version, which the reviewer quoted above alluded to:

An ontology is a formal, explicit specification of a shared conceptualization. [4]

At least there’s the “formal” (be it in the sense of logic or formal ontology), and “explicit”, so something is being made explicit and precise. But “shared”? Shared with whom? How? Is a logical theory that not one, but two, people write down an ontology, then? Or one person develops an ontology and then emails it to a few colleagues or puts it online in, say, the open BioPortal ontology repository. Does that count as “shared” then? Or is it only “shared” if at least one other person agrees with it as is (all reviewers of the Stuff Ontology did, btw), or perhaps (most or all of) the ‘conceptualization’ of it but a few axioms would need a bit of tweaking and cleaning up? Do you need at least a group of people to develop an ontology, and if so, how large should that group be, and should that group consist of independent sub-groups that adopt the ontology (and if so, how many endorsers)? Is a lightweight low-hanging-fruit ontology that is used by a large company a real or successful ontology, but a highly axiomatised ontology with a high tangledness that is used by a specialist organization, not? And even if you canvass and get a large group and/or organization to buy into that formal explicit specification, what if they are all wrong about the reality it is supposed to represent? Does it still count as an ontology no matter how wrong the conceptualization is, just because it’s formal, explicit, and shared? Is a tailor-made module of, say, the DOLCE ontology not also an ontology, even if the module was made by one person and made available in an online repository like ROMULUS?

Perhaps one shouldn’t start top-down, but bottom-up: take some things and decide (who?) whether it is an ontology or not. Case one: the taxonomy of part-whole relations is a mini-ontology, and although at the start only ‘shared’ with my co-author and published in the Applied Ontology journal [5], it has been used by quite a few researchers for various (and unintended) purposes afterward, notably in NLP (e.g., [6]). An ontology? If so, since when? Case two: Noy et al. converted the representation of the NCI thesaurus into OWL DL [7]. Does changing the serialisation of a multi-authored thesaurus from one format into another make it an ontology? (more on that below.) Case three: a group of 5 people try to represent the subject domain of, say, breast cancer, but it is replete with mistakes both regarding the reality it ought to represent and unintended modelling errors (such as confusing is-a with part-of). Is it still an ontology, albeit a bad one?

It gets more muddled when the representation language is thrown in (as with case 2 above). What if the ontology turns out to be unsatisfiable? From a logic viewpoint, it’s not a theory then (for a theory is a consistent set of sentences), but if it’s formal, explicit, and shared, is it acceptable that those people who developed the artefact simply have an inconsistent conceptualization and that it still counts as an ontology?

Horrocks et al. [8] simplify the whole thing by eliminating the ‘shared’ aspect:

an ontology being equivalent to a Description Logic knowledge base. [8]

However, this generates a set of questions and problems of its own that are also problematic in practice. For instance: 1) whether transforming a UML Class Diagram into OWL ‘magically’ makes it an ontology (answer: no); 2) whether converting the NCI Thesaurus to OWL does (answer: no); or 3) whether, if you used, say, Common Logic to represent it, it then could not be an ontology because it’s not formalised in a Description Logic (answer: it sure can be one).

There are more attempts to give a definition or a description, notably by Nicola Guarino in [9] (a key paper in the field):

An ontology is a logical theory accounting for the intended meaning of a formal vocabulary, i.e. its ontological commitment to a particular conceptualization of the world. The intended models of a logical language using such a vocabulary are constrained by its ontological commitment. An ontology indirectly reflects this commitment (and the underlying conceptualization) by approximating these intended models. [9]

That’s a mouthful, but at least no “shared” in there, either. And, finally, among the many definitions in [10], here’s Barry Smith and cs.’s take on it:

An ONTOLOGY is a representational artifact, comprising a taxonomy as proper part, whose representational units are intended to designate some combination of universals, defined classes, and certain relations between them. [10]

And again, no “shared” either in this definition. Of course, also with Smith’s definition, there are things one can debate about and pose it against Guarino’s definition, like the “universals” vs. “conceptualization” etc., but that’s a story for another time.

So, to sum up: there is that problem of how to interpret “shared”, which is untenable, and one can just as well pick a definition of an ontology from a widely cited paper that doesn’t include that in the definition.

That said, all this doesn’t help my students to grapple with the notion of ‘an ontology’. Examples help, and it would be good if someone or, say, the International Association for Ontology and its Applications (IAOA), would have a list of “exemplar ontologies” sooner rather than later. (Yes, I have a list, but it still needs to be annotated better.) Another aspect that helps explaining it comes from Guarino’s slides on going “from logical to ontological level” and on good and bad ontologies. The first screenshot (taken from my slides—easier to find) shows there’s “something more” to an ontology than just the logic, with a hint as to the reasons why (note to my students: more about that later in the course). The second screenshot shows that, yes, we can have the good, the bad, and the ugly: the yellow oval denotes the intended models (what it should be), and the other ovals denote the various approximations that one may have tried to represent in an ontology. For instance, representing ‘each human has exactly one brain’ is more precise (“good”) than stating ‘each human has at least one brain’ (“less good”) or not saying anything at all about it in an ontology of human anatomy (“bad”), and it would be even “worse” if that ontology were to state ‘each human has exactly two tails’.
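In DL notation, that difference in precision amounts to, say, Human \sqsubseteq =1 hasPart.Brain for the ‘exactly one’ case versus Human \sqsubseteq \exists hasPart.Brain for the ‘at least one’ case (the vocabulary is just for illustration), where the former carves out a smaller, closer approximation of the intended models than the latter.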


Maybe we can’t do better than ‘intuition’ or ‘very wieldy explanation’. If this were a local installation of WordPress, I’d have added a poll on definitions and the subjectivity on the shared-ness factor (though knowing well that science isn’t governed as a democracy). In lieu of that: comments, preferences for one definition or the other, or any better suggestions for definitions are most welcome! (The next instalment of my Ontology Engineering course will start in a few weeks’ time.)

 

References

[1] Keet, C.M. A core ontology of macroscopic stuff. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW’14). K. Janowicz et al. (Eds.). 24-28 Nov, 2014, Linkoping, Sweden. Springer LNAI vol. 8876, 209-224.

[2] Gruber, T. R. A translation approach to portable ontology specifications. Knowledge Acquisition, 1993, 5(2):199-220.

[3] Borst, W.N., Akkermans, J.M. Engineering Ontologies. International Journal of Human-Computer Studies, 1997, 46(2-3):365-406.

[4] Studer, R., Benjamins, R., and Fensel, D. Knowledge engineering: Principles and methods. Data & Knowledge Engineering, 1998, 25(1-2):161-198.

[5] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.

[6] Tandon, N., Hariman, C., Urbani, J., Rohrbach, A., Rohrbach, M., Weikum, G.: Commonsense in parts: Mining part-whole relations from the web and image tags. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI’16). pp. 243-250. AAAI Press (2016)

[7] Noy, N.F., de Coronado, S., Solbrig, H., Fragoso, G., Hartel, F.W., Musen, M. Representing the NCI Thesaurus in OWL DL: Modeling tools help modeling languages. Applied Ontology, 2008, 3(3):173-190.

[8] Horrocks, I., Patel-Schneider, P. F., and van Harmelen, F. From SHIQ and RDF to OWL: The making of a web ontology language. Journal of Web Semantics, 2003, 1(1):7-26.

[9] Guarino, N. (1998). Formal ontology and information systems. In Guarino, N., editor, Proceedings of Formal Ontology in Information Systems (FOIS’98), Frontiers in Artificial intelligence and Applications, pages 3-15. Amsterdam: IOS Press.

[10] Smith, B., Kusnierczyk, W., Schober, D., Ceusters, W. Towards a Reference Terminology for Ontology Research and Development in the Biomedical Domain. KR-MED 2006 “Biomedical Ontology in Action”. November 8, 2006, Baltimore, Maryland, USA.