FOIS’18 conference report

Perhaps surprisingly to some, despite being the local organizer, I could attend all sessions of the 10th International Conference on Formal Ontology in Information Systems as a participant (rather than running around for last-minute things). It just wasn’t as much of a trip as it usually is: only 15 minutes into town, to the Atlantic Imbizo conference venue, which is situated between the Clock Tower and the (award-winning) Zeitz MOCAA at Cape Town’s V&A Waterfront. This blog post has turned out longer than intended—yet there’s still so much left out to talk about—and it is divided up into sections on keynotes, presentations, ontologies, and the (ontologically inappropriate basket of) other things.

 

Keynotes

The first keynote was presented by (emeritus) professor in philosophy Peter Simons from Trinity College Dublin and Universität Salzburg, on the ontology of aboutness (slides).

Peter Simons during his keynote talk

That may sound a bit abstract, but it is not unusual for an information system to have to record statements about something, such as different medical opinions, changes of policies, plans or expectations, and we need a way to represent that and deal with it. Simons discussed several earlier proposals before presenting his own, which includes as main entities a bearer, act, time, act-type, mental content, mental content type, intentional objects, referent, and referent type (slide 16), and then variants for pictorial and linguistic (speech and writing) aboutness. And, in closing, his advice: “Don’t get involved in irrelevant philosophical disputes”.

The second keynote was presented by Alessandro Oltramari, who works at the Bosch Research and Technology Centre in Pittsburgh, USA. He presented several of Bosch’s projects that he was involved in and where ontologies are used in one way or another (slides). One of them was about knowledge-based intelligent IoT and another about an emergency assistant, or, in business sales parlance, a “personal guardian angel”: a mobile device that has location awareness, safety information about those locations, a decision support system for alternate route computation, and automatic escalation. The ontologies used include the foundational ontology DOLCE, the domain ontology of semantic sensor networks (SSN) from the W3C, and specific schemas developed in-house. Another project, on a knowledge-based chatbot for healthcare policies, links up DOLCE, schema.org, and some in-house schemas with Highmark-specific information (and is not ashamed of using SKOS). To my question as to what methods and methodologies were used for the in-house ontology development, the answer was, disappointingly, only “DOLCE and OntoClean”, but the former is neither a method nor a methodology (it implies a top-down approach), and the latter is some 15 years old, as if nothing has happened in ontology engineering in the meantime (more about that further below). Regardless, it was good to see that ontologies are being used in industry.

The third keynote (slides) was by Riichiro Mizoguchi from the Japan Advanced Institute of Science and Technology (JAIST), on a state-centric methodology, which I’ll leave for a separate post.

Riichiro Mizoguchi during his keynote talk.

 

Presentations

The report on the presentations easily could take up several pages, but I’ll try to keep it short, lest this post never gets posted. The first session of the conference was on foundations. This included Antony Galton’s assessment of the treatment of time in upper ontologies [1]. It was mildly entertaining in that it turned out that BFO would need abstract things for its treatment of time (which it doesn’t have and doesn’t like) and adheres to Newtonian physics rather than the latest scientific theories. It is definitely on my list of papers to read in more detail. Another paper-for-printing to read is Torsten Hahmann’s work on mereotopology, which extends it to multidimensional space [2]. A nice bonus (though it ought not to be perceived as such) is that at least the theorems in the paper have been proved with Prover9 and Vampire, rather than having to double-check them manually. Laure Vieu presented a proposal for a graph-based approach to represent structure among the components of an entity [3], which is apparently different from the graph-based approach for representing molecules (within the Semantic Web context); I’ll have to look at that in more detail, for it sounds like it might be of some use for the ‘part’ side of part-whole relations.

Besides such theoretical contributions that are rather distant from applications, there were two papers of note that were motivated more clearly from praxis. One was about the ontological foundations of competition and the sorts of competitive relations there are [4], which was presented by Tiago Prince Sales. The other, on identity criteria for localities [5], was presented by Pawel Garbacz, whose presentation conveyed the problem more vividly than the paper does, with complicating use cases extracted from a Polish history project. He presented some examples of changes and a proposal for how to identify a locality/settlement. For instance, settlements can get moved altogether, have a population-only move, split into two, be merged, be renamed and renamed again, be deserted by a population and then repopulated and renamed, and so on. When is it the same settlement and when is it another one? The paper [5] describes a first solution for identity criteria, with an event-based approach to the identity of localities.

My presentation on part-whole relations in Zulu language and culture [6] was scheduled in the ‘applications’ session; it received positive feedback and some pointers that may assist with future work.

 

venue during a Q&A session

Ontologies

Besides presentations, there was a discussion session on “what constitutes a good ontology paper?” for the Applied Ontology journal. Seeing the ontology papers at FOIS now, they should have done such a session for FOIS as well. There are four papers in the proceedings describing OWL files: “Amnestic forgery” (AF, conceptual metaphors) [7] presented by Mehwish Alam, UNiCS for research and innovation policy [8] presented by Fernando Roda, SAREF4Health [9] presented by João Moreira, and religious and spiritual belief (ORSB) [10] presented by Stefan Schulz. Skimming through each paper: AF, UNiCS and ORSB do not use a methodology explicitly, none of them uses existing methods, but they all do use a foundational or top-level ontology or the WordNet material, and then it’s cool enough to get into FOIS, apparently. This is a bit disappointing. At least SAREF4Health presented a set of competency questions, a systematic approach and broader framework, and some evaluation, and ORSB reuses not only top-level and top-domain ontologies but also tests some patterns. AF and ORSB have some interest to them, as they address relatively novel modelling issues, and the ORSB discussion could be used more broadly for any “terms of dubious reference”. UNiCS is not really an ontology but an information model or, at best, a conceptual data model (e.g., calling “SCOPUS subject” an ontology is pushing it a bit too far); it makes their OBDA scenario easier to realize, true, but that’s a separate discussion. Fig. 1 of SAREF4Health doesn’t look any better: it has all the hallmarks of a plain UML class diagram (attributes with data types and such), with object diagram components attached, coloured in, and annotated with OntoUML. SAREF4Health’s other downsides are things like “implementing the ontology as RDF”, which just hurts to read (it is left implicit for AF, which is plugged into the LOD cloud), as is the download in Turtle format rather than the required exchange syntax of OWL 2. That download isn’t even available at the provided link when you click on it (copy-paste gets you in the right direction), but is [I think] in some github sub-directory that has a whole bunch of ttl files with neither head nor tail, one of which is called saref4health.ttl. On first inspection, it has plenty of data properties and data type use, the class-as-instance issue here and there (e.g., ‘Rechargeable Lithium Polymer battery’ as an instance rather than a class), other issues (e.g., a ‘series’ of measurements is not a subclass of a measurement), and very many classes directly subsumed by top, though some of those are knock-on effects from imports.
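To illustrate the class-as-instance issue with a toy example (a minimal sketch with hypothetical names and IRI, not taken from the actual SAREF4Health files; it assumes the owlready2 Python library):

from owlready2 import *

onto = get_ontology("http://example.org/demo.owl")  # hypothetical IRI
with onto:
    class Battery(Thing):
        pass

    # Class-as-instance (the problem): the battery *type* is modelled as one
    # individual, so concrete batteries of that type cannot be its instances.
    lipo_as_individual = Battery("rechargeable_LiPo_battery_type")

    # Type-as-class (the intended modelling): the type is a subclass of
    # Battery, and concrete batteries are its instances.
    class RechargeableLiPoBattery(Battery):
        pass
    battery_042 = RechargeableLiPoBattery("battery_042")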

And then ontologists at FOIS deplored that there are many domain ontologies of poor quality and artifacts that are presented as ontologies but aren’t. The FOIS reviewers themselves apparently can’t even get their act together on this, where artifacts that are sold as domain ontologies but aren’t (UNiCS, SAREF4Health) not only make it through the reviewing process but, moreover, even get a best paper award from the PC chairs (SAREF4Health). The PC chairs wanted to make a political statement to communicate that FOIS accepts domain ontology papers. It is good that the FOIS topics are becoming less narrow, and I’m not saying they are pointless papers or lousy artifacts per se—they are useful reference papers, and UNiCS and SAREF4Health perform the application tasks they’re supposed to be performing, which is a good thing. Maybe, collectively, ontology developers can’t do better or don’t need to do better w.r.t. applied ontology? Either way, once upon a time there were principles for what ontologies are; what happened to that? Also, there are multiple methodologies for domain ontology development, and there is a myriad of methods and tools, which have been mostly ignored. For instance, using one foundational ontology over another ‘just because I know x’ is neither a scientific nor a sound engineering approach. There are comparisons, requirements, and a mix of the two to help you figure out which one is the best to use; an early tool for that is ONSET, the ONtology Selection and Explanation Tool, developed by Zubeida Khan (more data), to name one example.

Coincidentally, ontology engineering papers with such content do not, or only very rarely, make it into FOIS; but the fact that they don’t (because they’re typically not philosophical enough) doesn’t mean they don’t exist. Just in case a FOIS ontologist would like to explore methods, methodologies and tools for ontology development: ESWC, EKAW, and K-CAP are good/top conferences covering such topics in whole or in part, and Chapter 5 of the ontology engineering textbook provides a sampling as well (as do some other sections in Block II). Considering my critical comments, one may ask whether my ontologies and ontology papers are any better, or anyone else’s for that matter. Perhaps, perhaps not. You can check for yourself some of my recent papers on domain ontologies, which also have OWL files[1], that I was involved in developing: one paper was intended as a reference paper for the domain ontology [11], another was a bit of both a domain ontology and some framework [12], and yet another turned into a core ontology [13] (v1, with the main categories; there’s an updated version for the relations).

Anyway, returning to the first sentence of this section: the open forum discussion did not make it any clearer what the characteristics of a good ontology paper for the Applied Ontology journal (or FOIS, for that matter) would be. A paper with mainly just Protégé screenshots certainly is not one, but opinions varied as to what would be. Going by the examples of the ontology papers that made it through: use of a top-level or foundational ontology plus some modelling issues and solutions seems to be preferred, with evaluation and usage & uptake as nice-to-haves. Is developing a (domain) ontology science? That question wasn’t answered unanimously; I think it was leaning towards a ‘mostly no’ w.r.t. applied ontology, but it may be if it’s the first to solve a modelling issue. How to evaluate the ontology? Another question without a satisfactory answer. Overall, the criteria for an ontology paper—let alone for the ontology itself—are “TBD”, and meanwhile one has to hope that one will get a supportive ‘reviewer 2’.

 

Other

In case you have clicked through to one or more of the listed papers, you may have noticed that the FOIS’18 proceedings are Open Access—paid for by those who registered for the conference (it was included in the registration fee). I suppose the next FOIS organisers and the IAOA exec might like your opinion on that approach.

mentors of the early career symposium papers

Besides the best paper award for SAREF4Health [9], there were two “distinguished paper awards”, which went to the aforementioned paper on the graph-based approach to structural universals by Laure Vieu and Claudio Masolo [3] and to the one on foundational ontologies for units of measure by Michael Grüninger and co-authors [14]. The early career symposium went well and, from hearsay, they had a good social activity, too. There were lots of interesting conversations, networking, good food, and so on, and lots more to write about. There are also more photos.

Some of the postgraduate students and a recent PhD graduate in the spotlight at the closing ceremony, being thanked for chairing the sessions.

Last, but not least: the next FOIS in 2020 will be in Bolzano, Italy, as part of a ‘Bolzano summer of knowledge’ with more co-located conferences, workshops, and summer schools.

 

References

[1] Antony Galton. The treatment of time in upper ontologies. Proc. of FOIS’18. IOS Press, 306: 33-46.

[2] Torsten Hahmann. On Decomposition Operations in a Theory of Multidimensional Qualitative Space. Proc. of FOIS’18. IOS Press, 306: 173-186.

[3] Claudio Masolo, Laure Vieu. Graph-Based Approaches to Structural Universals and Complex States of Affairs. Proc. of FOIS’18. IOS Press, 306: 69-82.

[4] Tiago Prince Sales, Daniele Porello, Nicola Guarino, Giancarlo Guizzardi, John Mylopoulos. Ontological Foundations of Competition. Proc. of FOIS’18. IOS Press, 306: 96-112.

[5] Pawel Garbacz, Agnieszka Ławrynowicz, Bogumił Szady. Identity criteria for localities. Proc. of FOIS’18. IOS Press, 306: 47-56.

[6] C. Maria Keet, Langa Khumalo. On the Ontology of Part-Whole Relations in Zulu Language and Culture. Proc. of FOIS’18. IOS Press, 306: 225-238.

[7] Aldo Gangemi, Mehwish Alam, Valentina Presutti. Amnestic Forgery: An Ontology of Conceptual Metaphors. Proc. of FOIS’18. IOS Press, 306: 159-172.

[8] Alessandro Mosca, Fernando Roda, Guillem Rull. UNiCS – The Ontology for Research and Innovation Policy Making. Proc. of FOIS’18. IOS Press, 306: 200-210.

[9] João Moreira, Luís Ferreira Pires, Marten van Sinderen, Laura Daniele. SAREF4health: IoT Standard-Based Ontology-Driven Healthcare Systems. Proc. of FOIS’18. IOS Press, 306: 239-252.

[10] Stefan Schulz, Ludger Jansen. Towards an Ontology of Religious and Spiritual Belief. Proc. of FOIS’18. IOS Press, 306: 253-260.

[11] Keet, C.M., Lawrynowicz, A., d’Amato, C., Kalousis, A., Nguyen, P., Palma, R., Stevens, R., Hilario, M. The Data Mining OPtimization ontology. Web Semantics: Science, Services and Agents on the World Wide Web, 2015, 32:43-53.

[12] Chavula, C., Keet, C.M. An Orchestration Framework for Linguistic Task Ontologies. 9th Metadata and Semantics Research Conference (MTSR’15), Garoufallou, E. et al. (Eds.). Springer CCIS vol. 544, 3-14.

[13] Keet, C.M. A core ontology of macroscopic stuff. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW’14). K. Janowicz et al. (Eds.). 24-28 Nov, 2014, Linköping, Sweden. Springer LNAI vol. 8876, 209-224.

[14] Michael Grüninger, Bahar Aameri, Carmen Chui, Torsten Hahmann, Yi Ru. Foundational Ontologies for Units of Measure. Proc. of FOIS’18. IOS Press, 306: 211-224.

[1] I have others that were developed as part of methods & tools research.


ISAO 2018, Cape Town, ‘trip’ report

The Fourth Interdisciplinary School on Applied Ontology has just come to an end: five days of lectures, mini-projects, a poster session, exercises, and social activities, spread over six days from 10 to 15 September in Cape Town on the UCT campus. It’s not exactly fair to call this a ‘trip report’, as I was the local organizer and one of the lecturers, but it’s a brief recap ‘trip report kind of blog post’ nonetheless.

The scientific programme consisted of lectures and tutorials on:

The linked slides (the titles of the lectures, above) reveal only part of the contents covered, though. There were useful group exercises and plenary discussions on the ontological analysis of medical terms, such as what a headache is, a tooth extraction, blood, or aspirin; an exercise on putting into practice the design process of a conceptual modelling language of one’s liking (e.g.: how to formalize flowcharts, including an ontological analysis of what those elements are and the ontological commitments embedded in a language); and trying to prove some theorems of parthood theories.

There was also a session with 2-minute ‘blitztalks’ by participants interested in briefly describing their ongoing research, which was followed by an interactive poster session.

It was the first time that an ISAO had mini-projects, which turned out to have had better outcomes than I expected, considering the limited time available for it. Each group had to pick a term and investigate what it meant in the various disciplines (task description); e.g.: what does ‘concept’ or ‘category’ mean in psychology, ontology, data science, and linguistics, and ‘function’ in manufacturing, society, medicine, and anatomy? The presentations at the end of the week by each group were interesting and most of the material presented there easily could be added to the IAOA Education wiki’s term list (an activity in progress).

What was not a first-time activity was the Ontology Pub Quiz, which is a bit of a merger of scientific programme and social activity. We created a new version based on questions from several ISAO’18 lecturers and a few relevant questions created earlier (questions and answers; we used only questions 1-3 and 6-7). We tried a new format compared to the ISAO’16 quiz and JOWO’17 quiz: each team had 5 minutes to answer a set of 5 questions, and another team marked the answers. This set-up was not as hectic as the other format, and resulted in more within-team interaction rather than interaction among all participants. As in prior editions, some questions and answers were debatable (and there’s still the plan to make note of that and fix it—or you could write an article about it, perhaps :)). The students of the winning team received 2 years of free IAOA membership (and chocolate for all team members) and the students of the other two teams received one year of free IAOA membership.

Impression of part of the poster session area, moving into the welcome reception

As with the three previous ISAO editions, there was also a social programme, which aimed to facilitate getting to know one another, networking, and having time for scientific conversations. On the first day, the poster session eased into a welcome reception (after a brief wine lapse in the coffee break before the blitztalks). The second day had an activity to stretch the legs after the lectures and before the mini-project work: a Bachata dance lesson by Angus Prince from Evolution Dance. Not everyone was eager at the start, but it turned out to be an enjoyable and entertaining hour. Wednesday was supposed to be a hike up the iconic Table Mountain, but of all the dry days we’ve had here in Cape Town, on that day it was cloudy and rainy, so an alternative plan of indoor chocolate tasting at the Biscuit Mill was devised and executed. Thursday evening was an evening off (from scheduled activities, at least), and on Friday early evening we had the pub quiz in the UCT club (the campus pub). Although there was no official planning for Saturday afternoon after the morning lectures, there was again an attempt at Table Mountain, concluding the week.

The participants came from all over the world, including relatively many from Southern Africa, with participants coming not only from several universities in South Africa (UCT, SUN, CUT) but also from Botswana and Mauritius. I hope everyone has learned something from the programme that is or will be of use, enjoyed the social programme, and made some useful new contacts and/or solidified existing ones. I look forward to seeing you all at the next ISAO or, better, at FOIS, in 2020 in Bolzano, Italy.

Finally, as a non-trip-report comment from my local chairing viewpoint: special thanks go to the volunteers Zubeida Khan for the ISAO website, Zola Mahlaza and Michael Harrison for on-site assistance, and Sam Chetty for the IT admin.

Review of ‘The web was done by amateurs’ by Marco Aiello

Via one of those friend-of-a-friend likes on social media that popped up in my stream, I stumbled upon the recently published book “The web was done by amateurs” (there’s also a related talk) by Marco Aiello, which piqued my interest concerning both the title and the author. I met Aiello once in Trento, when he and a colleague had a farewell party, with Aiello leaving for Groningen. He probably doesn’t remember me, nor do I remember much of him—other than his lamentations about Italian academia and his going for greener pastures. It turns out he has done very well for himself academically, and the foray into writing for the general public has been, in my opinion, a fairly successful attempt with this book.

The short book—it easily can be read in a weekend—starts in the first part with historical notes on who did what for the Internet (the infrastructure) and on the multiple predecessor proposals and applications of hyperlinking across documents that Tim Berners-Lee (TBL) apparently was blissfully unaware of. It’s surely a more interesting and useful read than the first Google hit, the few factoids from the W3C, or whatever Wikipedia page one can find online with a simple search—or: it still pays off to read books in this day and age :). The second part is, for most readers perhaps, also still history: the ‘birth’ of the Web and the browser wars in the mid 1990s.

Part III is, in my opinion, the most fun to read: it discusses various extensions to the original design of TBL’s Web that fix, or at least aim to fix, shortcomings of the Web’s basics, i.e., they’re presented as “patches” to patch up a too basic—or: rank-amateur—design of the original Web. They are, among others, persistence with cookies to mimic statefulness for Web-based transactions (for, e.g., buying things on the web), attempts to get some executable instructions onto web pages with Java (and ActiveX and Flash), and web services (from CORBA and service-oriented computing to REST and the cloud and such). Interestingly, they all originate in the 1990s, in the time of the browser wars.

There are more names in the distant and recent history of the Web than those I knew of, so even I picked up a few things here and there. IIRC, they’re all men, though. Surely there would be at least one woman worthy of mention? I probably ought to know, but didn’t, so I searched the Web and easily stumbled upon the Internet Hall of Fame. That list includes Susan Estrada among the pioneers, who founded CERFnet that “grew the network from 25 sites to hundreds of sites”, and, after that, Anriette Esterhuysen and Nancy Hafkin for the network in Africa, Qiheng Hu for doing this for China, and Ida Holz for the same in Latin America (in ‘global connections’). Web innovators specifically include Anne-Marie Eklund Löwinder for DNS security extensions (DNSSEC, noted on p143 but not by its inventor’s name) and Elizabeth Feinler for the “first query-based network host name and address (WHOIS) server”; moreover, “she and her group developed the top-level domain-naming scheme of .com, .edu, .gov, .mil, .org, and .net, which are still in use today”.

One patch to the Web that I really missed in the overview of the early patches is “Web 2.0”. I know that, technologically, it is a trivial extension of TBL’s original proposal: the move from static web pages in 1:n communication from content provider to many passive readers, to m:n communication with comment sections (fancy forms). Or: instead of the surfer being just a recipient of information, reading one webpage after another and thinking her own thing of it, she could respond and interact, i.e., the chatrooms, the article and blog comment features, and, in the 2000s, the likes of MySpace and Facebook. It got so many more people involved in it all.

Continuing with the book’s content: cloud computing and the fog (section 7.9) are from this millennium, as is what Aiello dubbed the “Mother of All Patches”: the Semantic Web. Regarding the latter, early on in the book (pp. vii-viii) there is already an off-hand comment that does not bode well: “Chap. 8 on the Semantic Web is slightly more technical than the rest and can be safely skipped.” (emphasis added). The way Chapter 8 is written, perhaps. Before discussing his main claim there, a few minor quibbles: it’s the Web Ontology Language OWL, not “Ontology Web Language” (p105), and there’s OWL 2 as the successor of the OWL of 2004. “RDF is a nifty combination of being a simple modeling language while also functioning as an expressive ontological language” (p104): no, RDF is for representing data, not really for modeling, and it most certainly would not be considered an ontology language (one can serialize an ontology in RDF/XML, but that’s different). The class satisfiability example: no, that’s not what it does, or at least the simplification does not faithfully capture it; an example with a MammalFish that cannot have any instances (as a subclass of both Mammal and Fish, which are disjoint) would have been faithful (regardless of the real world).
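For the unsatisfiability idea, a minimal sketch with the owlready2 Python library (a hypothetical toy ontology; it assumes Java is available for the bundled HermiT reasoner):

from owlready2 import *

onto = get_ontology("http://example.org/zoo.owl")  # hypothetical IRI
with onto:
    class Mammal(Thing):
        pass
    class Fish(Thing):
        pass
    AllDisjoint([Mammal, Fish])        # nothing can be both a mammal and a fish
    class MammalFish(Mammal, Fish):    # subclass of two disjoint classes
        pass

# A DL reasoner infers that MammalFish can never have any instances,
# i.e., it is unsatisfiable (equivalent to owl:Nothing).
sync_reasoner()
print(list(default_world.inconsistent_classes()))  # expect: [zoo.MammalFish]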

The main claim of Aiello regarding the Semantic Web, however, is that it’s time to throw in the towel, because there hasn’t been widespread uptake of Semantic Web technologies on the Web even though it was proposed already around the turn of the millennium. I lean towards that as well and have reduced the time spent on it in my ontology engineering course over the years, but I don’t want to throw out the baby with the bathwater just yet, for two reasons. First, scientific results tend to take a long time to trickle down. Second, I am not convinced that the ‘semantic’ part of the Web is the same level of end-user stuff as playing with HTML is. I still have an HTML book from 1997. It has instructions to “design your first page in 10 minutes!”. I cannot recall if it was indeed <10 minutes, but it sure was fast back in 1998-1999 when I made my first pages, as a layperson with no particular IT interest. I’m not sure whether the whole semantics thing can be done even on the proverbial rainy Sunday afternoon, but the dumbed-down version with schema.org sort of works. This schema.org brings me to p110 of Aiello’s book, which states that Google can make do with just statistics for optimal search results because of its sheer volume (so bye-bye Semantic Web). But it is not just stats-based: even Google is trying with schema.org and its “knowledge graph”; admittedly, it’s extremely lightweight, but it’s more than stats-only. Perhaps schema.org and the knowledge graph sort of thing are to the Semantic Web what TBL’s proposal for the Web was to, say, the fancier HyperCard.

I don’t know if people within the Semantic Web research community would think of its tooling as technologies for the general public. I suspect not. I consider the development and use of ontologies in ontology-driven information systems as part of the ‘back office’ technologies, notwithstanding my occasional attempts to explain to friends and family what sort of things I’m working on.

What I did find curious is that one of Aiello’s arguments for the Semantic Web’s failure was that “Using ontologies and defining what the meaning of a page is can be much more easily exploited by malicious users” (p110). It can be exploited, for sure, but statistics can go bad, very bad, too, especially regarding associations of search terms, the creepy amount of data collection on the Web, and the bias built into the machine learning algorithms. Search engine optimization is just the polite term for messing with ‘honest’ stats and algorithms. With the Semantic Web, it would be a conscious decision to mess around, and that’s easily traceable, whereas with all the stats-based approaches, it sneakily can creep in whilst keeping up the veneer of impartiality, which is harder to detect. If it were a choice between two technology evils, I prefer the honest bastard over being stabbed in the back. (That the users of the current Web are opting for the latter does not make it the lesser of two evils.)

As to two possible new patches (not in the book, and one can debate whether they are patches), time will tell whether the few recent calls for “decentralizing” the Web will take hold, or whether more fine-grained privacy that also entails more fine-grained recording of events will (e.g., TBL’s Solid project). The app-ification discussion (Section 10.1) was an interesting one—I hardly use mobile apps and so am not really into it—and the lock-in it entails is indeed a cause for concern for the Web and all it offers. Another section in Chapter 10 is on the IoT, which has sounded promising and potentially scary (what would the data-hungry ML algorithms of the Web infer from my fridge contents, and, from that, about me??) for the past 10 years or so. Lastly, the final chapter has the tempting-to-read title “Should a new Web be designed?”, but the answer is not a clear yes or no. Evolve, it will.

Would I have read the book if I weren’t on sabbatical now? Probably still, on an otherwise ‘lost time’ intercontinental trip to a conference. So, overall, besides the occasional gap and the odd thing one could quibble about here and there, the book is a nice read on the whole for any layperson interested in learning something about the ubiquitous Web, for any expert who’s using only a little corner of it, and certainly for the younger generation, to get a feel for how the current Web came about and how technologies get shaped in praxis.

From ontology verbalisation to language learning exercises

I’m aware that to most people ‘playing with’ (investigating) ontologies and isiZulu does not sound particularly useful on the face of it. Yet, there’s some long-term future music, like eventually being able to generate patient discharge notes in one’s own language, which will do its bit to ameliorate the language barrier in healthcare in South Africa, so that patients at least will adhere to the treatment instructions a little better and therewith receive better quality healthcare. But there may be benefits in the short term as well. To that end, I proposed an honours project last year, which has been completed in the meantime, and one of the two interesting outcomes has made it into a publication already [1]. As you may have guessed from the title, it’s about automation for language learning exercises. The results will be presented at the 6th Workshop on Controlled Natural Language, in Maynooth, Ireland, in about two weeks’ time (27-28 August). In the remainder of this post, I highlight the main contributions described in the paper.

First, regarding the post’s title, one might wonder what ontology verbalisation has to do with language learning. Nothing, really, except that we could reuse the algorithms from the controlled natural language (CNL) for ontology verbalisation to generate (computer-assisted) language learning exercises whose answers can be computed and marked automatically. That is, the original design of the CNL for things like pluralising nouns, verb conjugation, and negation, which is used for verbalising ontologies in isiZulu in theory [2] and in practice [3], was such that the sentence generator is a detachable module that can be plugged in elsewhere for another task that needs such operations.

Practically, the student who designed and developed the back-end, Nikhil Gilbert, preferred Java over Python, so he converted most parts into Java and added a bit more, notably the ‘singulariser’, a sentence scrabble, and a sentence generator. The sentence generator is used as part of the exercises & answers generator. For instance, we know that humans and the roles they play (father, aunt, doctor, etc.) are mostly in isiZulu’s noun classes 1, 2, 1a, 2a, or 3a, that those classes do not (or rarely?) contain non-human nouns, and that it generally holds for all humans and their roles that they can ‘eat’, ‘talk’, etc. This makes it relatively easy to create a noun chain and a verb chain list to mix and match nouns with verbs accordingly (hurrah! for the semantics-based noun class system). Then, with the 231 nouns and 59 verbs in the newly constructed mini-corpus, the noun chain, and the verb chain, 39501 unique question sentences could be generated, using the following overall architecture of the system:

Architecture of the CNL-driven CALL system. The arrows indicate which upper layer components make use of the lower layer components. (Source: [1])
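To give an idea of the mixing-and-matching just described, here is a minimal sketch in Python, with a tiny hypothetical noun chain and verb chain (the real system uses the full mini-corpus and the Java implementation):

from itertools import product

# Hypothetical mini noun chain (human nouns, classes 1/1a) and verb chain
# (verbs that any human can perform), in surface form with subject concord.
noun_chain = ["umfowethu", "udokotela", "ubaba"]
verb_chain = ["uyadla", "uyakhuluma", "uyahamba"]

# Every noun can be paired with every verb, since the nouns all denote
# humans and the verbs all apply to humans.
questions = [f"{noun} {verb}" for noun, verb in product(noun_chain, verb_chain)]

for q in questions:
    print(q)   # e.g., "umfowethu uyadla", "udokotela uyakhuluma", ...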

From a CNL perspective as well as the language learning perspective, the actual templates for the exercises may be of interest. For instance, when a learner is learning about pluralising nouns and their associated verb, the system uses the following two templates for the questions and answers:

Q: <prefixSG+stem> <SGSC+VerbRoot+FV>
A: <prefixPL+stem> <PLSC+VerbRoot+FV>
Q: <prefixSG+stem> <SGSC+VerbRoot+FV> <prefixSG+stem>
A: <prefixPL+stem> <PLSC+VerbRoot+FV> <prefixPL+stem>

The answers can be generated automatically with the algorithms that generate the plural noun (from ‘prefixSG’ to ‘prefixPL’) and add the plural subject concord (from ‘SGSC’ to ‘PLSC’, in agreement with ‘prefixPL’), which were developed as part of the GeNI project on ontology verbalization. This can then be checked against what the learner has typed. For instance, a generated question could be umfowethu usula inkomishi and the correct answer generated (to check the learner’s response against) is abafowethu basula izinkomishi. Another example is generation of the negation from the positive, or, vv.; e.g.:

Q: <PLSC+VerbRoot+FV>
A: <PLNEGSC+VerbRoot+NEGFV>

For instance, the question may present batotoba and the correct answer is then abatotobi. In total, there are six different types of sentences, two of which have a double variant (like the plural above), hence a total of 16 templates. It is not a lot, but it turned out to be one of the very few attempts to use a CNL in such a way: there is one paper that will also be presented at CNL’18 in the same session [4], and an earlier one [5] uses a fancy grammar system (which we don’t have yet computationally for isiZulu). This is not to be misunderstood as being one of the first CNL/NLG-based systems for computer-assisted language learning—e.g., there’s assistance in essay writing, grammar concept question generation, and reading comprehension question generation—but there is curiously very little on CNLs or NLG for the standard entry-level type of questions for learning the grammar. Perhaps the latter is considered ‘boring’ for English by now, given all the resources. However, thousands of students take introductory courses in isiZulu each year, and some automation can alleviate the pressure of routine activities on the lecturers. We have done some evaluations with learners—with encouraging results—and plan to do some more, so that it may eventually transition to actual use in the courses; that is: TBC…
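As an illustration of the template idea, a minimal sketch in Python, with the morphology hard-coded for one noun class pair (1/2) only and with hypothetical example words; the actual system uses the full pluralisation and subject concord algorithms:

def pluralise_noun(sg_noun):
    # hypothetical rule for class 1 -> class 2 only: 'um(u)-' -> 'aba-'
    return "aba" + sg_noun[2:] if sg_noun.startswith("um") else sg_noun

def pluralise_verb(sg_verb):
    # hypothetical rule: singular subject concord 'u-' -> plural 'ba-'
    return "ba" + sg_verb[1:] if sg_verb.startswith("u") else sg_verb

def generate_qa(sg_noun, sg_verb):
    # Q: <prefixSG+stem> <SGSC+VerbRoot+FV> -> A: <prefixPL+stem> <PLSC+VerbRoot+FV>
    question = f"{sg_noun} {sg_verb}"
    answer = f"{pluralise_noun(sg_noun)} {pluralise_verb(sg_verb)}"
    return question, answer

def mark(learner_answer, correct_answer):
    # simple exact-match marking of the learner's typed answer
    return learner_answer.strip() == correct_answer

q, a = generate_qa("umfowethu", "uyahamba")
print(q, "->", a)                        # umfowethu uyahamba -> abafowethu bayahamba
print(mark("abafowethu bayahamba", a))   # True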

 

References

[1] Gilbert, N., Keet, C.M. Automating question generation and marking of language learning exercises for isiZulu. 6th International Workshop on Controlled Natural language (CNL’18). IOS Press. Co. Kildare, Ireland, 27-28 August 2018. (in print)

[2] Keet, C.M., Khumalo, L. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, 2017, 51(1): 131-157.

[3] Keet, C.M. Xakaza, M., Khumalo, L. Verbalising OWL ontologies in isiZulu with Python. The Semantic Web: ESWC 2017 Satellite Events, Blomqvist, E. et al. (eds.). Springer LNCS vol. 10577, 59-64.

[4] Lange, H., Ljunglöf, P. Putting control into language learning. 6th International Workshop on Controlled Natural language (CNL’18). IOS Press. Co. Kildare, Ireland, 27-28 August 2018. (in print)

[5] Gardent, C., Perez-Beltrachini, L. Using FB-LTAG Derivation Trees to Generate Transformation-Based Grammar Exercises. Proc. of TAG+11, Sep 2012, Paris, France. pp117-125, 2012.

On ‘open access’ CS conference proceedings

It perhaps sounds nice and doing-good-like, for the doe-eyed ones at least: publish computer science conference proceedings as open access so that anyone in the world can access the scientific advances for free. Yay. Free access to scientific materials is good for a multitude of reasons. There’s a downside to the set-up in the way some try to push this now, though, which amounts to making people pay for what used to be, and still mostly is, free already. I take issue with that. Instead of individualising a downside of open access by heaping more costs onto the individual researchers, the free flow of knowledge should be—and remain—a collectivised effort.

 

It is, and used to be, the case that most authors put the camera-ready copy (CRC) on their respective homepages and/or in institutional repositories, typically even before the conference (e.g., mine are here). Putting the CRC on one’s website or in an openly accessible institutional repository seems to happen slightly less often now, even though it is legal to do so. I don’t know why. Even if it were not entirely legal, a collective disobedience is not something that the publishers easily can fight. It doesn’t help that Google indexes the publisher quicker than the academics’ webpages, so the CRCs on the authors’ pages don’t turn up immediately in the search results even when the CRCs are online, but that would be a pathetic reason for not uploading the CRC. It’s a little extra effort to look up an author’s website, but acceptable as long as the file is still online and freely available.

Besides the established hallelujahs to principles of knowledge sharing, there is since recently a drive at various computer science (CS) conferences to make sure the proceedings will be open access (OA). As for OA journal papers in an OA or hybrid journal, someone’s going to have to pay for the ‘article processing charges’. The instances that I’ve seen close-up put those costs for all papers of the proceedings in the conference budget and therewith increase the conference registration costs. Depending on 1) how good or bad the deal is that the organisers made, 2) how many people are expected to attend, and 3) how many papers will go in the volume, it hikes up the registration costs by some 50 euro. This is new money that the publishing house is making that it did not use to make before, and I’m pretty sure it wouldn’t offer an OA option if that were to result in making less profit from the obscenely lucrative science publishing business.

So, who pays? Different universities have different funding schemes, as do different funders as to what they fund. For instance, there exist funds for contributing to OA journal article publishing (also at UCT, and Springer even has a list of OA funders in several countries), but those cannot be used in this case, for the OA costs are hidden in the conference registration fee. There are also conference travel funds, but they fund part of it or are capped at a maximum, and the more the whole thing costs, the greater the shortfall that one then will have to pay out of one’s own research fund or one’s own pocket.

A colleague (at another university) who’s pushing for OA for CS conference proceedings said that his institution is paying for all the OA anyway, not him—he can easily have principles, as it doesn’t cost him anything anyway. Some academics have their universities pay for access to the conference proceedings already anyway, as part of the subscription package; it’s typically the higher-ranking technical universities that have such access. Those I spoke to didn’t like the idea that now they’d have to pay for access in this way, for they already had ‘free’ (to them) access, whereas the registration fees come from their own research funds. For me, it is my own research funds as well, i.e., those funds that I have to scramble together through project proposal applications with their low acceptance rates. If I were to go to/have papers at, say, 5 such conferences per year (in the past several years, it was more like double that), that’s the same amount as paying a student/scientific programmer for almost a week, and about a monthly salary for the lowest-paid in South Africa, or the travel costs or accommodation for the national CS&IT conference (or both), or its registration fees. That is, with increased registration fees to cover the additional OA costs, at least one of my students or I would lose out on participating in even a local conference, or students would be less exposed to doing research and obtaining programming experience that helps them to get a better job or a better chance at obtaining a scholarship for postgraduate studies. To name but a few trade-offs.

Effectively, the system has moved from “free access to the scientific literature anyway” (the online CRCs), to “free access plus losing money (i.e.: all that I could have done with it) in the process”. That’s not an improvement on the ground.

Further, my hard-earned research funds are mine, and I’d like to decide what to do with them, rather than having that decision taken for me. Who do the rich boys up North think they are to say that I should spend the money on OA when the papers were already free, rather than giving a student an opportunity to go to a national conference, or to devise and implement an algorithm, or to participate in an experiment, etc.! (Setting aside their trying to reprimand and ‘educate’ me on the goodness—tsk! as if I don’t know that the free flow of scientific information is a good thing.)

Tell me, why should the OA principles trump capacity building when the papers are free access already anyway? I’ve not seen OA advocates actually weighing up any alternatives as to what would be the better good to spend the money on. As to possible answers, note that an “it ought to be the case that there would be enough money for both” is not a valid answer in discussing trade-offs, nor is a “we might add a bit of patching up, as a conference registration reduction for those needy ones that are not in the rich inner core”, for it hardly ever happens, nor is an “it’s not much for each instance, you really should be able to cover it”, because many instances do add up. We all know that funding for universities and for research in general is being squeezed left, right, and centre in most countries, especially over the past 8-10 years, and such choices will have to be, and are being, made already. These are not just choices we face in Africa; they also hold in richer countries, like in the EU (fewer resources in relative or absolute terms and greater divides), although 250 euro (the 5-conferences scenario) won’t go as far there as in low-income countries.

Also, and regardless of the funding squeeze: why should we start paying for access that already was de facto, and with most CS proceedings publishers also de jure, free anyway? I’m seriously starting to wonder who’s getting kickbacks for promoting and pushing this sort of scheme. It’s certainly not me, nor would I take it if some publisher were to offer it to me, as it contributes to the flow of even more money from universities and research institutes to the profits of multinationals. If it’s not kickbacks, then to all those new ‘conference proceedings need to be OA’ advocates: why do you advocate paying for a right that we had for free? Why isn’t it enough for you to just pay for a principle yourself as you so desire, instead of insisting on forcing others to do so too, even when there is already a tacit and functioning agreement going on that realises that aim of the free flow of knowledge?

Sure, the publisher has a responsibility to keep the papers available in perpetuity, which I don’t, and link rot does exist. One easily could write a script to search all academics’ websites and get the files, like CiteSeer used to do well. Such projects do get funding for long-term archiving, like arxiv.org does, as do PhilPapers and SSRN as popular ones (see also a comprehensive list of preprint servers), and most institutions’ repositories, too (e.g., the CS@UCT pubs repository). So, the perpetuity argument can also be taken care of that way, without the researchers actually having to pay more.

Really, if you’re swimming in so much research money that you want to pay for a principle that was realised without costs to researchers, then perhaps instead fund the event so that, say, some student grants can be given out, so that it can contribute to some nice networking activity, or so that it covers whatever other part of the costs. The new “we should pay for OA, notwithstanding that no one was suffering when it was for free” attitude for CS conference proceedings is way too fishy to actually be honest; and if you are honest and not getting kickbacks, then it’s a very dumb thing to advocate for.

For the two events where this scheme is happening that I’m involved in, I admit I didn’t forcefully object at the time it was mentioned (nor had I really thought through the consequences). I should have, though. I will do so next time.

An Ontology Engineering textbook

My first textbook, “An Introduction to Ontology Engineering” (pdf), has just been released as an open textbook. I have revised, updated, and extended my earlier lecture notes on ontology engineering, amounting to about 1/3 new content compared to its predecessor. Its main aim is to provide an introductory overview of ontology engineering, and its secondary aim is to provide hands-on experience in ontology development that illustrates the theory.

The contents and narrative are aimed at advanced undergraduate and postgraduate level in computing (e.g., as a semester-long course), and the book is structured accordingly. After an introductory chapter, there are three blocks:

  • Logic foundations for ontologies: languages (FOL, DLs, OWL species) and automated reasoning (principles and the basics of tableau);
  • Developing good ontologies with methods and methodologies, the top-down approach with foundational ontologies, and the bottom-up approach to extract as much useful content as possible from legacy material;
  • Advanced topics, with a selection of sub-topics: Ontology-Based Data Access, interactions between ontologies and natural languages, and advanced modelling with additional language features (fuzzy and temporal).

Each chapter has several review questions and exercises to explore one or more aspects of the theory, as well as descriptions of two assignments that require using several sub-topics at once. More information is available on the textbook’s page [also here] (including the links to the ontologies used in the exercises), or you can click here for the pdf (7MB).

Feedback is welcome, of course. Also, if you happen to use it in whole or in part for your course, I’d be grateful if you would let me know. Finally, if this textbook gets used half (or even a quarter) as much as the 2009/2010 blogposts have been visited (around 10K unique visitors since posting them), that would mean there are a lot of people learning about ontology engineering, and then I’ll have achieved more than I hoped for.

UPDATE: meanwhile, it has been added to several open (text)book repositories, such as OpenUCT and the Open Textbook Archive, and it has been featured on unglue.it in the week of 13-8 (out of its 14K free ebooks).

Ontology, part-whole relations, isiZulu and culture

The title is a mouthful, but it all can go together. What’s interesting is that the ‘common’ list of part-whole relations is not exactly like that in isiZulu and Zulu culture.

Part-whole relations have been proposed over the past 30 years, such as to relate a human heart to the human it is part of, Gauteng to the South Africa it is geographically a part of, and a slice of cake to the cake it is a portion of, and they seemed well established by now. The figure below provides an informal view of them.

Informal taxonomy of common part-whole relations (source: [2])

My co-author, Langa Khumalo, and I already had an inkling that this hierarchy probably would not work for isiZulu, based, first, on a linguistic analysis for generating natural language [1] and, second, on the fact that the Shuter & Shooter English-isiZulu dictionary already lists 18 translations for just ‘part’ alone. Yet, if those ‘common’ part-whole relations are universal, the differences observed ought to be just an artefact of language, not ontological differences. To clear up the matter, we guided ourselves with the following questions:

  1. Which part-whole relations have been named in isiZulu, and to what extent are they not only lexically but also semantically distinct?
  2. Can all those part-whole relations be mapped with equivalence relations to the common part-whole relations?
  3. For those that cannot be mapped with equivalence relations: is the difference in meaning ontologically possibly interesting for ontology engineering?
  4. Is there something different as gleaned from isiZulu part-whole relations that is useful in improving the theoretical appreciation of part-whole relations?

To figure this out, we first took a bottom-up approach with evidence gathering, and then augmented it with further ontological analysis. Plodding through the isiZulu-English dictionaries got us 81 terms that had something to do with parts. Of these, 41 were discarded because they were not applicable upon closer inspection (e.g., referring to creating parts rather than relating parts, or being idioms). Further annotations and examples were added, which reduced the list to 28 (+ 3 we had missed and were added). Of those, we selected 13 for ontological analysis and formalisation. That selection was based on importance (like ingxenye), leaving out some that seemed a bit overly specific, like iqatha for portions of meat, and meat only. The hierarchy of the final selection is shown in the figure below, with an informal indication of what each relation relates.

Selected isiZulu terms with informal descriptions. (Source: [2])

They held up ontologically, i.e., some are the same as the ‘common’ ones, yet some others are really different, like hlanganyela for a collective (rather than an individual object) being part of (participating in) an event. Admittedly, some of the domains/ranges aren’t very clearly delineated. For instance, isiqephu relates solid and ‘solid-like’ portions, as in, e.g., Zonke izicezu zesinkwa ziyisiqephu sesinkwa esisodwa ‘all slices of bread are a portion of some loaf of bread’. Where exactly that border of ‘solid-like’ is, and when something really counts as a liquid (and thus isiqephu applies no more), is not yet clear—but that’s a separate question, orthogonal to the relation. Nonetheless, the investigation did clear up several things, especially the more precise umunxa, which took me a while to unravel and which turned out to be a chain of parthood relations; e.g., the area where the fireplace is in the hut is a portion of the hut (sample use with the verbaliser: Onke amaziko angumunxa wexhiba). We didn’t touch upon the really thorny issues that probably will deserve a paper of their own. For instance, the temporalised parthood isihlephu is used to relate a meaningful scattered part with identity to the whole it was part of, such as the broken-off ear of a cup that was part of the cup (but it cannot be used for a chip of the cup, as a chip isn’t identifiable in the same way as the ear is).
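For a flavour of the formalisation, one plausible first-order reading of the bread example is the following (a sketch only, with the predicate names SliceOfBread and LoafOfBread being my own illustrative choices; the paper gives the actual axiomatisation of isiqephu and the other relations):

∀x (SliceOfBread(x) → ∃y (isiqephu(x, y) ∧ LoafOfBread(y)))

That is, every slice of bread stands in the isiqephu (portion-of) relation to some loaf of bread.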

We did try to test the terms against the isiZulu National Corpus to see how the terms are used, but with the limited functionalities and tooling, not as much came out of it as we had hoped for. In any case, the detailed assessment of a section of the corpus did show that the relevant uses did not contradict the formalisation.

Further details can be found in our paper “On the ontology of part-whole relations in Zulu language and culture” that will be presented at the 10th International Conference on Formal Ontology in Information Systems 2018 (FOIS’18) that will be held from 17 to 21 September in Cape Town, South Africa.

As far as I know, this is the first such investigation. Having checked out other languages a bit (mainly Spanish and German), and some related works on Turkish and Chinese, it might well be the case that there, too, the ‘common’ part-whole relations are not exactly the same. We carried out the whole process systematically, and it is described as such in the paper, so that anyone who’d like to do something like this for another language region and culture could follow the same procedure.

 

References

[1] Keet, C.M., Khumalo, L. On the verbalization patterns of part-whole relations in isiZulu. 9th International Natural Language Generation conference (INLG’16), September 5-8, 2016, Edinburgh, UK. ACL, 174-183.

[2] Keet, C.M., Khumalo, L. On the ontology of part-whole relations in Zulu language and culture. 10th International Conference on Formal Ontology in Information Systems 2018 (FOIS’18). IOS Press. 17-21 September, 2018, Cape Town, South Africa. (in print)