“Grammar-infused” templates for NLG

It’s hardly ever entirely one extreme or the other in natural language generation and controlled natural languages. Rarely can one get away with simplistic ‘just fill in the blanks’ templates that do no grammatical or phonological processing to improve the output; our technical report about work done some 17 years ago was a case in point on the limitations thereof, if one still needs to be convinced [1]. But where does NLG start? I agree with Ehud Reiter that it isn’t a matter of template versus NLG, but of levels of sophistication: fill-in-the-blank templates definitely don’t count as NLG and full-fledged grammar-only systems definitely do, with anything in between a grey area. Adding word-level grammatical functions to templates makes them lean toward NLG, or indeed qualify as such if there are relatively many such rules, and dynamically creating nicely readable sentences with aggregation and connectives certainly counts as NLG too.
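
To make that distinction concrete, here’s a minimal Python sketch (invented for illustration, not from [1]) of a pure fill-in-the-blank template versus one with a single word-level grammar rule:

def fill_in_the_blank(name, n):
    # no grammatical processing whatsoever
    return f"{name} teaches {n} course."

def with_grammar_rule(name, n):
    # one word-level rule: pluralise the noun when needed
    noun = "course" if n == 1 else "courses"
    return f"{name} teaches {n} {noun}."

print(fill_in_the_blank("Joanne Soap", 3))  # Joanne Soap teaches 3 course.
print(with_grammar_rule("Joanne Soap", 3))  # Joanne Soap teaches 3 courses.

One such rule doesn’t make it NLG, but it’s a first step away from the fill-in-the-blank end of the spectrum.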

With that in mind, we struggled with how to name the beasts we had created for generating sentences in isiZulu [2], a Niger-Congo B language: nearly every word in the generated sentences required a number of grammar rules to render sufficiently well (i.e., at least grammatically acceptable and understandable). Since we didn’t have a proper grammar engine yet, and we knew they could never be fill-in-the-blank templates either, we dubbed them verbalisation patterns. Most systems (by number of systems) use either only templates or templates+grammar, so our implemented system [3] was in good company. It may sound like oldskool technology, but go ask Meta with their Galactica whether an ML/DL-based approach is great for generating sensible text that doesn’t hallucinate… and does it well for languages other than English.

That said, honestly, those first attempts for isiZulu were not ideal with respect to reusability and maintainability – that was not the focus – and they opened up another can of worms: how do you link templates to (partial) grammar rules? The ‘partial’ is motivated by taking it one step at a time in grammar engine development, as a sort of agile engine development process that is relevant especially for languages that are not well-resourced.

We looked into this recently. There turn out to be three key mechanisms for linking templates to computational grammar rules: embedding (E), where grammar rules are mixed into the template specifications and are therewith co-dependent, and compulsory (C) and partial (P) attachment, where the grammar rules have, or can have, an existence independent of the templates.

Attachment of grammar rules (that can be separated) vs embedding of grammar rules in a system (intertwined with templates) (Source: [6])

The difference between the latter two is subtle but important for the use and reuse of grammar rules in the software system and for the NLG-ness of it: if each template must use at least one rule from the set of grammar rules and each rule is used somewhere, then the set of rules is compulsorily attached. Conversely, it is partially attached if there are templates in that system that don’t have any grammar rules attached. Whether it is partial because attachment isn’t needed (e.g., the natural language’s grammar is pretty basic) or because the system is on the fill-in-the-blank, not-NLG end of the spectrum is a separate question, but the compulsory variant is certainly more on the NLG side of things. Also, a system may use more than one mechanism in different places; e.g., EC, both embedding and compulsory attachment. This classification was introduced in [4] in 2019 and expanded upon in a journal article entitled Formalisation and classification of grammar and template-mediated techniques to model and ontology verbalisation [5] that was published in IJMSO, and even more detail can be found in Zola Mahlaza’s recently completed PhD thesis [6]. These papers have various examples, illustrations of how to categorise a system, and why one system was categorised in one way and not another. Here’s a table with several systems that combine templates and computational grammar rules and how they are categorised:

Source: [5]
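
In code form, the embedding vs attachment distinction might look as follows – a contrived Python sketch (not from [5]) with English pluralisation standing in for a grammar rule:

# Embedding (E): the rule is mixed into the template itself, so the two
# are co-dependent and the rule can't be reused elsewhere.
def embedded_template(noun, n):
    return f"There are {n} {noun if n == 1 else noun + 's'}."

# Attachment (C/P): the rules live in a separate, independently existing set...
def pluralise(noun):
    return noun + "s"  # toy rule for regular English nouns only

# ...and templates merely refer to them. If every template in the system uses
# at least one such rule (and every rule is used), the set is compulsorily
# attached; if some templates use none, it is partially attached.
def attached_template(noun, n):
    return f"There are {n} {noun if n == 1 else pluralise(noun)}."

print(attached_template("course", 2))  # There are 2 courses.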

We needed a shorthand name for the cumbersome and wordy description of ‘combining templates with grammar rules in a [theoretical or implemented] system in some way’, which ended up being grammar-infused templates.

Why write about this now? Besides certain pandemic-induced priorities in 2021, the recently proposed template language for Abstract Wikipedia that I blogged about before may mix compulsory or partial attachment, but ought not to permit the messy embedding of grammar in a template. This may not have been clear in v1 of the proposal, but hopefully it is a little more so in the new version that was put online over the past few days. To make a long story short: besides a few notes at the start of its Section 3, there’s a generic description of an idea for a realisation algorithm. Its details don’t matter if you don’t intend to design a new realiser from scratch, and maybe not either if you want to link it to your existing system. The key take-away from that section is that that is where the real grammar and phonological conditioning happens, if it’s needed. For example, for the ‘age in years’ sub-template for isiZulu, recall that’s:

Year_zu(years):"{root:Lexeme(L686326)} {concord:RelativeConcord()}{Copula()}{concord_1<nummod:NounPrefix()}-{nummod:Cardinal(years)}"

The template language sets some boundaries for declaring such a template, but it is a realiser that has to interpret the ‘keywords’, such as root, concord, and RelativeConcord, and do something with them so that the output ends up correct; in this case, from ‘year’ + ‘25’ as input data to iminyaka engama-25 as output text. That process might be done in line with Ariel Gutman’s realiser pipeline for Abstract Wikipedia and his proof-of-concept implementation with Scribunto, or with any other realiser architecture or system, such as Grammatical Framework, SimpleNLG, Ninai/Udiron, or Zola’s Nguni Grammar Engine, among several options for multilingual text generation. It might sound silly to put templates on top of the heavy machinery of a grammar engine, but it will make that machinery more accessible to the general public, so that they can specify how sentences should be generated. And, hopefully, it will permit a rules-as-you-go approach as well.

It is then the realiser (including grammar) engine, with the partially or compulsorily attached computational grammar rules and other algorithms, that works with the template. For the example: when it sees root and that the lemma fetched is a noun (L686326 is unyaka ‘year’), it also fetches the value of the noun class (a grammatical feature stored with the noun), which we always need somewhere for isiZulu NLG. It then needs to figure out that it must make a plural of ‘year’, which it knows thanks to the years value fetched for the instance (i.e., 25, hence plural) and the nummod that links to the root by virtue of the design and the assumption that there’s a (dependency) grammar. Then, with concord:RelativeConcord, it will fetch the relative concord for that noun class, since concord also links to root. We have been able to do the concordial agreements and pluralising of nouns (and much more!) for isiZulu for several years now. The only hurdle is that that code would need to become interoperable with the template language specification, in that our realisers will have to be able to recognise and process those ‘keywords’ properly. Those words are part of an extensible set of words inspired by dependency grammars.
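
For a flavour of what those steps amount to, here’s a toy Python sketch of just this one example, with hardcoded lookup tables standing in for the real grammar engine (which covers all noun classes and far more rules):

PLURAL_CLASS = {3: 4}             # noun class 3 pluralises to class 4
NOUN_PREFIX = {3: "u", 4: "imi"}  # noun prefixes for classes 3 and 4
REL_CONCORD = {3: "o", 4: "e"}    # relative concords for those classes
COPULA = "ng"
NUM_PREFIX = "ama"                # prefix agreeing with the cardinal (toy value)

def year_zu(years):
    stem, noun_class = "nyaka", 3             # L686326: unyaka 'year', class 3
    if years != 1:                            # 25 is plural
        noun_class = PLURAL_CLASS[noun_class]
    noun = NOUN_PREFIX[noun_class] + stem     # iminyaka
    concord = REL_CONCORD[noun_class]         # e
    return f"{noun} {concord}{COPULA}{NUM_PREFIX}-{years}"

print(year_zu(25))  # iminyaka engama-25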

How this is supposed to interact smoothly still needs to be figured out. Part of that is touched upon in the section about instrumentalising the template language: you could, for instance, specify it as functions in Wikifunctions that are instantly editable, facilitating an add-rules-as-you-go approach. Or it can be done less flexibly, by mapping or transforming it to another template language or to the specification of an external realiser (since it’s the principle of attachment, not embedding, of computational grammar rules).

In closing, whether the term “grammar-infused templates” will stick remains to be seen, but combining templates with grammars in some way for NLG will have a solid future at least for as long as those ML/DL-based large language model systems keep hallucinating and don’t cater for languages other than English – which includes catering for the intended multilingual setting of Abstract Wikipedia.

References

[1] Jarrar, M., Keet, C.M., Dongilli, P. Multilingual verbalization of ORM conceptual models and axiomatized ontologies. STARLab Technical Report, Vrije Universiteit Brussel, Belgium. February 2006.

[2] Keet, C.M., Khumalo, L. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, 2017, 51:131-157. (accepted version free access)

[3] Keet, C.M., Xakaza, M., Khumalo, L. Verbalising OWL ontologies in isiZulu with Python. The Semantic Web: ESWC 2017 Satellite Events, Blomqvist, E. et al. (eds.). Springer LNCS vol. 10577, 59-64. Portorož, Slovenia, May 28 – June 2, 2017.

[4] Mahlaza, Z., Keet, C.M. A classification of grammar-infused templates for ontology and model verbalisation. 13th Metadata and Semantics Research Conference (MTSR’19). E. Garoufallou et al. (Eds.). Springer vol. CCIS 1057, 64-76. 28-31 Oct 2019, Rome, Italy.

[5] Mahlaza, Z., Keet, C.M. Formalisation and classification of grammar and template-mediated techniques to model and ontology verbalisation. International Journal of Metadata, Semantics and Ontologies, 2020, 14(3): 249-262.

[6] Mahlaza, Z. Foundations for reusable and maintainable surface realisers for isiXhosa and isiZulu. PhD Thesis, Department of Computer Science, University of Cape Town, South Africa. 2022.


Good girls, bold girls – but not böse

That first sentence of a book, including non-fiction books, may set the tone for what’s to come. For my memoir, it’s a translation of Brave meisjes komen in de hemel, brutale overal: good girls go to heaven, bold ones go everywhere.

I had read a book with that title some 25 years ago. It was originally written by Ute Ehrhardt in 1994, translated from German into Dutch, and published a year later. For the memoir, I had translated the Dutch title of the book into English myself: brutale translates to ‘bold’ according to me, my dictionary (a Prisma Woordenboek hard copy), and an online dictionary. Bold means “(of a person, action, or idea) showing a willingness to take risks; confident and courageous”, according to the Oxford dictionary (and similarly here), and it’s in the same league as audacious, daring, brazen, and perky. It has a positive connotation.

What I perhaps ought to have done last year is find out whether the book had also been translated into English, and trust that translator. As it turned out, I’m glad I did not do so, which brings me to the more substantive part of this post. I wanted to see whether I could find the book in order to link to it in this post. I did. Interestingly, the word used in the English title was “bad” rather than ‘bold’, yet brutaal is not at all necessarily bad, nor is the book about women being bad. Surely something must have gotten warped in translation there?!

I took the hard copy from the bookshelf and checked the fine print: it listed the original German title as Gute Mädchen kommen in den Himmel, böse überall hin. Hm, böse is not good. It has 17 German-to-English translations and none is quite as flattering as bold, not at all. This leaves either bad translations to blame, or a semantic shift in the German-to-Dutch translation. Considering the former first: the German-Dutch online dictionary did not offer nice Dutch words for böse either. Getting up from my chair again to consult my hard-copy Prisma German-Dutch dictionary did not pay off either, except for one, maybe (ondeugend). It does not even list brutaal as a possible translation. Was the author, Dr Ehrhardt of the Baby Boomer generation, still so indoctrinated in the patriarchy and Christianity – das Gute vs das Böse – as to think that not being a smiling nice girl must mean being böse? The term did not hold back the Germans, by the way: it was the best-selling non-fiction book in Germany in 1995, my Dutch copy stated. Moreover, it turned out to be in second place overall since German book-sales counting started 60 years ago, including a whopping 107 weeks in first place on the Spiegel bestseller list. What’s going on here? Would the Germans be that interested in ‘bad’ girls? Not quite. The second option applies, i.e., a semantic shift in the Dutch translation.

The book’s contents are not about bad, mean, or angry women at all, and the subtitle provides a further hint to that: waarom lief zijn vrouwen geen stap verder brengt ‘why being nice won’t get women even one step ahead’. Instead of being pliant, submissive, and self-sabotaging in several ways – and therewith having our voices ignored, our contributions downplayed, and being passed over for jobs and promotions – it seeks to give women a kick in the backside to learn to stand their ground, and it provides suggestions for being heard and taken into account by avoiding the many pitfalls. Our generation, the children of the Baby Boomers, would do better at improving the world than those second-wave feminists had managed, and this book fitted right within that Zeitgeist. The 1990s were the girl-power decade, when women took agency to become masters of their own destiny, or at least tried to. The New Woman – yes, capitalised in the book. Agent Dana Scully of The X-Files as the well-dressed scientist and sceptical investigator. Buffy the vampire slayer. Xena, Warrior Princess. The Spice Girls. Naomi Wolf’s Fire with Fire (which, by the way, wasn’t translated into Dutch). Reading through the book again now, it comes across as a somewhat dated, use-case-packed manifesto about the pitfalls to avoid and how to be the architect of your own life. That’s not being bad, is it.

I suppose I have to thank the German-to-Dutch book translator Marten Hofstede for putting a fitting Dutch title to the content of the book. It piqued my interest in the bookstore at the train station, and I bought and read it in what must have been 1997. It resonated. To be honest, if the Dutch title had used any of the translations listed in the online dictionary – such as kwaad, verstoord, and nijdig – then I likely would not have bought the book. Having to be evil or perpetually angry to go everywhere, anywhere, and upward would have been too steep a price to pay. Luckily, bold was indeed the right attribute. Perhaps for the generation after me, i.e., those now in their twenties, it’s not about being bold but simply about being – as a normal outlook on, and way of interacting in, society. Of course a woman is entitled to live her own life, as any human being is.

A review of NLG realizers and a new architecture

That last step in the process of generating text from some structured representation of data, information or knowledge is done by things called surface realizers. They take care of the ‘finishing touches’ – syntax, morphology, and orthography – to make good natural language sentences out of an ontology, conceptual data model, or Wikidata data, among many possible sources that can be used for declaring abstract representations. Besides theories, there are also many tools that try to get that working at least to some extent. Which ways, or system architectures, are available for generating the text? Which components do they all, or at least most of them, have? Where are the differences and how do they matter? Will they work for African languages? And if not, then what?

My soon-to-graduate PhD student Zola Mahlaza and I set out to answer these questions, and more, and the outcome is described in the article Surface realization architecture for low-resourced African languages that was recently accepted and is now in print with the ACM Transactions on Asian and Low-Resource Language Information Processing (ACM TALLIP) journal [1].

Zola examined 77 systems, which exhibited some 13 different principal architectures that could be classified into 6 distinct architecture categories. Purely by number of systems, manually coded and rule-based ones are the most popular, but there are a few hybrid and data-driven systems as well. A consensus architecture for realisers there is not. And none exhibits most of the software maintainability characteristics – like modularity, reusability, and analysability – that we need for African languages (even more so than for better-resourced languages). ‘African’ is narrowed down further in the paper to the languages in the Niger-Congo B (‘Bantu’) family. One of the tricky things is that there’s a lot going on at the sub-word level in these languages, whereas practically all extant realisers operate at the word level.

Hence, the next step was to create a new surface realiser architecture that is suitable for low-resourced African languages and that is maintainable. Perhaps unsurprisingly, since the paper is in print, this new architecture compares favourably against the required features. The new architecture also has ‘bonus’ features, like being guided by a template ontology [2] for verification and interoperability. All its components and the rationale for putting it together this way are described in Section 5 of the article, and the maintainability claims are discussed in its Section 6.

Source: [1]

There’s also a brief illustration of how one can redesign a realiser into the proposed architecture. We redesigned the architecture of OWLSIZ for question generation in isiZulu [3] as a use case. The code of that redesign of OWLSIZ is available, i.e., it’s not merely a case of having drawn a different diagram; it was actually proof-of-concept tested that it can be done.

While I obviously know what’s going on in the article, if you’d like to know many more details than are described there, I suggest you consult Zola, as the main author of the article, or his (soon to be available online) PhD thesis [4], which devotes roughly a chapter to this topic.

References

[1] Mahlaza, Z., Keet, C.M. Surface realisation architecture for low-resourced African languages. ACM Transactions on Asian and Low-Resource Language Information Processing, (in print). DOI: 10.1145/3567594.

[2] Mahlaza, Z., Keet, C.M. ToCT: A task ontology to manage complex templates. FOIS’21 Ontology Showcase. Sanfilippo, E.M. et al. (Eds.). CEUR-WS vol. 2969. 9p.

[3] Mahlaza, Z., Keet, C.M.: OWLSIZ: An isiZulu CNL for structured knowledge validation. In: Proc. of WebNLG+ 2020. pp. 15–25. ACL, Dublin, Ireland (Virtual).

[4] Mahlaza, Z. Foundations for reusable and maintainable surface realisers for isiXhosa and isiZulu. PhD Thesis, Department of Computer Science, University of Cape Town, South Africa. 2022.

Semantic interoperability of conceptual data modelling languages: FaCIL

Software systems aren’t getting any less complex to design, implement, and maintain, which applies to both the numerous diverse components and the myriad of people involved in the development processes. Even a straightforward configuration of a database back-end and an object-oriented front-end tool requires coordination among database analysts, programmers, HCI people, and the increasing involvement of domain experts and stakeholders. They each may prefer, and have different competencies in, certain specific design mechanisms; e.g., one may want EER for the database design, UML diagrams for the front-end app, and perhaps structured natural language sentences with SBVR or ORM for expressing the business rules. This requires multi-modal modelling in a plurality of paradigms, which in turn needs to be supported by hybrid tools that offer interoperability among those modelling languages, since such heterogeneity won’t go away any time soon, or ever.

Example of possible interactions between the various developers of a software system and the models they may be using.

It is far from trivial to have these people work together whilst each maintains their preferred view of a unified system’s design, let alone to do all this design in one system. In fact, there’s no such tool that can seamlessly render such varied models across multiple modelling languages whilst preserving the semantics. At best, there’s either only theory that aims to do that, or only a subset of the respective languages’ features, or a subset of the required combinations. Well, more precisely: until our efforts. We set out to fill this gap in functionality, both in a theoretically sound way and implemented as a proof-of-concept to demonstrate its feasibility. The latest progress was recently published in the paper entitled A framework for interoperability with hybrid tools in the Journal of Intelligent Information Systems [1], in collaboration with Germán Braun and Pablo Fillottrani.

First, we propose the Framework for semantiC Interoperability of conceptual data modelling Languages, FaCIL, which serves as the core orchestration mechanism for hybrid modelling tools, with relations between components and a workflow that uses them. At its centre is a metamodel that is used for the interchange between the various conceptual models represented in different languages, together with sets of rules to and from the metamodel (and at the metamodel level) that ensure the semantics is preserved when transforming a model in one language into a model in a different language, and that edits to one model propagate correctly to the model in another language. In addition, thanks to the metamodel-based approach, logic-based reconstructions of the modelling languages have become easier to manage, and so a path to automated reasoning is integrated in FaCIL as well.
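
The gist of that metamodel-mediated interchange can be sketched in a few lines of Python – a deliberately naive toy (invented here, far simpler than the real rule sets) just to show the principle of editing once and propagating to all language-specific views:

# A tiny 'metamodel' fragment as the single shared representation
kf_model = [{"kind": "ObjectType", "name": "Academic"}]

# Rules from the metamodel to each modelling language's rendering
def to_uml(kf):
    return [f"class {e['name']}" for e in kf if e["kind"] == "ObjectType"]

def to_eer(kf):
    return [f"entity {e['name']}" for e in kf if e["kind"] == "ObjectType"]

# An edit is made once, at the metamodel level...
def rename(kf, old, new):
    for e in kf:
        if e["name"] == old:
            e["name"] = new

rename(kf_model, "Academic", "Lecturer")
print(to_uml(kf_model))  # ['class Lecturer']
print(to_eer(kf_model))  # ['entity Lecturer']  ...and all views stay in sync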

This generic multi-modal modelling interoperability framework FaCIL was instantiated with a metamodel specifically for UML Class Diagrams, EER, and ORM2 interoperability that was introduced in 2015 [2], called the KF metamodel [3], together with its relevant rules (initial and implemented ones), an English controlled natural language, and a logic-based reconstruction into a fragment of OWL (the orchestration is shown graphically in the paper). This enables a range of different user interactions in the modelling process, of which an example of a possible workflow is shown in the following figure.

A sample workflow in the hybrid setting, showing interactions between visual conceptual data models (i.e., in their diagram version) and in their (pseudo-)natural language versions, with updates propagating to the others automatically. At the start (top), there’s a visual model in one’s preferred language from which a KF runtime model is generated. From there, it can go in various directions: verbalise, convert, or modify it. If the latter, then the KF runtime model is also updated and the changes are propagated to the other versions of the model, as often as needed. The elements in yellow/green/blue are thanks to FaCIL and the white ones are the usual tasks in the traditional one-off one-language modelling setting.

These theoretical foundations were implemented in the web-based crowd 2.0 tool (with source code). crowd 2.0 is the first hybrid tool of its kind, tying together all the pieces such that now, instead of partial or fully manual model management of transformations and updates across multiple disparate tools, these tasks can be carried out automatically in one application, therewith also allowing diverse developers and stakeholders to work from a shared single system.

We also describe a use case scenario for it – on Covid-19, as pretty much all of the work for this paper was done during the worse-than-today’s stage of the pandemic – that has lots of screenshots from the tool in action, both in the paper (starting here, with details halfway in this section) and more online.

Besides evaluating the framework with an instantiation, a proof-of-concept implementation of that instantiation, and a use case, it was also assessed against the reference framework for conceptual data modelling of Delcambre and co-authors [4] and shown to meet those requirements. Finally, crowd 2.0’s features were assessed against five relevant tools, considering the key requirements for hybrid tools, and shown to compare favourably against them (see Table 2 in the paper).

The distinct advantages from those 26 pages of the paper can be summed up as follows, where the ones I consider most useful are underlined and the most promising ones for solving another set of related conceptual data modelling problems (in one fell swoop!) are in italics:

  • One system for related tasks, including visual and text-based modelling in multiple modelling languages, automated transformations and update propagation between the models, as well as verification of the model on coherence and consistency.
  • Any visual and text-based conceptual model interaction with the logic has to be maintained in only one place rather than for each conceptual modelling language and controlled natural language separately;
  • A controlled natural language can be specified on the KF metamodel elements so that it can then be applied throughout the models regardless of the visual language, therewith eliminating the duplicate work of re-specification for each modelling language and fragment thereof;
  • Any further model management, especially in the case of large models, such as abstraction and modularisation, can be specified either on the logic or on the KF metamodel in one place and propagate to other models accordingly, rather than re-inventing or reworking the algorithms for each language over and over again;
  • The modular design of the framework allows for extensions of each component, including more variants of visual languages, more controlled languages in your natural language of choice, or different logic-based reconstructions.

Of course, more can be done to make it even better, but it is a milestone of sorts: research into the theoretical foundations of this particular line of research commenced 10 years ago, with the DST/MINCyT-funded bilateral project on ontology-driven unification of conceptual data modelling languages. Back then, we fantasised that, with more theory, we might get something like this sometime in the future. And we did.

References

[1] Braun, G., Fillottrani, P.R., Keet, C.M. A framework for interoperability with hybrid tools. Journal of Intelligent Information Systems, in print since 29 July 2022.

[2] Keet, C. M., & Fillottrani, P. R. (2015). An ontology-driven unifying metamodel of UML Class Diagrams, EER, and ORM2. Data & Knowledge Engineering, 98, 30–53.

[3] Fillottrani, P.R., Keet, C.M. KF metamodel formalization. Technical Report, Arxiv.org http://arxiv.org/abs/1412.6545. Dec 19, 2014. 26p.

[4] Delcambre, L. M. L., Liddle, S. W., Pastor, O., & Storey, V. C. (2018). A reference framework for conceptual modeling. In: 37th International Conference on Conceptual Modeling (ER’18). LNCS. Springer, vol. 11157, 27–42.

English, Englishes – which one to use for writing?

Sometimes, the answer to the question in the post’s title is easy, if you’re writing in English: do whatever the style guide says. Don’t argue with the journal editor or typesetter about that sort of trivia (unless they’re very wrong). If it states American English spelling, do so; if British English, go for that. If you can’t distinguish your color from colour, modeling from modelling, and a faucet from a tap, use a spellchecker with one of the Englishes on offer—even OpenOffice Writer shows red wavy lines under ‘color’, ‘modeling’, and ‘faucet’ when it’s set to my default “English (South Africa)”. There are very many other places where you can write in English as much as you like or have time for, however, and then the blog post’s question becomes more relevant. How many Englishes or somehow accepted recognised variants of English exist, and where does it make a difference in writing such that you’ll have to, or are supposed to, choose?

It begs the question of how many variants of English count as one of the Englishes, which is tricky to answer, because it depends on what counts. Does a dialect count? Does it count when it’s sanctioned by a country that grants it official language status and a language body? Does it count when there are enough users? Or when there’s enough text to detect the substantive differences? What is the minimum number or type of differences, if any, and from which standard, before one may start to talk of different Englishes and a new spin-off X-English? People have been deliberating about such matters, trying to document differences, and even coming up with classification schemes. Englishes around the world, to be more precise, refers to localised or indigenised versions of English that are either those people’s first or institutionalised language, not just any variant or dialect. There’s an International Association for World Englishes (IAWE), there are handbooks, textbooks, and scientific journals about it, and the 25th conference of the IAWE will take place next year.

In recent years there have been suggestions that English could break up into mutually unintelligible languages, much as Latin once did. Could such a break-up occur, or are we in need of a new appreciation of the nature of World English?

Tom McArthur, 1987, writing from “the mother country”, but not “the centre of gravity”, of English (pdf).

My expertise doesn’t go that far – I’m operating from the consumer side of these matters, standards-following, and trying not to make too many mistakes. It took me a while to figure out that there was British English (BE) and American English (AE), and then it was a matter of looking up rules on spelling differences, like -ise vs. -ize and single vs. double l (e.g., traveling vs. travelling), checking comparative word lists, and other varied differences, like whether it’s ‘towards’ or ‘toward’ and 15:30, 15.30, 3.30pm or 3:30pm (or one of my colleagues’ p’s, like a 3.30p). Not to mention a plethora of online writing guides and the comprehensive The Sense of Style book by Steven Pinker. Let’s explore the Englishes and Global English a little.

McArthur’s Englishes (source)

South African English (SAE) exists as one of the recognised Englishes, all the way into internationally reputable dictionaries. It is a bit of a mix of BE and AE, with some spices sprinkled into it. It tries to follow BE, but there are AE influences due to the media and, perhaps, anti-colonial sentiment. It’s soccer, not football, for instance, and the 3.30pm variant rather than a 24h clock. Well, I’m not sure that it is officially, but practically it is so. It also has ‘weird’ words that everyone is convinced are native English of the BE variety, but aren’t, like timeously rather than timeous or timely – the most I could find was a Wiktionary entry claiming it to be Scottish and SAE, but not even the Dictionary of SAE (DSAE) has an entry for it. I’ve seen it so often in work emails over the years that I caved in and use it as well. There are at least a handful of SAE words that people in South Africa think are BE but aren’t, as any SA expat will be able to recall when they get quizzical looks overseas. Then there are hundreds of words that people know are SAE, at least unofficially, which are mainly the loan words and adopted words from the 10 other languages spoken in SA – regional overlap causes mutual language influences in all directions. Bakkie, indaba, veld, lekker, dagga, and many more – I’ve blogged about that before. My OpenOffice SAE spellchecker doesn’t flag any of these words as typos.

Arguably, grammatical differences for SAE exist as well. In practice they certainly do, but I’m not aware of anything officially endorsed. There is no ‘benevolent language dictator’ with card-carrying members of the lexicography and grammar police to endorse or reprimand. There is, indeed, the Pan South African Language Board (PanSALB), but its teeth and thunder don’t come close to the likes of the Académie Française or the Real Academia Española. Regarding grammar, that previous post already mentioned the case of the preposition at the end of a sentence when it’s a separable part of the verb in Afrikaans, Dutch, and German (e.g., meenemen or mitnehmen ‘take with’). A concoction that still makes me wince each time I hear or read it is the ‘can be able to’. It’s either can + the verb for what you can do, or copula + able to + the verb for what you can do. It is, e.g., ‘I can carry out the experiment’ or ‘I’m able to carry out the experiment’, but not ‘I can be able to carry out the experiment’. I suspect it carries over from a verb form in Niger-Congo B languages, since I’ve heard it used by at least Tanzanians, Kenyans, and Malawians, and meanwhile I’ve occasionally seen it in texts written by English-speaking South African students.

If the notion of “Englishes” feels uncomfortable, then what about Global/World/International English? Is there one? For many a paper I review double-blind, i.e., where the author names and affiliations are hidden, I can’t tell unless the English is really bad. I’ve read enough to be able to spot Spanglish or Chinglish, but mostly I can’t tell, in that there’s a sort of bland scientific English – be it a pidgin English, or maybe multiple authors cancel out each other’s ways of making mistakes, or no one really bothers to tear the vocabulary apart into its respective boxes because it’s secondary to the scientific content being communicated. No doubt investigative deliberations are ongoing about that too; if there aren’t, there ought to be.

Another scenario for ‘global English’ concerns how to write a newsletter for a global audience. For instance, if you were to visit a website with an intended audience in the USA, then it should be tolerable to read “this fall”, even though elsewhere it’s either autumn, spring, a rainy or a dry season. If it’s an article by the UN, say, then one may expect a different wording that is either not US-centric or, if the season matters, qualified, as in “Covid-19 cases are expected to rise during fall and winter in North America”. With the former wording, you can’t please everyone, due to different calendars with different month names and year ends, and different seasons. The question also came up recently for a Wikimedia blog post in whose draft version I was involved sideways, on Abstract Wikipedia progress for its natural language generation component. My tendency was toward(s) a Global English, whereas one of my collaborators’ stance was to assume the rule that it should be the English of wherever the organisation’s headquarters is located. These choices were also confusing when I was writing the first draft of my memoir: it was published by a South African publisher, hence, SAE style guidelines, but the book is also distributed – and read! – internationally.

Without clear rules, there will always be people who complain about your English, be it that you’re wrong or just not in the inner circle for sensing ‘the feeling of the language that only a native speaker can have’, that supposedly inherently unattainable Fingerspitzengefühl for it. No clear rules isn’t good for developing spelling and grammar checkers either. In that regard, and that one only, perhaps I just might prefer a benevolent dictator. I don’t even care which of the Englishes (except for not the stupid stuff like spelling ‘light’ as ‘lite’, ffs). I also fancy the idea of banding together with other ‘non-first-language’ speakers of English to start devising and dictating rules, since the English speakers can’t seem to sort out their own language – at least not to the extent that the grammatically richer languages have – and we’re in the overwhelming majority in numbers (about 1:3 apparently). One can dream.

As to the question in the title of the blog post: what I’ve written so far is not a clear answer for all cases, indeed, in particular when there is no editorial house style dictating it, but this lifting of the veil hopefully has made clear that attempting to answer the question means opening up that can of worms further. You could create your own style guide for your not-editor-policed writings. The more I read about it, though, the more complicated things turn out to be, so you’re warned in case you’d like to delve into this topic. Meanwhile, I’ll keep winging it on my blog with some version of a ‘global English’ and inadvertent typos and grammar missteps…

How does one do an ontological investigation?

It’s a question I’ve been asked several times. Students see ontology papers in venues such as FOIS, EKAW, KR, AAAI, Applied Ontology, or the FOUST workshops and it seems as if all that stuff just fell from the sky neatly into the paper, or that the authors perhaps played with mud and somehow got the paper’s contents to emerge neatly from it. Not quite. It’s just that none of the authors bothered to write a “methods and methodologies” or “procedure” section. That it’s not written doesn’t mean it didn’t happen.

To figure out how to go about doing such an ontological investigation, there are a few options available to you:

  • Read many such papers and try to distill commonalities from which one could reverse-engineer a possible process that could have led to those documented outcomes.
  • Guess the processes and do something, submit the manuscript, swallow the critical reviews and act upon those suggestions; repeat this process until it makes it through the review system. Then try again with another topic to see if you can do it now by yourself in fewer iterations.
  • Try to get a supervisor or a mentor who has published such papers and be their apprentice or protégé formally or informally.
  • Enrol in an applied ontology course, where they should be introducing you to the mores of the field, including the process of doing ontological investigations. Or take up a major/minor in philosophy.

Pursuing all options likely will get you the best results. In a time of publish-or-perish, shortcuts may be welcome, since the ever greater pressures are less forgiving of learning things the hard way.

Every discipline has its own ways for how to investigate something. At a very high level, it still will look the same: you arrive at a question, a hypothesis, or a problem that no one has answered/falsified/solved before, you do your thing and obtain results, discuss them, and conclude. For ontology, what hopefully rolls out of such an investigation is what the nature of the entity under investigation is. For instance, what dispositions are, a new insight on the transitivity of parthood, the nature of the relation between portions of stuff, or what a particular domain entity (e.g., money, peace, pandemic) means.

I haven’t seen cookbook instructions for how to go about doing this for applied ontology. I did do most of the options listed above: I read (and still read) a lot of articles, conducted a number of such investigations myself and managed to get them published, and even did a (small) dissertation in applied philosophy (mentorships are hard to come by for women in academia, let alone the next stage of being someone’s protégé). I think it is possible to distill some procedure from all of that, for applied ontology at least. While it’s still only a rough outline, it may be of interest to put it out there to get feedback on it to see whether this can be collectively refined or extended.

With X the subject of investigation, which could be anything—a feature such as the colour of objects, the nature of a relation, the roles people fulfill, causality, stuff, collectives, events, money, secrets—the following steps will get you at least closer to an answer, if not finding the answer outright:

  1. (optional) Consult dictionaries and the like for what they say about X;
  2. Do a scientific literature review on X and, if needed when there’s little on X, also look up attendant topics for possible ideas;
  3. Criticise the related work for where they fall short and how, and narrow down the problem/question regarding X;
  4. Put forth your view on the matter, building up the argument step by step; e.g., in one of two ways:
    a. From informal explanation, possibly via an intermediate stage of sketching a solution (in ad hoc notation for illustration or by abusing ORM or UML class diagram notation), to a formal characterisation of X, or of the aspect of X if the scope was narrowed down.
    b. From each piece of informal explanation, create the theory one axiom or definition at a time.
    Either route may involve proofs for logical consequences and will have some iterations of looking up more scientific literature to finalise an axiom or definition.
  5. (optional) Evaluate and implement.
  6. Discuss where it gave new insight, note any shortcomings, mention new questions it may generate or problems it doesn’t solve yet, and conclude.

For step 3, and as compared to the scientific literature I’ve read in other disciplines, the ontologists are a rather blunt, critical lot. The formalisation stage in step 4 is more flexible than indicated. For instance, you can choose your logic or make one up [1], but you do need at least something of the sort (more about that below). Few use tools, such as Isabelle, Prover9, and HeTS, to assist with the logic aspects, but I would recommend you do. Also within that grand step 4: philosophers typically would not use UML or ORM or the like, but take total freedom in drawing something, if there’s a drawing at all (and a good number would recoil at the very phrase ‘conceptual data modelling language’, but that’s for another time), and likewise for many a logician. Here are two sample sequences for that step 4:

A visualization of the ‘one definition or axiom at a time’ option (4b)

A visualization of the ‘iterating over a diagram first’ option (4a)
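
To make the ‘one axiom at a time’ route (4b) a little more concrete, here’s a made-up mini example, not taken from any of the cited papers: suppose the informal explanation of X, a collective, includes that it always has at least two members. That sentence could be formalised in a first-order logic as, say,

\forall x\, \big(\mathrm{Collective}(x) \rightarrow \exists y\, \exists z\, (\mathrm{memberOf}(y,x) \land \mathrm{memberOf}(z,x) \land y \neq z)\big)

after which one checks its consequences – does it clash with the other axioms? does it admit unintended models? – before moving on to the next piece of informal explanation, such as whether memberOf should also be declared asymmetric.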

As an aside, the philosophical investigations are lonesome endeavours resulting in disproportionately more single-author articles and books. This is in stark contrast with ontologies, those artefacts in computing and IT: many of them are developed in teams or even in large consortia, ranging from a few modellers to hundreds of contributors. Possibly because there are more tasks and the scope often may be larger.

Is that all there is to it? Sort of, yes, but for different reasons there may be different emphases on different components (and so it still may not get you through the publication process to tell the world about your awesome results). Different venues have different scopes, even if they use the same terminology in their respective CfPs. Venues such as KR and AAAI are very much logic-oriented, so there must be a formalisation, and proving interesting properties will substantially increase the (very small) chance of getting the paper accepted. Toning down the philosophical musings and deliberations is unlikely to be detrimental. See, for instance, our paper on essential vs immutable part-whole relations [2]. I wouldn’t expect the earlier papers, such as the one on social roles by Masolo et al. [3] or the temporal mereology by Donnelly and Bittner [4], to be able to make it through the KR/AAAI/IJCAI venues nowadays (none of the IJCAI’22 papers sounds even remotely like an ontology paper). But feel free to try. IJCAI 2023 will be in Cape Town, in case that information helps to motivate trying.

Venues such as EKAW and K-CAP like some theory, but there’s got to be some implementation, (plausible) use, and/or evaluation to it for it to have a chance of making it through the review process. For instance, my theory on relations was evaluated on a few ontologies [5], and the stuff paper had the ontology also in OWL, modelling guidance for use, and notes on interoperability [6]. All those topics, which reside in step 5 above, come at the ‘cost’ of less logic and less detailed philosophical deliberations—research time and a paper’s page limit do have hard boundaries.

Ontology papers in FOIS and the like prefer to see more emphasis on the theory and on what can be dragged in and used or adapted from advances in analytic philosophy, cognitive science, and attendant disciplines. Evaluation is not asked for as a separate item but assumed to be evident from the argumentation. I admit that sometimes I skip that as well when I write for such venues, e.g., in [7], but I typically do put some evaluation in there nonetheless (recall [1]). And there still tends to be the assumption that one can write axioms flawlessly and oversee the consequences without the assistance of automated model checkers and provers. For instance, have a look at the FOIS 2020 best paper award winner on a theory of secrets [8], which went through the steps mentioned above via the 4b route, the one about the ontology of competition [9], which took the 4a route with OntoUML diagrams (with the logic implied by their use), and one more on mereology that first had other diagrams as part of the domain analysis and then moved to the formalisation with definitions and theorems and a version in CLIF [10]. That’s not to say you shouldn’t do an evaluation of sorts (of the use-cases variety, checking against requirements, proving consistency, etc.), but just that you may be able to get away with not doing so (provided your argumentation is good enough and there’s enough novelty to it).

Finally, note that this is a blog post and it was not easy to keep it short. Side alleys, more explanations, illustrations, and details are quite possible. If you have comments on the high-level procedure, please don’t hesitate to leave a comment on the blog or contact me directly!

References

[1] Fillottrani, P.R., Keet, C.M. An analysis of commitments in ontology language design. Proceedings of the 11th International Conference on Formal Ontology in Information Systems 2020 (FOIS’20). Brodaric, B. and Neuhaus, F. (Eds.). IOS Press, FAIA vol. 330, 46-60.

[2] Artale, A., Guarino, N., and Keet, C.M. Formalising temporal constraints on part-whole relations. Proceedings of the 11th International Conference on Principles of Knowledge Representation and Reasoning (KR’08). Gerhard Brewka, Jerome Lang (Eds.) AAAI Press, pp 673-683.

[3] Masolo, C., Vieu, L., Bottazzi, E., Catenacci, C., Ferrario, R., Gangemi, A., & Guarino, N. Social Roles and their Descriptions. Proceedings of the 9th International Conference on Principles of Knowledge Representation and Reasoning (KR’04). AAAI press. pp 267-277.

[4] Bittner, T., & Donnelly, M. A temporal mereology for distinguishing between integral objects and portions of stuff. Proceedings of Association for the Advancement of Artificial Intelligence conference 2007 (AAAI’07). AAAI press. pp 287-292.

[5] Keet, C.M. Detecting and Revising Flaws in OWL Object Property Expressions. 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW’12), A. ten Teije et al. (Eds.). Springer, LNAI 7603, 252-266.

[6] Keet, C.M. A core ontology of macroscopic stuff. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW’14). K. Janowicz et al. (Eds.). Springer LNAI vol. 8876, 209-224.

[7] Keet, C.M. The computer program as a functional whole. Proceedings of the 11th International Conference on Formal Ontology in Information Systems 2020 (FOIS’20). Brodaric, B and Neuhaus, F. (Eds.). IOS Press, FAIA vol. 330, 216-230.

[8] Haythem O. Ismail, Merna Shafie. A commonsense theory of secrets. Proceedings of the 11th International Conference on Formal Ontology in Information Systems 2020 (FOIS’20). Brodaric, B and Neuhaus, F. (Eds.). IOS Press, FAIA vol. 330, 77-91.

[9] Tiago Prince Sales, Daniele Porello, Nicola Guarino, Giancarlo Guizzardi, John Mylopoulos. Ontological foundations of competition. Proceedings of the 10th International Conference on Formal Ontology in Information Systems (FOIS’18). Stefano Borgo, Pascal Hitzler, Oliver Kutz (Eds.). IOS Press, FAIA vol. 306, 96-109.

[10] Michael Grüninger, Carmen Chui, Yi Ru, Jona Thai. A mereology for connected structures. Proceedings of the 11th International Conference on Formal Ontology in Information Systems 2020 (FOIS’20). Brodaric, B and Neuhaus, F. (Eds.). IOS Press, FAIA vol. 330, 171-185.

More detail on the ontology of pandemic

When can we declare the covid-19 pandemic to be over? I mulled over that earlier in January this year, when the omicron wave was fizzling out in South Africa, and wrote a blog post as a step toward trying to figure it out; a short general-public article was published by The Conversation (republished widely, including by The Next Web). That was not all and the end of it. In parallel – or, more precisely, behind the scenes – that ontological investigation did happen scientifically and in much more detail.

The conclusion is still the same, just with a more detailed analysis, which is now described in the paper entitled Exploring the ontology of pandemic [1], recently accepted at the International Conference on Biomedical Ontology 2022 (ICBO’22).

First, it includes a proper discussion of how the 9 relevant domain ontologies represent pandemic – as the same as epidemic, as a sibling thereof, or as a subclass, and why – and what sort of generic top-level entity it is asserted to be, plus a few more scientific references by domain experts.

Second, besides the two foundational ontologies whose alignment I discussed in the blog post (DOLCE and BFO), I tried five more foundational ontologies, selected on several criteria: BORO, GFO, SUMO, UFO, and YAMATO. That mainly took up a whole lot more time, but it didn’t add substantially to the insights into what kind of entity pandemic is. It did, however, make clear that aligning manually is hard, and that it is difficult to get it as precise as it ought, and may need, to be, for several reasons (elaborated on in the paper).

Third, I dug deeper into the eight characteristics of pandemics according to the review by Morens, Folkers and Fauci (yes, that Fauci, of NIAID) [2] and disentangled what’s really going on with those, besides having already noted that several of them are fuzzy. Some of the characteristics aren’t really a property of pandemic itself, but of closely related entities, such as the disease (see the table in the paper). There are so many intertwined entities and relations, in fact, that one could very well develop an ontology of just pandemics, rather than having it only as a single class in an ontology, as is now the case. For instance, there has to be a high attack rate, but ‘attack rate’ itself relies on the fact that there is an infectious agent that causes a disease, and on the R (reproduction) number, which, in turn, is a complex thing that takes into account factors including susceptibility to infection, the social dynamics of a population, and the ability to measure infections.

Finally, there are different ways to represent all the knowledge, or a relevant part thereof, as I also elaborated on in my Bio-Ontologies keynote last month. For instance, the attack rate could be squashed into a single data property if the calculation is done elsewhere and you don’t care how it is calculated, or it can be represented in all its glorious detail, for the sake of it or for getting a clearer picture of what goes into computing the R number. For a scientific ontology, the latter is obviously the better choice, but there may be scenarios where the former is more practical.
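
To illustrate that representation choice with a toy Python sketch (illustrative numbers and names only, not from the paper or any of the ontologies):

# Option 1: the attack rate is computed elsewhere and stored as a single value
pandemic_record = {"disease": "covid-19", "attack_rate": 0.12}

# Option 2: represent the ingredients explicitly and derive the value,
# so that what goes into the number is itself part of the representation
def attack_rate(new_cases: int, population_at_risk: int) -> float:
    return new_cases / population_at_risk

print(attack_rate(120, 1000))  # 0.12 – same value, but the ingredients are explicit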

The conclusion? The analysis cleared up a few things, but with some imprecise and highly complex properties as part of the mix for determining what is (and is not) a pandemic, there will be more than one optimum/finish line for a particular pandemic. To arrive at something more specific than what’s in the paper, the domain experts may need to carry out a bit more research or come to a consensus on how to precisiate those properties that are currently still vague.

Last, but not least, on attending ICBO’22, which will be held from 25-28 September in Ann Arbor, MI, USA: it runs in hybrid format. At the moment, I’m looking into the logistics of trying to attend in person, now that we don’t have the highly anticipated ‘winter wave’ like the one we had last year that thwarted my conference travel planning. While that takes extra time and resources to sort out, there’s the very thick silver lining that we seem to be considerably closer to that real end of this pandemic (of the acute infections, at least). According to the draft characterisation of pandemic, one indeed might argue it’s over.

References

[1] Keet, C.M. Exploring the Ontology of Pandemic. 13th International Conference on Biomedical Ontology (ICBO’22). CEUR-WS. Michigan, USA, September 25-28, 2022.

[2] Morens, DM, Folkers, GK, Fauci, AS. What Is a Pandemic? The Journal of Infectious Diseases, 2009, 200(7): 1018-1021.

A proposal for a template language for Abstract Wikipedia

Natural language generation applications have been ‘mainstreaming’ behind the scenes for the last couple of years, from automatically generating text for images, to weather forecasts, summarising news articles, digital assistants that mechanically blurt out text based on the structured information they have, and many more. Google, Reuters, BBC, Facebook – they all do it. Wikipedia is working on it as well, principally within the scope of Abstract Wikipedia, to try to build a better multilingual Wikipedia [1] that reaches more readers better. They all have some source of structured content – like data fetched from a database or spreadsheet, information from, say, a UML class diagram, or knowledge from some knowledge graph or ontology – and a specification as to what the structure of the sentence should be, typically with some grammar rules to at least prettify it, if they aren’t outright essential for generating a grammatically correct sentence [2]. That specification is written in templates that are then filled with content.

For instance, a simple rendering of a template may be “Each [C1] [R1] at least one [C2]” or “[I1] is an instance of [C1]”, where the things within the square brackets are variables standing in for content that will be fetched from the source, like a class, relationship, or individual. Linking these to a knowledge graph about universities, it may generate, e.g., “Each academic teaches at least one course” and “Joanne Soap is an instance of Academic”. To get the computer to do this, just “Each [C1] [R1] at least one [C2]” as the template won’t do: we need to tell it what the components are so that the program can process them to generate that (pseudo-)natural language sentence.
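
At its most naive, that amounts to little more than string substitution – a minimal Python sketch with an invented toy source:

# Toy 'source' with the content fetched for the variables
fetched = {"C1": "academic", "R1": "teaches", "C2": "course"}

template = "Each {C1} {R1} at least one {C2}."
print(template.format(**fetched))  # Each academic teaches at least one course.

Everything beyond this – marking up which component is which, and hooking in grammar rules – is what the rest of this post is about.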

Many years ago, we did this for multiple languages, using XML to specify the templates for the key aspects of the content. The structured input was conceptual data models in ORM in the DOGMA tool, which had that verbalisation component [3]. As an example, the template for verbalising a mandatory constraint was as follows:

<Constraint xsi:type="Mandatory">
 <Text> - [Mandatory] Each</Text>
 <Object index="0"/>
 <Text>must</Text>
 <Role index="0"/>
 <Text>at least one</Text>
 <Object index="1"/>
</Constraint>

Besides demarcating the sentence and indicating the constraint, there’s fixed text within the <Text> … </Text> tags, and there’s the variable part, with the <Object… declaring that the name of the object type has to be fetched and the <Role… declaring that the name of the relationship has to be fetched from the model (well, more precisely in this case: the reading label); these were elements declared in an XML Schema. With the same example as before, where Academic is in the object index “0” position and Course in the “1” position (see [3] for details), the software would then generate “ – [Mandatory] Each Academic must teaches at least one Course.”

This can be turned up several notches by adding grammatical features, in order to handle, among others, gender for nouns in German – it affects the rendering of the ‘each’ and ‘one’ in the sample sentence – not to mention the noun classes of isiZulu and many other languages [4], where even the verb conjugation depends on the noun class of the noun that plays the role of subject in the sentence. Or you could add sentence aggregation to combine two templates into one larger one, to generate more flowy text, like “Joanne Soap is an academic who teaches at least one course”. Or change the application scenario or the machinery for dealing with the templates. For instance, instead of those variables in the template + code elsewhere that does the content fetching and any linguistic processing, we could put part of that into the template specification. Then there are no variables as such in the template, but functions. The template specification for that same constraint in an ORM diagram might then look like this:

ConstraintIsMandatory {
 "[Mandatory] Each "
 FetchObjectType(0)
 " must "
 MakeInfinitive(FetchRole(0))
 " at least one "
 FetchObjectType(1)}
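
A possible way to evaluate such a function-style template, sketched in Python with toy stand-ins for the fetch and grammar functions (the real ones would query the model and a grammar engine):

model = {"objects": ["Academic", "Course"], "roles": ["teaches"]}

def fetch_object_type(i):
    return model["objects"][i]

def fetch_role(i):
    return model["roles"][i]

def make_infinitive(verb):
    # toy rule, good enough for 'teaches' -> 'teach'
    return verb[:-2] if verb.endswith("es") else verb

constraint_is_mandatory = [
    "[Mandatory] Each ",
    lambda: fetch_object_type(0),
    " must ",
    lambda: make_infinitive(fetch_role(0)),
    " at least one ",
    lambda: fetch_object_type(1),
]

print("".join(p() if callable(p) else p for p in constraint_is_mandatory))
# [Mandatory] Each Academic must teach at least one Course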

If you want to go with newer technology than markup languages, you may prefer to specify it in JSON. If you’re excited about functional programming languages and see everything through the lens of functions, you can even turn the whole template specification into a bunch of only functions. Either way: there must be a specification of what those templates are permitted to look like, or: what elements can be used to make a valid specification of a template, so that the software will work properly and neither spits out garbage nor halts halfway before returning anything. What is permitted in a template language can be specified by means of a model, such as an XML Schema or a DTD, a JSON artefact, or even an ontology [5], a formal definition in some notation of choice, or by defining a grammar (be it a CFG or in BNF notation), and anyhow with enough documentation to figure out what’s going on.
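
As a toy illustration of the JSON route, here is a made-up specification of what a template may look like – a list of fixed-text and slot elements – checked with the jsonschema Python library; the element names are invented for this sketch:

from jsonschema import validate

template_schema = {
    "type": "array",
    "items": {
        "oneOf": [
            {"type": "object", "properties": {"text": {"type": "string"}},
             "required": ["text"], "additionalProperties": False},
            {"type": "object", "properties": {"slot": {"type": "string"}},
             "required": ["slot"], "additionalProperties": False},
        ]
    },
}

template = [
    {"text": "Each "}, {"slot": "C1"}, {"text": " "}, {"slot": "R1"},
    {"text": " at least one "}, {"slot": "C2"},
]

validate(template, template_schema)  # raises a ValidationError if malformed
print("valid template specification")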

How might this look in the context of Abstract Wikipedia? For the natural language generation aspects and its first proposal for the realiser architecture, the structured content to be rendered in a natural language sentence is fetched from Wikidata, as is the lexicographic data, and the functions to do the various computations are to come from/go in Wikifunctions. They’re then combined with the templates in various stages of the realiser pipeline to generate those sentences. But there was still a gap as to what those templates in this context may look like. Ariel Gutman, a google.org fellow working on Abstract Wikipedia, and I gave it a try, and that proposal for a template language for Abstract Wikipedia is now accessible online for comment, feedback, and, if you happen to speak a grammatically rich language, an option to provide difficult examples so that we can check whether the language is expressive enough.

The proposal is – like any other proposal for a software system – some combination of theoretical foundations, software infrastructure peculiarities, reasoned and arbitrary design decisions, compromises, and time constraints. Here’s a diagram of the key aspects of the syntax, i.e., with the elements, how they relate, and the constraints holding between them, in ORM notation:

An illustrative diagram with the key features of the template language in ORM notation.

There’s also a version in CFG notation, and there are a few examples, each of which shows what the template looks like for verbalising one piece of information (Malala Yousafzai’s age) in Swedish, French, Hebrew, and isiZulu. Swedish is the simplest one, as would English or Dutch be, so let’s begin with a simple one of that kind, in Dutch:

Persoon_leeftijd_nl(Entity,Age_in_years): "{Person(Entity)} is
  {Age_in_years} jaar."

Where the Person(Entity) fetches the name of the person (that’s identified by an identifier) and the Age_in_years fetches the age. One may like to complicate matters and add a conditional statement, such that the last part renders not just as jaar ‘years’, but as jaar oud ‘years old’ above a certain age and jaar jong ‘years young’ below it – where that dividing line is, is a sensitive topic for some and I will let that rest. In any case, in Dutch there’s no processing of the number itself to be able to render it in the sentence – 25 renders as 25 – but in other languages there is. For instance, in isiZulu. In that case, instead of simply fetching the number, we can put a function in the slot:

Person_AgeYr_zu(Entity,Age_in_years): "{subj:Person(Entity)}
  {root:subjConcord()}na{Year(Age_in_years)}."

That Year(Age_in_years) is a function that is based on either another function or a sub-template. For instance, it can be defined as follows:

Year_zu(years):"{root:Lexeme(L686326)} 
  {concord:RelativeConcord()}{Copula()}{concord_1<nummod:NounPrefix()}-
  {nummod:Cardinal(years)}"

Where Lexeme(L686326) is the word for ‘year’ in isiZulu, unyaka, and for the rest, it first links the age rendering to the ‘year’ with the RelativeConcord() of that word, which practically fetches e- for the ‘years’ (iminyaka, noun class 4), then gets the copulative (ng in this case), and then the noun prefix for the noun class of the number. Malala is in her 20s, which is amashumi amabili … (noun class 6, which is computed via Cardinal(years)), and thus the function NounPrefix() will fetch ama-. So, for Malala’s age data, Year_zu(years) will return iminyaka engama-25. That then gets processed with the rest of the Person_AgeYr_zu template, such as adding a U to the name by subj:Person(Entity), and later steps in the pipeline that take care of things like phonological conditioning (-na- + i- = -ne-), to eventually output UMalala Yousafzai uneminyaka engama-25. In other words: such a template indeed can be specified with the proposed template syntax.
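
To see that composition at work, here is a small, purely illustrative Python sketch of the steps just described; all linguistic data is hardcoded for this one example (noun classes 4 and 6) and the phonological conditioning is reduced to the single -na- + i- = -ne- rule, so this is emphatically not how the grammar engine computes it:

def relative_concord(noun_class):
    return {4: "e"}[noun_class]    # e- for iminyaka (noun class 4)

def copula():
    return "ng"                    # the copulative in this context

def noun_prefix(noun_class):
    return {6: "ama"}[noun_class]  # ama- for amashumi ... (noun class 6)

def year_zu(years):
    # mirrors {root:Lexeme(L686326)} {concord:RelativeConcord()}{Copula()}{...NounPrefix()}-{nummod:Cardinal(years)}
    return "iminyaka " + relative_concord(4) + copula() + noun_prefix(6) + "-" + str(years)

def person_ageyr_zu(name, years):
    # mirrors {subj:Person(Entity)} {root:subjConcord()}na{Year(Age_in_years)}.
    sentence = "U" + name + " u" + "na" + year_zu(years) + "."
    return sentence.replace("nai", "ne")  # phonological conditioning: -na- + i- = -ne-

print(year_zu(25))                              # iminyaka engama-25
print(person_ageyr_zu("Malala Yousafzai", 25))  # UMalala Yousafzai uneminyaka engama-25.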

There’s also a section in the proposal about how that template language then connects to the composition syntax so that it can be processed by the Wikifunctions Orchestrator component of the overall architecture. That helps hide a few complexities from the template declarations, but, yes, someone’s got to write those functions (or take them from existing grammar engines) that will take care of those more or less complicated processing steps. That’s a different problem to solve. You also could link it up with another realiser by means of a transformation to the input type it expects. For now, it’s the syntax of the declarative part for the templates.

If you have any questions or comments or suggestions on that proposal or interesting use cases to test with, please don’t hesitate to add something to the talk page of the proposal, leave a comment here, or contact either Ariel or me directly.


References

[1] Vrandečić, D. Building a multilingual Wikipedia. Communications of the ACM, 2021, 64(4), 38-41.

[2] Mahlaza, Z., Keet, C.M. Formalisation and classification of grammar and template-mediated techniques to model and ontology verbalisation. International Journal of Metadata, Semantics and Ontologies, 2020, 14(3): 249-262.

[3] Jarrar, M., Keet, C.M., Dongilli, P. Multilingual verbalization of ORM conceptual models and axiomatized ontologies. STARLab Technical Report, Vrije Universiteit Brussel, Belgium. February 2006.

[4] Keet, C.M., Khumalo, L. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, 2017, 51:131-157.

[5] Mahlaza, Z., Keet, C. M. ToCT: A Task Ontology to Manage Complex Templates. Proceedings of the Joint Ontology Workshops 2021, FOIS’21 Ontology Showcase. Sanfilippo, E.M. et al. (Eds.). CEUR-WS vol. 2969. 9p.

Only answering competency questions is not enough to evaluate your ontology

How do you know whether the ontology you developed or want to reuse is any good? It’s not a new question. It has been investigated quite a bit, and so the answer to that is not a short one. Based on a number of anecdotes, however, it seems ever more people are leaning toward a short answer along the lines of “it’ll be fine if it can answer my competency questions”. That is most certainly not the right answer. Let me illustrate this.

Here’s a set of 5 competency questions and a bad ontology (with the OWL file), being a newly mutilated version of the African Wildlife Ontology [1] modified with a popular South African pastime: the braai, i.e., a barbecue.

  • CQ1: Which animals are served at a barbecue? (Sample answers: kudu, impala,  warthog)
  • CQ2: What are the materials used for a barbecue? (Sample answers: tongs, skewers, poolbraai)
  • CQ3: What is the energy source for a braai device? (Sample answers: gas, coal)
  • CQ4: Which vegetables taste good with a braai? (Sample answers: tomatoes, onion, butternut)
  • CQ5: What food is eaten at a braai, or: what collection of edible things are offered?

The bad ontology does have answers to the competency questions, so a ‘CQs-only’ criterion for quality would suggest that the bad ontology is a good one. 100% good, even.

Why is it a bad one nonetheless?

That’s where years of methods, techniques, and tool development enter the stage: my textbook dedicates Section 5.2 to that, there are heuristics-based tips to prevent pitfalls [2] in general and for bio-ontologies with GoodOD, and there’s also a framework for ontology quality, OQuaRE [3], all of which aim to approach this issue of quality systematically. Let’s have a look at some of that.

Low-hanging fruit for a quick sanity check is to run the ontology through the Ontology Pitfall Scanner OOPS! [4]. Here’s the summary result, with two opened up that show what was flagged and why:

Mixing naming conventions is not neat. Examples of those in the badBBQ ontology are using CamelCase with PoolBraai but a dash in tasty-plant and spaces converted to underscores in Food_Preparation_Material, and lower case for some classes and upper case for others (PoolBraai and plant). An example of an unconnected ontology element is Site: the idea is that if it isn’t really used anywhere in the ontology, then maybe it shouldn’t be in the ontology, or you forgot to add something there, and OOPS! points you to that. Pitfall P11 may be contested, but if at all possible, one really should add a domain and range to the object property so as to minimise unintended models and make the ontology closer to the reality (or understanding thereof) one aims to represent. For instance, surely eats should not have any of the braai equipment on the left-hand side in the domain position, because equipment does not eat—only organisms do.
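
For instance, adding the missing domain and range to eats could look like the following sketch, written with the owlready2 Python library (which is not what was used here; the IRI and class names are placeholders):

from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/badBBQ.owl")  # hypothetical IRI

with onto:
    class Organism(Thing): pass

    class eats(ObjectProperty):
        domain = [Organism]  # equipment does not eat: only organisms do
        range = [Organism]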

At the other end of the spectrum are the philosophy and Ontology-inspired methods. The most well-known one is OntoClean [5], which is summarised in the textbook and there’s a tutorial for it in Appendix A. The, perhaps, most straightforward (and simplified) rule within that package is that anti-rigid classes cannot subsume rigid classes, or, in layperson terminology: (physical) entities cannot be subclasses of things that are roles that entities play. Person cannot be a subclass of Employee, since not all persons are always employees. For the badBBQ: Food is a role that an organism or part thereof plays in a certain context, and animals and plants are not always food—they are organisms (or part thereof) irrespective of the roles they may play (or, worded differently: of the roles that they are the ‘bearer of’). 
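
The rule itself is mechanical enough to check automatically once classes are tagged; a toy Python sketch with hypothetical rigidity tags:

# Anti-rigid classes must not subsume rigid ones (simplified OntoClean rule).
RIGIDITY = {"Person": "rigid", "Employee": "anti-rigid",
            "Animal": "rigid", "Food": "anti-rigid"}
SUBCLASS_OF = {"Person": "Employee", "Animal": "Food"}  # deliberately flawed taxonomy

for sub, sup in SUBCLASS_OF.items():
    if RIGIDITY[sub] == "rigid" and RIGIDITY[sup] == "anti-rigid":
        print(f"OntoClean violation: rigid {sub} subsumed by anti-rigid {sup}")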

Then there are the methods and tools in-between these two extremes. Take, for instance, Advocatus Diaboli / PEW (Possible World Explorer) [6], which helps you find places where disjointness axioms ought to be added. This is in the same line of thinking as adding those domain and range axioms: it helps you to be more precise and find mistakes. For instance, Site and BraaiEquipment are definitely intended to be disjoint: some location cannot be a concrete physical object. Adding the disjointness axiom results in an error, however: the PoolBraai is unsatisfiable because it was declared to be both a subclass of Site and of BraaiEquipment. Pool braais do exist, as there are braais that can be placed in or next to a pool. What the issue is here, is that there are two different meanings of the same term: once the device for the barbecue and once the ‘braai area by the pool’. That is, they are two different entities, not one, and so they either have to appear as two different entities in the ontology, with different names, or the intended meaning has to be chosen and one of the subsumption axioms removed.
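
That check can be replayed in a few lines, again sketched with owlready2 and placeholder names; after adding the disjointness axiom, the reasoner reports PoolBraai as unsatisfiable:

from owlready2 import (get_ontology, Thing, AllDisjoint, sync_reasoner,
                       default_world)

onto = get_ontology("http://example.org/badBBQ.owl")  # hypothetical IRI

with onto:
    class Site(Thing): pass
    class BraaiEquipment(Thing): pass
    class PoolBraai(Site, BraaiEquipment): pass  # the problematic double subsumption
    AllDisjoint([Site, BraaiEquipment])

sync_reasoner()  # needs Java; owlready2 bundles the HermiT reasoner
print(list(default_world.inconsistent_classes()))  # [badBBQ.PoolBraai]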

I also put some ugly things in the description of Braai: the two ways of specifying the source of heating, and the member. While one may say informally that a braai involves a collection of things (CQ5), ontologically, it won’t fly with ‘member’. Membership is not arbitrary. There are foundational (or top-level) ontologies whose developers already did the heavy-lifting of ontological analysis of key elements, and membership is one of them (see, among others, [7-9]). Such relations can simply be reused in one’s own ontology (e.g., imported from here), with their widely agreed-upon meaning; there’s even a tool to assist you with that [10]. If what you want is something else than that, then that relation is not membership but indeed something else. In this case, there are two options to fix it: 1) a braai as an event (rather than the device) will have objects (such as the food and the tongs) participating in the event, or 2) for the braai as a device, it has accessories (related with has Accessory, if you will), such as the tongs, and it is used for preparing (/barbecuing/cooking/frying) food (/meals/dinners).

Then the source of heating. The ‘one of’ construct (with the {…}) is relatively popular in conceptual data modelling when you know the set of values is only ever allowed to be that, like the days of the week. But in our open world of ontologies, more might just be added or removed. And, ontologically, coal, gas, and electricity are not individuals, so also that is incorrect. The other option, with heatedBy xsd:String, has its own set of problems, largely because data properties with their data types entail application implementation decisions that ought not to be in an ontology that is supposed to be usable across multiple applications (see Section 6.1 ‘attributions’ for a longer explanation). It can be addressed by granting them their rightful status as classes in the OWL file and relating those to the braai.
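
Sketched in the same owlready2 style, with placeholder names, that fix amounts to something like:

from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/improvedBBQ.owl")  # hypothetical IRI

with onto:
    class Braai(Thing): pass
    class HeatSource(Thing): pass
    class Coal(HeatSource): pass
    class Gas(HeatSource): pass
    class Electricity(HeatSource): pass

    class heatedBy(ObjectProperty):  # instead of heatedBy xsd:String
        domain = [Braai]
        range = [HeatSource]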

This is not an exhaustive analysis of the badBBQ ontology, nor even close to a full list of the latest methods and techniques for good ontology development, but I hope I’ve illustrated my point about not relying on just CQs as evaluation of your ontology. Sample changes made to the badBBQ are included in the improvedBBQ OWL file. Here’s a snapshot of the differences in the basic metrics (on the left). There’s room for another round of improvements, but I’ll leave that for later.

All this was not to say that competency questions are useless. They are not. They can be very useful to demarcate the scope of the ontology’s content, to keep on track with that since it’s easy to go astray from the intended scope once you begin or be subjected to scope creep, and to check whether at least the minimum content is in there somehow (and if not, why not). It’s the easy thing to check compared to the methods, techniques, and theory about good, sub-optimal, and bad ways of representing something. But such relative ease with CQs, perhaps unfortunately, does not mean it suffices to obtain a ‘good quality’ stamp of approval. Why the plethora of methods, techniques, theories, and tools isn’t used as often as it should be, is a question I’d like to know the answer to, and may be a topic for another time.

References

[1] Keet, C.M. The African Wildlife Ontology tutorial ontologies. Journal of Biomedical Semantics, 2020, 11:4.

[2] Keet, C.M., Suárez-Figueroa, M.C., Poveda-Villalón, M. Pitfalls in Ontologies and TIPS to Prevent Them. Knowledge Discovery, Knowledge Engineering and Knowledge Management: IC3K 2013 Selected Papers. A. Fred et al. (Eds.). Springer CCIS vol. 454, pp. 115-131, 2015.

[3] Duque-Ramos, A. et al. OQuaRE: A SQuaRE-based approach for evaluating the quality of ontologies. Journal of Research and Practice in Information Technology, 2011, 43(2): 159-176.

[4] Poveda-Villalón, M., Gómez-Pérez, A., Suárez-Figueroa, M.C. OOPS! (OntOlogy Pitfall Scanner!): An on-line tool for ontology evaluation. International Journal on Semantic Web and Information Systems, 2014, 10(2): 7-34.

[5] Guarino, N., Welty,C. An overview of OntoClean. In S. Staab and R. Studer (Eds.), Handbook on Ontologies, pp 201-220. Springer Verlag, 2009.

[6] Ferré, S., Rudolph, S. Advocatus diaboli – exploratory enrichment of ontologies with negative constraints. In: A. ten Teije et al. (Eds.), 18th International Conference on Knowledge Engineering and Knowledge Management (EKAW’12), volume 7603 of LNAI, pages 42-56. Springer, 2012. Oct 8-12, Galway, Ireland.

[7] Keet, C.M. and Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2): 91-110.

[8] Masolo, C., Borgo, S., Gangemi, A., Guarino, N., Oltramari, A. WonderWeb Deliverable D18–Ontology library. WonderWeb. 2003.

[9] Smith, B., et al. Relations in biomedical ontologies. Genome Biology, 2005, 6(5): 1-15.

[10] Keet, C.M., Fernández-Reyes, F.C., Morales-González, A. Representing mereotopological relations in OWL ontologies with OntoPartS. 9th Extended Semantic Web Conference (ESWC’12), Simperl et al. (eds.), 27-31 May 2012, Heraklion, Crete, Greece. Springer, LNCS 7295, 240-254.

Riffling through readability metrics

I was interviewed recently about my ontology engineering textbook, after having won the 2021 UCT Open Textbook Award for it. The interviewer initially assumed it was a textbook for undergraduate students because it has the word ‘Introduction’ in the title. Not quite. Soon thereafter, one of the 3rd-year computer science students who arrived early in class congratulated me on the award and laughed that that was an introduction at a different level altogether. It is, by design, but largely so with respect to the topics covered: it does not assume the reader knows anything about ontologies—hence, the ‘introduction’—but it does take for granted that the reader knows some of the basics in computer science or software engineering. For instance, there’s no explanation on what a database is, or a conceptual data model, or object-oriented software.

In addition, and getting to this post’s topic, I had tried to make the textbook readable, and at least definitely more accessible than the scientific papers and handbooks that were the only alternatives before this textbook saw the light of day. I think it is readable, and I have also received feedback that the book was easily readable. Admittedly, though, the notion of assessing readability only came to the fore in the editing process of my memoir, for it is aimed at a broader audience than the textbook. This raised a nagging question: what is it that makes some text readable?

It’s one of those easy questions that just do not have a simple answer. The quickest answer is “use a readability metric standardised by grade level” for a home language/mother tongue speaker. Scratching that surface lays bare the next question: what parameters have to be taken into account, and in what way, so as to come up with a score for the estimated grade level? Even the brief overview on the Wikipedia page on readability already lists 11 measurable parameters, and there are different ways to measure them and to possibly combine them as well. The same page lists 8 popular metrics and 4 advanced ones. That’s just for English. For instance, the Flesch reading ease is calculated as

206.835 – 1.015 * (total number of words / total number of sentences) – 84.6 * (total number of syllables / total number of words)

A rough categorisation of various texts for adults according to their respective Flesch reading ease scores. Source: https://blog.cathy-moore.com/2017/07/how-to-get-everyone-to-write-like-ernest-hemingway/.

to result in rough bands of reading ease. For instance, 90-100 for an 11-year-old, 60-70 as ‘plain English’, and anything from <30 down to 0 (and possibly even negative) for very to extremely difficult English texts, for professionals and graduate students. See also the figure on the right.
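
The formula is easy enough to implement; here is a rough Python version, with a crude vowel-group syllable counter, so its scores will deviate somewhat from those of polished tools:

import re

def count_syllables(word):
    # crude heuristic: count groups of vowels; real tools estimate syllables better
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was fat."), 2))
# 117.67: trivially simple text can score above 100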

The Gunning fog index has fewer fantastically tweaked multipliers:

Grade level = 0.4 * (average sentence length + percentage of Hard Words)

but there’s a wonderful Hard Words variable. What is that supposed to mean exactly? The readability page says that they are those words with two or more syllables, but the Gunning fog index page says three or more syllables (excluding proper nouns, familiar jargon, or compound words, and not counting common suffixes either).

Either way, the popular metrics are all easy to measure computationally without human intervention. Parameters such as fatigue, speed of perception, or background knowledge are not. Proxies for reading speed surely will be available by now somewhere; e.g., in the form of algorithms that analyse page-turning in eBook readers and a visitor’s scrolling behaviour when reading a long article on a webpage (the system likely knows that you probably won’t finish reading this post).

I don’t know why I never thought about all that before writing the textbook and why none of the writing guidelines I have looked up over the years had mentioned it. The most I did for readability, especially when I was writing my PhD thesis, was the “read aloud test” that was proposed in one of those writing guidelines: read your text aloud, and if you can’t, then something is wrong with the sentence. I used the Acrobat built-in screen reader for that as a first pass. If the text-to-speech algorithm stumbled over it, then it was time to reconsider the phrasing. I would then read it aloud myself and decide whether the Acrobat algorithm had to be improved upon or my sentence had to be revised.

How does the ontology engineering textbook fare? Are my blog posts any more readable? How much worse are the scientific papers? Is it true that the English in science articles is a sort of pidgin English, whereas in other fields, notably the humanities, the erudition and wordsmithery shine through in the readability metrics scores? I have no good answers now, but it would be easy to compute with a fine dataset of texts and the Python py-readability-metrics module for some quick ‘n dirty checks, or to adapt some other open source code for batch processing (e.g., from here, among multiple options). Maybe later; there are some other kinks to straighten first.
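
For what it’s worth, a minimal usage sketch of that module (it needs NLTK’s tokeniser data and, if I recall its documentation correctly, texts of at least 100 words):

# pip install py-readability-metrics && python -m nltk.downloader punkt
from readability import Readability

def scores(text):
    r = Readability(text)
    return r.flesch().score, r.gunning_fog().score

# 'corpus' stands in for that fine dataset of texts:
# for name, text in corpus.items():
#     print(name, *scores(text))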

Notably, one can game the system based on some of those key parameters. Besides sentence length—around 20 words is fine, I was told a long while ago—the number of syllables of the words and the vocabulary are taken into account. More monosyllabic words in shorter sentences with fewer word types will come out as more easily readable, according to the metric that is.

But ‘easier’ or ‘better’ lies in the eyes of the beholder: it may be such confetti so as to have become awful to read due to its lack of flow and coherence. Really. It is as I say. Don’t you think? It’s the way I see it. What say you? The “Really. … you?” has a Flesch reading ease of 90.38 and a Gunning Fog index of 1.44 as the number of years of formal education you would have needed to easily understand that. The “Notably, … and coherence” before it in this paragraph has a Flesch reading ease of 50.52 and a Gunning Fog index of 13.82.

Based on random sampling from my textbook, at least one of the paragraphs (p34, ‘purposes’) got a Flesch reading ease of 9.29 and a Gunning Fog index of 22.73, while other parts are around 30 and some are even in the 50-70 region for reading ease.

The illustration out of the way, let’s look at limitations. First, not all polysyllabic words are difficult and not all monosyllabic words are simple; e.g., the common, and therewith easy, ‘education’ and ‘interesting’ vs. the semi-obscure ‘nub’, ‘sloop’, ‘gry’, and ‘squick’ (more here). The longest monosyllabic words, such as ‘scraunched’ and ‘strengthed’, aren’t exactly easy to read either.

Plenty of other languages have predominantly polysyllabic words, such as Dutch or German, where new words can be formed by putting existing ones together. The Dutch word meervoudigepersoonlijkheidsstoornis puts meervoudige (‘multiple’), persoonlijkheid (‘personality’), and stoornis (‘disorder’) together into one concept: ‘multiple personality disorder’. Agglutinating languages, such as isiZulu, not only compose long words, but have so many meaningful pieces that a single word may well be a whole sentence in a disjunctive language. For instance, the 10-syllable word that one of my former students used to make the point: titukakimureeterahoganu ‘we have never ever brought it to him’. You get used to long words, and there’s no reason why English speakers would be inherently incapable of handling that. Intelligence does not depend on one’s mother tongue. Perhaps, if one is used to a disjunctive orthography, one may have become lazy. Any use of the aforementioned readability metrics for ‘non-English’ clearly will have to be revised to tailor them to the language.

Then there’s foreign language background that interferes with reading ease. Many a supposedly ‘difficult’ word in English comes from French, Italian, Latin, or Greek; e.g., oxymoron (Gr), camaraderie (Fr), quotidian (It), and obfuscate (La). For instance, we use oxymoron in Dutch as well, so there’s no ‘difficulty’ to it for a Dutch person; or take maalstroom, which is pronounced nearly the same as ‘maelstrom’, demagoog for ‘demagogue’ (also Greek origins, similar pronunciation), and algoritme for ‘algorithm’ (Persian origins, not an Anglicism), and recalcitrant is even spelled the same. The foreigner trying to speak or write English may not be erudite, but just winging it and hoping that the ‘copy and adapt’ works out. Conversely, supposedly ‘simpler’ words may not be: ‘wayward’ is a synonym for recalcitrant and, with only two syllables, it will make the readability score better. It would make the text less readable for Dutch, Spanish, Italian, and similar readers who are trying to read English, however, because there’s no connection with a familiar-looking word. About 80% of English words are borrowed from other languages.

Be that as it may, maybe I should reassess my textbook on the metric; maybe not. What does the algorithm know about computer science terminology anyhow? “Ontology Engineering is a specialisation in knowledge representation and reasoning.” has a Flesch reading ease of -31.73 and a Gunning Fog index of 20.00; a tough game it would be to get that back to a reading ease of 50.

It did affect a number of sentences in my memoir book. I don’t expect Joe and Joanne Soap to be interested, but teenagers who are shopping around for a university degree programme might be, and then professionals, students, and academics with a little spare time to relax and read, too. In other words: a reading ease of around 40-60. Some long sentences could indeed be split up without losing content, coherence, and flow.

There were others where the simplification didn’t feel like an improvement. For instance, compare “according to my opinion” with “the way I saw it”: the former flows smoothly whereas the latter sounds like a nagging firing-off. The latter for sure improves the readability score, with all those monosyllabic words. The copy editor changed the former into the latter. It still bugs me. Why? After some further pondering, beyond just blaming the grating staccato of a sequence of monosyllabic words, perhaps it is because an opinion generally is (though need not be) formed after considering the facts and analysing them, whereas seeing something in some way may (but definitely need not) be based on facts and analysis. That is, on closer inspection, they’re not equivalent phrases, not at all. Nuances can be, and were, lost with shorter sentences and simpler words. One’s voice, too. So there’s that. Overall, though, I hope the balance leans toward more readable, to get the message across better to more readers.

Lastly, there seems to be plenty of scope for more research on readability metrics—ones that can be computed, that is. While there are several applications for other well-resourced languages, including easy web apps, such as for Spanish and German and even for Dutch, there are very many languages spoken around the globe that do not have such metrics and nice algorithms yet. But even the readability metrics for English could be tweaked. For instance, to tailor them to a genre or a discipline. Then it would be easier to determine whether a book is, say, an easy-reading popular science book for the holidays on the beach or one that requires some or even a lot of effort. For computer science, one could take Gunning Fog and adjust the Hard Words variable to exclude common jargon that is detrimental to the score, like ‘encapsulation’ and ‘representation’ (both 5 syllables); biochemistry would need that too, given the long names for chemical compounds. And one could add a penalty for too many successive monosyllabic words, as sketched below. There will be more options to tweak the formulae and test them, but such additional digging is something for another time.
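
As a proof of concept of such tweaking – with made-up parameter values and the same simplistic syllable counter as before, so purely illustrative – one could adjust Gunning Fog like this:

import re

JARGON = {"encapsulation", "representation"}  # common CS terms to exempt

def syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def tweaked_gunning_fog(text, run_penalty=0.05):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    hard = [w for w in words if syllables(w) >= 3 and w not in JARGON]
    # small penalty for every word that extends a run of 5+ monosyllabic words
    runs, streak = 0, 0
    for w in words:
        streak = streak + 1 if syllables(w) == 1 else 0
        runs += streak >= 5
    base = 0.4 * (len(words) / sentences + 100 * len(hard) / max(1, len(words)))
    return base + run_penalty * runs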

As to my question in the introductory paragraph of this post, “What is it that makes some text readable?”: if you’ve made it all the way here reading this post, we’re all a bit wiser on readability, but a short and simple answer I still don’t have. It’s a long story with ifs and buts, and the last word is yet to be said about it.

As a bonus, here are a few hints to make something more readable, according to the readability calculator of the web-based editor tool of the The Conversation:

Screenshot I took about halfway through working on an article for The Conversation.

p.s.: The ‘science of reading’ adds more to it, to the point that you wonder how there even can be metrics. But their scope is broader.

pp.s.: The first full draft of this post had a reading ease of 52.37 and a Gunning Fog of 11.78, and the final one 54.37 and 11.18, respectively, which is fine by me. Length is probably more of an issue.