Launch of the isiZulu spellchecker

launchspellchecker

Langa Khumalo, ULPDO director, giving the spellchecker demo, pointing out a detected spelling error in the text. On his left, Mpho Monareng, CEO of PanSALB.

Yesterday, the isiZulu spellchecker was launched at UKZN’s “Launch of the UKZN isiZulu Books and Human Language Technologies” event, which was also featured on 702 live radio, SABC 2 Morning Live, and e-news during the day. UCT’s involvement is that both the theory and the spellchecker tool were developed in-house by members of the Department of Computer Science at UCT. The connection with UKZN’s University Language Planning & Development Office is that they wanted a spellchecker in the first place, and that we used a section of their isiZulu National Corpus (INC) [1] to train it.

The theory behind the spellchecker was described briefly in an earlier post and has been presented at IST-Africa 2016 [2]. Basically, we use neither the wordlist + rules approach of some experiments of 20 years ago, nor the wordlist + a few rules of the now-defunct translate.org.za OpenOffice v3 plugin of seven years ago, but a data-driven approach with a statistical language model that uses trigrams. The section of the INC we used comprised novels and news items, so it includes present-day isiZulu texts. At the time of the IST-Africa’16 paper, based on Balone Ndaba’s BSc CS honours project, the spellchecking was very much proof-of-concept, but it showed that it could be done and still achieve a good enough accuracy. We used that approach to create an end-user-usable isiZulu spellchecker, which saw the light of day thanks to our 3rd-year CS@UCT student Norman Pilusa, who both developed the front-end and optimised the back-end so that it performs very well.
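To give a flavour of how such a data-driven approach works, here is a minimal sketch in Python of spellchecking with character trigrams. It is a toy illustration only, not the actual tool: the tiny ‘corpus’, the boundary markers, and the hard unseen-trigram test are all made up for the example (the real spellchecker uses a statistical language model trained on the INC).

```python
from collections import Counter

def char_trigrams(word):
    w = f"^{word}$"  # add word-boundary markers
    return [w[i:i + 3] for i in range(len(w) - 2)]

def train(corpus_words):
    # count how often each character trigram occurs in the corpus
    counts = Counter()
    for word in corpus_words:
        counts.update(char_trigrams(word.lower()))
    return counts

def looks_misspelled(word, counts):
    # flag the word if it contains a trigram never seen in the corpus
    return any(counts[t] == 0 for t in char_trigrams(word.lower()))

corpus = ["umfundi", "abafundi", "ukufunda", "imali", "yimali"]
model = train(corpus)
print(looks_misspelled("umfundi", model))   # False: all its trigrams are attested
print(looks_misspelled("umfundqi", model))  # True: e.g. 'dqi' never occurs
```

In the real setting one would use trigram probabilities with smoothing and a tuned threshold rather than a hard unseen-trigram test, which is what makes the trade-off between accuracy and over-flagging tunable.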

Upon starting the platform-independent isiZulu_spellchecker.jar file, the English interface version looks like this:

zuspellopen

You can write text in the text box, or open a txt or docx file, which is then displayed in the text box. Click “Run”. There are then two options: you can step through the words detected as misspelled one at a time, or “Show All” words detected as misspelled. Both are shown for some sample text in the screenshots below.

zuspellonessection

processing one error at a time

zuspellallsection

highlighting all words detected as very probably misspelled

Then it is up to you to choose what to do with it: correct it in the text box, “Ignore once”, “Ignore all”, or “Add” the word to your (local) dictionary. If you have modified the text, you can save it with the changes made by clicking “Save correction”. You can also switch the interface from the default English to isiZulu by clicking “File – Use English”, and back to English via “iFayela – ulimi lesingisi”. You can download the isiZulu spellchecker from the ULPDO website, and those who want to get their hands on the source code will find it in the GitHub repository.

To anticipate some questions you may have: incorporating it as a plugin to Microsoft Word, OpenOffice/LibreOffice, and Mozilla Firefox was planned. The former is closed source, however, and the latter two do spellchecking in a way that is not amenable to the data-driven approach with trigrams. So, for now, it is a standalone tool. By design, it is desktop-based rather than for mobile phones, because the client (ULPDO@UKZN) expects the first users to be professionals writing admin documents and emails, journalists writing articles, and so on, on PCs and laptops.

There was also a trade-off regarding a particular sort of error: the tool now flags more words as probably incorrect than it otherwise would, but in return it correctly detects (a subset of) capitalisation, such as KwaZulu-Natal, whilst flagging some of the deviant spellings that go around, as shown in the screenshot below.

zuspellkzn

The customer preferred recognising such capitalisation.

Error correction sounds like an obvious feature as well, but that will require a bit more work, not just technologically, but also the underlying theory. It will probably be an honours project topic for next year.

In the grand scheme of things, the current v1 of the spellchecker is only a small step—yet, many such small steps in succession will get one far eventually.

The launch itself saw an impressive line-up of speeches and introductions: the keynote address was given by Dr Zweli Mkhize, UKZN Chancellor and member of the ANC NEC; Prof Ramesh Krishnamurthy, from Aston University UK, gave the opening address; Mpho Monareng, CEO of PanSALB gave an address and co-launched the human language technologies; UKZN’s VC Andre van Jaarsveld provided the official welcome; and two of UKZN’s DVCs, Prof Renuka Vithal and Prof Cheryl Potgieter, gave presentations. Besides our ‘5-minutes of fame’ with the isiZulu spellchecker, the event also launched the isiZulu National Corpus, the isiZulu Term Bank, the ZuluLex mobile-compatible application (Android and iPhone), and two isiZulu books on collected short stories and an English-isiZulu architecture glossary.

 

References

[1] Khumalo, L. Advances in developing corpora in African languages. Kuwala, 2015, 1(2): 21-30.

[2] Ndaba, B., Suleman, H., Keet, C.M., Khumalo, L. The Effects of a Corpus on isiZulu Spellcheckers based on N-grams. IST-Africa 2016. May 11-13, 2016, Durban, South Africa.

Relations with roles / verbalising object properties in isiZulu

The narratives can be very different for the paper “A model for verbalising relations with roles in multiple languages” that was recently accepted at the 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW’16), for the paper makes a nice smoothie of the three ingredients of language, logic, and ontology. The natural language part zooms in on isiZulu as use case (possibly losing some ontologist or logician readers), then there is the logic of mapping the role components of the Description Logic DLR with OWL (possibly losing the natural language researchers), and a bit of philosophy (losing most people…). It solves some thorny issues that arise when trying to verbalise complicated verbs that we need for knowledge-to-text natural language generation in isiZulu and some other languages (e.g., German). And it solves the matching of the logic-based representations popularised mainly in UML and ORM (which typically uses a logic in the DLR family of Description Logic languages) with the more commonly used OWL. The latter is even implemented as a Protégé plugin.

Let me start with some use cases that illustrate the problems to be solved. It is well known that natural language renderings of ontologies facilitate communication with domain experts who are expected to model and validate the represented knowledge. This is doable for English, with ACE in the lead, but it isn’t for grammatically richer languages. There, there are complications, such as conjugation of verbs, an article that may depend on the preposition, or a preposition that may modify the noun. For instance, works for, made by, located in, and is part of are quite common names for object properties in ontologies. They all have a dependent preposition, there are different verb tenses, and the latter has a copulative and noun rather than just a verb. All that goes into the object property’s name in an ‘English-based ontology’ and does not really have to be processed further in ontology verbalisation other than beautification. Not so in multiple other languages. For instance, the ‘in’ of located in ends up as affixes to the noun representing the object that the other object is located in; compare imvilophu ‘envelope’ with emvilophini ‘in the envelope’. Even something straightforward like a property eats can end up being conjugated differently depending on who’s eating: when a human eats, it is udla in isiZulu, but for, say, a dog, it is idla, which is driven by the system of noun classes, of which there are 17 in isiZulu. Many more examples illustrating different issues are described in the paper. To make a long story short, there are gradations in these complicating effects, from no effect, where a preposition can be squeezed in with the verb when naming an OP, to phonological conditioning, to modifying the article of the noun, to modifying the noun itself. A ‘3rd pers. sg.’ may thus be context-dependent, and notions of prepositions may modify the verb, the noun, the article of the noun, or both.
For settings other than English ontologies (e.g., Greek, German, Lithuanian), a preposition may belong neither to the verb nor to the noun, but instead to the role that the object plays in the relation described by the verb in the sentence. For instance, one obtains yomuntu, rather than the basic noun umuntu, when it plays the role of the whole in a part-whole relation, as in ‘a heart is part of a human’ (inhliziyo iyingxenye yomuntu).
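The noun modification itself can be computed with a phonological rule. A toy sketch of the vowel coalescence involved (the rule table is simplified to the three basic coalescences, and the function names are mine, not from the paper):

```python
# simplified isiZulu vowel coalescence: -a + u- = -o-, -a + i- = -e-, -a + a- = -a-
COALESCENCE = {("a", "u"): "o", ("a", "i"): "e", ("a", "a"): "a"}

def attach(prefix, noun):
    # glue a prefix onto a noun, applying coalescence at the boundary
    key = (prefix[-1], noun[0])
    if key in COALESCENCE:
        return prefix[:-1] + COALESCENCE[key] + noun[1:]
    return prefix + noun

print(attach("nga", "ubumba"))  # ngobumba 'of clay'
print(attach("ya", "umuntu"))   # yomuntu 'of a human'
```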

The question then becomes how to handle such a representation that also has to include roles. This is quite common in conceptual data modelling languages and in the DLR family of DL languages, and is known in ontology as positionalism [2]. Bumping up the role to an element in the representation language (thus, in addition to the relationship) enables one to attach information to it, like whether there is a (deep) preposition associated with it, the tense, or the case. Such role-based annotations can then be used to generate the right element, like einen Betrieb ‘some company’, adjusting the article to the case it goes with in German, or ya+umuntu=yomuntu ‘of a human’, modifying the noun in the object position in the sentence.
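For the German case, the idea is that a case annotation attached to the role selects the article. A hypothetical sketch (the article table and function names are made up for illustration; the actual annotation model is described in the paper):

```python
# indefinite article for German masculine nouns, selected by grammatical case
ARTICLES = {"nominative": "ein", "accusative": "einen", "dative": "einem"}

def verbalise_role_filler(noun, case):
    # 'case' would come from the annotation attached to the role
    return f"{ARTICLES[case]} {noun}"

print(verbalise_role_filler("Betrieb", "accusative"))  # einen Betrieb 'some company'
```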

To get this working properly, with a solid theoretical foundation, we reused part of the metamodel of conceptual data modelling languages [3] to create a language model for such annotations, in particular regarding the attributes of the classes in the metamodel. On its own, however, it is rather isolated and not immediately useful for the ontologies that we set out to verbalise. To this end, it links to the ‘OWL way of representing relations’ (ontologically: the so-called standard view), and we separate the logic-based representation from the readings that one can generate with the structured representation of the knowledge. All in all, the simplified high-level model looks like the picture below.

Simplified diagram in UML Class Diagram notation of the main components (see paper for attributes), linking a section of the metamodel (orange; positionalist commitment) to predicates (green; standard view) and their verbalisation (yellow). (Source: [1])


So much for the conceptual part; more details are described in the paper.

Just a fluffy colourful diagram isn’t enough for a solid implementation, however. To this end, we mapped one of the logics that adheres to positionalism to one that adheres to the standard view: DLR [4] and OWL, respectively. It could equally well have been done for other pairs of languages (e.g., with Common Logic), but these two are more popular in terms of theory and tools.

Having the conceptual and logical foundations in place, we implemented it to see whether it actually can be done and to check whether the theory was sufficient. The Protégé plugin is called iMPALA (it could be an abbreviation for ‘Model for Positionalism And Language Annotation’), and it both writes all the non-OWL annotations in a separate XML file and takes care of the renderings in Protégé. It works; yay. Specifically, it handles the interaction between the OWL file, the positionalist elements, and the annotations/attributes, plus the additional feature that one can add new linguistic annotation properties, so as to cater for extensibility. Here are a few screenshots:

OWL’s arbeitetFuer ‘works for’ is linked to the relationship arbeiten.


The prey role in the axiom of the impala being eaten by the ibhubesi.


 Annotations of the prey role itself, which is a role in the relationship ukudla.


We did test it a bit, from just the regular feature testing to the African Wildlife ontology that was translated into isiZulu (spoken in South Africa) and a people and pets ontology in ciShona (spoken in Zimbabwe). These details are available in the online supplementary material.

The next step is to tie it all together: the verbalisation patterns for isiZulu [5,6] and the OWL ontologies, to generate full sentences, correctly. This is set to happen soon (provided the protests don’t mess up the planning too much). If you want to know more details that are not, or not clearly, in the paper, then please have a look at the project page of A Grammar engine for Nguni natural language interfaces (GeNi), or come to EKAW’16, which will be held from 21-23 November in Bologna, Italy, where I will present the paper.

 

References

[1] Keet, C.M., Chirema, T. A model for verbalising relations with roles in multiple languages. 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW’16). Springer LNAI, 19-23 November 2016, Bologna, Italy. (in print)

[2] Leo, J. Modeling relations. Journal of Philosophical Logic, 2008, 37:353-385.

[3] Keet, C.M., Fillottrani, P.R. An ontology-driven unifying metamodel of UML Class Diagrams, EER, and ORM2. Data & Knowledge Engineering, 2015, 98:30-53.

[4] Calvanese, D., De Giacomo, G. The Description Logics Handbook: Theory, Implementation and Applications, chap. Expressive description logics, pp. 178-218. Cambridge University Press (2003).

[5] Keet, C.M., Khumalo, L. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, 2016, in print.

[6] Keet, C.M., Khumalo, L. On the verbalization patterns of part-whole relations in isiZulu. Proceedings of the 9th International Natural Language Generation conference 2016 (INLG’16), Edinburgh, Scotland, Sept 2016. ACL, 174-183.

Surprising similarities and differences in orthography across several African languages

It is well known that natural language interfaces and tools in one’s own language are useful in ICT-mediated communication. Consider, for instance, spellcheckers and Web search engines, machine translation, or even just straightforward natural language processing to at least ‘understand’ documents and find the right one with a keyword search. Most languages in Southern Africa, and those in the (linguistically named) Bantu language family, are still under-resourced, however, so this is not a trivial task, due to the limited data and the little researched and documented grammar. Any possibility to ‘bootstrap’ the theory, techniques, and tools developed for one language, fiddling just a bit to make them work for a similar one, will save many resources compared to starting from scratch time and again. Likewise, it would be very useful if both the generic and the few language-specific NLP tools for well-resourced languages could be reused or easily adapted across languages. The question is: does that work? We know very little about whether it does. Taking one step back: for that bootstrapping to work well, we need insight into how similar the languages are. And we may be able to find that out if only we knew how to measure the similarity of languages.

The most well-known qualitative way of determining some notion of similarity started with Meinhof’s noun class system [1] and the Guthrie zones. That’s interesting, but not nearly enough for computational tools. An experiment has been done for morphological analysers [2], with promising results, yet it also had more of a qualitative flavour to it.

I’m adding another proverbial “2 cents” to it here, by taking a mostly quantitative approach and focusing on orthography (how things are written down) in text documents and corpora. This was a two-step process. First, 12 versions of the Universal Declaration of Human Rights (UDHR) were examined on tokens and their word length; second, because the UDHR is quite a small document, isiZulu corpora were examined to see whether the UDHR is a representative sample, i.e., whether extrapolation from its results may be justified. The methods, results, and discussion are described in “An assessment of orthographic similarity measures for several African languages” [3].
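The first step amounts to computing, per language, the cumulative frequency distribution of token lengths. A rough sketch of that computation (crude whitespace tokenisation with some punctuation stripping, rather than the tokenisers actually used; the sample words are isiZulu):

```python
from collections import Counter

def cumulative_length_distribution(text):
    tokens = text.lower().split()  # crude whitespace tokenisation
    lengths = Counter(len(t.strip(".,;:!?")) for t in tokens)
    total = sum(lengths.values())
    cumulative, distribution = 0, {}
    for length in sorted(lengths):
        cumulative += lengths[length]
        distribution[length] = cumulative / total  # fraction of tokens <= length
    return distribution

sample = "Bonke abantu bazalwa bekhululekile belingana ngesithunzi nangamalungelo"
print(cumulative_length_distribution(sample))
```

Plotting such distributions for each of the 12 UDHR translations gives the curves in the figure below.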

The really cool thing about the language comparison is that it shows clusters of languages, indicating where bootstrapping may have more or less success, and they do not quite match the Guthrie zones. The cumulative frequency distributions of the word lengths in the UDHR of several languages spoken in Sub-Saharan Africa are shown in the figure below, where the names of the languages are those of the file names of the NLTK data kit that contains the quality translations of the UDHR.

Cumulative frequency distributions of the words in the UDHR of several languages spoken in Sub-Saharan Africa (Source: [3]).


The paper contains some statistical tests, showing that the languages in the bottom cluster are not statistically significantly different from each other, but they are from the ‘middle’ cluster. So, the word length distribution of Kiswahili is substantially different from that of, among others, isiZulu, in that Kiswahili has more shorter words and isiZulu more longer words, but Kiswahili’s pattern is similar to that of Afrikaans and English. This is important for NLP, for isiZulu is known to be highly agglutinating, whereas English (and thus also Kiswahili) is disjunctive. How important is such a difference? The simple answer is that grammatical elements of a sentence get ‘glued’ together in isiZulu, whereas at least some of them are written as separate words in Kiswahili. This is not to be conflated with, say, German, Dutch, and Afrikaans, where nouns can be concatenated to form new words; in isiZulu, e.g., a preposition is glued onto the noun. For instance, ‘of clay’ is ngobumba, contracting nga+ubumba with a vowel coalescence rule (-a + u- = -o-), which happens much less often in a language with a disjunctive orthography. This, in turn, affects the algorithms needed to computationally process the languages, and hence the prospects for bootstrapping.

Note that the middle cluster looks deceptively isolating, but the languages in it are not. Sesotho and Setswana are statistically significantly different from the others, in that they are even more disjunctive than English. Sepedi (the top-most line) is even more so. While I don’t know that language, a hypothetical example suffices to illustrate the notion. There is conjugation of verbs, like ‘works’, trabajas, or usebenza, but some orthographer a while ago could have decided to write the conjugation separately from the verb stem (e.g., trabaj as and u sebenza instead), thereby generating more tokens with fewer characters.

There are other aspects of language and orthography one can ‘play’ with to analyse quantitatively, like whether words mainly end in a vowel or not, and in which vowel mostly, and whether two successive vowels are acceptable in a language (for some, they aren’t). This is further described in the paper [3].
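Such checks are straightforward to operationalise; e.g., a rough sketch for the final-vowel and successive-vowels properties (whitespace tokens and a made-up sample list, for illustration only):

```python
import re

VOWELS = "aeiou"

def orthography_stats(tokens):
    # fraction of tokens ending in a vowel, and containing two successive vowels
    tokens = [t.lower() for t in tokens]
    final_vowel = sum(t[-1] in VOWELS for t in tokens) / len(tokens)
    successive = sum(bool(re.search(r"[aeiou]{2}", t)) for t in tokens) / len(tokens)
    return final_vowel, successive

print(orthography_stats(["imali", "iFacebook", "ukudla", "EFF"]))  # (0.5, 0.25)
```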

Yet, the UDHR is just one document. To examine the generalisability of these observations, we need to know whether the UDHR text is a ‘typical’ one. This was assessed in more detail by zooming in on isiZulu, both quantitatively and qualitatively, with four other corpora and texts in different genres. The results show that the UDHR is a typical text document orthographically, at least for the cumulative frequency distribution of the word length.

There were some other differences across the corpora, which have to do with genre and datedness, as was observed elsewhere for whole words [4]. For instance, news items in isiZulu newspapers nowadays include words like iFacebook and EFF, which surely don’t occur in a century-old bible translation. They do violate the ‘no two successive vowels’ rule and the ‘final vowel’ rule, though.

On the qualitative side of the matter, and something that will affect searching for information in texts, text summarisation, and the error correction of spellcheckers, is, again, that agglutination. For instance, searching on imali ‘money’ alone would be woefully inadequate to find all relevant texts; e.g., those news items also include kwemali, yimali, onemali, osozimali, kwezimali, and ngezimali, which are, respectively, of -, and -, that/which/who has -, of - (pl.), about/by/with/per - (pl.) money. Searching on the stem or root only is not going to help you much either, however. Take, for instance, -fund-, for which the results of just two days of Isolezwe news articles are shown in the table below (articles from 2015, when there were protests, too). Depending on what comes before fund and what comes after it, it can have quite a different meaning, such as abafundi ‘students’ and azifundi ‘they do not learn’.

isolezwefund
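A naive substring search on the root already shows why this is problematic; a small sketch (the token list is a made-up sample of the sort of forms found in the news articles):

```python
import re

tokens = ["abafundi", "azifundi", "bafunda", "imali", "nabafundi", "ukufundisa"]
# naive stem search: match -fund- anywhere inside a token
hits = [t for t in tokens if re.search(r"fund", t)]
print(hits)  # ['abafundi', 'azifundi', 'bafunda', 'nabafundi', 'ukufundisa']
```

All tokens but imali match, yet the matches conflate quite different meanings, so one needs a morphological analyser rather than substring matching for meaningful search.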

Placing this in the broader NLP scope, it also affects the widely used notion of lexical diversity, which, in its basic form, is a type-to-token ratio. Lexical diversity is used as a proxy measure for the ‘difficulty’ or level of a text (the higher, the more difficult), for language development in humans as they grow up, for second-language learning, and for related topics. Letting that loose on isiZulu text, it will count abafundi, bafundi, and nabafundi as three different types, so wheehee, high lexical diversity, yet in English they amount to ‘students’, ‘students’, and ‘and the students’. Put differently, somehow we have to come up with a more meaningful notion of lexical diversity for agglutinating languages. A first attempt is made in section 4 of the paper [3].
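The basic type-to-token ratio makes the problem easy to see (a toy example with the surface forms just mentioned):

```python
def type_token_ratio(tokens):
    # lexical diversity in its basic form: distinct types over total tokens
    return len(set(tokens)) / len(tokens)

zu = ["abafundi", "bafundi", "nabafundi"]               # all forms of 'students'
en = ["students", "students", "and", "the", "students"]
print(type_token_ratio(zu))            # 1.0: maximal 'diversity' for one noun
print(round(type_token_ratio(en), 2))  # 0.6
```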

Thus, the last word has not been said yet about orthographic similarity, but we now do have more insight into it. The surprising similarity of isiZulu (South Africa) with Runyankore (Uganda) was exploited in another research activity and shown to be very amenable to bootstrapping [5], so, in its own way, it provides supporting evidence for the bootstrapping potential that the figure above indicated as promising.

As a final comment on the tooling side of things, I used NLTK (Python). It worked well for basic analyses of text, but it (and similar NLP tools) will need considerable customisation for agglutinating languages.

 

References

[1] C. Meinhof. 1932. Introduction to the phonology of the Bantu languages. Dietrich Reiner/Ernst Vohsen, Johannesburg. Translated, revised and enlarged in collaboration with the author and Dr. Alice Werner by N.J. Van Warmelo.

[2] L. Pretorius and S. Bosch. Exploiting cross-linguistic similarities in Zulu and Xhosa computational morphology: Facing the challenge of a disjunctive orthography. In Proceedings of the EACL 2009 Workshop on Language Technologies for African Languages – AfLaT 2009, pages 96–103, 2009.

[3] C.M. Keet. An assessment of orthographic similarity measures for several African languages. Technical report, arxiv 1608.03065. August 2016.

[4] Ndaba, B., Suleman, H., Keet, C.M., Khumalo, L. The Effects of a Corpus on isiZulu Spellcheckers based on N-grams. IST-Africa 2016. May 11-13, 2016, Durban, South Africa.

[5] J. Byamugisha, C. M. Keet, and B. DeRenzi. Bootstrapping a Runyankore CNL from an isiZulu CNL. In B. Davis et al., editors, 5th Workshop on Controlled Natural Language (CNL’16), volume 9767 of LNAI, pages 25–36. Springer, 2016. 25-27 July 2016, Aberdeen, UK.

More stuff: relating stuffs and amounts of stuff to their parts and portions

With all the protests going on in South Africa, writing this post is going to be a moment of detachment from it (well, I’m trying), for it concerns foundational aspects of ontologies with respect to “stuff”. Stuff is the philosophers’ funny term for those kinds of things that cannot be counted, or can be counted only in quantities, and are in natural language generally referred to by mass nouns. For instance, water, gold, mayonnaise, oil, and wine are kinds of stuff, yet one can talk of individual objects of them only in quantities, like a glass of wine, a spoonful of mayonnaise, and a litre of oil. It is one thing to be able to say which types of stuff there are [1]; it is another matter how they relate to each other. The latter is described in the paper recently accepted at the 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW’16), entitled “Relating some stuff to other stuff” [2].

Is something like that even relevant when students are protesting for free education, among other demands? Yes. At the end of the day, it is part and parcel of a healthy environment to live in. For instance, one should be able to realise traceability in food and medicine supply chains, to foster good production processes and supply chains and to check compliance with them, so that you will not buy food that makes you ill or take medicines that are fake [3,4]. Such production processes and product logistics deal with ‘stuffs’ and their portions and parts that get separated and put together to make the final product. Current implementations have only underspecified ‘links’ (if any) that don’t let one infer automatically what (or who) the culprit is. Existing theoretical accounts from philosophy and in domain ontologies are incomplete, so they wouldn’t help you further either. The research described in the paper solves this issue.

Seven relations for portions and stuff-parts were identified, which have a temporal dimension where needed. For instance, the upper half of the wine in your wine glass is a portion of the whole amount of wine in the glass, yet that amount of wine was a portion of the amount of wine in the bottle when you opened it, and it has as part some amount of alcohol. (Some readers may not find this a nice example, for it being about alcohol, but the Western Cape, where Cape Town is situated, is the wine region of the country.) The relations are structured in a little hierarchy, as informally depicted in the figure below.

Section of the basic taxonomy of part-whole relations of [5] (less and irrelevant sections in grey or suppressed), extended with the stuff relations and their position in the hierarchy.


Their formal definitions are included in the paper.

Another aspect of the solution is that it distinguishes between 1) the extensional and the intensional level (like between ‘an amount of wine’ and ‘wine’), because different constraints apply (the latter can be instantiated, the former cannot), and 2) the amount of stuff and the (repeatable) quantity, as one can have 1kg of many things.

Just theory isn’t good enough, though, for one would want to use it in some way to indeed obtain those benefits of traceability in supply chains. After considering the implementation options (see the paper for details), I settled on an extension to the Stuff Ontology core ontology that now also imports a special-purpose module, OMmini, of the Ontology of Units of Measure (see also the Stuff Ontology page). The latter sounds easier than it turned out to be in practice, but that’s a topic for a different post. The module is there, and the links between the module and stuff.owl have been declared.

Although the implementation is atemporal in the end, it is still possible to do some automated reasoning for traceability. This is mainly done by availing of property chains to approximate the relevant temporal aspects. For instance, with scatteredPortionOf ∘ portionOf ⊑ scatteredPortionOf, one can infer that a scattered portion in my glass of wine, which was a portion of bottle #1234 of organic Pinotage wine from an amount of wine contained in cask #3, with wine from wine farm X of Stellar Winery from the 2015 harvest, is a scattered portion of that amount of matter (that cask). Or take the (high-level) pharmaceutical supply chain from [4]: a portion (on a ‘pallet’) of the quantity of medicine produced by the manufacturer goes to the warehouse, of which a portion (in a ‘case’) goes to the distribution centre. From there, a portion ends up on the dispensing shelf, and someone buys it. Tracing any customer’s portion of medicine (i.e., regardless of the actual instance) can then be inferred with the chain scatteredPortionOf ∘ scatteredPortionOf ∘ scatteredPortionOf ⊑ scatteredPortionOf.
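The effect of such a property chain can be simulated with a little forward chaining over triples. A sketch of the first chain only (the entity names are made up for the wine example; in the actual setting an OWL 2 reasoner does this over the ontology):

```python
# facts: (subject, property, object) triples
facts = {
    ("wineInMyGlass", "scatteredPortionOf", "wineInBottle1234"),
    ("wineInBottle1234", "portionOf", "wineInCask3"),
}

def apply_chain(facts):
    # scatteredPortionOf o portionOf -> scatteredPortionOf, computed to fixpoint
    inferred = set(facts)
    while True:
        new = {(a, "scatteredPortionOf", d)
               for (a, p, b) in inferred if p == "scatteredPortionOf"
               for (c, q, d) in inferred if q == "portionOf" and b == c}
        if new <= inferred:
            return inferred
        inferred |= new

result = apply_chain(facts)
print(("wineInMyGlass", "scatteredPortionOf", "wineInCask3") in result)  # True
```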

Sure, the research presented hasn’t solved everything yet, but at least software developers now have a (better) way to automate traceability in supply chains. It also allows one to be more fine-grained in analysing where a culprit may be, so that there are fewer needless scares. For instance, we know that when there’s an outbreak of Salmonella, we only have to trace where the batch of egg yolk went (typically into the tiramisu served in homes for the elderly), where it came from (which farm), and what it got mixed with in the production process, while the amount of egg white on your lemon meringue would still be safe to eat even if it came from the same batch that had at least one infected egg.

I’ll be presenting the paper at EKAW’16 in November in Bologna, Italy, and hope to see you there! It’s not a good time of the year w.r.t. weather, but that’s counterbalanced by the beauty of the buildings and art works, and the actual venue room is in one of the historical buildings of the oldest university of Europe.

 

References

[1] Keet, C.M. A core ontology of macroscopic stuff. 19th International Conference on Knowledge Engineering and Knowledge Management (EKAW’14). K. Janowicz et al. (Eds.). 24-28 Nov, 2014, Linkoping, Sweden. Springer LNAI vol. 8876, 209-224.

[2] Keet, C.M. Relating some stuff to other stuff. 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW’16). Springer LNAI, 19-23 November 2016, Bologna, Italy. (accepted)

[3] Donnelly, K.A.M. A short communication – meta data and semantics the industry interface: what does the food industry think are necessary elements for exchange? In: Proc. of Metadata and Semantics Research (MTSR’10). Springer CCIS vol. 108, 131-136.

[4] Solanki, M., Brewster, C. OntoPedigree: Modelling pedigrees for traceability in supply chains. Semantic Web Journal, 2016, 7(5), 483-491.

[5] Keet, C.M., Artale, A. Representing and Reasoning over a Taxonomy of Part-Whole Relations. Applied Ontology, 2008, 3(1-2):91-110.

My gender-balanced book reviews overall, yet with much fluctuation

In one of my random browsing moments, I stumbled upon a blog post by a writer whose son complained about the stories she was reading to him, as having so many books with women as protagonists. As it appeared, “only 27% of his books have a female protagonist, compared to 65% with a male protagonist.” She linked back to another post about a similar issue, but then for a TV documentary series called Missed in History, where viewers complained that there were ‘too many women’, making it more of a herstory than a missed in history. Their tally of the series’ episodes was that they featured 45% men, 21% women, and 34% were ungendered. All this made me wonder how I fared in my yearly book review blog posts. Here’s the summary table with the M/F/both-or-neither tally:

 

Year posted | Books reviewed | Nr M | Nr F | Both/neither | Pct F
2012 | Long walk to freedom, terrific majesty, racist’s guide, end of poverty, persons in community, African renaissance, angina monologues, master’s ruse, black diamond, can he be the one | 4 | 3 | 3 | 33%
2013 | Delusions of gender, tipping point, affluenza, hunger games, alchemist, eclipse, mieses karma | 2 | 3 | 2 | 43%
2014 | Book of the dead, zen and the art of motorcycle maintenance, girl with the dragon tattoo, outliers, abu ghraib effect, nice girls don’t get the corner office | 2 | 1 | 3 | 17%
2015 | Stoner, not a fairy tale, no time like the present, the time machine, 1001 nights, karma suture, god’s spy, david and goliath, dictator’s learning curve, MK | 4 | 2 | 4 | 20%
2016 | Devil to pay, black widow society, the circle, accidental apprentice, moxyland, muh, big short, 17 contradictions | 2 | 4 | 2 | 50%
Total | | 14 | 13 | 14 | 32%

 

Actually, I did pretty well on the overall balance. It also shows that had I done a bean count for a single year only, the conclusion could have been very different. That said, I classified the books from memory, not by NLP analysis of their text, so the actual weight given to the main characters may differ. Related to this is the screenplay dialogue-based, data-driven analysis of Hollywood movies, for which NLP was used. Their results show that even when there’s a female lead character, Hollywood manages to have men speak more; e.g., The Little Mermaid (71% male) and The Hunger Games (55% male). Even the chick flick Clueless is 50-50. (The website has several nice interactive graphs based on all that data, so you can check for yourself.) For The Hunger Games, though, the books do have Katniss think, do, and say more than the movies do.

A further caveat of the data is that these books are not the only ones I’ve read over the past five years, just the ones I wrote about. Anyhow, I’m pleased to discover there is some balance in what I pick out to write about, rather than an unconscious bias one way or the other.

As a last note on the fiction novels listed above, there was a lot of talk online this past week about Lionel Shriver’s keynote in defence of writing-what-you-like in fiction and of having had enough of the concept of ‘cultural appropriation’. Quite a few authors in the list above would be thrown on the pile of authors who ‘dared’ to imagine characters different from the box they probably would be put in. Yet most of them still did a good job of making it a worthwhile read, such as Hugh Fitzgerald Ryan on Alice Kyteler in ‘The devil to pay’, David Safier with Kim Lange in ‘Mieses Karma’, Stieg Larsson with ‘Girl with the dragon tattoo’, and Richard Patterson in ‘Eclipse’ about Nigeria. Rather: a terrible character or setting that misrepresents a minority or an oppressed, marginalised, or Othered group in a novel is an indication of bad writing, and the writer should educate him/herself better. For instance, JM Coetzee could come back to South Africa and learn a thing or two about the majority population here, and I hope for Zakes Mda that he’ll meet some women he can think favourably about and then reuse those experiences in a story. Anyway, even if the conceptually problematic anti-‘cultural appropriation’ police win out over the fiction writers, then I suppose I can count myself lucky to live in South Africa, which, with its diversity, will have diverse novels to choose from (assuming they won’t go further overboard into dictating that I be allowed to read only those novels designated as appropriate for my (externally) assigned box).

UPDATE (20-9-2016): following a question on POC protagonists, here’s the table, counting the books in which a person (or group) of colour is a protagonist (italicised in the original list). Some notes on my counting: Angina monologues has three protagonists of whom two are POC, so I counted it; Hunger games’ Katniss is a POC in the books; Eclipse is arguable; abu ghraib effect is borderline; and Moxyland has an ensemble cast, so I counted that as well. Non-POC includes cows as well (Muh), hence that term was chosen rather than the ‘white’ that POC is usually contrasted with. As can be seen, it varies quite a bit by year as well.

Year posted | Books reviewed (POC protagonist italicised in the original list) | POC | Non-POC or N/A | Pct POC
2012 | Long walk to freedom, terrific majesty, racist’s guide, end of poverty, persons in community, African renaissance, angina monologues, master’s ruse, black diamond, can he be the one | 8 | 2 | 80%
2013 | Delusions of gender, tipping point, affluenza, hunger games, alchemist, eclipse, mieses karma | 2 | 5 | 29%
2014 | Book of the dead, zen and the art of motorcycle maintenance, girl with the dragon tattoo, outliers, abu ghraib effect, nice girls don’t get the corner office | 2 | 4 | 33%
2015 | Stoner, not a fairy tale, no time like the present, the time machine, 1001 nights, karma suture, god’s spy, david and goliath, dictator’s learning curve, MK | 4 | 6 | 40%
2016 | Devil to pay, black widow society, the circle, accidental apprentice, moxyland, muh, big short, 17 contradictions | 3 | 5 | 38%
Total | | 19 | 22 | 46%

 

Brief report on the INLG16 conference

Another long wait at the airport is being filled with writing up some of the 10 pages of notes I scribbled while attending the WebNLG’16 workshop and the 9th International Natural Language Generation conference (INLG’16), which were held from 6 to 10 September in Edinburgh, Scotland.

There were two keynote speakers, Yejin Choi and Vera Demberg, several long and short presentations, and a bunch of posters and demos, all of which have full or short papers in the (soon to appear) online ACL proceedings. My impression was that, overall, the ‘hot’ topics were image-to-text, summaries and simplification, and then some question generation and statistical approaches to NLG.

The talk by Yejin Choi was about sketch-to-text, or: pretty much anything-to-text, such as image captioning, recipe generation based on the ingredients, and one could even do it with sonnets. She used a range of techniques to achieve it, such as probabilistic CFGs and recurrent neural networks. Vera Demberg’s talk, on the other hand, was about psycholinguistics for NLG, starting from the ‘uniform information density hypothesis’ and how surprising words and grammatical errors affect a person reading the text. It appears that there’s more pupil jitter when there’s a grammar error. The talk then moved on to how one can model and predict information density, for which there are syntactic, semantic, and event surprisal models. For instance, with the semantic one: given ‘Peter felled a tree’, how predictable is ‘tree’, given that it’s already kind of entailed in the word ‘felled’? Some results were shown for the most likely fillers for, e.g., ‘serves’ as in ‘the waitress serves…’ and ‘the prisoner serves…’, which then could be used to find suitable word candidates in sentence generation.
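For the curious: ‘surprisal’ here is just the negative log-probability of a word given its context. A toy version with a plain bigram model can be sketched as follows (a hypothetical illustration with made-up data, not the actual models from the talk):

```python
import math
from collections import Counter

# Tiny made-up corpus; real surprisal models are trained on large corpora.
corpus = "the waitress serves food the prisoner serves time".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev: str, word: str) -> float:
    """-log2 P(word | prev), maximum-likelihood estimate (seen bigrams only)."""
    return -math.log2(bigrams[(prev, word)] / unigrams[prev])

print(surprisal("the", "waitress"))  # 'waitress' follows 'the' half the time: 1.0 bit
print(surprisal("waitress", "serves"))  # fully predictable in this toy corpus
```

A word that always follows its context carries zero surprisal; the rarer the continuation, the higher the value, which is what the density models exploit.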

The best paper award went to “Towards generating colour terms for referents in photographs: prefer the expected or the unexpected?” by Sina Zarrieß and David Schlangen [1]. While the title might sound a bit obscure, the presentation was very clear. There is the colour spectrum, and people assign names to the colours, which one could take as RGB colour values for images. This is all nice and well on the colour strip, but when a colour is put in the context of other colours and background knowledge, the colour humans would use to describe that patch on an image isn’t always in line with the actual RGB colour. The authors approached the problem by viewing it as a multi-class classification problem and used a multi-layer perceptron with some top-down recalibration—and voilà, the software returns the intended colour most of the time. (Knowing the name of the colour, one can then go on to try to automatically annotate images with text.)

As for the other plenary presentations, I did make notes on all of them, but will select only a few due to time limitations. The presentation by Advaith Siddharthan on summarising news stories for children [2] was quite nice, as it combined three aspects: summarising the text (with NLG, not just repeating a few salient sentences), simplifying it with respect to children’s vocabulary, and editing out or rewording the harsh news bits. Another paper on summaries was presented by Sabita Acharya [3], which is likely to be relevant also to my student’s work on NLG for patient discharge notes [4]. Sabita focussed on getting doctors’ notes and the plan of care into a format understandable by a layperson, and used the UMLS in the process. A different topic was NLG for automatically describing graphs to blind people, with grade-appropriate lexicons (4th-5th grade learners and students) [5]. Kathleen McCoy outlined how they were happy to remember their computer science classes upon seeing that they could use graph search to solve it, with its states, actions, and goals. They evaluated the generated text for the graphs—as many others did in their research—by crowdsourcing on Amazon Mechanical Turk. One other paper that is definitely on my post-conference reading list is the one about mereology and geographic entities for weather forecasts [6], which was presented by Rodrigo de Oliveira. For instance, ‘the south’ in a Scottish weather forecast refers to a different region than ‘the south’ in a forecast for the UK as a whole, and the task was how to generate the right term for the intended region.

inlg16parts

our poster on generating sentences with part-whole relations in isiZulu (click to enlarge)

My 1-minute lightning talk on Langa’s and my long paper [7] went well (one of the other speakers in the session even resentfully noted afterward that I got all the accolades of the session), as did the poster and demo session afterwards. The contents of the paper on part-whole relations in isiZulu were introduced in a previous post, and you can click on the thumbnail on the right for a png version of the poster (which has less text than the blog post). Note that the poster highlights only three of the 11 part-whole relations discussed in the paper.

ENLG and INLG will merge and become a yearly INLG, there is a SIG for NLG (www.siggen.org), and one of the ‘challenges’ for this upcoming year will be on generating text from RDF triples.

Irrelevant for the average reader, I suppose, is that there were some 92 attendees, most of whom attended the social dinner, where there was a ceilidh—Scottish traditional music by a band, with traditional dancing by the participants—where it was even possible to form many (traditional) couples for the couples dances. There was some overlap in attendees between CNL’16 and INLG’16, so while it was my first INLG it wasn’t all brand new, yet there were also new people to meet and network with. As a welcome surprise, it was even mostly dry and sunny during the conference days in the otherwise quite rainy Edinburgh.

 

References

(links TBA shortly—neither Google nor duckduckgo found their pdfs yet)

[1] Sina Zarrieß and David Schlangen. Towards generating colour terms for referents in photographs: prefer the expected or the unexpected? INLG’16. ACL, 246-255.

[2] Iain Macdonald and Advaith Siddharthan. Summarising news stories for children. INLG’16. ACL, 1-10.

[3] Sabita Acharya, Barbara Di Eugenio, Andrew D. Boyd, Karen Dunn Lopez, Richard Cameron, Gail M. Keenan. Generating summaries of hospitalizations: A new metric to assess the complexity of medical terms and their definitions. INLG’16. ACL, 26-30.

[4] Joan Byamugisha, C. Maria Keet, Brian DeRenzi. Tense and aspect in Runyankore using a context-free grammar. INLG’16. ACL, 84-88.

[5] Priscilla Morales, Kathleen McCoy, and Sandra Carberry. Enabling text readability awareness during the micro planning phase of NLG applications. INLG’16. ACL, 121-131.

[6] Rodrigo de Oliveira, Somayajulu Sripada and Ehud Reiter. Absolute and relative properties in geographic referring expressions. INLG’16. ACL, 256-264.

[7] C. Maria Keet and Langa Khumalo. On the verbalization patterns of part-whole relations in isiZulu. INLG’16. ACL, 174-183.

UVa 11357 Ensuring truth solution description

We’re in the midst of preparing for the ICPC Southern Africa regionals, to be held in October, and so I’ve stepped up reading problems to find nice ones to train the interested students on a range of topics. The “Ensuring truth” problem was one of those, which I’ll discuss in the remainder of this post, since there’s no discussion of it online yet (only some code), and it is not as daunting as it may look at first glance:

ensuringthruth

The task is to determine whether such a formula is satisfiable.

While it may ‘scare’ a 1st- or 2nd-year student, when you actually break it down and play with an example or two, it turns out to be pretty easy. The ‘scary’-looking aspects are the basic propositional logic truth tables and the BNF grammar for (simplified!) Boolean formulas. Satisfiability of general Boolean formulas is NP-complete, which you may have memorised, so that looks daunting as well, as if the contestant would have to come up with a nifty optimisation to stay within the time limit. As it appears, not so.

Instead of being put off by it, let’s look at what is going on. The first line of the BNF grammar says that a formula is a clause, or a formula followed by a clause, separated by a disjunction (| ‘or’). The second line says that a clause is a conjunction of literals, which (by the third line) transpires to be just a series of ‘and’ (&) conjunctions between literals. The fourth line states that a literal is a variable or its negation, and the fifth line states that a variable is one of the letters of the alphabet.

Now try to generate a few inputs that adhere to this grammar, replacing one nonterminal at a time (an element on the left of a “::=” sign) by one of the alternatives on its right-hand side, with each step indicated by “=>”, for example:

<formula>
=> <formula> | <clause>
=> <clause> | <clause>
=> (<conjunction-of-literals>) | <clause>
=> (<literal>) | <clause>
=> (<variable>) | <clause>
=> (a)| <clause>
=> (a)| (<conjunction-of-literals>)
=> (a)|(<conjunction-of-literals> & <literal>)
=> (a)|(<conjunction-of-literals> & <literal> & <literal>)
=> (a)|(<conjunction-of-literals> & <literal> & <literal> & <literal>)
=> (a)|(<literal> & <literal> & <literal> & <literal>)
=> (a)|(~<variable> & <literal> & <literal> & <literal>)
=> (a)|(~a & <literal> & <literal> & <literal>)
=> (a)|(~a & <variable> & <literal> & <literal>)
=> (a)|(~a&b& <literal> & <literal>)
=> (a)|(~a&b& <variable> & <literal>)
=> (a)|(~a&b&a& <literal>)
=> (a)|(~a&b&a& <variable>)
=> (a)|(~a&b&a&c)

That is, (a)|(~a&b&a&c) is in the language of the grammar, as are the two formulas in the given sample input, (a&b&c)|(a&b)|(a) and (x&~x). Do you see a pattern emerging in what the formulas of this grammar look like?
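As an aside, the grammar has no nesting, only repetition, so its language is regular and candidate inputs can be sanity-checked against a single regular expression. A sketch in Python (assuming lowercase variables, as in the grammar):

```python
import re

# clause  ::= '(' literal ('&' literal)* ')'
# literal ::= '~'? variable, with variable a lowercase letter
CLAUSE = r"\(~?[a-z](?:&~?[a-z])*\)"
# formula ::= clause ('|' clause)*
FORMULA = re.compile(rf"{CLAUSE}(?:\|{CLAUSE})*")

for s in ["(a)|(~a&b&a&c)", "(a&b&c)|(a&b)|(a)", "(x&~x)", "a|b"]:
    print(s, "in the language:", FORMULA.fullmatch(s) is not None)
```

The first three print True; the last one, lacking the brackets the grammar demands, prints False.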

It’s a disjunction of conjunctions of literals, and only one of the conjuncts needs to be free of contradictions for the formula to be satisfiable. The only way we get a contradiction is if both a literal and its negation are in the same conjunct (analyse the truth tables if you didn’t know that). So, the only thing you have to do with the input is check whether within a pair of brackets there is, say, both an x and a ~x; at the first conjunct you encounter without such a contradiction, the formula is satisfiable and you print YES, else NO. That’s all. So, when given “(a)|(~a&b&a&c)”, you know upon processing the first conjunct “(a)” that the answer is YES, because “(a)” is trivially not contradictory, and thus we can ignore “(~a&b&a&c)”, which does have a contradiction (it doesn’t matter anymore, because we have already found one that doesn’t).
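That check fits in a few lines. A minimal sketch in Python (the function name is my own choice, and reading the test-case count and formulas from stdin per the actual UVa input format is left out):

```python
def is_satisfiable(formula: str) -> bool:
    """True iff at least one conjunct contains no literal together with its negation."""
    for conjunct in formula.split('|'):            # the disjuncts, e.g. '(~a&b)'
        literals = conjunct.strip('()').split('&')
        positive = {lit for lit in literals if not lit.startswith('~')}
        negative = {lit[1:] for lit in literals if lit.startswith('~')}
        if positive.isdisjoint(negative):          # no x together with ~x
            return True
    return False

for formula in ["(a&b&c)|(a&b)|(a)", "(x&~x)", "(a)|(~a&b&a&c)"]:
    print("YES" if is_satisfiable(formula) else "NO")  # YES, NO, YES
```

Since each conjunct is scanned once, this is linear in the length of the input, so indeed no nifty optimisation is needed to stay within the time limit.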

I’ll leave the implementation as an exercise to the reader 🙂.