# Dancing algorithms and algorithms for dance apps

Browsing through teaching material a few years ago, I stumbled upon dancing algorithms, which illustrate common algorithms in computing using dance [1], and I couldn’t resist writing about them, since I used to dance folk dances. More of them have been developed in the meantime. The list further below sorts them by algorithm and by dance style, with links to the videos on YouTube. Related ideas have also been used in mathematics teaching, such as teaching multiplication tables with Hip Hop singing and dancing in a class in Cape Town, dancing equations, mathsdance [2], and, stretching the scope a bit, rapping fractions elsewhere.

That brought me to the notion of algorithms for dancing, which takes a systematic, mathematical or computational approach to dance. For instance, the maths in salsa [3], an ontology to describe some of dance [4], and a few more, which go beyond the hard-to-read Labanotation that is geared toward ballet but not pair dancing, let alone a four-couple dance [5] or, say, a rueda (multiple pairs in a circle, passing on partners). Since there was little for Salsa dance, I proposed a computer science & dance project last year, and three computer science honours students were interested in developing their Salsational Dance App. The outcome of their hard work is that there’s now a demonstrated-to-be-usable API for the data structure to describe moves (designed for beats that count in multiples of four), a grammar for the moves to construct valid sequences, and some visualisation of the moves, therewith improving on the static information from ‘Salsa is good’ that counted as baseline. The data structure is extensible to other dance styles beyond Salsa that count in multiples of four, such as Bachata (without the syncopations, true).
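To give a flavour of what such a data structure may look like: the following is a hypothetical Python sketch of a moves data structure (the class names and fields are my own for illustration, not the Salsational Dance App’s actual API), where a move spans one or more four-beat bars and records what each dancer does per beat.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """What happens on one beat of a bar (illustrative fields)."""
    beat: int        # position within the bar, 1-4
    leader: str      # e.g., "left foot forward"
    follower: str    # e.g., "right foot back"

@dataclass
class Move:
    """A named move whose length is a multiple of the four-beat bar."""
    name: str
    bars: int = 1
    steps: List[Step] = field(default_factory=list)

    def beats(self) -> int:
        return self.bars * 4

# A basic step typically spans two four-beat bars (counts 1-8)
basic = Move("basic step", bars=2)
print(basic.beats())  # 8
```

Extending to, say, Bachata would then only require moves whose step patterns fit the same multiple-of-four bar structure.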

In my opinion, the grammar is the coolest component: it is the most novel aspect from both a scientific and an engineering perspective, and it was technically the most challenging task of the project. The grammar’s expressiveness remained within context-free, which is computationally still doable. This may be due to the moves covered (the usual moves one learns during a Salsa beginners course), or it may hold in any case. The grammar has been tested against a series of test cases in the system, which all worked well (whether all theoretically physically feasible sequences feel comfortable to dance is a separate matter). The parsing is done with JavaCC, which formally verifies whether a sequence of moves is valid, and it even does that on the fly. That is, when a user selects a move while planning a sequence of moves, it instantly recomputes which of the moves in the system can follow the last one selected, as can be seen in the following screenshot.

Screenshot of planning a sequence of moves.
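As a rough illustration of that on-the-fly filtering: the toy Python sketch below reduces the idea to a plain successor relation with made-up move names (the actual project uses a full context-free grammar parsed with JavaCC, which can express more than this regular approximation, but the filtering idea is the same).

```python
# Toy successor relation: which moves may follow which (made-up names,
# not the project's actual grammar).
FOLLOWS = {
    "start":           {"basic step", "side step"},
    "basic step":      {"basic step", "cross-body lead", "right turn"},
    "side step":       {"basic step", "right turn"},
    "cross-body lead": {"basic step"},
    "right turn":      {"basic step", "cross-body lead"},
}

def valid_next(sequence):
    """Moves that may follow the last move selected so far."""
    last = sequence[-1] if sequence else "start"
    return sorted(FOLLOWS.get(last, set()))

def is_valid(sequence):
    """Check a whole planned sequence against the successor relation."""
    prev = "start"
    for move in sequence:
        if move not in FOLLOWS.get(prev, set()):
            return False
        prev = move
    return True

# After selecting the basic step, the UI would offer only these:
print(valid_next(["basic step"]))
```

Each time the user picks a move, the tool recomputes `valid_next` and greys out everything else, so an invalid sequence can never be constructed.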

The grammar thus even has a neat ‘wrapper’ in the form of an end-user usable tool, which was evaluated by several members of Evolution Dance Company in Cape Town. Special thanks go to its owner, Mr. Angus Prince, who served also as external expert on the project. Some more screenshots, the code, and the project papers by the three students—Alka Baijnath, Jordy Chetty, and Micara Marajh—are available from the CS honours project archive.

The project also showed that much more can be done, not just porting it to other dance styles, but also still for salsa. This concerns not only the grammar, but also how to encode moves in a user-friendly way and how to link it up to the graphics so that the ‘puppets’ will dance the sequence of moves selected, as well as meeting other requirements, such as a mobile app as ‘cheat sheet’ to quickly check a move during a social dance evening, and choreography planning. Based on my own experiences goofing around writing down moves, the latter (choreography) seems less hard to realise [documenting, at least] than the ‘any move’ scenario. Either way, the honours project topics are being finalised around now. Hopefully, there will be another 2-3 students interested in computing and dance this year, so we can build up a larger body of software tools and techniques to support dance.

Dancing algorithms by type

Sorting

– Quicksort as a Hungarian folk dance

– Bubble sort as a Hungarian dance, Bollywood style dance, and with synthetic music

– Shell sort as a Hungarian dance

– Select sort as a gypsy/Roma dance

– Merge sort in ‘Transylvanian-Saxon’ dance style

– Insert sort in Romanian folk dance style

– Heap sort, also in Hungarian folk dance style

There are more sorting algorithms than these, though, so there’s plenty of choice to pick your own. A different artistic look at the algorithms is this one, with 15 sorts that almost sound like music (but not quite).

Searching

– Linear search in Flamenco style

– Binary search also in Flamenco style

Backtracking as a ballet performance

Dancing algorithms by dance style

European folk:

– Hungarian dance for the quicksort, bubble sort, shell sort, and heap sort;

– Roma (gypsy) dance for the select sort;

– Transylvanian-Saxon dance for the merge sort;

– Romanian dance for an insert sort.

Spanish folk: Flamenco dancing a binary search and a linear search.

Bollywood dance where students are dancing a bubble sort.

Classical (Ballet) for a backtracking algorithm.

Modern (synthetic music) where a class attempts to dance a bubble sort.

That’s all for now. If you make a choreography for an algorithm, get people to dance it, record it, and want to have the video listed here, feel free to contact me and I’ll add it.

References

[1] Zoltan Katai and Laszlo Toth. Technologically and artistically enhanced multi-sensory computer programming education. Teaching and Teacher Education, 2010, 26(2): 244-251.

[2] Stephen Ornes. Math Dance. Proceedings of the National Academy of Sciences of the United States of America 2013. 110(26): 10465-10465.

[3] Christine von Renesse and Volker Ecke. Mathematics and Salsa Dancing. Journal of Mathematics and the Arts, 2011, 5(1): 17-28.

[4] Katerina El Raheb and Yannis Yoannidis. A Labanotation based ontology for representing dance movement. In: Gesture and Sign language in Human-Computer Interaction and Embodied Communication (GW’11). Springer, LNAI vol. 7206, 106-117. 2012.

[5] Michael R. Bush and Gary M. Roodman. Different partners, different places: mathematics applied to the construction of four-couple folk dances. Journal of Mathematics and the Arts, 2013, 7(1): 17-28.

# Gastrophysics and follies

Yes, turns out there is a science of eating, which is called gastrophysics, and a popular science introduction to the emerging field was published in an accessible book this year by Charles Spence (Professor [!] Charles Spence, as the front cover says), called, unsurprisingly, Gastrophysics—the new science of eating. The ‘follies’ I added to the blog post title refers to the non-science parts of the book; it is a polite term, and it makes for a nice alliteration in the pronunciation of the post’s title. The first part of this post is about the interesting content of the book; the second part, about certain downsides.

The good and interesting chapters

Given that some people don’t even believe there’s a science to food (there is, a lot!), it is perhaps even a step beyond to contemplate that there can be such a thing as a science for the act of eating and drinking itself. Turns out—quite convincingly in the first couple of chapters of the book—that there’s more to eating than meets the eye. Or taste bud. Or touch. Or nose. Or ear. Yes, the ear is involved too: e.g., there’s a crispy or crunchy sound when eating, say, crisps or cornflakes, and it is perceived as an indicator of their freshness. When it doesn’t crunch as well, the ratings are lower, for there’s an impression of staleness or limpness to it. The nose plays two parts: smelling the aroma before eating (olfactory) and when swallowing, as volatile compounds are released in your throat that reach your nose from the back when breathing out (i.e., retronasal).

The first five chapters of the book are the best, covering taste, smell, sight, sound, and touch. They present easily readable, interesting information that is based on published scientific experiments. Like that drinking with a straw ruins the smell-component of the liquid (and so does drinking from a bottle) cf. drinking from a glass, which sets the aromas free to combine the smell with the taste for a better overall evaluation of the drink. Or take the odd (?) thing that frozen strawberry dessert tastes sweeter from a white bowl than from a black one, as it does when eaten from a round plate cf. an angular plate. Turns out there’s some neuroscience to shapes (and labels) that may explain the latter. If you think touch and cutlery don’t matter: it’s been investigated, and they do. Heavy cutlery makes the food taste better. Its surface matters, too. The mouthfeel isn’t the same when eating with a plain spoon vs. a spoon that was first dipped in lemon juice and then in sugar or ground coffee (let it dry first).

There is indeed, as the intro says, some fun fact on each of these pages. It is easy to see that these insights can be interesting to play with for one’s own dinner, as well as being useful to the food industry and to food science, be it to figure out the chemistry behind it or how to change the product, the production process, or even just the packaging. Some companies did so already. Like when you open a bag of (relatively cheap-ish) ground coffee: the smell is great, but that’s only because some extra aroma was added to the sealed air when it was packaged. Re-open the container (assuming you’ve transferred it into one), and the same coffee smell does not greet you anymore. The beat of the background music apparently also affects the speed of masticating. Of course, the basics of this sort of stuff were already known decades ago. For instance, the smell of fresh bread in the supermarket is most likely aroma in the air conditioning rather than actual baking all the time the shop is open (shown to increase bread sales, if not more), and the beat of the music in the supermarket affects your walking speed.

On those downsides of the book

After these chapters, it gradually goes downhill with the book’s contents (not necessarily the topics). There are still a few interesting science-y things to be learned from the research into airline food. For instance, that the overall ‘experience’ is different because of lower humidity (among other things), so your nose dries out and thus detects less aroma; they throw more sauce and more aromatic components into the food being served up in the air. However, the rest descends into a bunch of anecdotes and blabla about fancy restaurants, with the sources no longer being solid scientific outlets, but mostly shoddy newspaper articles. Yes, I’m one of those who checks the footnotes (annoyingly as endnotes, but one can’t blame the author for that sort of publisher’s mistake). Worse, it gives the impression of being research-based, because it was so in the preceding chapters. Don’t be fooled by the notes in, especially, chapters 9-12. To give an example, there’s a cool-sounding section on “do robot cooks make good chefs?” in the ‘digital dining’ chapter. One expects an answer; but no, forget that. There’s some hyperbole with the author’s unfounded opinion and, to top it off, a derogatory remark about his wife probably getting excited about a 50K GBP kitchen gadget. Another example out of very many of this type: some opinion by some journalist who ate some day at, in this case, the über-fancy way-too-expensive-for-the-general-reader Pairet’s Ultraviolet (note 25 on p207). Daily Telegraph, New York Times, Independent, BBC, Condiment Junkie, Daily Mail Online, more Daily Mail, BBC, FT Weekend Magazine, Wired, Newsweek, etc. Come on! Seriously?! It is supposed to be a popsci book, so please don’t waste my time with useless anecdotes and gut-feeling opinions without (easily digestible) scientific explanations. Or they should have split the book in two: I) popsci and II) skippable waffle that any science editor ought not to have permitted to pass the popsci book writing and publication process. Professor Spence is encouraged to reflect a little on having gone down a slippery slope a bit too much.

In closing

Although I couldn’t bear to finish reading the ‘experiential meal’ chapter, I did read the rest, including the final chapter. As any good meal has to have a good start and finish, the final chapter is fine, including the closing [almost] with the Italian Futurists of the 1930s (or: weird dishes aren’t so novel after all) and their suggestions for creating your own futurist dinner party.

In conclusion: the book is worth reading, especially the first part. Cooking up a few experiments of my own sounds like a nice pastime.

Conjuring up or enhancing a new subdiscipline, say, gastromatics, computational gastronomy, or digital gastronomy, could be fun. The first term is a bit too close to gastromatic (the first search hits are about personnel management software in catering), though, and the second one has been appropriated by the data mining and Big Data crowd already. Digital gastronomy has been coined as well and seems more inclusive on the technology side than the other two. If it all sounds far-fetched, here’s a small sampling: there are already computer cooking contests (at the case-based reasoning conferences) for coming up with the best recipe given certain constraints, a computational analysis of culinary evolution, data mining in food science and food pairing in Arab cuisine; robot cocktail makers are for sale (e.g., makr shakr and barbotics), there’s also been research on robot baristas (e.g., the FusionBot and lots more), and more, much more, results over at least the past 10 years.

# Book reviews for 2016

I can’t resist adding another instalment of brief reviews of some of the books I’ve read over the past year, following the previous five editions and the gender analysis of them (with POC/non-POC added on request at the end). This time, there are three (well, four) non-fiction books and four fiction novels discussed in the remainder of the post. The links to the books used to be mostly to Kalahari.com online (an SA-owned bookstore), but they have been usurped by the awfully-sounding TakeALot, so the links to the books are diversified a bit more now.

Non-fiction

Writing what we like—a new generation speaks, edited by Yolisa Qunta (2016). This is a collection of short essays about how society is perceived by young adults in South Africa. I think this stock-taking of events and opinions thereof is a must-read for anyone wanting to know what goes on and willing to look a bit beyond the #FeesMustFall sound bites on Twitter and Facebook. For instance, “A story of privilege” by Shaka Sisulu describing his experiences coming to study at UCT; Sophokuhle Mathe in “White supremacy vs transformation” on UCT’s new admissions policy, the need for transformation, and holding the university to account; Yolisa Qunta’s “Spider’s web” on the ghost of apartheid with the every-day racist incidents and the anger that comes with it; “Cape Town’s pretend partnership” by Ilham Rawoot on the observed exclusion of most Capetonians from the preparations for the World Design Capital in 2014. There are a few ‘lighter’ essays as well, like the fun side of taking the taxi (minibus) in “life lessons learnt from taking the taxi” by Qunta (indeed, travelling by taxi can be fun).

Elephants on Acid by Alex Boese (2007). This is a fun book about the weird and outright should-not-have-been-done research—and why we have ethics committees now. There are of course the ‘usual suspects’ (gorillas in our midst, Milgram’s experiment), the weird ones (testing LSD on elephants; it didn’t turn out alright), funny ones (will your dog get help if you are in trouble [no]; how much pubic hair you lose during intercourse [not enough for the CSI people]; social facilitation with cockroach games; trying to weigh the mass of a soul), but also those of the do-not-repeat variety. The latter include trying to figure out whether a person under the guillotine will realise his head has been ‘separated’ from his body, Little Albert, and the “depatterning” of ‘beneficial brainwashing’ (it wasn’t beneficial at all). The book is written in an entertaining way, either with a ‘what on earth was their hypothesis to devise such an experiment?’ angle or, knowing the hypothesis, with some morbid fascination to see whether it was falsified. Most of the research referenced is, for obvious reasons, older. But well, that doesn’t mean none of the experiments being conducted nowadays will look outrageous when we look back in, say, 20 years’ time.

What if? by Randall Munroe (2014, Dutch translation, dwarsligger). Great; read it. Weird and outright absurd questions asked by xkcd readers are answered sort of seriously from a STEM perspective.

Say again? The other side of South African English by Jean Branford and Malcolm Venter (2016). This short review ended up a lot longer, so it got its own blog post two weeks ago.

Fiction

Red ink by Angela Makholwa (2007). This is a juicy crime novel, like the Black Widow Society by the same author (which I reviewed last year), and definitely a recommendable read. The protagonist, Lucy Khambule, is a PR consultant setting up her company in Johannesburg, but she used to be a gutsy journalist who had sent a convicted serial killer a letter asking for an interview. Five years later, he invites her for that interview and asks her to write a book about him. As writing a book was her dream, she takes up the offer. Things get messy, partly as a result of that: more murders, intrigues, and some love and friendship (the latter with other people, not the serial killer) that put the people close to Lucy in harm’s way. As with the Black Widow Society, it ends well for some but not for others.

Things fall apart by Chinua Achebe (1958 [2008 edition]). This is a well-known book, in Africa at least, and many analyses are available online, so I’m not going to repeat all that. The story documents both the mores in a rural village and how things—more precisely: the society—fall apart due to several reasons, both in how the society was organised and in the influence of the colonialists and their religion. The storytelling has a slow start but picks up pace after a short while, and it is worthwhile to bite through that slow start. You can’t help but feel a powerless onlooker to how the events unfold, and sorry for how things turn out.

Kassandra by Christa Wolf (1983, Dutch translation [1990] from the German original; also available in English). Greeks, Trojans, Achilles, Trojan Horse, and all that. Kassandra, the seer and daughter of king Priamos and queen Hekabe, is an independent woman who rambles on, analysing her life’s main moments before her execution. The prose is awkward and takes some getting used to, but there are some interesting nuggets. On only approaching things as duals, or alternative options, like endlessly winning or losing wars, or the third option: to live. It was a present from the last century that I ought to have read earlier; but better late than never.

De midlife club by Karin Belt (2014, in Dutch, dwarsligger). The story describes four women in their early 40s living in a province in the Netherlands (the author is from a city nearby where I grew up), for whom life didn’t quite turn out as they fantasised about in their early twenties, due to one life choice after another. Superficially, things seem ok, but something is simmering underneath, which comes to the surface when they go to a holiday house in France for a short retreat. (I’m not going to include spoilers). It was nice to read a Dutch novel with recognisable scenes and that contemplates choices. The suspense and twists were fun such that I really had to finish reading it as soon as possible.

As I still have some 150 pages to go to finish the 700-page tome of Indaba, my children by Credo Mutwa, a review will have to wait until next year. But I can already highly recommend it.

# OBDA/I Example in the Digital Humanities: food in the Roman Empire

A new instalment of the Ontology Engineering module is about to start for the computer science honours students who selected it, so, in preparation, I was looking around for new examples of what ontologies and Semantic Web technologies can do for you, and that are at least somewhat concrete. One of those examples has an accompanying paper that is about to be published (can it be more recent than that?), which is on the production and distribution of food in the Roman Empire [1]. Although perhaps not many people here in South Africa care about what happened in the Mediterranean basin some 2000 years ago, it is a good showcase of what one could perhaps also do here with historical and archaeological information (e.g., an inter-university SA project on digital humanities started off a few months ago, and several academics and students at UCT contribute to the Bleek and Lloyd Archive of |xam (San) cultural heritage, among others). And the paper is relatively readable also for the non-expert.

So, what is it about? Food was stored in pots (more precisely: amphorae) that had engravings on them with text about who, what, where, etc., and a lot of that has been investigated, documented, and stored in multiple resources, such as databases. None of the resources covers all data points, but to advance research and understanding about it and about food trading systems in general, it has to be combined somehow and made easily accessible to the domain experts. That is, essentially, it is an instance of a data access and integration problem.

There are a couple of principal approaches to address that. It is usually done by an Extract-Transform-Load (ETL) of each separate resource into one database or digital library, and then putting a web-based front-end on top of it. There are many shortcomings to that solution, such as having to repeat the ETL procedure upon updates in the source database, a single control point, and the typically canned (i.e., fixed) queries of the interface. A more recent approach, of which the technologies are finally maturing, is Ontology-Based Data Access (OBDA) and Ontology-Based Data Integration (OBDI). I say “finally” here, as I can still very well remember the predecessors we struggled with some 7-8 years ago [2,3] (informally here, here, and here), and “maturing”, as the software has become more stable, has more features, and some of the things we had to do manually back then have been automated now. The general idea of OBDA/I applied to the Roman Empire food system is shown in the figure below.

OBDA in the EPnet system (Source: [1])

There are the data sources, which are federated (one ‘middle layer’, though still at the implementation level). The federated interface has mapping assertions to elements in the ontology. The user can then use the terms of the ontology (classes and their relations and attributes) to query the data, without having to know how the data is stored and without having to write page-long SQL queries. For instance, a query ‘retrieve inscriptions on amphorae found in the city of Mainz containing the text PNN’ would use just the terms in the ontology, say, Inscription, Amphora, City, found in, and inscribed on, plus any value constraints (like the PNN), and the OBDA/I system takes care of the rest.

Interestingly, the authors of [1]—admitted, three of them are former colleagues from Bolzano—used the same approach to setting up the ontology component as we did for [3]. While we will use the Protégé ontology development environment in the OE module, it is not the best modelling tool to overcome the knowledge acquisition bottleneck. The authors modelled together with the domain experts in the much more intuitive ORM language with the NORMA tool, and first represented whatever needed to be represented. This also included reuse of relevant related ontologies and non-ontology material, and modularising it for better knowledge management, thereby ameliorating cognitive overload. A subset of the resultant ontology was then translated into the Web Ontology Language OWL (more precisely: OWL 2 QL, a tractable profile of OWL 2 DL), which is what is actually used in the OBDA system. We did that manually back then; now this can be done automatically (yay!).

Skipping here over the OBDI part and considering it done, the main third step in setting up an OBDA system is to link the data to the elements in the ontology. This is done in the mapping layer, essentially with assertions of the form “TermInTheOntology <- SQLqueryOverTheSource”. Abstracting from the current syntax of the OBDA system and simplifying the query for readability (see the real one in the paper), an example would thus have the following make-up to retrieve all Dressel 1 type amphorae, named Dressel1Amphora in the ontology, from all the data sources in the system:

```
Dressel1Amphora <-
    SELECT ic.id
    FROM ic JOIN at ON at.carrier = ic.id
    WHERE at.type = 'DR1'
```

Or some such SQL query (typically larger than this one). This takes up a bit of time to do, but has to be done only once, for these mappings are stored in a separate mapping file.

The domain expert, then, when wanting to know about the Dressel 1 amphorae in the system, would only have to ask ‘retrieve all Dressel1 amphorae’, rather than creating the SQL query, thus remaining oblivious to which tables and columns are involved in obtaining the answer, and to the fact that some data entry person at some point had mysteriously decided not to use ‘Dressel1’ but his own abbreviation ‘DR1’.

The actual ‘retrieve all Dressel1 amphorae’ is then a SPARQL query over the ontology, e.g.,

`SELECT ?x WHERE {?x rdf:type :Dressel1Amphora .}`

which is surely shorter and therefore easier to handle for the domain expert than the SQL one. The OBDA system (-ontop-) takes this query and reasons over the ontology to see whether the query can be answered directly by it without consulting the data, or else can be rewritten given the other knowledge in the ontology (it can; see example 5 in the paper). The outcome of that process then consults the relevant mappings. From those, the whole SQL query is constructed and sent to the (federated) data source(s), which process the query as any relational database management system does and return the data to the user interface.
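A toy sketch of those two steps (rewriting over the ontology, then unfolding through the mappings into SQL) could look as follows in Python; this is purely illustrative, with a hypothetical one-class ontology fragment, as the actual -ontop- system implements this far more elaborately.

```python
# Fragment of the ontology: each class maps to itself plus any
# subclasses the reasoner finds (none here, for simplicity).
SUBCLASSES = {
    "Dressel1Amphora": ["Dressel1Amphora"],
}

# Mapping layer: ontology term -> SQL query over the (federated) sources,
# mirroring the mapping example shown earlier.
MAPPINGS = {
    "Dressel1Amphora": (
        "SELECT ic.id FROM ic JOIN at ON at.carrier = ic.id "
        "WHERE at.type = 'DR1'"
    ),
}

def rewrite(query_class):
    """Step 1: reason over the ontology to expand the queried class."""
    return SUBCLASSES.get(query_class, [query_class])

def unfold(classes):
    """Step 2: replace each class by its mapping's SQL, unioned."""
    return " UNION ".join(MAPPINGS[c] for c in classes if c in MAPPINGS)

# The SPARQL atom ?x rdf:type :Dressel1Amphora becomes plain SQL:
sql = unfold(rewrite("Dressel1Amphora"))
print(sql)
```

The resulting SQL is what gets shipped to the data sources, so the domain expert never sees the `DR1` abbreviation or the join over `ic` and `at`.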

It is, perhaps, still unpleasant that domain experts have to put up with another query language, SPARQL, as the paper notes as well. Some efforts have gone into sorting out that ‘last mile’, such as using a (controlled) natural language to pose the query or reusing that original ORM diagram in some way, but more needs to be done. We tried the latter in [3]; that proof-of-concept worked with a neutered version of ORM, and we have screenshots and videos to prove it, but in working on extensions and improvements, a new student uploaded buggy code onto the production server, so that online source doesn’t work anymore (and we didn’t roll back and reinstall an older version, with me having moved to South Africa and the original student-developer, Giorgio Stefanoni, away studying for his MSc).

Note to OE students: This is by no means all there is to OBDA/I, but hopefully it has given you a bit of an idea. Read at least sections 1-3 of paper [1], and if you want to do an OBDA mini-project, then read also the rest of the paper and then Chapter 8 of the OE lecture notes, which discusses in a bit more detail the motivations for OBDA and the theory behind it.

References

[1] Calvanese, D., Liuzzo, P., Mosca, A., Remesal, J., Rezk, M., Rull, G. Ontology-Based Data Integration in EPNet: Production and Distribution of Food During the Roman Empire. Engineering Applications of Artificial Intelligence, 2016. To appear.

[2] Keet, C.M., Alberts, R., Gerber, A., Chimamiwa, G. Enhancing web portals with Ontology-Based Data Access: the case study of South Africa’s Accessibility Portal for people with disabilities. Fifth International Workshop OWL: Experiences and Directions (OWLED 2008), 26-27 Oct. 2008, Karlsruhe, Germany.

[3] Calvanese, D., Keet, C.M., Nutt, W., Rodriguez-Muro, M., Stefanoni, G. Web-based Graphical Querying of Databases through an Ontology: the WONDER System. ACM Symposium on Applied Computing (ACM SAC 2010), March 22-26 2010, Sierre, Switzerland. ACM Proceedings, pp1389-1396.

# A new selection of book reviews (from 2015)

By now a regular fixture for the new year (5th time in the 10th year of this blog), I’ll briefly comment on some of the fiction novels I have read the past year, then two non-fiction ones. They are in the picture on the right (minus The accidental apprentice). Unlike last year’s list, they’re all worthy of a read.

Fiction

The devil to pay by Hugh FitzGerald Ryan (2011). Although I’m not much of a history novel fan, the book is a fascinating read. It is a romanticised story based on the many historical accounts of Alice Kyteler and her maidservant Petronilla de Midia, the latter being the first person to be tortured and burned at the stake for heresy in Ireland (on 3 Nov 1324, in Kilkenny, to be precise). Unlike the usual histories where men take centre stage, the protagonist, Alice Kyteler, is a successful and rich businesswoman who had had four husbands (serially), and one thread through the story is a description of daily life in those middle ages for all the people involved—rich, poor, merchant, craftsman, monk, the English vs. the Irish, and so on. It’s written as a snapshot of the life of the ordinary people that come and go, insignificant in the grander scheme of things. At some point, however, Alice and Petronilla are accused of sorcery on some made-up charges by people who want a bigger slice of the pie and are also motivated by envy, which brings to the foreground the second thread in the story: the power play between the Church, which actively tried to increase its influence in those days, the secular politics with non-church and/or atheist people in power, and the laws and functioning legal system at the time. This clash is what turned the every-day-life setting into one that ended up being recorded in writing and remembered and analysed by historians. All did not end well for the main people involved, but there’s a small sweet-revenge twist at the end.

Black widow society by Angela Makholwa (2013). Fast-paced, with lots of twists and turns, this highly recommendable South African crime fiction describes the gradual falling apart of a secret society of women who had their abusive husbands murdered. The adjective ‘exciting’ is probably not appropriate for such a morbid topic, but it’s written in a way that easily sucks you into the schemes and quagmires of the four main characters (The Triumvirate and their hired assassin), and wanting to know how they get out of the dicey situations. Spoiler alert: some do, some don’t. See also the short extract, and there’s an ebook version for those who’d prefer that over buying a hardcopy in South Africa (if you’re nearby, you can borrow my hardcopy, of course).

The accidental apprentice by Vikas Swarup (2012). It’s a nice read, but my memory of the details is a bit sketchy by now and I lent out the book; I recall liking it more for reading a novel about India by an Indian author rather than the actual storyline, even though I had bought it for the latter reason only. The story is about a young female sales clerk in India who has to pass several ‘life tests’ somehow orchestrated by a very rich businessman; if she passes, she can become CEO of his company. The life tests are about one’s character in challenging situations and inventiveness to resolve it. Without revealing too much of how it ends, I think it would make a pleasant Bollywood or Hollywood movie.

Moxyland by Lauren Beukes (2008). Science fiction set in Cape Town. It has a familiar SF setting: a dystopian future with more technology and a society somehow ruled/enslaved by it, a divide between haves and have-nots, and a sinister authoritarian regime that suppresses the masses. A few individuals try to act against it but get sucked into the system even more. It is not that great as a story, yet it is nice to read an SF novel that’s situated in the city I live in.

Muh by David Safier (2012). One of the cows in a herd on a farm in Germany finds out they’re all destined for the slaughterhouse, and the cow escapes with a few other cows and a bull to travel to the cows’ paradise on earth: India. The main part of the book is about that journey, interspersed with very obvious references to various religious ideas and prejudices. I bought it because I very much enjoyed the author’s other book, Mieses Karma (reviewed here). Muh was readable enough—which is more than can be said of the few half-read books lying around in a state of abandon—but not nearly as good and fun as Mieses Karma. On a different note, this book is probably only available in German.

Non-fiction

The Big Short by Michael Lewis (2010). The book chronicles the crazy things that happened in the financial sector that led to the inevitable crash in 2008. It reads like a suspense thriller, but it is apparently a true account of what happened inside the system, which makes it jaw-dropping. There are irresponsible people in the system, and then there are other irresponsible people in the system. Some of them—the “misfits, renegades and visionaries”—saw it coming and bet that it would crash, making more money the bigger the misfortunes of others. Others didn’t see it coming, due to their feckless behaviour, laziness, greed, short-sightedness, ignorance and all that, so they bought into bond/share/mortgage packages that could only go downhill, and thus lost a lot of money. For those who are not economists or conversant in financial jargon, it is not always an easy read as the crazy schemes get more complex—that was also a problem for some of the people in the system, btw—but even if you skip over some of the explanations of part of a scheme, the message will be clear: it’s rotten through and through. A movie based on the book just came out.

17 Contradictions and the end of capitalism by David Harvey (2014). There are good reviews of this book online (e.g., here and here), which see it as a good schematic introduction to Marxist political economy. I have little to add to that. In Harvey’s own words, his two aims were “to define what anti-capitalism might entail… [and] to give rational reasons for becoming anti-capitalist in the light of the contemporary state of things”. Overall, dissecting and clearly describing the contradictions can indeed be fertile ground for helping to end capitalism, as contradictions are the weak spots of a system and cannot persist indefinitely. Chapter 8, ‘Technology, work, and human disposability’, could be interesting reading material for a social issues and professional practice course on technology and society, with a subsequent discussion session or essay assignment on it. Locally, in the light of the student protests we recently had (discussed earlier): if you don’t have enough time to read the whole book, then check out at least chapters 13, ‘Social reproduction’, and 14, ‘Freedom and domination’, and, more generally with respect to society, chapter 17, ‘The revolt of human nature: universal alienation’, the conclusions & epilogue, and a few of the foundational contradictions, notably those on private property & common wealth and capital & labour.

Previous editions: books on (South) Africa from 2011, some more and also general books in 2012, book suggestions from 2013, and the mixed bag from 2014.

# Reblogging 2007: AI and cultural heritage workshop at AI*IA’07

From the “10 years of keetblog – reblogging: 2007”: a happy serendipity moment when I stumbled into the AI & Cultural heritage workshop, which had its presentations in Italian. Besides the nice realisation that I could actually understand most of it, I learned a lot about applications of AI to things really useful for society, like the robot guide in a botanical garden, retracing the silk route, virtual Rome in the time of the Romans, and more.

AI and cultural heritage workshop at AI*IA’07, originally posted on Sept 11, 2007. For more recent content on AI & cultural heritage, see e.g., the workshop’s programme of 2014 (also collocated with AI*IA).

——–

I’m reporting live from the Italian conference on artificial intelligence (AI*IA’07) in Rome (well, Villa Mondragone in Frascati, with a view of Rome). My own paper on abstractions is rather distant from near-immediate applicability in daily life, so I’ll leave that be and instead write about an entertaining co-located workshop on applying AI technologies for the benefit of cultural heritage, e.g., to improve tourists’ experience and satisfaction when visiting the many historical sites, museums, and buildings that are all over Italy (and abroad).

I remember well the handheld guide at the Alhambra back in 2001, which had a story by Mr. Irving at each point of interest, but it was one long story and the same one for every visitor. Current research in AI & cultural heritage looks into how this can be personalized and made more interactive. Several directions are being investigated. They range from the amount of information provided at each point of interest (e.g., for the art buff, the casual American visitor who ‘does’ a city in a day or two, or narratives for children), to location-aware information display (the device detects which point of interest you are closest to), to cataloguing and structuring the vast amount of archeological information, to the software monitoring of Oetzi the Iceman. The remainder of this blog post describes some of the many behind-the-scenes AI technologies that aim to give a tourist the desired amount of relevant information at the right time and place (see the workshop website for the list of accepted papers). I’ll add more links later; any misunderstandings are mine (the workshop was held in Italian).

First, something that relates somewhat to bioinformatics/ecoinformatics: the RoBotanic [1], a robot guide for botanical gardens – not intended to replace a human, but as an add-on that appeals in particular to young visitors and gets them interested in botany and plant taxonomy. The technology is based on the successful ciceRobot that has been tested in the Archeological Museum of Agrigento, but since the robot has to operate outdoors in a botanical garden (in Palermo), new issues had to be resolved, such as tuff powder, irregular surfaces, lighting, and leaves that interfere with the GPS system (which the robot uses to stop at the plants of most interest). Currently, the RoBotanic provides one-way information, but in the near future interaction will be built in so that visitors can ask questions as well (ciceRobot is already interactive). Both the RoBotanic and ciceRobot are customized off-the-shelf robots.

Continuing with the artificial, there were three presentations about virtual reality. VR can be a valuable add-on to visualize lost or severely damaged property, to make timeline visualizations of rebuilding over old ruins (building a church over a mosque or vice versa was not uncommon), to prepare future restorations, and for general reconstruction of the environment, all based on the real archeological information (not Hollywood fantasy and screenwriting). The first presentation [2] explained how the virtual reality tour of the Church of Santo Stefano in Bologna was made, using Creator, Vega, and many digital photos that served for the texture-feel in the VR tour. [3] provided technical details and software customization for VR & cultural heritage. The third presentation [4], on the other hand, was from a scientific point of view the most interesting, and too full of information to cover it all here. E. Bonini et al. investigated if, and if so how, VR can give added value. Current VR being insufficient for the cultural heritage domain, they look at how one can do an “expansion of reality” to give the user a “sense of space”. MUDding on the via Flaminia Antica in the virtual room in the National Museum in Rome should be possible soon (a CNR-ITABC project has started). Another issue came up during the concluded Appia Antica project for Roman-era landscape VR: the behaviour of, e.g., animals is now pre-coded and quickly becomes boring to the user. So, what these VR developers would like to see (i.e., future work) is technology for autonomous agents integrated with VR software in order to make the ancient landscape & environment more lively: artificial life in the historical era of one’s choosing, based on – and constrained by – scientific facts, so as to be both useful for science and educational & entertaining for interested laymen.

A different strand of research is that of querying & reasoning, ontologies, planning and constraints.
Arbitrarily, I’ll start with the SIRENA project in Naples (the Spanish Quarter) [5], which aims to automatically generate maintenance plans for historical residential buildings, in order to make the current manual planning more efficient and cost-effective, and to carry out maintenance before, rather than after, a collapse. Given the UNI 8290 norms for the technical description of parts of buildings, they made an ontology and used FLORA-2, Prolog, and PostgreSQL to compute the plans. Each element has its own maintenance interval, but I didn’t see much of the partonomy, and I don’t know how they deal with the temporal aspects. Another project [6] also has an ontology, in OWL-DL, but it is not used for DL reasoning yet. The overall system design, including the use of Sesame, Jena, and SPARQL, can be read here, and after a server migration their portal for the archeological e-Library will be back online. Another component is the webGIS for pre- and proto-historical sites in Italy, i.e., spatio-temporal stuff, and the hope is to get interesting inferences – novel information – from that (e.g., discovering new connections between epochs). A basic online accessible version of the webGIS is already running for the Silk Road.
A third, different approach to and usage of ontologies was presented in [7]. With the aim of digital archive interoperability in mind, D’Andrea et al. took the CIDOC-CRM common reference model for cultural heritage and enriched it with the DOLCE D&S foundational ontology to better describe and subsequently analyse iconographic representations, in this particular work scenes and reliefs from the Meroitic period in Egypt.
With In.Tou.Sys for intelligent tourist systems [8] we move to almost industry-grade tools to enhance the visitor experience. They developed software for PDAs that one takes around in a city, which then, through GPS, can provide contextualized information to the tourist, such as about the building you’re walking by, or give suggestions for the best places to visit based on your preferences (e.g., only the baroque era, or churches, etc.). The latter uses a genetic algorithm to compute the preference list; the former a mix of an RDBMS on the server side, an OODBMS on the client (PDA) side, and F-Logic for the knowledge representation. They’re now working on the “admire” system, which has a time component built in to keep track of what the tourist has visited before, so that the PDA guide can provide comparative information. Also aimed at city-wide scale and guiding visitors is the STAR project [9]; a bit different from the previous one, it combines the usual tourist information and services – represented in a taxonomy, a partonomy, and a set of constraints – with problem solving and a recommender system to make an individualized agenda for each tourist, so you won’t stand in front of a closed museum, will be alerted to a festival, etc. A different PDA-guide system was developed in the PEACH project for group visits in a museum. It provides limited personalized information and canned Q&A, and visitors can send messages to their friends and tag points of interest that they find particularly interesting.

Utterly different from the previous topics, but probably of interest to the linguistically oriented reader, is philology & digital documents. Or: how to deal with representing multiple versions of a document. Poets and authors write and rewrite, brush up, strike through, etc., and it is the philologist’s task to figure out what constitutes a draft version. Representing the temporality and change of documents (words, order of words, notes about a sentence) is another problem, which [10] attempts to solve by representing it as a PERT/CPM graph structure augmented with labelling of edges, a precise definition of a ‘variant graph’, and a method of compactly storing it (ultimately in XML). The test case was with a poem by Valerio Magrelli.

The proceedings will be put online soon (I presume); they are also available on CD (contact the WS organizer Luciana Bordoni), and probably several of the articles are online on the authors’ homepages.

[1] A. Chella, I. Macaluso, D. Peri, L. Riano. RoBotanic: a Robot Guide for Botanical Gardens. Early Steps.
[2] G. Adorni. 3D Virtual Reality and the Cultural Heritage.
[3] M.C.Baracca, E.Loreti, S. Migliori, S. Pierattini. Customizing Tools for Virtual Reality Applications in the Cultural Heritage Field.
[4] E. Bonini, P. Pierucci, E. Pietroni. Towards Digital Ecosystems for the Transmission and Communication of Cultural Heritage: an Epistemological Approach to Artificial Life.
[5] A. Calabrese, B. Como, B. Discepolo, L. Ganguzza, L. Licenziato, F. Mele, M. Nicolella, B. Stangherling, A. Sorgente, R. Spizzuoco. Automatic Generation of Maintenance Plans for Historical Residential Buildings.
[6] A.Bonomi, G. Mantegari, G.Vizzari. Semantic Querying for an Archaeological E-library.
[7] A. D’Andrea, G. Ferrandino, A. Gangemi. Shared Iconographical Representations with Ontological Models.
[8] L. Bordoni, A. Gisolfi, A. Trezza. INTOUSYS: a Prototype Personalized Tourism System.
[9] D. Magro. Integrated Promotion of Cultural Heritage Resources.
[10] D. Schmidt, D. Fiormonte. Multi-Version Documents: a Digitisation Solution for Textual Cultural Heritage Artefacts.

# Dancing Algorithms

Yes, it appears that the two can go together. Not in the sense that the algorithms are dancing, but one can do a dance with a choreography such that it demonstrates an algorithm. Zoltan Katai and Laszlo Toth from Romania came up with the idea of this intercultural computer science education [1], with a theoretical motivation traced all the way back to Montessori. It has nothing to do with the scope of my earlier post on folk dancing and cultural heritage preservation, yet at the same time, it contributes to it: watching the videos of the dances immerses you in the folk music, the rhythm, the traditional clothes of the region, and some typical steps and movements used in their dances.

The context, in short: learning to program is not easy for most students—as our almost 900 first-year students are starting to experience from next week onwards—and especially understanding the workings of algorithms. Katai and Toth’s approach is to involve ‘playing out’ the algorithm with people, not by clumsily walking around, but using folk dance and music to make students understand and remember it more easily. They took several sorting algorithms to demonstrate the idea, and tested it on their students, demonstrating that it improved understanding significantly [1].

Perhaps because of my bias toward the dancing, I didn’t take note of the algorithm being danced out when I watched it the first time; or perhaps it is useful to have read the core steps of the algorithm before watching anyway. You choose: watch the video of the selection sort algorithm—given a list, repeatedly select the smallest remaining element and move it to the ‘sorted’ section of the list—with a Gypsy (Roma) folk dance, or first read below what selection sort is. (Note: the video goes to double speed in the middle, for it gets a bit repetitive.)

So, what was happening in the dance? We have one ‘comparer’ (x, for short) and one ‘compared with’ (y). The left-most dancer, with number 3 (our first value of x), starts to dance to the front and calls on the second one in line, the guy with 0 (our first y); he swirls her back into the line in the spot he came from and stays at the front (0 being the new value of x), and calls on the next, the lady with the 1 (the new value of y), who gets back in the line; and so on till the last one (with number 6). Dancer 0 does a solo act and goes to the first spot: he’s now the first one in the ‘sorted’ part, and we have completed one iteration. Starting the second main iteration: number 3 is again at the front of the unsorted part, and she dances again to the front (so the value of our x is 3 at this point), calling on the second one in the unsorted list, who has number 1, so the lady with number 3 goes back into the unsorted part again, and the dancer with 1 continues through the remainder of the list, has her solo, and joins the guy with the 0 in the sorted part, completing the second main iteration. And so on until about 6:20 minutes into the video clip, when the list is sorted and the dancers do a little closing act.

A bit more structured, the following is happening in the choreography of the dance in the video:

1. Divide the list into a ‘sorted’ part (initially empty) and an ‘unsorted’ part (initially the list you want to sort).

2. Do the following for as long as there’s more than one item in ‘unsorted’ (it finds the smallest item in that list):

   a. Select the first element of the unsorted list (with some value, which we refer to with x).

   b. If we’re not at the end of the ‘unsorted’ list, then get the next element of the ‘unsorted’ list (with some value; let’s call that one y):

      i. If x < y, then y is put back in the same spot in the unsorted list, and we return to the start of step (b) to get the next item to test x against (being the one after the one we just tested).

      ii. Else (i.e., x > y), the value of x takes y’s spot in the unsorted list, we assign the value of y to x, and we return to the start of step (b) to get the next element from the unsorted list.

   c. Else (i.e., we’re at the end of the list and thus x is the lowest value), place (the value of) x in the next available spot in the ‘sorted’ part. Then go back to the start of step 2.

3. Place the last item from ‘unsorted’ at the end of the ‘sorted’ part.

4. Done (i.e., there’s nothing more in ‘unsorted’ to sort).
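Those numbered steps can be transliterated almost one-to-one into Python. The sketch below is my own (function name included), not from the course material, and it follows the danced procedure literally, ‘pick out one and come to the front’ moves and all:

```python
def danced_selection_sort(values):
    """Selection sort, transliterated from the danced steps: the 'comparer'
    x dances out front and trades places with any smaller 'compared-with' y."""
    sorted_part = []                    # step 1: 'sorted' starts empty
    unsorted = list(values)             # step 1: 'unsorted' is the input list
    while len(unsorted) > 1:            # step 2: repeat while >1 dancer is left
        x = unsorted.pop(0)             # step (a): first dancer comes to the front
        for i in range(len(unsorted)):  # step (b): call up the next dancer y
            y = unsorted[i]
            if x > y:                   # step (b.ii): x takes y's spot, y is the new x
                unsorted[i] = x
                x = y
            # step (b.i): if x < y, y simply stays in its spot
        sorted_part.append(x)           # step (c): x does the solo, joins 'sorted'
    sorted_part.extend(unsorted)        # step 3: the last dancer joins the end
    return sorted_part                  # step 4: done

print(danced_selection_sort([3, 0, 1, 6, 5, 4, 2]))  # → [0, 1, 2, 3, 4, 5, 6]
```

(The exact order of the dancers’ numbers in the call above is illustrative, not taken from the video.)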

The algorithm itself is less cumbersome, not having those “let’s pick out one and come to the front” steps, but direct comparisons. I did not plan to include code here, but do so after all, for it makes a nice sequence of artsy → informal analysis → semi-precise structure → specification the computer can work with. (Never mind that that was not the order in which things came about.) Using our CSC1015F course material (2012 samples) that teaches Python, one of the possible sample code snippets is as follows:

```python
def selection_sort(values):
    """Sort values using the selection sort algorithm."""
    # iterate over the outer positions in the list
    for outer in range(len(values)):
        # assume the value at the outer position is the minimum
        minimum = outer
        # compare the minimum to the rest of the list and update
        for inner in range(outer + 1, len(values)):
            if values[inner] < values[minimum]:
                minimum = inner
        # swap the minimum with the outer position
        temp = values[minimum]
        values[minimum] = values[outer]
        values[outer] = temp
    return values
```

This is not the only way of achieving it, btw; the CSC1015F lecture and lab lecture files have another version that achieves the same (I just thought this one might be more readable for non-CS readers of the post).

Other danced sorting algorithms are insertion sort (video) with a Romanian dance, and shell sort (video) and bubble sort (video) with Hungarian dances; more information is also available from the Algo-rythmics website. This isn’t new, and dances to many other tunes can be viewed on YouTube, e.g., various bubble sort dances. Reconstructing the bubble sort algorithm from the dance below (5 min, no fast-forward) is an exercise left to the reader… as is making dances for other algorithms (the ICPC’14 solution to the baggage problem seems like a fun candidate for something line-dance-like), and the same dances with other folk dances and music.
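For readers who do the bubble sort exercise and want to check their reconstruction afterwards, here is a minimal bubble sort in the same Python style as the selection sort snippet earlier (one common formulation among several; the code and function name are mine, not from the dance or the course material):

```python
def bubble_sort(values):
    """Bubble sort: repeatedly sweep through the list, swapping adjacent
    values that are out of order; each sweep 'bubbles' the largest
    remaining value to the end."""
    n = len(values)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
        n -= 1  # the last position is now in its final, sorted place
    return values

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```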

References

[1] Zoltan Katai and Laszlo Toth. Technologically and artistically enhanced multi-sensory computer programming education. Teaching and Teacher Education, 2010, 26(2): 244-251.

# On the need for bottom-up language-specific terminology development

Peoples of several languages intellectualise their vocabulary so as to maintain their own language as medium of instruction (or: LoLT, language of learning and teaching), to conduct scientific discussions among peers, and, in some cases, still, to publish research in their own language. Some languages I know of that do this are French, Spanish, German, and Italian; e.g., the English ‘set’ is conjunto (Sp.) and insieme (It.), and the Dutch for ‘garbage collection’ (in computing) is geheugensanering. I found out the hard way last month that my Italian scientific vocabulary was better than my Dutch one, never really having practised the latter in my field of specialisation, and I noticed that, over the years that I have been globetrotting, quite a few Anglicisms in Dutch had been replaced with Dutch words, some of which had been around for a while already (my excuse: I studied a different discipline in the Netherlands). How do these new words come about? There are many ways of word creation, and then it depends on the country or language region how a word gets incorporated into the language. For instance, France uses a top-down approach with the Académie Française, and Spain has the Real Academia Española. The Netherlands has De Nederlandse Taalunie, which isn’t as autocratic, it seems; for instance, to follow suit with the French mot dièse for the Twitter ‘hashtag’, there was some consultation and online voting (sound file) to come up with an agreeable Dutch term for hashtag. But how does that happen elsewhere?

We found out that there is a mode of practice for language-specific terminology development that happens in small ‘workshops’ of some 13-15 people, consisting mainly of terminologists and linguists, and 1-3 subject matter experts. There may be a consultative event with stakeholders, who are not necessarily subject matter experts either. Shocking. The sheer arrogance of the former, who ‘magically’ grasp concepts that, when it comes to science, typically take a while to understand, yet who supposedly understand them well enough to come up with a meaningful local-language word. But maybe, you say, I’m too arrogant in thinking that subject matter experts, such as myself, can come up with decent local-language terms. Maybe that’s partially true, but what may be more problematic is that only a few subject matter experts are involved, so there is an over-reliance on those mere few. Maybe, you say, that’s not a problem. We put that to the test for computing and computer literacy terminology development in isiZulu, and found out that it is: it depends on who you ask what comes out of the term harvesting and term preference. And then asking just a few people is a problem for a term’s uptake. (The students involved in the experiments did not even know there was a computer literacy term list from the South African Department of Arts and Culture, published in 2005, and boo-ed away several of its terms.)

The way we tested it was with three experiments. The first experiment was an experts-only workshop, with ‘experts’ being 4th-year computer science students who have isiZulu as home language, as there were no isiZulu-speaking MSc and PhD students, nor colleagues, in CS at the University of KwaZulu-Natal, where we did the experiment. The second experiment was an isiZulu-localised survey among undergraduate CS students to collect terms, where we hoped to see a difference between a survey where they were given an entity’s English name and one where they were shown the entity as a picture. The third experiment was a survey in which computer literacy students (1st-year science students) could vote for terms for which more than one isiZulu term had been proposed. The details of the set-up and the results have been published recently in the Alternation open-access journal article “Limitations of Regular Terminology Development Practices: The Case of isiZulu Computing Terminology”, in the special issue on “Re-envisioning African Higher Education: Alternative Paradigms, Emerging Trends and New Directions”, edited by Rubby Dhunpath, Nyna Amin and Thabo Msibi. It describes which isiZulu terms from where are affected, ranging from a higher incidence of ‘zulufying’ English terms in the aforementioned list by the South African Department of Arts and Culture cf. the proposals by the experiments’ participants, to, e.g., expert consensus on inqolobane for database versus a preference for imininingo egciniwe by the computer literacy students (see the paper for more cases). Further, when all respondents across the surveys are aggregated and majority voting is applied, the terms proposed by the experts are snowed under. The latter is particularly troublesome in a country where computing is a designated critical skill (or: there aren’t nearly enough people with that skill).

A byproduct of the experiments was that we have collected the, to date, longest list of isiZulu computing terms, which have gone through a standardisation process in the meantime. The latter is mainly thanks to the tireless efforts of Khumbulani Mngadi of the ULPDO of UKZN, and the two expert CS honours students who volunteered in the process, Sibonelo Dlamini and Tanita Singano.

Our approach was already less exclusionary cf. the aforementioned traditional/standard way, but it also shows that broader participation is needed both to collect and to choose terms; or, in the words of the special issue editors [2]: a “democratization of the terminology development process” that “transcends the insularity and purism which characterises traditional laboratory approaches to development”. We are still working on-and-off to achieve this with crowdsourcing, and maybe we should start thinking of crowdfunding that crowdsourcing effort to speed up the whole thing and complete the commuterm project.

As a last note: in case you are interested in other contributions to “re-envisioning African higher education”: scan through the main page online, read the editorial [2] for main outcomes of each of the papers, and/or read the papers, on topics as diverse as postgrad supervision in isiZulu, teaching sexual and gender diversity to pre-service teachers, maths education, IKS in HE, and much more.

References

[1] Keet, C.M., Barbour, G. Limitations of Regular Terminology Development Practices: The Case of isiZulu Computing Terminology. Alternation, 2014, 12: 13-48.

[2] Dhunpath, R., Amin, N., Msibi, T. Editorial: Re-envisioning African Higher Education: Alternative Paradigms, Emerging Trends and New Directions. Alternation, 2014, 12: 1-12.

# Even more short reviews of books I’ve read in 2014

I’m not sure whether I’ll make it a permanent fixture for years to come, but, for now, here’s another set of book suggestions, following those on books on (South) Africa from 2011, some more and also general reads in 2012, and even more fiction & non-fiction book suggestions from 2013. If nothing else, it’s actually a nice way for me to recall the books’ contents and decide which ones are worth mentioning here, for better or worse. To summarise the books I’ve read in 2014 in a little animated gif:

Let me start with fiction books this time, which include two books/authors suggested by blog readers. (Note: most book and author hyperlinks are to online bookstores, Wikipedia, or similar, unless I could find the author’s home page.)

Fiction

Stoner by John Williams (1965). This was a recommendation by an old friend (more precisely on the ‘old’: she’s about as young as I am, but we go way back to kindergarten), and the book was great. If you haven’t heard about it yet: it tells the life of a professor who comes from a humble background and dies in relative anonymity, with the ups and downs of the life of an average ‘Joe Soap’, without any heroic achievements (assuming you don’t count becoming a professor as one). That may sound dull, perhaps, but it isn’t, not least because of the way it is narrated, which gives a certain beauty to the mundane. I’ll admit I read it in its Dutch translation, even in dwarsligger format (which appeared to be a useful invention), as I couldn’t find the book in the shops here, but better in translated form than not having read it at all. There’s more information over at Wikipedia, the NYT’s review, the Guardian’s review, and many other places.

Not a fairy tale by Shaida Kazie Ali (2010). The book is fairly short, but many things happen nevertheless in this fast-paced story of two sisters who grow up in Cape Town in a Muslim-Indian family. The sisters have very different characters—one demure, the other wilful and more adventurous—and both life stories are told in short chapters that cover the main events in their lives, including several of the same events from each one’s vantage point. As the title says, it’s not a fairy tale, and certainly the events are not all happy ones. Notwithstanding its occasional grim undertones, to me, it is told in a way that gives a fascinating ‘peek into the kitchen’ of how people live in this society across the decades. Sure, it is a work of fiction, but there are enough recognizable aspects to give the impression that it could have been pieced together from actual events from different lives. The story is interspersed with recipes—burfi, dhania chutney, coke float, falooda milkshake, masala tea, and more—which makes the book reminiscent of Como agua para chocolate. I haven’t tried them all, but if nothing else, now at least I know what a packet labelled ‘falooda’ is when I’m in the supermarket.

The time machine, by HG Wells (1895). It is one of the first works of fiction to consider time travel, the possible time anomalies when time travelling, and what a future society may be like from the viewpoint of the traveller. It’s one of those sweet little books that are short but have a lot of story in them. Anyone who likes this genre ought to read it.

One thousand and one nights, by Hanan Al-Shaykh (2011). Yes, it is what you may expect from the title. The beginning and end are about how Scheherazade (Shahrazad) ended up telling stories to King Shahrayar all night, and the largest part of the book is devoted to stories within stories within stories, weaving a complex web of tales from across the Arab empire so that the king would spare her for another day, wishing to know how the story ends. The stories are lovely and captivating, and I, too, kept on reading, indeed wanting to know how the stories would end.

Karma Suture, by Rosamund Kendall (2008). Because I liked The Angina Monologues by the same author (earlier review) so much that I’ve already read it a second time, and Karma Suture is also about medics in South Africa’s hospitals, I thought this one would be likable, too. The protagonist is a young medical doctor in a Cape Town hospital who has lost the will to do that work and needs to find her vibe. The story was a bit depressing, but maybe that’s what 20-something South African women go through.

God’s spy by Juan Gómez-Jurado (2007) (Espía de Dios; Spanish original). A ‘holiday book’ that’s fun, if that can be an appropriate adjective for a story about a serial killer murdering cardinals before the conclave after Pope John Paul’s death. It has recognizable Italian scenes, the human-interaction component is worked out reasonably well, it has the good twists and turns and suspense-building required for a crime novel, and a plot you won’t expect. (Also on Goodreads—it was a bestseller in Spain.)

Non-fiction

This year’s non-fiction selection is as short as in other years, but I have less to say about the books compared to last year.

David and Goliath—Underdogs, misfits, and the art of battling giants, by Malcolm Gladwell (2013). What to say: yay! another book by Gladwell, and, like the others I read by him (Outliers, The tipping point), this one is good, too. Gladwell takes a closer look at how seeming underdogs are victorious against formidable opponents. Also in this case, there’s more to it than meets the eye (or some stupid USA Hollywood movie storyline of ‘winning against the odds’), such as playing by different rules/strategy than the seemingly formidable opponent does. The book is divided into three parts, on the advantages of disadvantages, the theory of desirable difficulty, and the limits of power, and, as with the other books, it explores various narratives and facts. One of those remarkable observations is that, for universities in the USA at least, a good student is better off at a good university than at a top university. This is for purely psychological reasons—it feels better to be at the top of an average/good class than to be the average mutt in a top class—and because the top of a class gets more attention and nice side activities, so that the good student at a good (vs top) university gets more useful learning opportunities than s/he would have gotten at a top university. Taking another example from education: a ‘big’ class at school (well, just some 30 pupils) is better than a small one (15), for it gives more “allies in the adventures of learning”.

The dictator’s learning curve by William J. Dobson (2013), or: some suggestions for today’s anti-government activists. It’s mediocre, one of those books where the cover makes it sound more interesting than it is. The claimed thesis is that dictators have become more sophisticated in oppression by giving it a democratic veneer. This may be true at least in part, and in the sense that there is a continuum from autocracy (tyranny, as Dobson labels it in the subtitle) to democracy. Highlighting that notion has some value. However, it’s written from a very USA-centric viewpoint, so essentially it’s just highbrow propaganda for dubious USA foreign policy with its covert interventions not to be nice to countries such as Russia, China, and Venezuela—and to ‘justifiably’ undercut whatever plans they have through supporting opposition activists. Interwoven in the dictator’s learning curve storyline is Dobson’s personal account of experiencing that there is more information sharing—and how—about strategy and tactics among activists across countries on how to foment dissent for another colour/flower revolution. I was expecting some depth about autocracy vs democracy spiced up with pop-politics and events, but it did not live up to that expectation. A more academic, and less ideologically tainted, treatise on the autocracy-democracy continuum would have been a more useful way of spending my time. You may find the longer PS Mag review useful before/instead of buying the book.

Umkhonto weSizwe (pocket history) by Janet Cherry (2011). There are more voluminous books about the armed organisation of the struggle against Apartheid, but this booklet was a useful introduction to it. It describes the various ‘stages’ of MK, from the decision to take up arms to eventually laying them down, and the successes and challenges that were faced and sacrifices that were made, as an organisation and by its members.

I’m still not finished reading Orientalism by Edward Said—some day I will, and then I’ll write about it. If you want to know more about it already now, then go to your favourite search engine and have a look at the many reviews and (academic and non-academic) analyses. Reading A dream deferred (another suggestion) is still in the planning.

# VocabLift to learn some isiZulu, Shona, French, and English words

While I’ll be at EKAW’14 to network, present the stuff ontology, and support SUGOI, some of my students will hold the fort locally at the African Language Technologies Workshop (AFLaT’14) on 27-28 November in Cape Town. One of the two posters & demos I contributed to is about a cute tool that two 3rd-year students—Ntokozo Zwane and Sungunani Silubonde—designed and implemented as their capstone project for software engineering, which they called VocabLift (zip). The capstone groups’ task was to develop a tool that can help someone learn vocabulary in a playful way, which left some leeway to be creative in how to realize that.

The context is that everyone has to learn vocabulary over the years, from basic words in primary school to scientific terminology at university, and any time one is learning a new language. Besides memorizing ‘boring’ lists of words from a sheet of paper, there are more playful ways to do this, like the multi-player dictionary game and hangman, or the single-player memory cards game from the EuroTalk DVDs. There are indeed many word games online, e.g., for English, and one can learn a foreign language on duolingo, but there is less for multilingualism and the languages in Southern Africa. EuroTalk DVDs for Zulu, Shona, Swahili, Yoruba and a few other African languages do exist, true, but at a cost, and they are inflexible in a teaching setting. Enter VocabLift, which is interesting both technologically and for the target languages chosen: isiZulu and Shona, alongside English and French. Conceptually, it is based on natural language-independent root questions that are mapped to the language of choice, so another language can easily be added, and, unlike the usual ‘closed’ world of computer-based language games, a teacher can add words to the dictionary, making it in principle adaptable to the desired level of language learning.
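To make the idea of language-independent root questions a bit more concrete, here is a minimal sketch of how such a design might look; note that all names and word lists here are my own illustration (restricted to English and French, which I’m sure of), not the students’ actual code. The point is that each abstract concept maps to one word per language code, so adding a language, or a word as a teacher would, means extending the dictionary rather than writing a new set of questions.

```python
# Sketch of a language-independent vocabulary store: an abstract concept id
# maps to {language code: word}. Illustrative only, not VocabLift's code.
import random

VOCAB = {
    "green":     {"en": "green",     "fr": "vert"},
    "pineapple": {"en": "pineapple", "fr": "ananas"},
    "avocado":   {"en": "avocado",   "fr": "avocat"},
}

def add_word(concept: str, lang: str, word: str) -> None:
    """Teacher-extensible dictionary: add or override a word for a language."""
    VOCAB.setdefault(concept, {})[lang] = word

def make_question(concept: str, source: str, target: str):
    """Instantiate one root question for a language pair, Vocab Trainer style:
    show the word in the source language, offer candidate words in the target."""
    prompt = VOCAB[concept][source]
    answer = VOCAB[concept][target]
    distractors = [words[target] for c, words in VOCAB.items()
                   if c != concept and target in words]
    options = distractors + [answer]
    random.shuffle(options)
    return prompt, options, answer

prompt, options, answer = make_question("green", "en", "fr")
print(prompt, options, answer)  # e.g. green ['ananas', 'vert', 'avocat'] vert
```

The same store could drive all three games: Picture Matcher and Word Tetris would look up the word for the concept shown in the picture, and Vocab Trainer would generate the multiple-choice options as above.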

Currently, VocabLift has three games: Picture Matcher, Vocab Trainer, and Word Tetris. In Picture Matcher, the user has to provide the name of the object in the picture, with the objective of improving memory and spelling in the chosen language; screenshots for avocado in isiZulu and pineapple in Shona are shown below.

Avocado in isiZulu, right before selecting ‘confirm word’

Pineapple in Shona, after I clicked ‘I don’t know’

Vocab Trainer tests the user’s ability to recall, in the target language, the word given in English; screenshots for green in isiZulu and gray in French are shown below.

Choosing the right word for ‘green’ in isiZulu (the answer can also be found further below in another screenshot)

Same story, and just to show it works for French, too.

The third game, Word Tetris, is included so that the user can learn to match the word to the picture. The user has to type the word associated with the picture before it falls below the bar; a screenshot is shown below (I lost points due to trying to make nice screenshots, really).

Halfway playing ‘word tetris’

One needs to be logged in as administrator to add words (admin 1234 will do the trick) and use the tool in ‘dictionary mode’, as illustrated in the next two screenshots.