Gastrophysics and follies

Yes, it turns out there is a science of eating, called gastrophysics, and a popular science introduction to the emerging field was published this year in an accessible book by Charles Spence (Professor [!] Charles Spence, as the front cover says), called, unsurprisingly, Gastrophysics—the new science of eating. The ‘follies’ I added to the blog post title refer to the non-science parts of the book; it’s a polite term for those, and it makes for a nice alliteration in the post’s title. The first part of this post is about the interesting content of the book; the second part about certain downsides.

The good and interesting chapters

Given that some people don’t even believe there’s a science to food (there is, a lot!), it is perhaps even a step beyond to contemplate that there can be such a thing as a science of the act of eating and drinking itself. It turns out—quite convincingly in the first couple of chapters of the book—that there’s more to eating than meets the eye. Or taste bud. Or touch. Or nose. Or ear. Yes, the ear is involved too: e.g., there’s a crispy or crunchy sound when eating, say, crisps or corn flakes, which is perceived as an indicator of their freshness. When they don’t crunch as well, the ratings are lower, for there’s an impression of staleness or limpness. The nose plays two parts: smelling the aroma before eating (olfactory) and when swallowing, as volatile compounds released in your throat reach your nose from the back when breathing out (i.e., retronasally).

The first five chapters of the book are the best, covering taste, smell, sight, sound, and touch. They present easily readable, interesting information that is based on published scientific experiments. Like that drinking with a straw ruins the smell component of the liquid (and so does drinking from a bottle) compared to drinking from a glass, which sets the aromas free to combine smell with taste for a better overall evaluation of the drink. Or take the odd (?) finding that a frozen strawberry dessert tastes sweeter from a white bowl than from a black one, and sweeter when eaten from a round plate than from an angular one. Turns out there’s some neuroscience to shapes (and labels) that may explain the latter. If you think touch and cutlery don’t matter: it’s been investigated, and they do. Heavy cutlery makes the food taste better. Its surface matters, too: the mouthfeel isn’t the same when eating with a plain spoon vs. a spoon that was first dipped in lemon juice and then in sugar or ground coffee (let it dry first).

There is indeed, as the intro says, some fun fact on each of these pages. It is easy to see that these insights can be fun to play with for one’s own dinners as well as useful to the food industry and to food science, be it to figure out the chemistry behind it or to change the product, the production process, or even just the packaging. Some companies have done so already. Like when you open a bag of (relatively cheap-ish) ground coffee: the smell is great, but only because some extra aroma was added to the sealed air when it was packaged. Re-open the container (assuming you’ve transferred the coffee into one), and the same smell does not greet you anymore. The beat of the background music apparently also affects the speed of masticating. Of course, the basics of this sort of thing were already known decades ago. For instance, the smell of fresh bread in the supermarket is most likely aroma pumped through the air conditioning rather than actual baking going on all the hours the shop is open (shown to increase bread sales, if not more), and the beat of the music in the supermarket affects your walking speed.

On those downsides of the book

After these chapters, the book’s contents (not necessarily the topics) gradually go downhill. There are still a few interesting science-y things to be learned from the research into airline food. For instance, that the overall ‘experience’ is different because of the lower humidity (among other things), so your nose dries out and thus detects less aroma; hence more sauce and more aromatic components are thrown into the food being served up in the air. However, the rest descends into a bunch of anecdotes and blabla about fancy restaurants, with the sources no longer being solid scientific outlets, but mostly shoddy newspaper articles. Yes, I’m one of those who checks the footnotes (annoyingly set as endnotes, but one can’t blame the author for that sort of publisher’s mistake). Worse, it gives the impression of being research-based, because it was so in the preceding chapters. Don’t be fooled by the notes in, especially, chapters 9-12. To give an example, there’s a cool-sounding section on “do robot cooks make good chefs?” in the ‘digital dining’ chapter. One expects an answer; but no, forget that. There’s some hyperbole with the author’s unfounded opinion and, to top it off, a derogatory remark about his wife probably getting excited about a 50K GBP kitchen gadget. Another example out of very many of this type: some opinion by some journalist who ate, one day, at the über-fancy, way-too-expensive-for-the-general-reader Pairet’s Ultraviolet (note 25 on p207). Daily Telegraph, New York Times, Independent, BBC, Condiment Junkie, Daily Mail Online, more Daily Mail, BBC, FT Weekend Magazine, Wired, Newsweek, etc. etc. Come on! Seriously?! It is supposed to be a popsci book, so please don’t waste my time with useless anecdotes and gut-feeling opinions without (easily digestible) scientific explanations.
Or the book should have been split in two: I) popsci and II) skippable waffle that no science editor ought to have let pass through the popsci book writing and publication process. Professor Spence is encouraged to reflect a little on having slid down the slippery slope a bit too far.

In closing

Although I couldn’t bear to finish reading the ‘experiential meal’ chapter, I did read the rest, including the final chapter. Like any good meal, which has to have a good start and finish, the final chapter is fine, closing [almost] with the Italian Futurists of the 1930s (or: weird dishes aren’t so novel after all). As to the suggestions for creating your own futurist dinner party, I can’t withhold here the final part of the list:

In conclusion: the book is worth reading, especially the first part. Cooking up a few experiments of my own sounds like a nice pastime.

Conjuring up or enhancing a new subdiscipline, say, gastromatics, computational gastronomy, or digital gastronomy, could be fun. The first term is a bit too close to gastromatic (the first search hits are about personnel management software in catering), though, and the second one has been appropriated by the data mining and Big Data crowd already. Digital gastronomy has been coined as well and seems more inclusive on the technology side than the other two. If it all sounds far-fetched, here’s a small sampling: there are already computer cooking contests (at the case-based reasoning conferences) for coming up with the best recipe given certain constraints, a computational analysis of culinary evolution, data mining in food science and food pairing in Arab cuisine, robot cocktail makers for sale (e.g., makr shakr and barbotics), and there’s also been research on robot baristas (e.g., the FusionBot and lots more), and much more, going back at least 10 years.

The isiZulu spellchecker seems to contribute to ‘intellectualisation’ of isiZulu

Perhaps putting ‘intellectualisation’ in sneer quotes isn’t nice, but I still find it an odd term to refer to the process of (in short, from [1]) coming up with new vocabulary for scientific speech, expression, objective thinking, and logical judgments in a natural language. In the country I grew up in, terms in our language were, and still are, invented more as a push against cultural imperialism and for home-language promotion than through some explicit process to intellectualise the language, in the sense of a “let’s invent some terms because we need to talk about science in our own language” or “the language needs to grow up” sort of discourse. For instance, there is the beautiful word geheugensanering (NL) that captures the concept of ‘garbage collection’ (in computing) way better than the English joke-term for it, elektronische Datenverarbeitung (DE) for ‘ICT’, técnicas de barrido (ES) for ‘sweep line’ algorithms, and mot-dièse (FR) for [twitter] ‘hashtag’, to name but a few inventions.

Be that as it may, here in South Africa, it goes under the banner of intellectualisation, with particular reference to the indigenous languages [2]; e.g., having introduced umakhalekhukhwini ‘cell/mobile phone’ (decomposed: ‘the thing that rings in your pocket’) and ukudlulisa ikheli for ‘pass by reference’ in programming (longer list of isiZulu-English computing and ICT terms), which is occurring for multiple subject domains [3]. Now I ended up as co-author of a paper that has ‘intellectualisation’ in its title [4]: Evaluation of the effects of a spellchecker on the intellectualization of isiZulu that appeared just this week in the Alternation journal.

The main general question we sought to answer was whether human language technologies, and in particular the isiZulu spellchecker launched last year, contribute to the language’s intellectualisation. More specifically, we aimed to answer the following three questions:

  1. Is the spellchecker meeting end-user needs and expectations?
  2. Is the spellchecker enabling the intellectualisation of the language?
  3. Is the lexicon growing upon using the spellchecker?

The answers in a nutshell are: 1) yes, the spellchecker does meet end-user needs and expectations (though there are suggestions for further improving its functionality), 2) users perceive that the spellchecker enables the intellectualisation of the language, and 3) non-dictionary words were added, i.e., the lexicon is indeed growing.

The answer to the last question provides some interesting data for linguists to sink their teeth into. For instance, a user had added to the spellchecker’s dictionary LikaSekelaShansela, which is an inflected form of isekelashansela ‘Vice Chancellor’ (which is recognised as correct by the spellchecker). Also, some inconsistencies—from a rule-of-thumb viewpoint—in word formation were observed; e.g., usosayensi ‘scientist’ vs. unompilo ‘nurse’. If one were to follow consistently the word formation process for various types of experts in isiZulu, such as usosayensi ‘scientist’, usolwazi ‘professor’, and usomahlaya ‘comedian’, then one reasonably could expect ‘nurse’ to be *usompilo rather than unompilo. Why it isn’t, we don’t know. Regardless, the “add to dictionary” option of the spellchecker proved to be a nice extra feature for a data-driven approach to investigating the intellectualisation of a language.

Version 1 of the isiZulu spellchecker that was used in the evaluation was ok and reasonably could not have interfered negatively with any possible intellectualisation (average SUS score of 75 and median 82.5, so ‘good’). It was ok in the sense that a majority of respondents thought that the entire tool was helpful, that no features should be removed, that it enhances their work, and so on (see the paper for details). For the software developers among you who have spare time: the users would like, mainly, to have it as a Chrome and MS Word plugin, with predictive text/autocomplete, and working on their mobile phones. The spellchecker has improved in the meantime thanks to two honours students, and I will write another blog post about that next.
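For context on those SUS numbers: the System Usability Scale is a standard ten-item questionnaire scored on 1-5 Likert responses, where odd (positively phrased) items contribute the response minus 1, even (negatively phrased) items contribute 5 minus the response, and the sum is multiplied by 2.5 to land on a 0-100 scale. A minimal sketch in Python (the sample responses below are made up for illustration, not survey data from the paper):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses (1 = strongly disagree .. 5 = strongly agree)."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9: positive statements
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10: negative statements
    return (odd + even) * 2.5

# A hypothetical respondent (not actual data from the evaluation):
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # → 85.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is why an average of 75 and a median of 82.5 count as ‘good’.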

As a final reflection: it turned out there isn’t a way to measure the level of intellectualisation in a ‘hard sciences’ way, so we concluded the other answers based on data that came from the somewhat fluffy approach of a survey and in-depth interviews (a ‘mixed-methods’ approach, to give it a name). It would be nice to have a way to measure it, though, so one would be able to say which languages are more or less intellectualised, what level of intellectualisation is needed to have a language as language of instruction and science at tertiary level of education and for dissemination of scientific knowledge, and to what extent some policy x, tool y, or activity z contributes to the intellectualisation of a language.

 

References

[1] Havránek, B. 1932. The functions of literary language and its cultivation. In Havránek, B and Weingart, M. (Eds.). A Prague School Reader on Esthetics, Literary Structure and Style. Prague: Melantrich: 32-84.

[2] Finlayson, R, Madiba, M. The intellectualization of the indigenous languages of South Africa: Challenges and prospects. Current Issues in Language Planning, 2002, 3(1): 40-61.

[3] Khumalo, L. Intellectualization through terminology development. Lexikos, 2017, 27: 252-264.

[4] Keet, C.M., Khumalo, L. Evaluation of the effects of a spellchecker on the intellectualization of isiZulu. Alternation, 2017, 24(2): 75-97.

Orchestrating 28 logical theories of mereo(topo)logy

Parts and wholes, again. This time it’s about the logic aspects of theories of parthood (cf. aligning different hierarchies of (part-whole) relations and making them compatible with foundational ontologies). I had intended to write this post before the Ninth Conference on Knowledge Capture (K-CAP 2017), where the paper describing the new material would be presented by my co-author, Oliver Kutz. Now, afterwards, I can add that “Orchestrating a Network of Mereo(topo)logical Theories” [1] even won the Best Paper Award. The novelties, in broad strokes, are that we figured out and structured a hitherto messy and confusing state of affairs, showed that one can do more than generally assumed, especially with a new logic orchestration framework, and proposed first steps toward conflict resolution to sort out trade-offs between expressivity and logic limitations. Constructing a tweet-size “tl;dr” version of the contents is not easy, and as I have as much space here on my blog as I like, it ended up being three paragraphs: scene-setting, solution, and a few examples to illustrate some of it.

 

Problems

As ontologists know, parthood is used widely in ontologies across most subject domains, such as biomedicine, geographic information systems, and architecture. Ontology (the philosophers’ discipline) offers a parthood relation with a bunch of computationally unpleasant properties, structured in a plethora of mereological and mereotopological theories such that it has become hard to see the forest for the trees. This is complicated further in practice because there are multiple logics of varying expressivity (supporting more or fewer language features), with the result that only certain fragments of the mereo(topo)logical theories can be represented. However, it’s mostly unclear what can be used when; during the ontology authoring stage one may want to have all those features so as to check correctness, and it’s not easy to predict what will happen when one aligns ontologies with different fragments of mereo(topo)logy.

 

Solution

We solved these problems by specifying a structured network of theories formulated in multiple logics that are glued together by the various linking constructs of the Distributed Ontology, Model, and Specification Language (DOL). The ‘structured network of theories’ part concerns all the maximal expressible fragments of the KGEMT mereotopological theory and five of its most well-recognised sub-theories (like GEM and MT) in the seven Description Logics-based OWL species, first-order logic, and higher-order logic. The ‘glued together’ part refers to relating the resultant 28 theories within DOL (in Ontohub), a non-trivial (understatement, unfortunately) metalanguage that has the constructors for the glue, such as enabling one to declare the merging of two theories/modules represented in different logics, or the extension of a theory (ontology) with axioms that go beyond its language without messing up the original (expressivity-restricted) ontology, and more. Further, because the annoying thing about merging two ontologies/modules is that the merged ontology may be in a different language than the two originals, which is very hard to predict, we have a cute proof-of-concept tool that assists with steps toward resolving language feature conflicts by pinpointing profile violations.

 

Examples

The paper describes nine mechanisms with DOL and the mereotopological theories. Here I’ll start with a simple one: we have Minimal Topology (MT) partially represented in OWL 2 EL/QL in “theory8”, where the connection relation (C) is just reflexive (among other axioms; see the table in the paper for details). Now what if we add connection’s symmetry, which results in “theory4”? First, we do this without harming theory8, in DOL syntax (see also the ESSLLI’16 tutorial):

logic OWL2.QL
ontology theory4 =
theory8
then
ObjectProperty: C Characteristics: Symmetric %(t7)

What is the logic of theory4? Still in OWL, and if so, which species? The OWL classifier shows the result:

 

Another case is that OWL does not let one define an object property: at best, one can add domain and range axioms and the occasional ‘characteristic’ (like the aforementioned symmetry), for allowing arbitrary full definitions would push it outside the decidable fragment. One can add them, though, in a system that can handle first-order logic, such as the Heterogeneous Tool Set (Hets); for instance, where in OWL one can add only “overlap” as a primitive relation (a vocabulary element without definition), we can take such a theory and declare the definition:

logic CASL.FOL
ontology theory20 =
theory6_plus_antisym_and_WS
then %wdef
. forall x,y:Thing . O(x,y) <=> exists z:Thing (P(z,x) /\ P(z,y)) %(t21)
. forall x,y:Thing . EQ(x,y) <=> P(x,y) /\ P(y,x) %(t22)

As a last example, let me illustrate the notion of conflict resolution. Consider theory19—ground mereology, partially—which is within OWL 2 EL expressivity, and theory18—also ground mereology, partially—which is within OWL 2 DL expressivity. So, they can’t be the same; the difference is that theory18 has parthood reflexive and transitive and proper parthood asymmetric and irreflexive, whereas theory19 has both parthood and proper parthood transitive. What happens if one aligns ontologies that contain these theories, say, O1 (with theory18) and O2 (with theory19)? The OWL classifier provides easy pinpointing and tells you the profile: OWL 2 Full (or: first-order logic, or: beyond OWL 2 DL—top row) and why (bottom section):

Now, what can one do? The conflict resolution cannot be fully automated, because it depends on what the modeller wants or needs, but enough data has been generated already and there are known trade-offs, so it is possible to describe the consequences:

  • Choose the O1 axioms (with irreflexivity and asymmetry on proper part of), which will make the ontology interoperable with other ontologies in OWL 2 DL, FOL or HOL.
  • Choose O2’s axioms (with transitivity on part of and proper part of), which will facilitate linking to ontologies in OWL 2 RL, 2 EL, 2 DL, FOL, and HOL.
  • Choose to keep both sets, which will result in an OWL 2 Full ontology that is undecidable, and it is then compatible only with FOL and HOL ontologies.
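Spelled out as first-order axioms (straight from the description of the two theories above; P for parthood, PP for proper parthood), the competing axiom sets are:

```latex
% theory18 (within OWL 2 DL):
\forall x\, P(x,x)                                          % P reflexive
\forall x,y,z\, \big(P(x,y) \land P(y,z) \to P(x,z)\big)    % P transitive
\forall x,y\, \big(PP(x,y) \to \lnot PP(y,x)\big)           % PP asymmetric
\forall x\, \lnot PP(x,x)                                   % PP irreflexive

% theory19 (within OWL 2 EL):
\forall x,y,z\, \big(P(x,y) \land P(y,z) \to P(x,z)\big)    % P transitive
\forall x,y,z\, \big(PP(x,y) \land PP(y,z) \to PP(x,z)\big) % PP transitive
```

Individually each set fits its OWL 2 profile; it is their union (irreflexivity and asymmetry together with transitivity on PP) that pushes the merged ontology beyond OWL 2 DL.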

As a serious final note: there’s still fun to be had on the logic side of things, with countermodels and sub-networks and such, and with refining the conflict resolution to assist ontology engineers better. (or: TBC)

As a less serious final note: the working title of early drafts of the paper was “DOLifying mereo(topo)logy”, but at some point we chickened out and let go of that frivolity.

 

References

[1] Keet, C.M., Kutz, O. Orchestrating a Network of Mereo(topo)logical Theories. Ninth International Conference on Knowledge Capture (K-CAP’17), Austin, Texas, USA, December 4-6, 2017. ACM Proceedings.