On the (un)reasonable effectiveness of mathematics in ecology

An article appeared last week in Ecological Modelling that is intended to be thought-provoking; it looks at the effectiveness of mathematics in ecological theory [1], but it can just as well be applied to bioinformatics, computational biology, and bio-ontologies. In short: mathematical models are useful only if they are neither so general as to be trivially true nor so specific as to apply to one data set only. But how to go about finding the middle way? Ginzburg et al. do not clearly answer this question, but there are some pointers worth mentioning. In the words of the authors:

A good theory is focused without being blurred by extraneous detail or overgenerality. Yet ecological theories frequently fail to achieve this desirable middle ground. Here, we review the reasons for the mismatch between what theorists seek to achieve and what they actually accomplish. In doing so, we argue on pragmatic grounds against mathematical literalism as an appropriate constraint to mathematical constructions: such literalism would allow mathematics to constrain biology when the biology ought to be constraining mathematics. We also suggest a method for differentiating theories with the potential to be “unreasonably effective” from those that are simply overgeneral. Simple axiomatic assumptions about an ecological system should lead to theoretical predictions that can then be compared with existing data. If the theory is so general that data cannot be used to test it, the theory must be made more specific.

What, then, about this pragmatism and mathematical literalism? The pragmatism sums up as “theories never work perfectly” anyway and, well, reality is surpassing us, given that “we face an ever-increasing number of ecological crises, social demand will be for crude, imperfect descriptions of ecological phenomena now rather than more detailed, complex understanding later” (as an aside, and to rub it in: the latter is a different argument for pragmatism than the ‘I need a program from you today in order to analyse my lab data so that I can submit the article tomorrow and beat the competition’). With the former I concur; the latter, preferring imperfection over more thought-through theories, is a judgment call, and I leave it for what it is.
Mathematical literalism roughly means strict adherence to some limited mathematical model because of its mathematical characteristics and limitations. For instance, in several ecological models (and elsewhere) processes are interpreted as strictly instantaneous – the “mechanistic” models – whereas models that do not make this assumption are dismissed as merely “phenomenological”. But, so the authors argue, we should not fit nature to match the maths; we should use mathematics to describe nature. Now this likely does ring a bell or two with developers of formal (logic-based) bio-ontologies: describe your bio stuff with the constructs that OWL gives you! Whereas the question probably should be: which formal language (i.e., which constructs) do I actually need to describe my subject domain? (Some follow-up questions on the latter: if you can’t represent it, what exactly is it that you can’t represent? Do you really need it for the task at hand? Can you represent it in another [logical/conceptual modeling] language?)

It is not this black-and-white, however. As Ginzburg et al. mention a couple of times in the article (kicking in an open door), trying to make a mathematical model of a biological theory greatly helps to be more precise about the underlying assumptions and to make those explicit. This, in turn, aids making predictions based on those assumptions & theory, which subsequently should be tested against real data; if you can’t test it against data, then the theory is no good. This is a bit harsh, because it may be that for some practical reason something cannot be tested; on the other hand, if that is the case, one may want to think twice about the usefulness of the theory.
Last, “The most useful theories emphasize explanation over description and incorporate a “limit myth” (i.e., they describe a pure situation without extraneous factors, as with the assumption in physics that surfaces are frictionless).” While it is true that one seeks explanations, this conveniently brushes over the fact that one first has to have a way to describe things in order to incorporate them in an explanatory theory! If the theory fails, then thanks to a structured approach to the descriptions – say, some formal language or [annotated] mathematical equations – it will be easier to fiddle with the theory and reuse parts of it to come up with a new one. If the theory succeeds, it will be easier to link it up to another properly described and annotated theory to build more complex explanatory models.

Overall, the content of the article is a bit premature and would have benefited from a thorough analysis of too-general and too-specific theories, rather than anecdotal evidence from a couple of examples. Also, the “method for differentiating theories” advertised in the abstract is buried somewhere in the text; some sort of bullet-pointed checklist for assessing whether one’s own pet theory is too general or too specific would have been useful. Despite this, it is good material to feed a confirmation bias against too much and too strict adherence to mathematics… as well as against no mathematics at all.

[1] Lev R. Ginzburg, Christopher X.J. Jensen and Jeffrey V. Yule. (2007). Aiming the “unreasonable effectiveness of mathematics” at ecological theory. Ecological Modelling, 207(2-4): 356-362. doi:10.1016/j.ecolmodel.2007.05.015


more on AI and (contemporary) cultural heritage: Poème Électronique, RAI, and RoboCup

In a comment on the previous post on AI & cultural heritage, my brother wondered if something concrete would come from using the variety of (AI) techniques. Apart from the ones mentioned in that post that are being implemented in Italy and Haifa, there is also one – working! – closer to home: the Poème Électronique (virtual reality), and others, such as TV genre classification (neural networks) and RoboCup (game theory).

Following up on the previously mentioned VR & cultural heritage: a VR version to regain lost art, and a documentary about the making-of, are available online for free, though you need a lot of bandwidth. It was developed in the Virtual Poème Électronique project led by Vincenzo Lombardo. The Electronic Poem was developed for & by Philips Eindhoven for the 1958 international exposition in Brussels to demonstrate its technology for society (as opposed to using technology for war), and was thereby the first multimedia project of the electronic era. Unfortunately, it was demolished two months after the exhibition, which had hosted two million visitors, and lost – other than various bits and pieces used to build it, such as the sketches by Le Corbusier, music scores by Edgard Varèse, the hyperbolic-paraboloid drawings by Xenakis (the floor plan resembles a stomach, as a metaphor for expo visitors being ‘digested’ by the multimedia presentation in the building), and an old Philips video; see the materials section. The VR reconstruction is interesting from, a.o., a technological viewpoint: “the unity of this installation, that conveys images and sound paths of a great complexity in a common digital space, requires challenging solutions in the integration of the various media (design of the digital space, display of the visual show, organization of the sound paths) and in the interaction between the spectator and the installation” (copied from the project website); see also a summary of the fascinating cultural-historical & artistic aspects. In the upcoming years, the expo building will be rebuilt by Philips at the site of its former headquarters in Eindhoven.

Of course, one could argue whether TV is, or approximates, contemporary art. Instead of relying on one’s subjective view, one could investigate this in a more structured fashion by analyzing the archives of the various types of programmes. However, RAI (the Italian public radio and TV organization) alone already has some 560,000 hours of TV archive and 700,000 hours of radio archive. One step in structuring the archive, so as to query and analyse it better, is TV genre classification. [1] used a feed-forward neural network after a feature extraction process. Features of the video files are, a.o., luminance colour histograms, texture signature, average speech rate, and displaced frame difference. It works for the basic categories, such as news and talk shows, but I wonder about more recent, finer-grained distinctions, such as “infotainment” and “docudrama”, which blur the line between information proper and fantasy; mixing the two ought to result in rough or fuzzy class boundaries, or at least in groups of TV programmes that are difficult, if not impossible, to classify automatically.
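To make the classification step concrete – a toy sketch only, not the authors’ system: the network architecture, feature dimensions, and genre labels below are my own illustrative assumptions – a forward pass of a small feed-forward network over such an extracted feature vector looks like this:

```python
import numpy as np

# Illustrative sketch of a feed-forward genre classifier.
# Feature vector, layer sizes, and genre labels are invented;
# weights would normally come from training on labelled archive data.
rng = np.random.default_rng(0)

GENRES = ["news", "talk show", "sports", "cartoon"]

def mlp_forward(features, W1, b1, W2, b2):
    """One hidden layer with tanh, softmax output over genres."""
    hidden = np.tanh(features @ W1 + b1)
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())   # stable softmax
    return exp / exp.sum()

# Toy features: [avg luminance, texture energy, speech rate, frame diff]
features = np.array([0.6, 0.2, 0.9, 0.1])
W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, len(GENRES))); b2 = np.zeros(len(GENRES))

probs = mlp_forward(features, W1, b1, W2, b2)
print(GENRES[int(np.argmax(probs))], probs.round(3))
```

The fuzzy-boundary worry shows up here quite naturally: for an “infotainment” programme one would expect the softmax output to spread its mass over several genres rather than peak at one.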

The third piece of “contemporary cultural heritage” (???) with AI, or at least close to society, is football/soccer, and the seeming folly of RoboCup – the football cup for robots – which provides “a standard problem where wide range of technologies can be integrated and examined”. Moreover, “RoboCup chose to use soccer game as a central topic of research, aiming at innovations to be applied for socially significant problems and industries” (copied from the RoboCup website). As the invited speaker Manuela Veloso explained with great enthusiasm and conviction, it is not about soccer but, in her specialisation, about modeling and implementing game-theoretic team strategies. That is: how to represent team strategies, how to generate a team response to an adversary, and how to make strategic decisions in timed zero-sum games? A first step is to separate skills (actions) from tactics (sequences of actions) from plays (webs of tactics). Another aspect is the notion of winning: the usual game-theoretic maximum-reward pay-off versus a threshold-win scenario (see also [2], which won the Outstanding Paper Award at AAAI’07). One contradictory issue in the presentation, however, was the (non-)balance between team and individual. To win RoboCup, Veloso et al. use (a.o.) team strategies akin to the “playbooks” in American football, which specify multiple play plans. During “locker-room agreements”, the coach decides which play plan to execute (which can be changed during the game), and all players stick to these rules; hence, the individual player is subordinate to the team. In contrast to American football, in football/soccer one regularly sees games with individuals who do not always play as a team.
Veloso is passionate about the team-oriented playbook approach of American football, but at the end of the presentation the take-home message for future research was that the team approach was wrong and that one should look for player strategies that are based on the individual, who cooperates only when “needed”… which pretty much ends up as the average football/soccer game whose play strategy she laments. To put it positively, I think she probably meant that it is about the balance between the individual behaviour of the player/robot and the team of players/robots as a whole; hence: when, how, and why to switch between these two fundamentally different strategies.
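The maximum-reward versus threshold-win distinction can be shown with a deliberately tiny example. This is not McMillen & Veloso’s algorithm – the plays, probabilities, and payoffs are all invented – but it captures why the two objectives can recommend different plays:

```python
# Two candidate plays: (probability of scoring, goals if it succeeds).
# Numbers are made up purely to separate the two objectives.
plays = {
    "safe":  (0.9, 1),   # almost surely scores one goal
    "risky": (0.5, 3),   # half the time scores three
}

def expected_goals(play):
    """Classic objective: maximize expected reward."""
    p, g = plays[play]
    return p * g

def win_probability(play, goals_needed):
    """Threshold objective: probability this play yields enough goals."""
    p, g = plays[play]
    return p if g >= goals_needed else 0.0

# Maximizing expected goals prefers the risky play (1.5 > 0.9)...
best_expected = max(plays, key=expected_goals)
# ...but if a single goal suffices to win the timed game,
# the threshold-win objective prefers the safe play (0.9 > 0.5).
best_threshold = max(plays, key=lambda pl: win_probability(pl, 1))
print(best_expected, best_threshold)
```

With time running out and one goal needed, piling up expected score is pointless; crossing the winning threshold is all that matters, which is the intuition behind the thresholded-rewards work.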

Manuela Veloso and Daniele Nardi during the invited talk (picture from AI*IA website).

[1] Maurizio Montagnuolo and Alberto Messina. TV Genre Classification Using Multimodal Information and Multilayer Perceptrons. Proc. of AI*IA ’07, Rome, 2007.
[2] Colin McMillen and Manuela Veloso. Thresholded Rewards: Acting Optimally in Timed, Zero-Sum Games. In Proceedings of AAAI’07, Vancouver, Canada, July 2007.

AI and cultural heritage workshop at AI*IA’07

I’m reporting live from the Italian conference on artificial intelligence (AI*IA’07) in Rome (well, Villa Mondragone in Frascati, with a view of Rome). My own paper on abstractions is rather distant from near-immediate applicability in daily life, so I’ll leave that be and instead write about an entertaining co-located workshop on applying AI technologies for the benefit of cultural heritage, e.g., to improve tourists’ experience and satisfaction when visiting the many historical sites, museums, and buildings that are all over Italy (and abroad).

I remember well the handheld guide at the Alhambra back in 2001, which had a story by Mr. Irving at each point of interest, but it was one long story and the same one for every visitor. Current research in AI & cultural heritage looks into how this can be personalized and made more interactive, and several directions are being investigated. These range from tailoring the amount of information provided at each point of interest (e.g., for the art buff, the casual American visitor who ‘does’ a city in a day or two, or narratives for children), to location-aware information display (the device detects which point of interest you are closest to), to cataloguing and structuring the vast amount of archeological information, to software monitoring Oetzi the Iceman. The remainder of this blog post describes some of the many behind-the-scenes AI technologies that aim to give a tourist the desired amount of relevant information at the right time and place (see the workshop website for the list of accepted papers). I’ll add more links later; any misunderstandings are mine (the workshop was held in Italian).

First, something that relates somewhat to bioinformatics/ecoinformatics: the RoBotanic [1], a robot guide for botanical gardens – not intended to replace a human, but as an add-on that appeals in particular to young visitors and gets them interested in botany and plant taxonomy. The technology is based on the successful ciceRobot, which has been tested in the Archeological Museum of Agrigento, but because it has to operate outdoors in a botanical garden (in Palermo), new issues have to be resolved, such as tuff powder, irregular surfaces, lighting, and leaves that interfere with the GPS system (which the robot uses to stop at the plants of most interest). Currently, the RoBotanic provides one-way information, but in the near future interaction will be built in so that visitors can ask questions as well (ciceRobot is already interactive). Both the RoBotanic and ciceRobot are customized off-the-shelf robots.

Continuing with the artificial, there were three presentations about virtual reality. VR can be a valuable add-on to visualize lost or severely damaged property, to show timelines of rebuilding over old ruins (building a church over a mosque, or vice versa, was not uncommon), to prepare future restorations, and for general reconstruction of the environment, all based on real archeological information (not Hollywood fantasy and screenwriting). The first presentation [2] explained how the virtual-reality tour of the Church of Santo Stefano in Bologna was made, using Creator, Vega, and many digital photos that served for the texture feel in the VR tour. [3] provided technical details and software customization for VR & cultural heritage. The third presentation [4], on the other hand, was from a scientific point of view the most interesting, and too full of information to cover it all here. E. Bonini et al. investigated whether, and if so how, VR can give added value. Current VR being insufficient for the cultural heritage domain, they look at how one can do an “expansion of reality” to give the user a “sense of space”. MUDding on the via Flaminia Antica in the virtual room in the National Museum in Rome should be possible soon (the CNR-ITABC project has started). Another issue came up during the concluded Appia Antica project for Roman-era landscape VR: the behaviour of, e.g., animals is now pre-coded and quickly becomes boring to the user. So, what these VR developers would like to see (i.e., future work) is technology for autonomous agents integrated with VR software in order to make the ancient landscape & environment more lively: artificial life in the historical era of one’s choice, based on – and constrained by – scientific facts, so as to be both useful for science and educational & entertaining for interested laymen.

A different strand of research is that of querying & reasoning, ontologies, planning and constraints.
Arbitrarily, I’ll start with the SIRENA project in Naples (the Spanish Quarter) [5], which aims at the automatic generation of maintenance plans for historical residential buildings, in order to make the current manual plans more efficient and cost-effective, and to carry out maintenance just before something collapses. Given the UNI 8290 norms for technical descriptions of parts of buildings, they made an ontology and used FLORA-2, Prolog, and PostgreSQL to compute the plans. Each element has its own maintenance interval, but I didn’t see much of the partonomy, and I don’t know how they deal with the temporal aspects. Another project [6] also has an ontology, in OWL-DL, but it is not used for DL reasoning yet. The overall system design, including the use of Sesame, Jena, and SPARQL, can be read here; after a server migration, their portal for the archeological e-Library will be back online. Another component is the webGIS for pre- and proto-historical sites in Italy, i.e., spatio-temporal stuff, and the hope is to get interesting inferences – novel information – from it (e.g., discovering new connections between epochs). A basic, online accessible version of the webGIS is already running for the Silk Road.
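The core idea of per-element maintenance intervals is simple enough to sketch. To be clear: SIRENA computes its plans with FLORA-2/Prolog over a UNI 8290-based ontology; the element names, intervals, and dates below are my own invented stand-ins:

```python
from datetime import date, timedelta

# Illustrative element -> maintenance interval (in days); a real
# plan would derive these from the building-part ontology.
ELEMENTS = {
    "roof covering": 365 * 5,
    "plaster facade": 365 * 10,
    "wooden balcony": 365 * 2,
}

def next_maintenance(last_done):
    """Map each element's last maintenance date to its next due date."""
    return {
        element: last + timedelta(days=ELEMENTS[element])
        for element, last in last_done.items()
    }

plan = next_maintenance({
    "roof covering": date(2005, 6, 1),
    "plaster facade": date(2000, 3, 15),
    "wooden balcony": date(2006, 9, 1),
})
for element, due in sorted(plan.items(), key=lambda kv: kv[1]):
    print(f"{due}  {element}")
```

The interesting (and, in the talk, underexposed) parts are exactly what such a sketch leaves out: the partonomy (maintaining a roof implies inspecting its parts) and the temporal interaction between elements’ schedules.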
A third, different approach to and usage of ontologies was presented in [7]. With the aim of digital-archive interoperability in mind, D’Andrea et al. took the CIDOC-CRM common reference model for cultural heritage and enriched it with the DOLCE D&S foundational ontology to better describe, and subsequently analyse, iconographic representations – in this particular work, scenes and reliefs from the Meroitic period in Egypt.
With In.Tou.Sys for intelligent tourist systems [8] we move to almost industry-grade tools to enhance the visitor experience. They developed software for PDAs that one takes around in a city, which then, through GPS, can provide contextualized information to the tourist, such as on the building you are walking by, or give suggestions for the best places to visit based on your preferences (e.g., only the baroque era, or only churches, etc.). The latter uses a genetic algorithm to compute the preference list; the former uses a mix of an RDBMS on the server side, an OODBMS on the client (PDA) side, and F-Logic for the knowledge representation. They are now working on the “admire” system, which has a time component built in to keep track of what the tourist has visited before, so that the PDA guide can provide comparative information. Also aimed at city-wide scale and guiding visitors is the STAR project [9], though a bit different from the previous one: it combines the usual tourist information and services – represented in a taxonomy, a partonomy, and a set of constraints – with problem solving and a recommender system to make an individualized agenda for each tourist; so you won’t stand in front of a closed museum, you will be alerted to a festival, etc. A different PDA-guide system was developed in the PEACH project for group visits in a museum. It provides limited personalized information and canned Q & A, and visitors can send messages to their friends and tag points of interest that they find particularly interesting.
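For readers unfamiliar with genetic algorithms, here is a toy version of the idea – evolving a selection of sites that maximizes a visitor’s preference scores under a time budget. It is emphatically not In.Tou.Sys’s algorithm; the sites, scores, durations, and GA parameters are all invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# (name, preference score, visit duration in hours) -- all invented
SITES = [("baroque church", 8, 1.0), ("modern museum", 2, 2.0),
         ("roman ruin", 6, 1.5), ("city park", 3, 0.5),
         ("cathedral", 9, 2.0)]
BUDGET = 4.0  # hours the tourist has available

def fitness(bits):
    """Total preference of selected sites; infeasible plans score 0."""
    score = sum(s for (_, s, _), b in zip(SITES, bits) if b)
    hours = sum(h for (_, _, h), b in zip(SITES, bits) if b)
    return score if hours <= BUDGET else 0

def evolve(generations=60, pop_size=20):
    pop = [[random.randint(0, 1) for _ in SITES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(SITES))  # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(len(SITES))] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([name for (name, _, _), b in zip(SITES, best) if b], fitness(best))
```

The same bit-string encoding extends to orderings and richer constraints (opening hours, walking distances), which is presumably where the real system earns its keep.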

Utterly different from the previous topics, but probably of interest to the linguistically oriented reader, is philology & digital documents. Or: how to deal with representing multiple versions of a document? Poets and authors write and rewrite, brush up, strike through, etc., and it is the philologist’s task to figure out what constitutes a draft version. Representing the temporality and change of documents (words, order of words, notes about a sentence) is another problem, which [10] attempts to solve by representing it as a PERT/CPM graph structure augmented with labeled edges, a precise definition of a ‘variant graph’, and a method of compactly storing it (ultimately in XML). The test case was a poem by Valerio Magrelli.
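The variant-graph idea is easy to illustrate with a toy example (my own sketch of the general concept, not Schmidt & Fiormonte’s data structure or storage format): each edge carries a text fragment plus the set of versions that contain it, and reading one version means walking the path whose edges carry that version’s label.

```python
# Toy variant graph over two versions of a three-word phrase.
# Edges: (from_node, to_node, text fragment, versions containing it)
EDGES = [
    (0, 1, "the",   {"draft", "final"}),
    (1, 2, "quick", {"draft"}),           # draft reading
    (1, 2, "swift", {"final"}),           # revised reading
    (2, 3, "fox",   {"draft", "final"}),
]

def read_version(edges, version, start=0):
    """Reconstruct one version's text by following its labeled edges."""
    words, node = [], start
    while True:
        nxt = [(text, to) for frm, to, text, vs in edges
               if frm == node and version in vs]
        if not nxt:
            break
        text, node = nxt[0]
        words.append(text)
    return " ".join(words)

print(read_version(EDGES, "draft"))   # the quick fox
print(read_version(EDGES, "final"))   # the swift fox
```

Shared fragments are stored once, which is what makes the representation compact, and divergence points in the graph are exactly the variants a philologist cares about.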

The proceedings will be put online soon (I presume), are also available on CD (contact the workshop organizer Luciana Bordoni), and several of the articles are probably online on the authors’ homepages.

[1] A. Chella, I. Macaluso, D. Peri, L. Riano. RoBotanic: a Robot Guide for Botanical Gardens. Early Steps.
[2] G. Adorni. 3D Virtual Reality and the Cultural Heritage.
[3] M.C. Baracca, E. Loreti, S. Migliori, S. Pierattini. Customizing Tools for Virtual Reality Applications in the Cultural Heritage Field.
[4] E. Bonini, P. Pierucci, E. Pietroni. Towards Digital Ecosystems for the Transmission and Communication of Cultural Heritage: an Epistemological Approach to Artificial Life.
[5] A. Calabrese, B. Como, B. Discepolo, L. Ganguzza, L. Licenziato, F. Mele, M. Nicolella, B. Stangherling, A. Sorgente, R. Spizzuoco. Automatic Generation of Maintenance Plans for Historical Residential Buildings.
[6] A. Bonomi, G. Mantegari, G. Vizzari. Semantic Querying for an Archaeological E-library.
[7] A. D’Andrea, G. Ferrandino, A. Gangemi. Shared Iconographical Representations with Ontological Models.
[8] L. Bordoni, A. Gisolfi, A. Trezza. INTOUSYS: a Prototype Personalized Tourism System.
[9] D. Magro. Integrated Promotion of Cultural Heritage Resources.
[10] D. Schmidt, D. Fiormonte. Multi-Version Documents: a Digitisation Solution for Textual Cultural Heritage Artefacts.