Thanks and best wishes for 2010

I would like to say many thanks to all of you, dear readers, for visiting my blog, and especially to those known and previously unknown visitors who took the effort to leave comments, and comments on comments (that being the particular feature of blogs, after all)! I hope you found it time well spent.

It is that time of the year again for some reflection (though I do that during the year as well; somehow one mentions it only around the new year). My 2009 had its ups and downs, with more of the latter than the former, so I will not dwell on it here. Maybe I can reap the fruits of the sown seeds in the upcoming year (yeah, I know, I said that last year as well; patience is a virtue, right?).

As for the blog, the number of posts increased considerably compared to previous years, with the topics just as varied, which I shall try to keep up in 2010.

I wish you all a happy, productive, and prosperous New Year!

72010 SemWebTech lecture 10: SWLS and text processing and ontologies

There is a lot to be said about how Ontology, ontologies, and natural language interact from a philosophical perspective up to the point that different commitments lead to different features and, moreover, limitations of a (Semantic Web) application. In this lecture on 22 Dec, however, we shall focus on the interaction of NLP and ontologies within a bio-domain from an engineering perspective.

During the bottom-up ontology development and methodologies lectures, it was already mentioned that natural language processing (NLP) can be useful for ontology development. In addition, NLP can be used as a component in an ontology-driven information system, and an NLP application can be enhanced with an ontology. Which approaches and tools are best suited depends on the goal (and background) of the developers and prospective users, the ontological commitment, and the available resources.

Summarising the possibilities for “something natural language text” and ontologies or ontology-like artifacts, we can:

  • Use ontologies to improve NLP: to enhance precision and recall of queries (including enhancing dialogue systems [1]), to sort results of an information retrieval query to the digital library (e.g. GoPubMed [2]), or to navigate literature (which amounts to linked data [3]).
  • Use NLP to develop ontologies (TBox): mainly to search for candidate terms and relations, which is part of the suite of techniques called ‘ontology learning’ [4].
  • Use NLP to populate ontologies (ABox): e.g., document retrieval enhanced by lexicalised ontologies and biomedical text mining [5].
  • Use it for natural language generation (NLG) from a formal language: this can be done using a template-based approach, which works quite well for English but much less so for grammatically richer languages such as Italian [6], or with a full-fledged grammar engine, as with Attempto Controlled English and bi-directional mappings (see [7] for a discussion).

Intuitively, one may be led to think that simply taking generic NLP or NLG tools will do fine for the bio(medical) domain as well. Any application does indeed use those techniques and tools (Paul Buitelaar’s slides have examples and many references to NLP tools) but, generally, they do not suffice to obtain ‘acceptable’ results. Domain-specific peculiarities are many and wide-ranging: for instance, dealing with the variations of terms (scientific name, variant, common misspellings) and with the grounding step (linking a term to an entity in a biological database) on the ontology-NLP preparation and instance classification side [5], characterising the question correctly in a question answering system [1], and finding ways to deal with the rather long strings that denote a biological entity, concept, or universal [4]. Some such peculiarities actually yield better overall results than generic or other domain-specific uses of NLP tools, but they require extra manual preparatory work and a basic understanding of the subject domain and its applications.
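The term-variation and grounding steps mentioned above can be sketched as a normalise-then-lookup pipeline. The variant table below is an illustrative placeholder (real tables are derived from curated resources such as UniProt), so treat the entries as assumptions:

```python
import re

# Hypothetical variant table mapping surface forms to a database identifier
# (illustrative entries; real groundings come from curated resources).
GROUNDING = {
    "ptp1b": "UniProt:P18031",
    "protein tyrosine phosphatase 1b": "UniProt:P18031",
}

def normalise(term):
    """Crude variant normalisation: lower-case, collapse hyphens and whitespace."""
    return re.sub(r"[-\s]+", " ", term.lower()).strip()

def ground(term):
    """Link a surface form to a database entity; None if it cannot be grounded."""
    t = normalise(term)
    # second attempt: spelling variants that drop the spaces altogether
    return GROUNDING.get(t) or GROUNDING.get(t.replace(" ", ""))

print(ground("Protein-Tyrosine  Phosphatase 1B"))  # UniProt:P18031
print(ground("PTP 1B"))                            # UniProt:P18031
```

The point of the sketch is the amount of domain-specific preparatory work hiding in those two small dictionaries and the normalisation rules: that is exactly the manual effort referred to in the paragraph above.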

References

[1] K. Vila, A. Ferrández. Developing an Ontology for Improving Question Answering in the Agricultural Domain. In: Proceedings of MTSR’09. Springer CCIS 46, 245-256.

[2] Heiko Dietze, Dimitra Alexopoulou, Michael R. Alvers, Liliana Barrio-Alvers, Bill Andreopoulos, Andreas Doms, Joerg Hakenberg, Jan Moennich, Conrad Plake, Andreas Reischuck, Loic Royer, Thomas Waechter, Matthias Zschunke, and Michael Schroeder. GoPubMed: Exploring PubMed with Ontological Background Knowledge. In Stephen A. Krawetz, editor, Bioinformatics for Systems Biology. Humana Press, 2008.

[3] Allen H. Renear and Carole L. Palmer. Strategic Reading, Ontologies, and the Future of Scientific Publishing. Science 325 (5942), 828. [DOI: 10.1126/science.1157784] (but see also some comments on the paper)

[4] Dimitra Alexopoulou, Thomas Waechter, Laura Pickersgill, Cecilia Eyre, and Michael Schroeder. Terminologies for text-mining: an experiment in the lipoprotein metabolism domain. BMC Bioinformatics, 9(Suppl4):S2, 2008

[5] Witte, R., Kappler, T., and Baker, C.J.O. Ontology design for biomedical text mining. In: Semantic Web: revolutionizing knowledge discovery in the life sciences, Baker, C.J.O., Cheung, H. (eds), Springer: New York, 2007, pp 281-313.

[6] M. Jarrar, C.M. Keet, and P. Dongilli. Multilingual verbalization of ORM conceptual models and axiomatized ontologies. STARLab Technical Report, Vrije Universiteit Brussel, Belgium. February 2006.

[7] R. Schwitter, K. Kaljurand, A. Cregan, C. Dolbear, G. Hart. A comparison of three controlled natural languages for OWL 1.1. Proc. of OWLED 2008 DC.

Note: references 4 and 5 are mandatory reading, and 1-3 and 6 are optional (recommended for the EMLCT students).

Lecture notes: lecture 10 – Text processing

Course website

72010 SemWebTech lecture 9: Successes and challenges for ontologies in the life sciences

To be able to talk about successes and challenges of SWT for health care and life sciences (or any other subject domain), we first need to establish when something can be deemed a success, when a challenge, and when an outright failure. Such measures can be devised in an absolute sense (compare technology x with an SWT one: does it outperform on measure y?) and in a relative sense (to whom is technology x deemed successful?). Given these considerations, we shall take a closer look at several attempts: two successes and a few challenges in representation and reasoning. For the former, what were the problems and how were they solved; for the latter, what are the problems and can they be resolved?

As success stories we take the experiments by Wolstencroft and coauthors on classifying protein phosphatases [1] and by Calvanese et al. on graphical, web-based, ontology-based data access applied to horizontal gene transfer data [2]. They each focus on different ontology languages and reasoning services to solve different problems. What they have in common is the interaction between the ontology and the instances (and that each took a considerable amount of work by people with different specialties): the former focuses on classifying instances, the latter on querying them. In addition, modest results of biological significance have been obtained with the classification of the protein phosphatases, whereas with the ontology-based data analysis we are tantalizingly close.

The challenges for SWT in general, and for HCLS in particular, are quite diverse: some concern the SWT proper, while others are considered by its designers (and the W3C core activities on standardization) to fall outside their responsibility, yet still need to be addressed. Currently, for the software aspects, the onus is on software developers and industry to pick up the proof-of-concept and working-prototype tools that have come out of academia and to bring them to the industry-grade quality that widespread adoption of SWT requires. Although this aspect should not be ignored, we shall focus on the language and reasoning limitations during the lecture.

In addition to the language and corresponding reasoning limitations that passed in review in the lectures on OWL, there are language “limitations” discussed and illustrated at length in various papers, most recently in [3]; it might well be that the extensions presented in lectures 6 and 7 (parts, time, uncertainty, and vagueness) can ameliorate or perhaps even solve some of these problems. Some of the issues outlined by Schulz and coauthors are ‘mere’ modelling pitfalls, whereas others are real challenges that can only be approximated to a greater or lesser extent. We shall look at several representation issues that go beyond the earlier examples of SNOMED CT’s “brain concussion without loss of consciousness”; e.g., how would you represent in an ontology that in most, but not all, cases hepatitis has fever as a symptom; how would you formalize the defined concept “Drug abuse prevention”; and (provided you are convinced it should be represented in an ontology) that the world-wide prevalence of diabetes mellitus is 2.8%?

Concerning challenges for automated reasoning, we shall look at two of the nine identified required reasoning scenarios [4]: “model checking (violation)” and “finding gaps in an ontology and discovering new relations”, thereby reiterating the life scientists’ high-level, goal-driven approach and their desire to use OWL ontologies with reasoning services to, ultimately, discover novel information about nature. You might find it of interest to read about the feedback received from the SWT developers upon presenting [4] here: some requirements have been met in the meantime, and new useful reasoning services were presented.

References

[1] Wolstencroft, K., Stevens, R., Haarslev, V. Applying OWL reasoning to genomic data. In: Semantic Web: revolutionizing knowledge discovery in the life sciences, Baker, C.J.O., Cheung, H. (eds), Springer: New York, 2007, 225-248.

[2] Calvanese, D., Keet, C.M., Nutt, W., Rodriguez-Muro, M., Stefanoni, G. Web-based Graphical Querying of Databases through an Ontology: the WONDER System. ACM Symposium on Applied Computing (ACM SAC’10), March 22-26 2010, Sierre, Switzerland.

[3] Stefan Schulz, Holger Stenzhorn, Martin Boeker and Barry Smith. Strengths and Limitations of Formal Ontologies in the Biomedical Domain. Electronic Journal of Communication, Information and Innovation in Health (Special Issue on Ontologies, Semantic Web and Health), 2009.

[4] Keet, C.M., Roos, M. and Marshall, M.S. A survey of requirements for automated reasoning services for bio-ontologies in OWL. Third international Workshop OWL: Experiences and Directions (OWLED 2007), 6-7 June 2007, Innsbruck, Austria. CEUR-WS Vol-258.

[5] Ruttenberg A, Clark T, Bug W, Samwald M, Bodenreider O, Chen H, Doherty D, Forsberg K, Gao Y, Kashyap V, Kinoshita J, Luciano J, Scott Marshall M, Ogbuji C, Rees J, Stephens S, Wong GT, Elizabeth Wu, Zaccagnini D, Hongsermeier T, Neumann E, Herman I, Cheung KH. Advancing translational research with the Semantic Web, BMC Bioinformatics, 8, 2007.

p.s.: the first part of the lecture on 21-12 will be devoted to the remaining part of last week’s lecture; that is, a few discussion questions about [5] that are mentioned in the slides of the previous lecture.

Note: references 1 and 3 are mandatory reading, 2 and 4 recommended to read, and 5 was mandatory for the previous lecture.

Lecture notes: lecture 9 – Successes and challenges for ontologies

Course website

72010 SemWebTech lecture 8: SWT for HCLS background and data integration

After the ontology languages and general aspects of ontology engineering, we now delve into one specific application area: SWT for health care and life sciences. Its frontrunners in bioinformatics adopted some of the Semantic Web ideas even before Berners-Lee, Hendler, and Lassila wrote their Scientific American paper in 2001, even though they did not formulate their needs and intentions in the same terminology. They did want shared, controlled vocabularies with a common syntax, to facilitate data integration (or at least interoperability) across Web-accessible databases; a common space for identifiers; a dynamic, changing system; a way to organize and query incomplete biological knowledge; and, albeit not stated explicitly, it all still needed to be highly scalable [1].

Bioinformaticians and domain experts in genomics had already organized themselves in the Gene Ontology Consortium, set up officially in 1998 to realize a solution for these requirements. The results exceeded anyone’s expectations, for a range of reasons. Many tools for the Gene Ontology (GO) and its common KR format, .obo, have been developed, and other research groups adopted the approach to develop controlled vocabularies, either by extending the GO (e.g., rice traits) or by adding their own subject domain, such as zebrafish anatomy and mouse developmental stages. This proliferation, as well as the OWL development and standardization process going on at about the same time, pushed the goal posts further: new expectations were placed on the GO and its siblings and on their tools, and the proliferation had become a bit too unwieldy to keep a good overview of what was going on and of how those ontologies would fit together. Put differently, some people noticed the inferencing possibilities to be gained from moving from obo to OWL, and others thought that some coordination among all those obo bio-ontologies would be advantageous, given that post-hoc integration of ontologies of related and overlapping subject domains is not easy. Thus came into being the OBO Foundry, proposing a methodology for the coordinated evolution of ontologies to support biomedical data integration [2].

People in related disciplines, such as ecology, have taken on board the experiences of these very early adopters, and decided instead to join in after the OWL standardization. They, however, were not motivated only by data(base) integration. Referring to Madin et al.’s paper [3] again, I highlight three points they make: “terminological ambiguity slows scientific progress, leads to redundant research efforts, and ultimately impedes advances towards a unified foundation for ecological science”, i.e., identification of some serious problems they have in ecological research; “Formal ontologies provide a mechanism to address the drawbacks of terminological ambiguity in ecology”, i.e., what they expect ontologies will solve for them (disambiguation); and “and fill an important gap in the management of ecological data by facilitating powerful data discovery based on rigorously defined, scientifically meaningful terms”, i.e., the purpose for which they want to use ontologies and any associated computation (discovery). That is, ontologies not as one of many possible tools in the engineering/infrastructure toolbox, but as a required part of a method in the scientific investigation that aims to discover new information and knowledge about nature (i.e., in answering the who, what, where, when, and how things are the way they are in nature).

What has all this to do with actual Semantic Web technologies? On the one hand, there are multiple data integration approaches and tools that have been, and are being, tried out by the domain experts, bioinformaticians, and interdisciplinary-minded computer scientists [4]; on the other hand, there are the W3C Semantic Web standards XML, RDF(S), SPARQL, and OWL. Some use these standards to achieve data integration, some do not. Since this is a Semantic Web course, we shall take a look at two efforts that (try to) do, which came forth from the activities of the W3C’s Health Care and Life Sciences Interest Group. More precisely, we take a closer look at a paper written about three years ago [5], which reports on a case study to get those Semantic Web technologies to work in order to achieve data integration and a range of other things. There is also a more recent paper from the HCLS IG [6], which aimed not only at linking data but also at querying distributed data, using a mixture of RDF triple stores and SKOS. Both papers reveal their understanding of the purposes of SWT and, moreover, their goals, their experimentation with various technologies to achieve them, and where there is still work to do. There are notable achievements described in these, and related, papers, but the sought-after “killer app” is yet to be announced.
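The core idea behind RDF-based data integration, merging graphs from different sources by simple union of triples that share URIs, can be illustrated without any triple store. A toy sketch (the ex: URIs and predicate names are made up; the query function is a stand-in for SPARQL basic graph pattern matching):

```python
# Two hypothetical sources publish triples (subject, predicate, object).
# Merging is just set union, because shared URIs identify the same entity.
source_a = {
    ("ex:gene42", "ex:encodes", "ex:proteinX"),
    ("ex:gene42", "ex:locatedOn", "ex:chromosome7"),
}
source_b = {
    ("ex:proteinX", "ex:involvedIn", "ex:apoptosis"),
}

graph = source_a | source_b  # data integration, the RDF way

def query(graph, pattern):
    """Match a triple pattern against the graph; None acts as a variable."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which processes involve proteins encoded by gene42? (a two-step join
# whose answer spans both sources)
proteins = [o for _, _, o in query(graph, ("ex:gene42", "ex:encodes", None))]
processes = [o for prot in proteins
             for _, _, o in query(graph, (prot, "ex:involvedIn", None))]
print(processes)
```

The hard part in practice is, of course, not the union but agreeing on the shared identifiers, which is precisely what the HCLS IG case studies wrestle with.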

The lecture will cover a ‘historical’ overview and what more recent ontology-adopters focus on, as well as the very basics of the data integration approaches that motivated the development of ontologies, and we shall analyse some technological issues and challenges mentioned in [5] concerning Semantic Web (or not) technologies.

References:

[1] The Gene Ontology Consortium. Gene ontology: tool for the unification of biology. Nature Genetics, May 2000;25(1):25-9.

[2] Barry Smith, Michael Ashburner, Cornelius Rosse, Jonathan Bard, William Bug, Werner Ceusters, Louis J. Goldberg, Karen Eilbeck, Amelia Ireland, Christopher J Mungall, The OBI Consortium, Neocles Leontis, Philippe Rocca-Serra, Alan Ruttenberg, Susanna-Assunta Sansone, Richard H Scheuermann, Nigam Shah, Patricia L. Whetzel, Suzanna Lewis. The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nature Biotechnology 25, 1251-1255 (2007).

[3] Joshua S. Madin, Shawn Bowers, Mark P. Schildhauer and Matthew B. Jones. (2008). Advancing ecological research with ontologies. Trends in Ecology & Evolution, 23(3): 159-168.

[4] Erhard Rahm. Data Integration in Bioinformatics and Life Sciences. EDBT Summer School, Bolzano, Sep. 2007.

[5] Ruttenberg A, Clark T, Bug W, Samwald M, Bodenreider O, Chen H, Doherty D, Forsberg K, Gao Y, Kashyap V, Kinoshita J, Luciano J, Scott Marshall M, Ogbuji C, Rees J, Stephens S, Wong GT, Elizabeth Wu, Zaccagnini D, Hongsermeier T, Neumann E, Herman I, Cheung KH. Advancing translational research with the Semantic Web, BMC Bioinformatics, 8, 2007.

[6] Kei-Hoi Cheung, H Robert Frost, M Scott Marshall, Eric Prud’hommeaux, Matthias Samwald, Jun Zhao, and Adrian Paschke. A journey to Semantic Web query federation in the life sciences. BMC Bioinformatics 2009, 10(Suppl 10):S10

Note: references 1, 2, and (5 or 6) are mandatory reading, and 3 and 4 are recommended to read.

Lecture notes: lecture 8 – SWLS background and data integration

Course website

72010 SemWebTech lecture 7: Dealing with uncertainty and vagueness

The third advanced ontology engineering topic concerns how to cope with uncertainty and vagueness in ontology languages and their reasoners—and what we can gain from all the extra effort.

For instance, consider information retrieval: to which degree is a web site, a page, a text passage, an image, or a video segment relevant to the information need and an acceptable answer to what the user was searching for? In the context of ontology alignment, one would want to know (automatically) to which degree the focal concepts of two or more ontologies represent the same thing, or are sufficiently overlapping. In an electronic health record system, one may want to classify patients based on their symptoms, such as throwing up often, having high blood pressure, and a yellowish eye colour. How can software agents negotiate your holiday travel plans when they are specified imprecisely, like “I am looking for a package holiday of preferably less than 1000 euro, but really no more than 1150 euro, for about 12 days in a warm country”?

The main problem to solve, then, is what and how to incorporate such vague or uncertain knowledge in OWL and its reasoners. To clarify these two terms upfront:

  • Uncertainty: statements are true or false, but due to lack of knowledge we can only estimate to which probability / possibility / necessity degree they are true or false;
  • Vagueness: statements involve concepts for which there is no exact definition (such as tall, small, close, far, cheap, expensive), which are then true to some degree, taken from a truth space.
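The holiday example above (“preferably less than 1000 euro, but really no more than 1150”) is a typical vague constraint: rather than being crisply true or false, it maps a price to a degree of truth. A minimal sketch, assuming a simple linear descent between the soft and the hard bound (the linear shape is my assumption, not part of the example):

```python
def acceptable_price(price):
    """Degree in [0,1] to which a holiday price is acceptable:
    1 up to the soft bound of 1000 euro, 0 at the hard cap of 1150 euro,
    decreasing linearly in between."""
    if price <= 1000:
        return 1.0
    if price >= 1150:
        return 0.0
    return (1150 - price) / 150

print(acceptable_price(950))   # 1.0
print(acceptable_price(1075))  # 0.5
print(acceptable_price(1200))  # 0.0
```

A fuzzy reasoner would then rank candidate holiday packages by such degrees instead of filtering with a single hard threshold.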

The two principal approaches regarding uncertainty and the semantic web are probabilistic and possibilistic languages, ontologies, and reasoning services, where the former way of dealing with uncertainty receives a lot more attention than the latter. The two principal approaches regarding vagueness and the semantic web are fuzzy and rough extensions, where the fuzzy approach receives more attention than the rough one. The lecture will cover all four approaches, to a greater (probabilistic, fuzzy) or lesser (possibilistic, rough) extent.

None of the extant languages and automated reasoners that can cope with vague or uncertain knowledge has made it into ‘mainstream’ Semantic Web tools yet. There was a W3C incubator group on uncertainty, but it remained at that. This has not stopped research in the area; on the contrary. There are two principal strands in these endeavours: one extends DL languages and their reasoners, such as Pronto, which combines the Pellet reasoner with a probabilistic extension, and FuzzyDL, a reasoner for fuzzy SHIF(D); the other uses different techniques underneath OWL, such as Bayesian networks and constraint programming-based reasoning for probabilistic ontologies (e.g., PR-OWL), and Mixed Integer Linear Programming for fuzzy ontologies. Within the former strand, one can further distinguish between extensions of the tableaux algorithms and rewritings into a non-uncertain/non-vague standard OWL language so that one of the generic DL reasoners can be used. For each of these branches, there are differences as to which aspects of the probabilistic/possibilistic/fuzzy/rough approaches are actually included, just as we saw in the previous lecture on temporal logics.
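To give a flavour of what a fuzzy extension actually computes: the degree of a conjunction of fuzzy assertions is obtained with a t-norm, and different t-norms give different semantics. A sketch of the two most common choices, applied to the patient-classification example from earlier (the 0.8 and 0.6 are made-up membership degrees):

```python
def goedel_and(a, b):
    # Goedel t-norm: a conjunction is as true as its least true conjunct.
    return min(a, b)

def lukasiewicz_and(a, b):
    # Lukasiewicz t-norm: degrees below 1 penalise each other.
    return max(0.0, a + b - 1.0)

# Suppose a patient satisfies HighBloodPressure to degree 0.8
# and YellowishEyeColour to degree 0.6:
print(goedel_and(0.8, 0.6))       # 0.6
print(lukasiewicz_and(0.8, 0.6))  # roughly 0.4
```

Which t-norm a fuzzy DL adopts changes which subsumptions and instance classifications hold, which is one reason the various fuzzy extensions are not interchangeable.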

We shall not cover all such permutations in the lecture, but instead focus on general aspects of the languages and tools. A good introductory overview can be found in [1] (which also has a very long list of references to start delving into the topics [you may skip the DLP section]).  Depending on your background education and the degree programme you are studying now, you may find the more technical overview [2] of interest as well. To get an idea of one of the more recent results on rough DL-based ontologies, you might want to glance over [3]. Last, I assume you have a basic knowledge of probability theory and fuzzy sets; if there are many people who do not, I will adjust the lecture somewhat, but you are warmly advised to look it up before the lecture if you do not know about it (even if it is only the respective Wikipedia entry here and here).

References

[1] Umberto Straccia. Managing Uncertainty and Vagueness in Description Logics, Logic Programs and Description Logic Programs. In Reasoning Web, 4th International Summer School, 2008.
[2] Thomas Lukasiewicz and Umberto Straccia. 2008. Managing Uncertainty and Vagueness in Description Logics for the Semantic Web. Journal of Web Semantics, 6:291-308.
[3] Jiang, Y., Wang, J., Tang, S., and Xiao, B. 2009. Reasoning with rough description logics: An approximate concepts approach. Information Sciences, 179:600-612.

Note: reference 1 or 2 is mandatory reading; reference 3 is optional.

Lecture notes: lecture 7 – Uncertainty and vagueness

Course website

Web 2.0 widgets, travelling

(minor warning upfront: the contents of this post wanders off into different, but related, directions)

With all the cross-linking across websites and little widgets around these days, I have tested a few that made it to good or mildly entertaining use, such as ClustrMaps in the menu bar on the right, the LinkedIn tag to show also some of my WordPress blog posts on my LinkedIn page, and the 25th of November one in an earlier blog post, and some that are—from the user perspective—of no use. Regarding the latter, e.g., the SocialVibe widget is more about companies seeking your data and building a customer profile than actually donating money to your chosen charity. I tried it, but to contribute to the charity for education of girls in Africa (which I selected from the list offered through the WordPress widget set up), a keetblog visitor (you, but I experimented with it and removed the widget) first would have to fill in “a message of encouragement” for the charity of dressing to impress, help Nestle’s “coffee mate” product and choose one’s preferred Nestle coffee flavour, and that six times with different companies to “donate without paying money” a measly few bucks to the charity of preference. If I may suggest: a standing order with UNICEF and the like works fine, you receive updates on what they’re doing, and you do not have to sell out your customer behaviour to industry (if you are clueless about what is being done with your data, read Database Nation and then multiply by 10 to make up for the 10-year gap between now and its publication date). But maybe some people are willing to sell out their privacy for a schoolbook and pencil.

Continuing with the widgets, I have considered slideshare sharing but, thus far, prefer to keep the slides under my own control on my homepage (which I know has been a stable location for over six years already); what exactly are the compelling arguments for putting them there as opposed to on one’s homepage?

I also came across the LinkedIn widget to yell around which books I am reading, which are also offered through Amazon (the one I am currently reading is not even on sale there). If a book is worth the space for one reason or another, I will ‘announce’ it, such as with the reviews I wrote about Insurmountable Simplicities, the Handbook of Knowledge Representation, and about Cuba’s civil society (here). Or am I just being grumpy and missing the point?

More generally, which widgets are really useful, compared to being time-consuming yet cute or funny add-ons, or a veiled big-brother type of widget?

Anyway, the secretariat of the faculty recently suggested that we use TripIt, which perhaps is useful for them to keep track of our whereabouts. However, I see no reason why I should announce that to all Internet users, or even only to all my online contacts. For instance, I think the majority could not care less to be informed by an automatically generated LinkedIn+TripIt widget message that for the next couple of days I will be attending AI*IA’09.

I have been traveling quite a bit though, and I like doing it a lot for the experiences I gain by visiting different places, increasing my understanding of the societies, and marvelling at the nature in the world—while they are still there. At some point, I intend to write more about that. As a first step for now, I intended to add an (ego?)tripping widget that shows which countries I have visited over the years (intended, but did not, because the website I found for country-level annotation, world66, was too cumbersome and not Web 2.0-like). I have included only those countries where I stayed for at least a couple of days, did some sightseeing, interacted with the local people, used public transport, ate local food, etc. Put differently, I am not including flight stop-overs (e.g., Singapore on the way to Australia), countries the train merely passed through (e.g., Liechtenstein), ones I literally walked into by accident (Colombia), or where I did not have the opportunity to go beyond transport+hotel (Montenegro). So, graphically, with a manually Paint-ed world map and ‘the world I visited’ (…) in yellow, it looks like this:

countries I've visited, marked in yellow

For the geographically challenged and to annotate the countries a little, here’s the list of the countries (i) in alphabetical order, (ii) with the name the country had at the time of visiting, and (iii) the ones where I did not only travel to for holidays or conferences, but where I also stayed for study and/or work (paid or voluntary) are marked with an asterisk: Australia, Austria, Belgium*, Bolivia, Bulgaria, Brazil, Canada, China, Cuba, Cyprus, Czechoslovakia, Czech Republic*, Denmark*, France, Germany*, Hungary, Ireland*, Italy*, Lebanon*, Luxembourg, Netherlands*, Peru*, Portugal, Romania, Slovenia, South Africa*, Spain, Switzerland, United Kingdom*, United States of America, Yugoslavia (stock-taking on 7-12-2009).

Maybe a finer-grained marking could, or should, be in order, because, say, visiting Bolzano and ‘marking it off’ as having visited Italy is stretching it a bit (I have visited most other regions in the country, though); to address this, I will give the more fine-grained, but still limited, MapBuilder a try when I have a little more spare time. Likewise, a beach holiday in an all-inclusive Club Med resort is quite different (I suppose) from going somewhere on the off-chance (which can be a lot of fun). And, at least for some countries, the experiences can differ greatly on repeat visits, especially when there was a significant change in the meantime, such as the fall of the Berlin Wall, the end of the Apartheid regime, or an economic boom.

Last, but not least, one may be led to think that this kind of thing is a rich-people’s widget, but all the travels I did were either for work (i.e., mainly paid by the employer), paid entirely from my own savings from the bursary/salary I earned while studying/working, or funded in part or in whole by, among others, a funding agency. So, no, I do not have a money tree growing in my garden; in fact, I do not even have a garden.

There are choices one makes: on the one hand, which study one does and which (type of) job one has, and on the other hand, what one does with one’s income, like which slice to allocate for structural expenses (rent or mortgage, Lidl or health-shop food) and which as disposable income, and then if one spends that disposable income on goods (a house, car, plasma screen, fashion clothing, and whatnot), children, or other activities with experiences. Why do I bother writing this paragraph? Well, some people do indeed get jealous about my traveling or look at me with envy regarding my experiences and bucket-full of anecdotes, or think I am clueless about ‘how difficult it is to save money’ or ‘how dangerous it is to travel’ outside the confines of package holidays. What they do not want to or cannot see, however, are (i) the choices they have made about their finances and/or daily life that limit traveling around, and (ii) the opportunities they can create for themselves to see, observe, and experience the diversity and richness of this fascinating planet.