Computing and the philosophy of technology

Recently I wrote about the philosophy of computer science (PCS), to which some people, online and offline, responded that one cannot have computer science, only the science of computing, and that it must include experimental validation of the theory. Regarding the latter, related and often-raised remarks are that it is computer engineering instead, which, in turn, is closely tied to information technology. In addition, many a BSc programme trains students so as to equip them with the skills to obtain a job in IT. Coincidentally, there is a new entry in the Stanford Encyclopedia of Philosophy on the Philosophy of Technology (PT) [1], which, going by the title, may well be more encompassing than just PCS and perhaps just as applicable. In my opinion, not quite. There are several aspects of the PT relevant to computing, but if one takes the engineering and IT strand of computing, then there are some gaps in the overview presented by Franssen, Lokhorst and Van de Poel. The main gap I will address is that it misses the centrality of ‘problem’ in engineering and that it conflates it with notions such as requirements and design.

But before commenting on it, let me first mention a few salient points of their sections “historical developments” and “analytic philosophy of technology” to illustrate notions of technology and which strand of PT the SEP entry is about (I will leave the third part, “ethical and social aspects of technology”, for another time).

A few notes on its historical developments
Plato put forward the thesis that “technology learns from or imitates nature”, whereas Aristotle went a step further: “generally, art in some cases completes what nature cannot bring to a finish, and in others imitates nature”. A different theme concerning technology is that “there is a fundamental ontological distinction between natural things and artifacts” (not everybody agrees with that thesis), and if so, what, then, those distinctions are. Aristotle, again, also has a doctrine of the four causes with respect to technology: material, formal, efficient, and final.
As is well-known, there was a lull in intellectual and technological progress in Europe during the Dark Ages, but the Renaissance and the industrial revolution made thinking about technology all the more interesting again, with notable contributions by Francis Bacon, Karl Marx, and Samuel Butler, among others. Curiously, “During the last quarter of the nineteenth century and most of the twentieth century a critical attitude [toward technology] predominated in philosophy. The representatives were, overwhelmingly, schooled in the humanities or the social sciences and had virtually no first-hand knowledge of engineering practice” (emphasis added). This type of PT is referred to as humanities PT, which looks into the socio-politico-cultural aspects of technology. Not everyone was, or is, happy with that approach. Since the 1960s, an alternative approach has been gestating and becoming more important, which can be dubbed analytic PT; it is concerned with technology itself, where technology is considered the practice of engineering. The SEP entry is about this type of PT.

Main topics in Analytic PT
The topics that the authors of the PT overview deem the main ones have to do with the discussion on the relation between science and technology (and thus also on the relation of philosophy of science and of technology), the importance of design for technology, design as decision-making, and the status and characteristics of artifacts. Noteworthy is that their section 2.6 on other topics mentions that “there is at least one additional technology-related topic that ought to be mentioned… namely, Artificial Intelligence and related areas” (emphasis added) and the reader is referred to several other SEP entries. Notwithstanding the latter, I will comment on the content from a CS perspective anyway.

Sections 2.1 and 2.2 contain a useful discourse on the differences between science and engineering and on their respective philosophies-of. It was Ellul who, in 1964, observed in technology “the emergent single dominant way of answering all questions concerning human action, comparable to science as the single dominant way of answering all questions concerning human knowledge” (emphasis added). Skolimowski phrased it as “science concerns itself with what is, whereas technology with what is to be.” Simon used the verbs are and ought to be, respectively. To drive the message home, they also say that science aims to understand the world whereas technology aims to change the world, and is seen as a service to the public. (I leave it to the reader to assess what, then, the great service to the public is of artifacts such as the nuclear bomb, flight simulator games for kids, CCTV, and the like.) In contrast, while Bunge did acknowledge that technology is about action, he stressed that it is action heavily underpinned by theory, thereby distinguishing it from the arts and crafts. In practical terms for software engineering: to say that the development of ontologies, or the conceptual modelling during the conceptual analysis stage of software development, is, largely or wholly, an art is to say that there is no underlying theoretical foundation for those activities. This is not true. Indeed, there are people who, without being aware of its theoretical foundations, go off and develop an ontology or ontology-like artifact nevertheless and call that activity an art, but that does not make the activity an art per se.
Closing the brief summary of the first part of the PT entry: Bunge also mentioned that theories in technology come in two types: substantive (knowledge about the objects of action) and operative (about the action itself). A further distinction, put forward by Jarvie, concerns the difference between knowing that and knowing how, the latter of which Polanyi made a central aspect of technology.

Designs, requirements, and the lack of problems
The principal points of criticism I have concern the content of sections 2.3 and 2.4, which revolve around the messiness in the descriptions of design, goals, and requirements. Franssen et al first state that “technology is a practice focused on the creation of artifacts and, of increasing importance, artifact-based services. The design process, the structured process leading toward that goal, forms the core of the practice of technology” and that the first step in that process is to translate the needs or wishes of the customer into a list of functional requirements. But ought the first step not to be the identification of the problem? For if we then ultimately solve the problem (going through writing down the requirements, designing the artifact, and building and testing it), it is a ‘service to the public’. Moreover, only once we have identified the problem, formulated it in a problem description, and written a specification of the goal to achieve, can we write down the list of requirements that are relevant to achieving the goal and that solve the problem; this will not only avoid a ‘solution’ exhibiting the same problem, but also meet the goal and be this service to the public. Yet problem identification and problem solving are not a central issue for technology, according to the PT entry; just designing artifacts based on functional requirements.
Looking back at the science and engineering studies I enjoyed, the former put an emphasis on formulating and testing hypotheses—no thesis could do without that—whereas the latter focusses on which problem(s) one's research tackles and solves: theses and research papers have to have a (list of) problem(s) to solve. If a CS paper does not have a description of the problem, one might want to look again at what the authors are really doing, if anything. If it is a mere ‘we developed a cute software tool’, it is straightforward to criticise and can easily be rejected for its insufficient problem specification and system requirements: the reader may not share the authors' unwritten background assumptions, and hence may think of other problems and requirements the tool would be expected to address, so the ‘solution’ (the tool) is then no contribution to science or engineering, at least not in the way it is presented. Not even mentioning problem identification and solving, and how it distinguishes engineering from science in the approach to doing research, is quite a shortcoming of the PT entry.

In addition, the authors conflate these distinct topics even more clearly by writing of “The functional requirements that define most design problems” (emphasis added). Design problems? So there are different types of problem, and if so, which? And how can requirements define a problem? They don't: the requirements state what the artifact is supposed to do and what constraints it has to meet. If the requirements defined the problem instead, as the authors say, then they would not be parameters for the solution to that problem, even though the to-be-designed artifact that meets the requirements is supposed to solve a (perceived) problem. Further, the authors say that engineers claim their designs are optimal solutions; but can one really say that a design is the solution, rather than the outcome of the design, i.e., the realisation of the working artifact?

If it is the case, as the authors summarise, that technology aims to change the world, then one needs to know what one is changing, why, and how. If something is already ‘perfect’, then there is no need to change it. So, provided it is not ‘perfect’ according to at least one person, there is something to change to make it (closer to) ‘perfect’; hence, there is a problem with that thing (regardless of whether this thing is an existing artifact or an annoying practice that perhaps could be solved by using technology). Thus, one—well, at least, I—would have expected ‘problem’ to be a point of inquiry, e.g.: what is it, what does it depend upon, is there a difference between a problem and a nuisance, how does it relate to requirements, design, and human action, the processes of problem identification and problem specification, the drive for problem-solving in engineering and the brownie points one scores upon having a solution, and even the not unheard of, but lamentable, practice of ‘reverse engineering the problem’ (i.e.: the ‘we have a new tool, now we think of what it [may/can/does] solve and that can be written in such a way as if it [is/sounds like] a real problem’).

In closing, the original PT entry was published in February this year and has quickly undergone a “substantive revision” meriting a revised release on 22 June ’09, so perhaps the area is active and in flux enough that a next version could include some considerations about problems.

[1] Franssen, Maarten, Lokhorst, Gert-Jan, van de Poel, Ibo, “Philosophy of Technology”, The Stanford Encyclopedia of Philosophy (Fall 2009 Edition), Edward N. Zalta (ed.), forthcoming URL =

Object-Role Modeling and Description Logics for conceptual modelling

Object-Role Modeling (ORM) is a so-called “true” conceptual modelling language in the sense that it is independent of the application scenario, and it has been mapped into both UML class diagrams and ER [1]. That is, ORM and its successor ORM2 can be used in the conceptual analysis stage for database development, application software development, requirements engineering, business rules, and other areas [1-5]. If we can reason over such ORM conceptual data models, then we can guarantee that the model (i.e., first order logic theory) is satisfiable and consistent, so that the corresponding application based on it behaves correctly with respect to its specification (I summarised a more comprehensive argumentation and examples earlier). And, well, from a push-side: it widens the scope of possible scenarios in which to use automated reasoners.
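To make concrete what reasoning over a conceptual data model buys you, here is a minimal, self-contained sketch (a toy propositional check over a made-up example, not the DLRifd encoding): a subtype declared under two disjoint supertypes is unsatisfiable, i.e., it can never have any instances, and a reasoner detects this before the schema ever reaches an implementation.

```python
from itertools import product

# Toy conceptual model: named classes with subsumption and disjointness
# axioms (a hypothetical example; the class names are illustrative only).
classes = ["Student", "Employee", "Manager"]
subclass_of = [("Manager", "Student"), ("Manager", "Employee")]
disjoint = [("Student", "Employee")]

def is_satisfiable(target):
    """Check whether some individual can belong to `target` without
    violating any axiom, by enumerating all membership assignments."""
    for bits in product([False, True], repeat=len(classes)):
        member = dict(zip(classes, bits))
        if not member[target]:
            continue  # this assignment does not instantiate the target
        if any(member[sub] and not member[sup] for sub, sup in subclass_of):
            continue  # subsumption axiom violated
        if any(member[a] and member[b] for a, b in disjoint):
            continue  # disjointness axiom violated
        return True   # found a consistent way to populate the class
    return False

print(is_satisfiable("Student"))  # True
print(is_satisfiable("Manager"))  # False: forced into two disjoint classes
```

Real reasoners of course use far cleverer algorithms than this exponential enumeration, but the verdict, an empty, hence pointless, class in the schema, is the same.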

Various strategies and technologies are being developed to reason over conceptual data models, meeting the same or slightly different requirements and aims. An important first distinction lies between two assumptions: either modellers should be allowed to keep total freedom to model what they deem necessary, with constraints subsequently put on which parts can be used for reasoning (or slow performance accepted), or it is better to constrain the language a priori to a subset of first order logic so as to achieve better performance and a guarantee that the reasoner terminates. The former approach is taken by Queralt and Teniente [6], using a dependency graph of the constraints in a UML Class Diagram + OCL and first order logic (FOL) theorem provers. The latter approach is taken by [7-15], who experiment with different techniques. For instance, Smaragdakis et al and Kaneiwa et al [7-8] use special-purpose reasoners for ORM and UML Class Diagrams, Cabot et al and Cadoli et al [9-10] encode a subset of UML class diagrams as a Constraint Satisfaction Problem, and [11-16] use a Description Logic (DL) framework for UML Class Diagrams, ER, EER, and ORM.
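To give a flavour of the constraint-based line of work, consider finite model reasoning over cardinality constraints: a binary association in which each instance of A must relate to exactly per_a instances of B, and each B to exactly per_b instances of A, forces per_a·|A| = per_b·|B| on the population sizes. The brute-force search below for the smallest non-empty populations is a toy sketch of that idea, not the actual CSP encodings of [9-10]:

```python
# Toy finite-model check for one association R between classes A and B,
# where every A relates to exactly `per_a` Bs and every B to `per_b` As,
# so per_a*|A| == per_b*|B| (= |R|) must hold in any finite model.
def finite_model_exists(per_a, per_b, bound=50):
    """Search for non-empty population sizes up to `bound` that satisfy
    the cardinality constraints; return (|A|, |B|) or None."""
    for na in range(1, bound + 1):
        for nb in range(1, bound + 1):
            if per_a * na == per_b * nb:
                return (na, nb)  # smallest populations found
    return None

print(finite_model_exists(2, 3))  # (3, 2): |A|=3, |B|=2, so |R|=6
```

If no such populations exist within any bound, the constraint combination admits only infinite (or empty) models, which is exactly the kind of subtle modelling flaw these reasoning services surface.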

Perhaps not surprisingly, I also took the DL approach on this topic, on which I started working in 2006. I put the principal version of the correspondence between ORM and the DL language DLRifd online on arXiv in February 2007 and got the discussion of the fundamental transformation problems published at DL’07 [15]. Admittedly, that technical report won’t ever win a beauty prize for its layout or concern for readability. In the meantime, I have corrected the typos, improved the readability, proved correctness of the encoding, and updated the related research with recent works. Regarding the latter, it also contains a discussion of a later, similar attempt by others and the many errors in it. On the bright side: addressing those errors helps explain the languages and trade-offs better (there are advantages to using a DL language to represent an ORM diagram, but also disadvantages). This new version (0702089v2), entitled “Mapping the Object-Role Modeling language ORM2 into Description Logic language DLRifd” [17], is now also online at arXiv.

As an appetizer, here’s the abstract:

In recent years, several efforts have been made to enhance conceptual data modelling with automated reasoning to improve the model’s quality and derive implicit information. One approach to achieve this in implementations, is to constrain the language. Advances in Description Logics can help choosing the right language to have greatest expressiveness yet to remain within the decidable fragment of first order logic to realise a workable implementation with good performance using DL reasoners. The best fit DL language appears to be the ExpTime-complete DLRifd. To illustrate trade-offs and highlight features of the modelling languages, we present a precise transformation of the mappable features of the very expressive (undecidable) ORM/ORM2 conceptual data modelling languages to exactly DLRifd. Although not all ORM2 features can be mapped, this is an interesting fragment because it has been shown that DLRifd can also encode UML Class Diagrams and EER, and therefore can foster interoperation between conceptual data models and research into ontological aspects of the modelling languages.

And well, for those of you who might be disappointed that not all ORM features can be mapped: computers have their limitations and people have a limited amount of time and patience. To achieve ‘scalability’ of reasoning over initially large theories represented in a very expressive language, modularisation of the conceptual models and ontologies is one of the lines of research. But it is a separate topic and not quite close to implementation just yet.


[1] Halpin, T.: Information Modeling and Relational Databases. San Francisco: Morgan Kaufmann Publishers (2001)

[2] Balsters, H., Carver, A., Halpin, T., Morgan, T.: Modeling dynamic rules in ORM. In: OTM Workshops 2006. Proc. of ORM’06. Volume 4278 of LNCS., Springer (2006) 1201-1210

[3] Evans, K.: Requirements engineering with ORM. In: OTM Workshops 2005. Proc. of ORM’05. Volume 3762 of LNCS., Springer (2005) 646-655

[4] Halpin, T., Morgan, T.: Information modeling and relational databases. 2nd edn. Morgan Kaufmann (2008)

[5] Pepels, B., Plasmeijer, R.: Generating applications from object role models. In: OTM Workshops 2005. Proc. of ORM’05. Volume 3762 of LNCS., Springer (2005) 656-665

[6] Queralt, A., Teniente, E.: Decidable reasoning in UML schemas with constraints. In: Proc. of CAiSE’08. Volume 5074 of LNCS., Springer (2008) 281-295

[7] Smaragdakis, Y., Csallner, C., Subramanian, R.: Scalable automatic test data generation from modeling diagrams. In: Proc. of ASE’07. (2007) 4-13

[8] Kaneiwa, K., Satoh, K.: Consistency checking algorithms for restricted UML class diagrams. In: Proc. of FoIKS ’06, Springer Verlag (2006)

[9] Cabot, J., Clariso, R., Riera, D.: Verification of UML/OCL class diagrams using constraint programming. In: Proc. of MoDeVVA 2008. (2008)

[10] Cadoli, M., Calvanese, D., De Giacomo, G., Mancini, T.: Finite model reasoning on UML class diagrams via constraint programming. In: Proc. of AI*IA 2007. Volume 4733 of LNAI., Springer (2007) 36-47

[11] Calvanese, D., De Giacomo, G., Lenzerini, M.: On the decidability of query containment under constraints. In: Proc. of PODS’98. (1998) 149-158

[12] Artale, A., Calvanese, D., Kontchakov, R., Ryzhikov, V., Zakharyaschev, M.: Reasoning over extended ER models. In: Proc. of ER’07. Volume 4801 of LNCS., Springer (2007) 277-292

[13] Jarrar, M.: Towards automated reasoning on ORM schemes–mapping ORM into the DLRifd Description Logic. In: Proc. of ER’07. Volume 4801 of LNCS., Springer (2007) 181-197

[14] Franconi, E., Ng, G.: The ICOM tool for intelligent conceptual modelling. In: Proc. of KRDB’00, Berlin, Germany (2000)

[15] Keet, C.M.: Prospects for and issues with mapping the Object-Role Modeling language into DLRifd. In: Proc. of DL’07. Volume 250 of CEUR-WS. (2007) 331-338

[16] Berardi, D., Calvanese, D., De Giacomo, G.: Reasoning on UML class diagrams. Artificial Intelligence 168(1-2) (2005) 70-118

[17] Keet, C.M. Mapping the Object-Role Modeling language ORM2 into Description Logic language DLRifd. KRDB Research Centre, Free University of Bozen-Bolzano, Italy. 22 April 2009. arXiv:cs.LO/0702089v2.

Blog post analysis revisited

A year ago I had a look at which posts were (still) being visited and, to a limited extent, which topics were apparently more in vogue than others. In short: some old posts were still receiving visitors, and posts about my research comparatively less. Data analysis, however, may have been hampered by the low number of posts (32) and visits (3254) and the young age of the blog. I checked again in December ’08 and yesterday: I now have 62 posts with 7886 hits on the individual posts and 12682 overall, so things are going upward.

For what it is worth: some old posts are still receiving visitors regularly, posts about my research comparatively less—but slightly better now—and posts about other people’s research are second to none. More precisely, but aggregated (raw data available upon request): in June ’08 the average visits per my-research post was 80.9 and in June ’09 it is 86.9, compared to 196 and 209.5, respectively, for the average visits per other-research post, whereas the former’s percentage of total visits went up from 10 to 12% (stable at 39 for ‘other research’). The two other main categories, ‘conference blogging’ and ‘other topics’, hover a little above 100 average visits per post. Interesting to me was to observe that while the ‘other topics’ posts are largest in number, in total and average visits they hand it over to ‘other research’ (the other two remaining roughly stable), and that I actually had published far fewer posts about my own research results than I thought (a mere 11 out of the 62). Perhaps I’m too hesitant to do all the shameless self-promotion; the research results and projects are there.
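For the record, the per-category figures above are plain averages: total visits in a category divided by the number of posts in it, and a category's percentage is its share of all visits. A minimal sketch with made-up per-post counts (the real raw data is, as said, available upon request):

```python
# Made-up per-post visit counts per category; NOT the blog's actual data.
visits = {
    "my research": [80, 95, 86],
    "other research": [200, 219],
}

# Average visits per post, per category.
averages = {cat: sum(v) / len(v) for cat, v in visits.items()}

# Each category's share of all visits, in percent.
total = sum(sum(v) for v in visits.values())
shares = {cat: round(100 * sum(v) / total, 1) for cat, v in visits.items()}

print(averages)  # {'my research': 87.0, 'other research': 209.5}
```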

In case you would like to know which posts received the most hits over the past three years, check out the vox populi page (it corresponds roughly with external linking and search terms).

A collection of parameters for ontology design

Ontology design is still more of an art than a science. A methodology, Methontology, does exist, but it does not cover all aspects of ontology development. Likewise, there are tools, such as Protégé and the NeOn toolkit, that make several steps of the whole procedure easier. But, with the plethora of resources around, where should one start developing one’s own domain ontology, what resources are available for reuse to speed up its development, and for which purposes can the ontology be developed?

The novice ontology engineer would have to go through much of the extant literature, read case studies and draw their own conclusions on how to go about developing the ontology, or attend ontology engineering courses or summer schools, which is a rather high start-up cost.

To ameliorate this, but also to save myself from repeating such information informally, I have tried to condense that information into, effectively, four Springer-size pages [pdf] (plus a one-page intro and one page of references) [1]. The paper contains a grouping of the input parameters that determine the effectiveness of ontology development and use, categorised along four dimensions: purpose, ontology reuse, ways of ontology learning, and the language and reasoning services.
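The four dimensions lend themselves to a simple checklist one can walk through at the start of a project. In the sketch below, the dimension names come from the paper, but the example values per dimension are my own illustrative placeholders, not the paper's actual parameter list:

```python
# The four dimensions from the paper; the candidate values listed for
# each dimension are hypothetical placeholders for illustration only.
design_parameters = {
    "purpose": ["data integration", "annotation", "decision support"],
    "ontology reuse": ["foundational ontology", "domain ontology", "none"],
    "ontology learning": ["manual", "from text", "from a database schema"],
    "language and reasoning services": ["OWL 2 DL", "OWL 2 EL", "FOL"],
}

def checklist(choices):
    """Report which dimensions a set of design choices leaves undecided."""
    return [dim for dim in design_parameters if dim not in choices]

# A project that has settled purpose and language, but not yet reuse
# or the way of building the ontology:
print(checklist({"purpose": "annotation",
                 "language and reasoning services": "OWL 2 DL"}))
# ['ontology reuse', 'ontology learning']
```

The point is merely that making each dimension an explicit, answerable question turns the diffuse "where do I start?" into a finite to-do list.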

The aim was to be brief, so while the list of parameters is long, the list of references is comparatively short—but the references are kept diverse, and they do cover the different paradigms around instead of just one. (A version with lots of references is in the making.)

The paper has several examples taken from the agriculture domain, building upon experiences gained in previous and current projects and related literature. It is noteworthy, however, that the development of agri-ontologies is in its infancy. For a relatively seasoned ontology engineer, most, if not all, parameters may already be known to a greater or lesser extent, but from the intended audience’s perspective, the paper was deemed to be a timely, much needed, and useful overview. My impression is that those reviewers’ comments say more about the knowledge transfer—well, the lack thereof—from one discipline to another than about the modellers and domain experts.

For those of you who are interested in agri-ontologies and would like to know more about the latest developments in that area, there is the (third) special track on agriculture, food and the environment during MTSR’09 in Milan 1-2 October.


[1] Keet, C.M. Ontology design parameters for aligning agri-informatics with the Semantic Web. 3rd International Conference on Metadata and Semantics (MTSR’09) — Special Track on Agriculture, Food & Environment, Oct 1-2 2009 Milan, Italy. Springer CCIS. to appear.