Recently, for different reasons, I came into contact with the discipline of “Philosophy of information”, apparently founded by Prof. Floridi. The discipline is defined as:
the philosophical field concerned with (a) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, and (b) the elaboration and application of information-theoretic and computational methodologies to philosophical problems.
How does philosophy of information relate to computer science, the broad topic of my research interests? My impression is that it has a postmodernist attitude.
Updated (see end of post)
Comments on a PoI paper
I focus here on an article (An Analysis of Information in Visualisation) since it deals with topics I am familiar with. I single out a few details, as a careful reviewer would.
The authors draw a parallel between information and “currency”; in the abstract we find the sentence:
information is the fundamental ‘currency’ exchanged through a visualisation pipeline
This link to economics is merely stated, neither motivated nor explained. Apparently ‘currency’ is introduced gratuitously to add value (no pun intended) to the concept of pipeline.
The quality of figure 1 in the paper does not reflect well on the “practical” knowledge of the main topic of the paper. At least it indicates a limited achievement according to Dublin Descriptor 2: “can apply their knowledge and understanding”.
The visualization pipeline shown in figure 1 is presented as a contribution from the authors, though apparently it is copied from (Chen, M., H. Jänicke. 2010) with some minor changes. Notably the required citation is missing.
The figure has two additions: the Human-Computer and Human-Human boxes, whose meaning is never explained in the paper.
On page 13, after a few arguments concerning the application of “Floridi’s categorisation” to the visualization pipeline, the authors conclude:
Based on the above observations, we can conclude that, in general, Floridi’s categorisation by meaning is applicable to the information in a visualisation pipeline
Actually, given the lack of depth and generality of the arguments, I would be surprised if the authors were not able to reconcile the two models.
In the section entitled “An Information-theoretic Framework for Visualisation” the authors “examine the similarities and differences between a communication system and the central path of a visualisation pipeline”. And later they state:
Probabilistic Nature. Many aspects of a visualisation pipeline feature events and phenomena with probabilistic certainty or uncertainty, bearing a striking resemblance with a modern communication pipeline.
Actually I would be surprised otherwise: the central path of the visualization pipeline has been taken (uncredited) from the paper by Chen and Jänicke (“An information-theoretic framework for visualization”). Apparently all the effort in the section is devoted to demonstrating that a model built on the basis of information and communication theory bears a “striking resemblance with a modern communication pipeline”.
From page 19 to 23 there is a long discussion of the “data processing inequality” that mostly derives (uncredited) from section 6.3 of the original paper by Chen and Jänicke. The novel part focuses on how it is possible to “break the Markov chain condition”, which takes a puzzling perspective: the authors seem to treat the Markov-chain condition as a constraint imposed on the visualization pipeline / system / software, while it is actually a simplifying modelling assumption, usually taken for granted in order to build a tractable mathematical model.
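For reference, the data processing inequality says that for a Markov chain X → Y → Z (Z depends on X only through Y), we have I(X;Z) ≤ I(X;Y): no post-processing can increase the information about X. A minimal numeric sketch of this (my own toy example with two cascaded binary symmetric channels, not taken from either paper):

```python
import numpy as np

def mutual_info(p_xy):
    """Mutual information I(X;Y) in bits from a joint pmf matrix p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)   # marginal P(x)
    py = p_xy.sum(axis=0, keepdims=True)   # marginal P(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / (px * py))
    return np.nansum(terms)               # 0 * log 0 treated as 0

# Toy Markov chain X -> Y -> Z: each stage is a binary symmetric
# channel with flip probability 0.1, uniform input.
p_x = np.array([0.5, 0.5])
flip = 0.1
channel = np.array([[1 - flip, flip],
                    [flip, 1 - flip]])    # channel[i, o] = P(out=o | in=i)

p_xy = p_x[:, None] * channel             # joint pmf of X and Y
p_y = p_xy.sum(axis=0)
# Markov condition: P(x, z) = sum_y P(x, y) P(z | y)
p_xz = p_xy @ channel

print(mutual_info(p_xy))                  # I(X;Y)
print(mutual_info(p_xz))                  # I(X;Z), never larger than I(X;Y)
```

The second stage can only lose information about X, exactly as each processing step in a pipeline modelled as a Markov chain can. “Breaking” the chain condition does not change the pipeline; it only means the model no longer applies.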
There are a couple of other places where the authors borrow extensively from the original paper by Chen and Jänicke (without a clear citation or attribution):
- on page 19 there is a discussion about the probabilistic description of information and the weird expression “the probabilistic space underlying probability mass functions becomes undefined”, the whole paragraph looks like a whimsical version of section 9 in the original paper by Chen and Jänicke,
- the very conclusions section of the paper draws extensively (uncredited) on the same section 9 of the original paper by Chen and Jänicke.
The examined paper contains large (uncredited) portions taken from a previous paper published in a high-ranked scientific journal (IEEE Transactions on Visualization and Computer Graphics). Most of the reported material lacks contextualization and depth.
Ruling out plagiarism, what remains is a sort of postmodernist take on ICT topics. Probably it is a sign of the times that a few years ago fashionable nonsense was sacking mainly physics while now it targets computer science.
I just realized that the two papers have an author in common (Chen): this explains the extensive similarities between the two works. Still, the only novelty in the paper is the “demonstration” that one vague model can be reconciled with another model that is more precise, though reported here in highly simplified terms.