Comments on a book: PHILOSOPHY of INFORMATION

By: Peter Pappenheim

A thick book came to my attention through an interview in my newspaper, NRC, with Sander Bais, editor of and contributor to that book. In that interview he said: “The concept of entropy has led to a definition of the concept of information. Thus the second law (of thermodynamics) lies at the basis of informatics.” That raised my curiosity, as in my book (*) I reject entropy as an adequate definition of information. (In his contribution to the book, “The Physics of Information”, he made no such assertion.) He also lamented the deep gulf between the sciences and the humanities exposed by C.P. Snow half a century ago. His main complaint was that few scientists other than physicists had any knowledge of the second law of thermodynamics. As my book also contains an explanation of the gulf between academic disciplines, I will deal with that first. I will then present a summary of my conclusions about information based on today’s knowledge of molecular biology. In part 3a) I will apply these findings to Dretske’s definition of information, and in 3b) I will make some remarks about three aspects of the physics of information: quantification of information, entropy and order.


Philosophy comes in many guises: as a guide to wisdom, as training in clean thinking, as an intellectual parlor game, as literature, etc. But it also – as John Locke’s ‘under-laborer’ – prepares the ground for others to build on by laying a solid foundation for our efforts to find out what it means to be a human being and to get a grip on our empirical world through the use of our capability to reason and of our symbolic language, natural and formal. That function is the domain of epistemology and the philosophy of science, which for instance investigates their methodology and the relations between various disciplines. All this belongs to the field of information, whose primary function is to provide knowledge about “what there is”. The real gulf between scientific disciplines lies in a property of their subject and mainly concerns methodology.

The classic demarcation between science and other forms of obtaining knowledge is Popper’s method of justification. To be scientific, a theory or (the deduction of a) factual statement should be axiomatizable and conclusively falsifiable by confrontation with facts. Failure of axiomatization (truth of axioms, consistency and correct application of language, including the formal languages of logic and mathematics) eliminates a statement from the qualification of scientific, because such a statement cannot tell us anything more reliable than throwing dice. Physics and chemistry also allow conclusive decisions through confrontation with (basic) facts. But what about the aspirations to the title of science of anthropology, psychology, economics, sociology etc., and of the philosophy of the human mind? Most of the theories in these fields do not allow conclusive statements about their falsity through actual confrontation with facts. To some extent that applies also to biology.

In practice, two positions have been taken. Rutherford and his followers hold that “all science is physics, the rest is stamp collecting”; they promote or even mandate the use of physical concepts to explain the phenomena studied by the above disciplines, and exclude the rest from the field of science. The other position is that of most practitioners of these other disciplines: they refuse to relinquish the claim to science but – because they cannot meet the requirements of his demarcation criterion – simply ignore Popper c.s. That indeed reduces them to stamp collecting and greatly undermines the effectiveness of their ‘science’ in guiding our decision-making.

That suggests the solution proposed in my book, namely to acknowledge the methodological difference between the science (and philosophy) of the inert world and that of the living world. Science always develops theories and models of its object. Concerning the science of the inert world, we require them to conform to Popper’s criterion, with the above qualification of ‘conclusive’, and some provisions for the uncertainty at the level of elementary particles already proposed by Popper. With the living world, the (mathematical) structure of realistic models (complex, interdependent, dynamic and stochastic, with elements that can learn) of the object of investigation almost always precludes conclusive decisions about their falsity through confrontation with facts. The properties of inert entities are independent of their place in time and space: water boils at 100 °C at sea level, a century ago and tomorrow, in the Pacific Ocean and the North Sea. The properties of a living entity, on the other hand, depend on its position in time and space, especially because it can learn. We can, however, and therefore should, require that theories about them be axiomatizable. If they are, we often can rank competing theories (the theory “no knowledge” always provides a default value) in terms of Popper’s verisimilitude; that provides a justifiable basis for deciding between competing statements about facts in terms of likelihood. The very nature of the process of life remains, however, incompatible with certitude. Its criterion is not ‘optimal’ but ‘adequate’, and its basic tool towards adequacy is the elimination of what is inadequate, in the case of science by applying the demarcation criterion and the ranking in verisimilitude permitted by the object of its investigation. Theories that do not meet the requirement of axiomatization must be excluded from the realm of science.

So there are two gulfs: one between scientific and other kinds of knowledge, and a second one between the sciences that deal with the inert world and those that deal with the living world. The first is defined by the requirement of evidence. The second gulf is the more pernicious, because in the living world it usually is not possible to establish such a one-to-one correspondence between a theory and the result of its application. But culture can perform what nature cannot: axiomatization (an example is the requirement that in any mathematical model of a phenomenon the number of equations be equal to the number of dependent variables) and determining verisimilitude. That should be the job of science and philosophy. Failure to acknowledge and perform it precludes the solution of the biggest problem we have: how to ensure that the immense power which physics has put at our disposal be used for the evolution of the living world towards greater negentropy, in the terminology of the physicist, or – in plain language – to increase its chances of survival. Bridging the second gulf is not a one-way street: physicists also should be aware of the results of certain other disciplines to ensure adequate application of their own findings.


To our current knowledge, everything in our universe is ruled by certain ‘laws’, amongst which the two laws of thermodynamics that rule processes. The first law of thermodynamics is the starting point, the ‘where we are’, before any process has changed it; it says: “without a process, nothing is gained, nothing is lost” (= rejecting any appeal to a miracle as a valid explanation for an event). The second defines the general and final direction of any process, the ‘to where everything goes’, namely to chaos, deadly uniformity, to entropy. The entire universe respects these laws, except – at least as far as we know – a small world that rebelled: the living world. It wants to go the other way, to decrease its entropy, or rather to increase its negentropy (like Maxwell’s demon) by selecting, amongst all possible processes and actions, those that are suitable to this objective through a specific and unique process: information. We call it the living world, and the rest is ‘inert’. This self-oriented use, to its own end, of the inert world – including all its laws – is exclusive to the living world. The legitimate and even obvious criterion for investigating elements and features of living beings is the function that they fulfill in their survival and propagation. The concept of function is exclusively applicable to living beings and their features and products.

The main girder for bridging the gulf between the sciences of the inert and the living world is current molecular biology. Like everything in this world, it is firmly grounded in physics and chemistry, but it also shows that certain concepts must be added to those of physics to understand and deal with the living world. To apprehend what life is, the logical and efficient way is to look at it in its simplest manifestation that we can with certitude qualify as a living being, say a bacterium, as done by Nobel prize winner Jacques Monod in his famous essay “Le hasard et la nécessité” (***). Like all living creatures, a bacterium must extract from its environment the energy and materials necessary for its survival and reproduction by acting on that environment. Acting implies a process, and that inevitably produces entropy (which could be called a loss of inner (free?) energy over the whole system: the bacterium and its environment). The source of energy or materials in its vicinity on which the bacterium feeds will be depleted and must be replaced. As long as that has not been achieved, the process can produce only a loss, diminishing whatever increase of inner energy has previously been achieved, until the bacterium is swallowed by the flow of entropy governing the inert world and ceases to be a bacterium. What saves the bacterium, as well as all living beings, is the ability to set a process on or off by a biological switch, a decision directed by an information process that registers the state of the environment relevant to the process, which in the case of the bacterium is the presence or absence of sugar.

In the example given in the book, the switch is a molecule in its gene, the promoter, which sets in motion the production of an enzyme required for the ingestion of sugar as long as it has not received the information ‘no sugar’. That information is carried by a molecule: an inhibitor. It inhibits the production of the enzyme by bonding to a promoter which – in its bare state – would start the production of the enzyme. The inhibitor also bonds to sugar molecules; it then cannot bond to the promoter and thus cannot transmit the information “no sugar”, and the enzyme is produced. The elements of this information process then are:

- A subject that requires knowledge
- The object about which information is sought: sugar
- Searching for information, here by producing inhibitors
- An information carrier: the inhibitor, which can bond to sugar and, if not, to a promoter
- An entity that can give meaning to a free inhibitor: the promoter
- The meaning of a free inhibitor: no sugar

The bonding of a free inhibitor to the promoter produces the information ‘no sugar’ (an inhibitor bonded to sugar has the default value ‘no information’).

In the case of a bacterium, that information also is knowledge, as the only verification that can be recorded is the survival or demise of the subject.
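The switch logic of this example can be sketched in a few lines of code. This is a toy illustration of my own, not biochemistry from the book; the names are merely labels for the roles listed above.

```python
# Toy model of the bacterium's information process described above.
# A free inhibitor that reaches the promoter carries the meaning "no sugar"
# and switches enzyme production off; an inhibitor bonded to sugar never
# reaches the promoter (default value: "no information").

def enzyme_produced(sugar_present: bool) -> bool:
    """Return True if the enzyme for ingesting sugar is produced."""
    # The cell constantly produces inhibitors: the search for information.
    inhibitor_bound_to_sugar = sugar_present   # the carrier registers the environment
    inhibitor_free = not inhibitor_bound_to_sugar
    # Only a free inhibitor can bond to the promoter and block production.
    promoter_blocked = inhibitor_free
    return not promoter_blocked

print(enzyme_produced(sugar_present=True))   # sugar present -> enzyme produced: True
print(enzyme_produced(sugar_present=False))  # "no sugar" transmitted -> False
```

The point of the sketch is that the physical carrier (the inhibitor) has no meaning in itself; meaning arises only when it meets the entity that can interpret it (the promoter).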

The information carriers, the media, attract most of the attention of science. The above bacteria produce inhibitors that can bond to sugar. For our vision we produce light-sensitive cones and rods that register light reflected from the object and transmit it to brain cells, thus determining the state of those cells. The production of information carriers and their physical states are physical processes; they belong to the realm of physics and are directed by the laws of physics. Unique in our world is Homo sapiens’ ability to make his knowledge the object of further information processes, to manipulate it, to create a virtual world that helps to deal with the real one. (Never mind that the distinction between real and virtual is suspect in philosophy: in the end the real world always has the last say.) The most powerful mental tools are mathematics, complemented by ever more powerful computing devices. These also belong to the media. As emphasized by science-fiction ‘futurologist’ Raymond Kurzweil, there is no end in sight to our power to manipulate our world, other than the end of our universe. He, and many others, equates computational power with intelligence, a sign of what the French call “idiots savants”.

The proof of the pudding, and the existential reason of information, lies in the adequacy of the decisions – taken on the basis of that information – to achieve the objectives of the being concerned. The ultimate objective of the decision-maker is his viability and propagation, in the terms of the physicist to run against the second law. His success depends on the ‘correctness’ of the information engaged in the decision. The need to take decisions generates the need to know. It is this need to know, coupled to the ability to give meaning to information carriers that by themselves have none, which is particular to the information process and thus defines it. These concepts, as well as the objective of running against the second law, and others such as learning, are not part of physics. They belong to the life sciences and philosophy. A philosophically justifiable basis for understanding and evaluating any feature of a living being, for instance information, is the function it performs in the race against the second law. That engages all science: physics, biology, psychology, all social sciences, and also philosophy as meta-science.

All these disciplines must work together to construct a realistic model of man and his place in the universe, which requires a shared understanding of life. I have therefore devoted a section in my book to “What every scientist and philosopher should know about life and information” (some of it is summarized above) and the consequences it has for the concepts we use. It is intended as a starting point and open to any suggestion for improvement that satisfies the criteria for democratic (= scientific) argumentation for establishing facts.


a) The definition of information. When in doubt about a scientific or philosophical assertion, theory etc., I start by carefully examining the initial definitions and – often implicit – assumptions, which in almost all cases are the culprits of inadequacy. Doing so for Philosophy of Information, I found no adequate definition of information in this book. An attempt is made right on the first page (29) of the first chapter after the introduction, paragraph “NECESSARY CLARIFICATIONS: MEANING, TRUTH, AND INFORMATION”, by Fred Dretske.

He uses as an example information booths in railway stations. They are supposed to provide answers to questions about when trains arrive or depart, and not just any answer. And, as he says, we would like them to be true. He then defines information as “true answers”. There I object. An answer presumes not only a question; it also presumes that the answer has meaning for the person who asks it; this meaning therefore must already exist before the answer is given. The first property of an answer, of information, is not truth but meaning. Dretske’s example belongs to a specific realm of information, communication, in this case between the railroad people and the traveler; to function, their meanings must overlap. If the meanings do not overlap, there is no communication, and the notion of truth has no content. Meaning then has priority over truth. The whole process is initiated by a need; in this case that of the traveler.

Following this information process step by step, we find that it is not compatible with Dretske’s application of truth as the criterion for the existence of information. The knowledge transmitted by reading the booth really is: “the railway people expect the train to arrive at xxx hour.” That is the only meaning that could be true, unless the writer is a prophet. If we assume that there is sufficient overlap in the meaning of the signs (words), then there is information. But there is not yet knowledge. To achieve that status, the information must be followed by a qualification about its reliability. In the case of our senses, ‘true’ is the default value we attach to their information, a necessity for senses to function. If the train does not arrive as predicted, you have learned something, namely not to consider the figures on the booth as more than a probability. If you believed the information on the booth, you did not leave the booth with knowledge, but with a false belief. Dretske’s definition generates a paradox similar to Schrödinger’s cat: whether at 10.15 the message on the booth is true information or a ‘false belief’ is determined only after 10.30. Yet this information is relevant only before 10.30. A train is not a quantum magnitude: whatever the message is at 10.15 does not change at 10.30. Even if you accept it as only a prediction, it is information as soon as you have reason to assume that its probability of being correct is more than 50% (if you assume it to be 100% true, you have done very little travelling). The correct difference between knowledge and belief is that knowledge concedes that it may be untrue even if holding it can be justified, while a belief does not make such a qualification (see below).
The normal, correct interpretation of the writing on the booth is that it is a prediction by the railroad authorities, and that as such it inevitably includes a stochastic element; that element is, or should be, part of the information that we derive from the message, and it usually is based on knowledge about the previous performance of the railroad (**). Concluding: truth can be a qualitative criterion for knowledge and information, but it cannot be part of their definition.

Information versus knowledge. As said, Dretske’s definition of information as ‘true answers’ has little if any ‘operational’ content. There is a definition of information that has such content, which I used in my book and which also is consistent with Michael Dunn’s paragraph “What is information” (page 582: “Information does not become knowledge… until it meets Plato’s three tests: believed, justified, true”). These are three elements of the more general and modern condition: ‘stored in memory as knowledge’. A definition could be: “Information is the state of the information process just before the attribution of a meaning, and it is potential knowledge.” The chapter “The physics of information”, for instance, deals exclusively with that stage. With the bacterium, it starts when it has produced inhibitors; a naked inhibitor is information. It becomes knowledge after reaching a promoter, where it gets the meaning: there is no sugar. In the case of the traveler, knowledge is the state of the brain cells connected to the cells of the retina of the traveler, and that is the result of a process that started after the traveler focused on the booth to find out when the train leaves. It ends after that state of those cells has been attributed the meaning “the train is expected to leave at 10.30” and is registered as knowledge, either immediately or after a check, a process that implies nonphysical, ‘subjective’ concepts such as relevance, need, meaning, utility, function, desire, etc. The methodological particularity of these subjective informational concepts is that they are dependent for their content on their place in time and space and therefore lack the common denominator that is required for quantification. In any of these fields, one and the same arrangement of dots differs – both between subjects and within a specific subject – according to the circumstances, the number of times it has occurred, where, etc.
Where Luciano Floridi (p. 117) asserts that “we know that information ought to be quantifiable, (at least in terms of partial ordering)…”, this applies only to information as potential knowledge. We have ample proof that any actual ranking of knowledge in terms of a criterion can be transitive or intransitive according to the ‘mood’ of the subject, and therefore is meaningless as a general measure of the relative importance of a specific bit of knowledge. This view is consistent with the chapter “INFORMATION AND NATURAL LANGUAGE”.

Besides clarifying a ‘real’ difference between stages of the information process, the correct definition of information and knowledge allows us to decide which part of the information process belongs to physics (mainly the media), and which part to life science and philosophy (the ‘need to know’, meaning and ‘knowledge’). The above definition also comes closer to the definitions found in Webster’s. Both terms are mostly synonymous in common language, and we will have to live with that. But the connotations are slightly different. Knowledge has the connotation of the end-state, the awareness of the receiver, while information refers more to what is available to become aware of or – in communication – what the receiver is intended to become aware of, which perforce must have preceded the awareness. If any distinction between information and knowledge is required, then the one presented above should be preferred on both counts. The distinction between information and knowledge also has another consequence: it allows us to determine the role and need of objectivity. The information process has two purely subjective elements: at the start, the need to know, the motivation and the means we use to register information; and at the end, the meaning we give to the registered information. The objective element is the correctness of that information, and it depends on the extent to which whatever we register is totally dependent on the state of the object about which we seek information. That engages the adequacy of the apparatus we use, for instance our eyes, brain cells and the connection between the two, which in any information process is a given. Any interference with this system can only reduce the correctness of the representation of the object of our inquiry. Some interference may be beyond our power to prevent, for instance a cloud drifting over the sun. But we can and must eliminate all other subjective elements, all value judgments except truth.
(The same physical information process as used in decision-making is used for many other purposes, for instance in art. Truth and objectivity then may not be a primary category for its evaluation.) Qualifications of information then can be ‘correct’ or ‘incorrect’; truth must be reserved for the next step: giving it meaning and accepting it as knowledge (see for instance Walliser, p. 558, lines 6/7).

b) The Physics of Information. That subject was presented by Sander Bais and J. Doyne Farmer. It engages subjects such as the quantification of information, entropy and order. I am not qualified to evaluate its correctness and have found nothing that might induce me to turn to someone who is. I am concerned with the application of their findings.

What can the quantification of information tell us? S/F open with: “All information is carried, retrieved and processed by machines, whether they be electronic computers or living organisms. All information, which in an abstract sense one can think of as a string of zeros and ones, has to be carried by a physical substrate, be it paper, silicon chips or holograms, and the handling of this information is physical, so information is ultimately constrained by the fundamental laws of physics.” On p. 3, S/F define the object of the physics of information as a subfield of physics: “In fact, all science can be seen as an application of the principle of maximum entropy, which provides a means of quantifying the trade-off between simplicity and accuracy.” The conclusion starts with: “The basic method of scientific investigation is to acquire information about nature by doing measurements and then to make models which optimally compress that information. Therefore information theoretic questions arise naturally at all levels of scientific enterprise: in the analysis of measurements, in performing computer simulations, and in evaluating the quality of mathematical models and theories. … Forecasting is a process whose effectiveness can be understood only in terms of the information contained in the measurements … the whole scientific enterprise is reduced to the minimum description length, which essentially amounts to finding the optimal compromise between … We have seen that while description might have subjective elements, whenever we use the concept of entropy to ask concrete physical questions, we always get objective answers.” Their further examples of the explanatory power of entropy all hail from the field of physics.

The physics of information therefore does not quantify information itself; it quantifies the capacity of information carriers and the efficiency of the processes used to manipulate information, by minimizing the risk that the knowledge we have gathered is not really knowledge but due to a chance event. At least that is how I have understood it; as I am not a certified physicist, I sent Sander Bais a detailed explanation of how I arrived at that conclusion, with the request to point out any error, but he declined to do so. Entropy enables us to understand (determine?) the potential, the effectiveness, of a specific information process or carrier to generate the information we want. It may lie at the basis of today’s informatics, of the manipulation of information, but it neither defines nor quantifies information. Equating computers with living beings in the processing of information is justified if we deal exclusively with this capacity and efficiency. But unless that limitation is constantly emphasized, it can lead to wrong conclusions, mainly because what is shown to be ‘more’ may also be equated with ‘better’, or because a carrier that can carry more bits may be assumed ipso facto to carry better information. If we use the concept of entropy to ask concrete physical questions, we expect to get physical and therefore ‘factually objective’ answers. The whole article gives no indication that information is not a purely physical entity, that it exists only in the living world.
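A minimal sketch of my own (not code from the book) may make the point concrete: Shannon entropy, the kind of quantity the physics of information works with, is computed from symbol frequencies alone, so it measures the capacity demanded of a carrier, not the meaning or value of the message to a subject.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average bits per symbol, computed from symbol frequencies alone."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

vital = "train leaves 10.30"
scrambled = vital[::-1]  # same symbols, meaning destroyed

# Identical symbol statistics give identical entropy, whatever the meaning:
print(abs(shannon_entropy(vital) - shannon_entropy(scrambled)) < 1e-9)  # True
```

The scrambled message carries no train schedule for any traveler, yet by this measure it is exactly as ‘informative’ as the original; which is precisely why entropy quantifies carriers and processes, not information in the sense defended above.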

Why is that distinction important? All the improvements that the physics of information can generate in the computational field are wasted if the input is invalid: “garbage in, garbage out.” The input never consists of the facts, the events, themselves; it always takes the form of symbols representing these facts. The same applies to the output. To reach the maximum potential that can be achieved by the process, there must be a durable and correct one-to-one correspondence between facts and symbols (for “information”, see my chapter about Dretske). However innocent the application of the formulas of thermodynamic entropy to order-entropy may be, we do not know the probability of errors it may generate in its application, for instance in philosophy, unless the above limitation of its scope is part of the theory. And of course the computations of its application must be correct; if they are mathematical, that assumption usually is justified. If only logic is involved, it is not. The occurrence of petitio principii and ex falso quodlibet in renowned works of philosophers and even social scientists is disconcerting. That the proof of a correlation between two events does not by itself identify which is the cause and which the result is widely ignored. For instance, reversing Rawls’ order of cause and effect between “a well-ordered society” and “the principle of justice” yielded my solution of his problem of compliance. The notion of entropy can improve the use of quantum mechanics. But in terms of the contribution of information to the survival and well-being of the human race and the environment on which it depends, improving the effectiveness of social science and philosophy in those terms is far more urgent. In any case, an adequate concept of the nature of life is a prerequisite.

The second law, this time both in its normal thermodynamic and its ‘order’ guise, mandates that we ensure the contribution of information in these fields by eliminating wrong or redundant answers and ineffective processes.


It is a very interesting book that opened many horizons for me. It also confirms that the gap between physics and social science/philosophy is still widening, for it deals with a central aspect of life, information as part of the decision-making process, yet most contributors engage only those scientific tools with which we also approach the inert world. To achieve their full potential, the subjects of this book must be integrated into the process of human decision-making, for instance by presenting its conclusions in a form that can be understood by potential users, amongst whom we must – given its title – include philosophers. That engages a trade-off not mentioned in the book: between accuracy and shortness of description on the one hand, and accessibility to the user on the other. That job would require collaboration of the authors of Philosophy of Information with philosophers from various fields of philosophy, unless it never was intended for them.

To narrow the gap between the two sciences, we cannot rely on nature to monitor social science and philosophy; that must be done by groups that contain scientists from various faculties, as explained in the concluding chapter of my book (*). The bottleneck in today’s human information processing is not the computing part, but what comes before (motivation, meaning and data) and after (truth). The improvements of the human condition to be expected from technical advances are peanuts compared with the benefits of improving – by an effective social science – the way we treat each other and our environment. It is not a matter of quantity of information: we are drowning in it. The first job now is selection on truth.

My own science, economics, seems to have lost whatever contact it had with reality and must be rebuilt from the bottom up, starting with a review of all the needs we must satisfy, whether they can be allocated by the market or not. Axioms about facts, such as the psychology of its actors, should be consistent with what science can tell us about them instead of being dictated by theoretical convenience. All feedback loops, for instance with the political world and with all external effects, must be charted and dealt with in one model. The only realistic model of life is evolution, and that is incompatible with any optimizing aspirations (see Challenge to Reason by C. West Churchman, 1968) and with notions of a stable equilibrium, whether evolutionary (Nash) or not. We should direct our predictive and controlling efforts not so much towards where we should go, which in practice is fairly gratuitous, but towards how to avoid disasters, which are often predictable and objective (as explained in my 1979 book “Vooruitgang zonder Blauwdruk”).

Above all, the allocation of our intellectual resources is totally uneconomic, wasteful. We cannot control the creative process, which by its nature must contain a large stochastic element. But we can and must separate the wheat from the chaff. When I call Ray Kurzweil and his consorts “idiots savants”, I really mean it. His Artificial Intelligence is not intelligence; it remains a computational power that manipulates information to produce new information. If it is part of the whole information process and fulfills its function in decision-making, it produces information. Unless an artificial ‘something’ like a computer can use the information it produces to ensure its own survival and propagation, it remains a product of human intelligence and ingenuity. Another reason for my qualification of Kurzweil c.s. is his implicit assumption that the achievements he envisions, including immortality, not only are sensational, but also of great urgency. Rubbish. We have one all-important problem to solve: how to get humans, basically chimpanzees with an overgrown intelligence, guns and an atom bomb, to cope with the fact that their emotional, especially moral, endowments have failed to catch up with their destructive power. That must be a cooperative venture of philosophers and scientists of every discipline.

A similar conclusion follows from the chapter “EPISTEMIC LOGIC AND INFORMATION UPDATE”, p. 361 of the book. The authors note that at the time of Plato, knowledge was defined as ‘true, justifiable belief’. They then start by investigating ‘justifiable belief’, which differs from the ‘Platonic’ concept of ‘knowledge’ in that a justifiable belief need not by that fact be true. Yet, unless Popper, Carnap, Tarski and many others are wrong, ‘justifiable belief’ is the normal end product of most information processes. If we require that knowledge be really true, then we must conclude with Socrates that we seldom if ever have any knowledge at all, and that for instance scientific theories do not qualify as such; see (*), chapter “Against the autonomous existence of Popper’s world three objects”. Tautological and analytical statements are an exception, but only by definition. They are of little use unless they are part of, or applied to, a synthetic one. I cannot understand why today we still have to follow Plato and consider that knowledge is only knowledge if true, instead of accepting knowledge as a justifiable belief that has achieved the status of ‘most likely’ in a continuous process assessing its correctness, its truth, even if this status is conventional. Where further assessment of its truth is inefficient or impossible, we must make do with belief, and we usually can. We do so every time we leave our home and assume without looking that the sidewalk is still there. Once in a while some exceptional unforeseen occurrence, such as a deep leak, will have produced a hole instead, and we fall into it. That’s life! Truth itself is not a fact; it is a qualification we give our knowledge, either directly or as an objective to approximate as far as needed or possible in further information processes.

(*) The Conceptual Foundations of Decision-making in a Democracy. A table of contents is shown on this website.

(**) Dretske gives another example for his definition of information: decoy ducks in a duck hunt. They are a means of communication, information carriers, between hunters and ducks. There is communication as soon as the ducks see the decoys and identify them as looking like ducks. Communication always consists of two information processes. The hunters send the message: Hey, look, ducks! At first the ducks attach the same meaning to the decoys. But they are wary. Depending on their past experience, a group will either circle out of range to make sure they are right, or it may start to approach. The first group will usually escape unscathed; the second one may not. The information process of the ducks can produce ‘true’ information or misinformation, and their fate depends on it. But true or false, it is information. As with all knowledge, only analytical statements can be conclusively classified in terms of truth. In practice, we are often critical of a first impression and take a second look to check its truth. Even if the first impression were proved untrue, a piece of misinformation, it must have been information; otherwise we would not have checked it.

(***) After explaining to the layman the basics of life, Monod presented some consequences that his findings could have for philosophy. Given his professional workload, it is understandable that he did not find enough time to thoroughly investigate the significance of his findings for other fields of intellectual investigation. He noted that they justified Kant’s a priori (in the sense of “before”) of pure reason. And he ventured into their significance for ethics, a venture tainted by the title of the closing paragraph: “L’éthique de la connaissance et l’idéal socialiste”. He viewed ethics as serving man as a social being, and therefore considered some form of socialism inevitable. He specifically explained that the current forms of socialism, and certainly the historicist one, did not qualify. Yet this undermined his credibility and served as an excuse for ignoring his scientific findings. The basic ethics on which his socialism was to rest is an ethic of knowledge; Socrates would agree.