The Way we are Free

‘The Way we are Free’ . David R. Weinbaum (Weaver) . ECCO . VUB . 2017

Abstract: ‘It traces the experience of choice to an epistemic gap inherent in mental processes due to them being based on physically realized computational processes. This gap weakens the grasp of determinism and allows for an effective kind of freedom. A new meaning of freedom is explored and shown to resolve the fundamental riddles of free will, ..’. The supposed train of thought behind this summary:

  1. (Physically realized) computational processes underpin mental processes
  2. These computational processes are deterministic
  3. These computational processes are not part of people’s cognitive domain: there is an epistemic gap between them
  4. The epistemic gap between the deterministic computational processes and the cognitive processes weakens the ‘grasp of determinism’ (this must logically imply that the resulting cognitive processes are to some extent based on stochastic processes)
  5. The weakened grasp leads to an ‘effective kind of freedom’ (but what is an effective kind of freedom? Perhaps it is not really freedom, but it has the effect of freedom: a de facto freedom, or the feeling of freedom?)
  6. We can be free in a particular way (and hence the title).

First off: the concept of an epistemic gap resembles the concept of a moral gap. Is it the same concept?

p 3: ‘This gap, it will be argued, allows for a sense of freedom which is not epiphenomenal, ..’ (i.e. not a kind of by-product). The issue is of course ‘a sense of freedom’: it must be something that can be perceived by the beholder. The question is whether this is real freedom or a mere sense of freedom, if there is indeed a difference between the two.

‘The thesis of determinism about actions is that every action is determined by antecedently sufficient causal conditions. For every action the causal conditions of the action in that context are sufficient to produce that action. Thus, where actions are concerned, nothing could happen differently from the way it does in fact happen. The thesis of free will, sometimes called “libertarianism”, states that some actions, at least, are such that antecedent causal conditions of the action are not causally sufficient to produce the action. Granted that the action did occur, and it did occur for a reason, all the same, the agent could have done something else, given the same antecedents of the action’ [Searle 2001]. In other (my, DPB) words: for all deterministic processes the direction of the causality is dictated by the cause-and-effect relation. But for choices produced from a state of free will, other actions (decisions) are possible, because the causes are not sufficient to produce the action. Causes are typically difficult to deal with in a practical sense, because some outcome must be related to its causes, and this can only be done after the outcome has occurred. Even then the causes of that outcome are very difficult to identify, because identifying something as the cause requires an ‘if and only if’ relation. In addition, a cause is usually a scatter of processes within some given contour or pattern, one of which must then ‘take the blame’ as the cause.

‘There is no question that we have experiences of the sort that I have been calling experiences of the gap; that is, we experience our own normal voluntary actions in such a way that we sense alternative possibilities of actions open to us, and we sense that the psychological antecedents of the action are not sufficient to fix the action. Notice that on this account the problem of free will arises only for consciousness, and it arises only for volitional or active consciousness; it does not arise for perceptual consciousness’ [Searle 2001]. This means that a choice is made even though the psychological conditions to make ‘the perfect choice’ are not satisfied: information is incomplete, or a frivolous choice is made (‘should I order a pop-soda or chocolate milk?’). ‘The gap is a real psychological phenomenon, but if it is a real phenomenon that makes a difference in the world, it must have a neurobiological correlate’ [Searle 2001]. Our options seem equal to us and we can choose between them on a just-so basis (‘god-zegene-de-greep’, a Dutch idiom for a blind grab). Is it therefore not also possible that when people are aware of these limitations, they have a greater sense of freedom to make a choice within the parameters known and available to them?

‘It says that psychological processes of rational decision making do not really matter. The entire system is deterministic at the bottom level, and the idea that the top level has an element of freedom is simply a systematic illusion… If hypothesis 1 is true, then every muscle movement as well as every conscious thought, including the conscious experience of the gap, the experience of “free” decision making, is entirely fixed in advance; and the only thing we can say about psychological indeterminism at the higher level is that it gives us a systematic illusion of free will. The thesis is epiphenomenalistic in this respect: there is a feature of our conscious life, rational decision making and trying to carry out the decision, where we experience the gap and we experience the processes as making a causal difference to our behavior, but they do not in fact make any difference. The bodily movements were going to be exactly the same regardless of how these processes occurred’ [Searle 2001]. The argument above presupposes a connection between determinism and inevitability, although the environment is not mentioned in the quote. This appears to be flawed, because there is no such connection; I have discussed this (ad nauseam) in the essay Free Will Ltd, borrowing amply from Dennett (i.a. Freedom Evolves). The above quote can be summarized as: if the local rules are determined, then the whole system is determined; its future must be knowable, its behavior unavoidable and its states and effects inevitable. In that scenario our will is not free, our choices are not serious and the mental processes (computation) are a mere byproduct of deterministic processes. However, consider this argument by Dennett that is relevant here:

  • In some deterministic worlds avoiders exist that avoid damage
  • And so in some deterministic worlds some things are avoided
  • What is avoided is avoidable or ‘evitable’ (the opposite of inevitable)
  • And so in some deterministic worlds not everything is inevitable
  • And so determinism does not imply inevitability
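Dennett's avoider argument can be made concrete with a minimal sketch (the toy world, rule and numbers below are my own illustration, not Dennett's): a fully deterministic world in which an avoider nevertheless avoids harm, so harm is evitable.

```python
# A fully deterministic 1-D world: a hazard advances one cell per tick, and an
# avoider deterministically steps away whenever the hazard closes in. There is
# no randomness anywhere, yet the collision ("harm") never happens.

def step(hazard, avoider):
    hazard += 1                   # the hazard always advances (deterministic)
    if hazard >= avoider - 1:     # the avoider senses the hazard closing in...
        avoider += 2              # ...and deterministically steps aside
    return hazard, avoider

def run(ticks=20):
    hazard, avoider = 0, 5
    collisions = 0
    for _ in range(ticks):
        hazard, avoider = step(hazard, avoider)
        if hazard == avoider:
            collisions += 1
    return collisions

print(run())  # 0: fully determined, yet the collision is avoided, i.e. evitable
```

The point of the sketch is exactly Dennett's: determinism fixes what happens, but what happens can be that something is avoided.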

‘Maybe this is how it will turn out, but if so, the hypothesis seems to me to run against everything we know about evolution. It would have the consequence that the incredibly elaborate, complex, sensitive, and – above all – biologically expensive system of human and animal conscious rational decision making would actually make no difference whatever to the life and survival of the organisms’ [Searle 2001]. But by Dennett's argument the hypothesis cannot logically be true, and as a consequence nothing is wasted after all.

‘In the case that t2>t1, it can be said that a time interval T=t2-t1 is necessary for the causal circumstance C to develop (possibly through a chain of intermediate effects) into E. .. The time interval T needed for the process of producing E is therefore an integral part of the causal circumstance that necessitates the eventual effect E. .. We would like to think about C as an event or a compound set of events and conditions. The time interval T is neither an event nor a condition’ [p 9-10]. This argument turns out to be a bit of a sideline, but I defend the position that time is not an autonomous parameter, but a derivative of the ‘clicks’ of changes in relations with neighboring systems; this quote covers it perfectly: ‘Time intervals are measured by counting events’ [p 9]. And this argues exactly the opposite: ‘Only if interval T is somehow filled by other events such as the displacement of the hands of a clock, or the cyclic motions of heavenly bodies, it can be said to exist’ [p 9], because here time is the leading parameter and events such as the moving of the hands of a clock are the product. This appears to be the world explained upside down (the intentions seem right): ‘If these events are also regularly occurring and countable, T can even be measured by counting these regular events. If no event whatsoever can be observed to occur between t1 and t2, how can one possibly tell that there is a temporal difference between them, that any time has passed at all? T becoming part of C should mean therefore that a nonzero number N of events must occur in the course of E being produced from C’ [p. 9]. My argument is that if a number of events leads to the irreversible state E from C, then apparently time period T has passed. Else, if nothing irreversible takes place, then no time passes, because time is defined by the occurrence of ‘clicks’, not the other way around. Note that footnote 2 on page 9 explains the concept of a ‘click’ between systems in different words.
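The position that time is a derivative of counted ‘clicks’ can be sketched as follows (a toy model of my own, purely illustrative): elapsed time is defined as the count of irreversible state changes, so where no events occur, no time passes.

```python
# A toy "clock" in the spirit of the argument above: time is not a parameter
# that drives change; instead, elapsed time is *defined* as the number of
# 'clicks', i.e. observed state changes in relation to neighboring systems.

class ClickClock:
    def __init__(self):
        self.clicks = 0
        self.state = 0

    def interact(self, signal):
        if signal != self.state:   # only an actual change counts as an event
            self.state = signal
            self.clicks += 1       # a 'click': an irreversible change registered

    def elapsed(self):
        return self.clicks         # T is measured by counting events

clock = ClickClock()
for s in [1, 1, 0, 0, 1]:          # two of these signals change nothing
    clock.interact(s)
print(clock.elapsed())             # 3: no change, no passage of time
```

A sequence of identical signals leaves `elapsed()` at zero: by this definition no time has passed at all, which is exactly the quoted point that ‘time intervals are measured by counting events’.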

The concepts of Effective and Neutral T distinguish two cases: a system developing from C to E while conditions from outside the system are injected (Effective T), and a system developing to E from its own initial conditions alone (Neutral T). Note that this formulation is different from Weaver’s argument because t is not a term. So Weaver arrives at the right conclusion, namely that the chain of events of Effective T leads to a breakdown of the relation between deterministic rules and predictability [p 10], but apparently for the wrong reasons. Note also that Neutral T is sterile because in practical terms it never occurs. This is probably an argument against the use of the argument of Turing completeness with regard to the modeling of organizations as units of computation: in reality a myriad of signals is injected into (and emitted from) a system; not a single algorithm starting from some set of initial conditions, but a rather messy and diffuse environment.
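The difference can be sketched as follows (the rule and numbers are my own illustration, not Weaver's): a closed run from C under rule P (Neutral T) is predictable from (C, P) alone, while external signals injected during the interval (Effective T) break that predictability.

```python
# Neutral T: the system evolves from initial condition C under rule P alone.
# Effective T: external signals are injected while the computation runs.
# Knowing (C, P) suffices to predict the first case, but not the second.

def evolve(c, steps, inputs=None):
    state = c
    for t in range(steps):
        state = (3 * state + 1) % 97              # deterministic rule P
        if inputs and t in inputs:                # Effective T: injected signal
            state = (state + inputs[t]) % 97
    return state

C = 5
neutral = evolve(C, 10)                           # closed system: follows from (C, P)
effective = evolve(C, 10, inputs={4: 11, 7: 2})   # open, messy environment
print(neutral, effective)                         # the two outcomes diverge
```

With the injected signals the trajectory leaves the ‘tunnel’ fixed by (C, P), which is the sense in which Effective T severs deterministic rules from predictability.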

‘Furthermore, though the deterministic relation (of a computational process, DPB) is understood as a general lawful relation, in the case of computational processes, the unique instances are the significant ones. Those particular instances, though being generally determined a priori, cannot be known prior to concluding their particular instance of computation. It follows therefore that in the case of computational processes, determinism is in some deep sense unsatisfactory. The knowledge of (C, P) still leaves us in darkness in regards to E during the time interval T while the computation takes place. This interval represents if so an epistemic gap. A gap during which the fact that E is determined by (C, P) does not imply that E is known or can be known, inferred, implied or predicted in the same manner that fire implies the knowledge of smoke even before smoke appears. It can be said if so that within the epistemic gap, E is determined yet actually it is unknown and cannot be known’ [p 13]. Why is this problematic? The terms are clear, there is no stochastic element; it takes time to compute, but the solution is determined prior to the finalization of the computation. A problem arises only if the input or the rules change during the computation, rendering the outcome incomputable or irrelevant. In other words: if the outcome E can be avoided, then E is avoidable and the future of the system is not determined.
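A concrete miniature of such an epistemic gap (my example, not Weaver's): the rule below is fully deterministic, so E, the step count, is fixed the moment C is fixed; yet no known shortcut reveals E without actually carrying the computation through the interval T.

```python
# The epistemic gap in miniature: E is fully determined by (C, P), yet during
# the interval T in which the computation runs there is no way to know E
# short of performing the computation itself (no known closed-form shortcut).

def collatz_steps(c):
    """P: the Collatz rule; E: the number of steps needed to reach 1."""
    steps = 0
    while c != 1:
        c = c // 2 if c % 2 == 0 else 3 * c + 1
        steps += 1
    return steps

# E is determined the moment C = 27 is fixed, but before the loop finishes
# nothing in the system 'knows' that the answer is 111.
print(collatz_steps(27))  # 111
```

This matches the quote: within the gap, E ‘is determined yet actually it is unknown’, in contrast to fire implying smoke in advance.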

‘.., still it is more than plausible that mental states develop in time in correspondence to the computational processes to which they are correlated. In other words, mental processes can be said to be temporally aligned to the neural processes that realize them’ [p 14]. What does temporally aligned mean? I agree if it means that these processes develop following, or along, the same sequence of events. I do not agree if it means that time (as a driver of change) has the same effect on either of the processes, computational (physical) and mental (psychological): time has no effect.

During gap T the status of E is determined by conditions C and P, but its specifics remain unknown to anyone during T (suppose it is in my brain: then I of all people would be the one to know, and I don’t). And at t2, T having passed, any freedom of choice is in retrospect, E now being known. In the article, t1 and t2 are defined as the begin state and the end state of some computational system. If t1 is defined as the moment when an external signal is perceived by the system, and t2 as the moment at which a response is communicated by the system to Self and to the outside, then the epistemic gap is ‘the moral gap’. This phrase refers to the time elapsed between the perception of an input signal and the communicating of the decision to Self and others. The ‘moral’ comes from the idea that the message was ‘prepared in draft’ and tested against a moral frame of reference before being communicated. The moral gap exists because the human brain needs time to compute and process the input information and formulate an answer. The Self can be seen as the spokesperson, functionally a layer on top of the other functions of the brain, and it takes time to make the computation and formulate its communication to Self and to external entities.

After t1 the situation unfolds as: ‘Within the time interval T between t1 and t2, the status of the resulting mental event or action is unknown because, as explained, it is within the epistemic gap. This is true in spite of the fact that the determining setup (C, P) is already set at time t1 (ftn 5), and therefore it can be said that E is already determined at t1. Before time t2, however, there can be no knowledge whether E or its opposite or any other event in <E> would be the actual outcome of the process’ [p 17]. E is determined but not known. But Weaver counter-argues: ‘While in the epistemic gap, the person indeed is going through a change, a computation of a deliberative process is taking place. But as the change unfolds, either E or otherwise can still happen at time t2 and in this sense the outcome is yet to be determined (emphasis by the author). The epistemic gap is a sort of a limbo state where the outcome E of the mental process is both determined (generally) and not determined (particularly)’ [p 17]. The outcome E is determined but its value is unknown to both Self and God; God knows that it is determined, but Self is not even aware of that. In this sense it can also be treated as a change of perspective, from the local observer to a distant, more objective observer.

During the epistemic gap another signal can be input into the system and set up for computation. The second computation can interrupt the one running during the gap, or the first one is paused, or they run in parallel. Whatever the case may be, it is possible that E never in fact takes place: while E was determined by C at t1, not E but another outcome takes place at t2, namely the outcome of another computation that replaced the initial one. If E and P are specific to C and started by it, then origination is an empty phrase, because now a little tunnel of information processing is started and nothing interferes. If they are not, then new external input is required which specifies a C1, whereupon the first part of this sentence applies again and a new ‘tunnel’ is opened.

This I find interesting: ‘Moreover, we can claim that the knowledge brought forth by the person at t2, be it a mental state or an action, is unique and original. This uniqueness and originality are enough to lend substance to the authorship of the person and therefore to the origination at the core of her choice. Also, at least in some sense, the author carrying out the process can be credited or held responsible for the mental state or action E, him being the agent without whom E could not be brought forth’ [p 18]. The uniqueness of the computational procedure of an individual makes her the author, and she can be held responsible for the outcome. Does this hold even if it is presupposed that her thoughts, namely computational processes, are guided by memes? Is her interpretation of the embedded ideas and her computation of the rules sufficiently personal to mark them as ‘hers’?

This is the summary of the definition of the freedom argued here: ‘The kind of freedom argued for here is not rooted in .., but rather in the very mundane process of bringing forth the genuine and unique knowledge inherent in E that was not available otherwise. It can be said that in any such act of freedom a person describes and defines herself anew. When making a choice, any choice, a person may become conscious to how the choice defines who he is at the moment it is made. He may become conscious to the fact that the knowledge of the choice irreversibly changed him. Clearly this moment of coming to know one’s choice is indeed a moment of surprise and wonderment, because it could not be known beforehand what this choice might be. If it was, this wouldn’t be a moment of choice at all and one could have looked backward and find when the actual choice had been made. At the very moment of coming to know the choice that was made, reflections such as ‘I could have chosen otherwise’ are not valid anymore. At that very moment the particular instance of freedom within the gap disappears and responsibility begins. This responsibility reflects the manner by which the person was changed by the choice made’ [pp 18-9]. The author claims that it is not a reduced kind of freedom, but a full version, because: ‘First, it is coherent and consistent with the wider understanding we have about the world involving the concept of determinism. Second, it is consistent with our experience of freedom while we are in the process of deliberation. Third, we can now argue that our choices are effective in the world and not epiphenomenal. Furthermore, evolution in general and each person’s unique experience and wisdom are critical factors in shaping the mental processes of deliberation’ [p 19]. Another critique could be that this is a strictly personal experience of freedom, perhaps even in a purely psychological sense. What about physical and social elements; in other words: how would Zeus think about it?

This is why it is called freedom: ‘Freedom of the will in its classic sense is a confusion arising from our deeply ingrained need for control. The classic problem of free will is the problem of whether or not we are inherently able to control a given life situation. Origination in the classic sense is the ultimate control status. The sense of freedom argued here leaves behind the need for control. The meaning of being free has to do with (consciously observing) the unfolding of who we are while being in the gap, the transition from a state of not knowing into a state of knowing, that is. It can be said that it is not the choice being originated by me but rather it is I, through choice, who is being continuously originated as the person that I am. The meaning of such freedom is not centered around control but rather around the novelty and uniqueness as they arise within each and every choice as one’s truthful expression of being’ [p 20]. But in this sense there is no control over the situation at all; and given that the need for control is thereby relinquished, this is precisely what allows one to be free.

‘An interesting result regarding freedom follows: a person’s choice is free if and only if she is the first to produce E. This is why it is not an unfamiliar experience that when we are in contact with persons that are slower than us in reading the situation and computing proper responses, we experience an expansion of our freedom and genuineness, while when we are in contact with persons that are faster than us, we experience that our freedom diminishes.

Freedom can then be understood as a dynamic property closely related to computation means and distribution of information. A person cannot expect to be free in the same manner in different situations. When one’s mental states and actions are often predicted in advance by others who naturally use these predictions while interacting with him, one’s freedom is diminished to the point where no genuine unfolding of his being is possible at all. The person becomes a subject to a priori determined conditions imposed on him. He will probably experience himself being trapped in a situation that does not allow him any genuine expression. He loses the capacity to originate because somebody or something already knows what will happen. In everyday life, what rescues our freedom is that we are all more or less equally competent in predicting each other’s future states and actions. Furthermore, the computational procedures that implement our theories of mind are far from accurate or complete. They are more like an elaborate guess work with some probability of producing accurate predictions. Within such circumstances, freedom is still often viable. But this may soon radically change by the advent of neural and cognitive technologies. In fact it is already in a process of a profound change.

In simple terms, the combination of all these factors will make persons much more predictable to others and will have the effect of overall diminishing the number of instances of operating within an epistemic gap and therefore the conditions favorable to personal freedom. The implications on freedom as described here are that in the future people able to augment their mental processes to enjoy higher computing resources and more access to information will become freer than others who enjoy less computing resources and access to information. Persons who will succeed to keep sensitive information regarding their minute to minute life happenings and their mental states secured and private will be freer than those who are not. A future digital divide will be translated into a divide in freedom’ [pp 23-6].

I too believe that our free will is limited, but for additional and different reasons, namely the doings of memes. I do believe that Weaver has a point with his argument of the experience of freedom in the gap (which I had come to know as the ‘moral gap’) and the consequences it can have for our dealings with AI. There my critique would be that the AI are assumed to be exactly the same as people, but with two exceptions: 1) the explicit argument that they compute much faster than people, and 2) the implicit argument that people experience their unique make-up such that they are confirmed by it in their every computation; this experience represents their freedom. Thus people have a unique experience of freedom that an AI can never attain, providing them a ticket to relevance among AI. I am not sure that if argument 2 is true, argument 1 can be valid also.

I agree with this, also in the sense of the coevalness between individuals and firms. If firms do their homework such that they prepare their interactions with the people involved, then they will come out better prepared. As a result people will feel small and objectified: the firms are capable of computing the outcome before you do, hence predicting your future and limiting your perceived possibilities. However, this is still the result of a personal and subjective experience and not an objective fact, namely that the outcome is as they say, not as you say.

The Utility of Diversity

The main cause of death of firms, their loss of autonomy of sales registration, is that they become the subject of a merger or an acquisition; only a small portion of firm deaths is caused by bankruptcy. In the previous section the question was asked whether it is a ‘bad’ thing if an organism disappears, apart from the sense of loss it generates. The same can be asked if a firm disappears: is it a ‘bad’ thing if a firm disappears through bankruptcy or via a merger or an acquisition? And conversely, is it a ‘good’ thing if a firm survives to a ripe old age? In order to be able to explain why firms die as they do, consider the argument below about biological diversity and whether a loss of biodiversity is a ‘bad’ thing, and for whom.

Biodiversity literally means the diversity of life. As a concept per se it holds no value, because the total number of species is unknown – let alone the number of organisms – not all species are relevant, their numbers are not equal, some are a nuisance, and the argument can be defended that the extinction of many species is inevitable anyway. What, then, is the use of the concept of biodiversity and, to generalize it, what is its value and hence its utility? People, having co-evolved with other species, their histories intertwined as they are, cannot be considered qualified to assess the utility of some other species, lest the same question be asked about them also. To that end the concept of universal utility is developed below in a larger perspective, with people not taking center stage.

What is life (step 1)?

‘Only organisms, from simple bacteria to complex animals with brains, meet the definition of life’ [Jager 2014 p. 18]. This definition includes a circular reference: organisms are living beings, and life resulted in organisms because of evolution. ‘Individual organisms are descendants of the ‘first’ cell’ [Jager 2014 p. 18]. The ‘first’ cell is some complicated yet badly functioning cell that, importantly, did not require an organism in order to reproduce, but only to be an offspring of its parents. Later the descent from a parent will be abandoned also. An important consequence is that collectives such as an ecosystem cannot be considered alive, because they are not individual descendants of the first cell.

What is diversity?

Because every organism – not just every species – on earth is unique, the total diversity of all individuals is incalculable. Details depending on differences such as age, gender or location (phenotypic) are lost if their numbers are narrowed down by taxonomic grouping. Genetic categorization doesn’t solve the problem either, because genetic differences do not determine all phenotypical differences and, for instance, disregard the neuron connections in complicated animals’ brains. A focus on phenotypes resolves that and makes brain diversity part of biodiversity. An ecosystem approach is equally unhelpful, because ecosystems are interlinked and in a sense only one ecosystem exists, which also includes the abiotic earth.

Diversity in the biological sphere, biodiversity, is defined as: ‘Biodiversity consists of all the differences between organisms’ [Jager 2014 p. 22]. This is useful because it includes organisms’ genetic and phenotypic characteristics as well as the basic elements of ecosystems. It states that there are differences, but it does not attempt to measure diversity or its conservation. It does however provide a basis for conservation strategies, because conservation implies maintenance of all the diversity of organisms and therefore also of all the processes upon which this diversity depends [p 23]. These processes include the interactions between the organisms and the interactions between the organisms and their environment, together defining the ecosystem.

This can serve as the umbrella term in the protection of organisms. From this, the following conservation objective can be derived: ‘the preservation of a selection of ecosystem elements (and their associated processes) that in principle guarantee that the numbers of individuals of ALL relevant species (a species or its substitute) of the ecosystem remain above the minimum required for viable populations, hence a minimum basis for evolution’. In addition: ‘Biodiversity is in a constant state of flux as a result of the evolutionary process in which the numbers of the populations of the species in the ecosystem vary’ [p 23]. As a consequence, species’ extinction is a natural part of the evolutionary process, including those extinctions that occur as a result of human action.

Individual Utility

People consider something to be useful – to have some utility – if it changes a less satisfactory situation into a more satisfactory one. Neither animals with primitive brains nor plants think about utility (the concept), but they can try to avoid unpleasant stimuli and they can try to find food. For all organisms utility relates to the minimum physiological conditions required to stay alive. For thinking animals and people utility includes the satisfaction of mental objectives such as needs and desires. In more general terms: utility is the whole of the things and activities that ensure that an organism can function normally or without too much stress [p 24]. The subjects experiencing utility exclude non-living things, nature and ecosystems, because they are not organisms and as a consequence they have no normal state of being (that can be improved).

Universal Utility

All processes and forms in the universe result from mechanisms resulting in the acquisition of degrees of freedom. ‘Acquiring degrees of freedom leads to greater differentiation in the universe. Differentiation can proceed towards more organisation and towards chaos. In both directions nature turns potential into reality. And all the while energy disperses throughout the universe at an increasing rate. Acquiring degrees of freedom means the realisation of potential’ [p. 25]. An example of the acquisition of degrees of freedom towards more order is the transition from a single-cell organism to a multi-cell organism. The multi-cell organism is more difficult to eat, can eat larger food and can extend itself to higher places where there is more light. Precisely because of such advantages these transitions were ‘invented’ by evolution: the degree of freedom of the single-cell organism as an autonomous entity was traded for the degree of freedom as a multicellular entity, now having access to the advantages of a larger size and a more complex shape.

The principle of degrees of freedom also extends from the atomic to the molecular, from individual nerve cells to brains, from bees to a beehive and from individual people to an organisation. However, beehives and human communities cannot be organisms, because earlier it was established provisionally that any such organisation cannot have descended from a first cell and can therefore not be alive.

Processes can also be associated with acquisition of degrees of freedom towards less organisation. These processes are connected with the dispersal of energy leading to the production of entropy. An example is the death of a multicellular organism: it no longer eats or breathes and the cells in its body suffocate or starve. Its body rots and the organisation of its cells is traded in for disordered molecules. The orderly degree of freedom of the organisation of molecules of the late body is now replaced with the disorderly degrees of freedom of the individual molecules. Nature in this sense can acquire orderly as well as disorderly degrees of freedom.

The generation of order implies the dispersal of energy. On balance, more energy is dispersed than corresponding order is generated, and entropy always increases. Growth and metabolism are associated with the degradation of free energy (energy that can do mechanical work) from sunlight or food. This degradation of energy thus leads to an increase of diffuse energy (that can do less mechanical work) and an increase of entropy outside of the body of the organism. ‘The entropy that organisms create is a necessary consequence of the creation and maintenance of order in (a) their bodies and (b) in the environment (their burrows, communities, cities, etc)’ [p 26].

In this way all processes in nature lead to the acquisition of degrees of freedom. This acquisition is associated with universal utility (not to be mistaken with individual utility). It can therefore be said that a process that makes a larger contribution to the acquisition of degrees of freedom is more useful for nature – has a higher universal utility. ‘Universal utility is a measure of the relative contribution made by processes to the acquisition of the degrees of freedom. Universal utility does not serve a purpose – though it does have a direction – and does not satisfy any needs or desires.’ [p 27].

Biological Evolution

Viruses evolve, but they are molecules with a protein coat, not organisms. In evolution something gets copied; in the case of a virus the copying of the DNA is outsourced. By replacing reproduction with replication, the scope of evolution is widened. But a new definition is required that also includes the evolution of viral DNA, endosymbiotic cells (a bacterium in which another bacterium can live), cells containing DNA, cells containing endosymbionts, complete multicellular organisms, etc. It must explain how existing structures give rise to the formation of new related structures. The phrase ‘give rise to’ is used instead of replication or copying to include the evolution of the above list of special cases as well. ‘The evolution algorithm can be described in a generic way as the repetition of two subprocesses: (1) Diversification, in which an original structure gives rise to the formation of related or derived new structures; and (2) Selection, in which the functioning of the new structures depends on their relative capacities to exist in a certain environment and succeed in diversifying in the next round’ [Donald Campbell, Psychological Review 1960; Karl Popper, Objective Knowledge 1972]. According to this definition, diversification in companies occurs through changes in company culture and selection takes place at the level of the ‘newly arisen’ group.
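The generic two-subprocess algorithm can be sketched as a loop. This is my minimal illustration, not code from any of the cited sources; the function names, the toy fitness target and the parameter values are all assumptions made for the example.

```python
import random

def evolve(population, diversify, fitness, rounds, k):
    """Minimal sketch of the generic evolution algorithm:
    (1) diversification: each structure gives rise to derived new structures;
    (2) selection: only the k structures best able to 'exist' diversify next round."""
    for _ in range(rounds):
        # (1) Diversification
        candidates = [child for s in population for child in diversify(s)]
        # (2) Selection
        population = sorted(candidates, key=fitness, reverse=True)[:k]
    return population

# Toy usage: numbers 'diversify' by random mutation; fitness favours values near 42.
random.seed(0)
result = evolve(
    population=[0.0],
    diversify=lambda s: [s + random.uniform(-5, 5) for _ in range(4)],
    fitness=lambda s: -abs(s - 42),
    rounds=60,
    k=5,
)
```

Note that nothing in the loop mentions reproduction: any rule that lets a structure ‘give rise to’ related new structures fits, which is exactly why the wider definition covers viral DNA or company cultures as well as organisms.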

The darwinian algorithm contained reproduction, variation and selection; this is now simplified into diversification and selection. This universal (darwinian) evolution applies as a framework to the evolution of everything – from particles to stars and organisms – while specific theories are required for specific areas of interest, such as biological evolution.

The step of selection is associated with the capacity to acquire the next organisational degree of freedom. In organisms selection is based on survival, but in the transition between particle types selection is based on the realisation of a new degree of freedom. (In both cases the degree of freedom increases: for an organism, more independence from the uncertainties of the environment; for a particle, more randomness.)

Evolution as an algorithm comprising the two steps above (diversification and selection, as in the chapter Replication) is able to solve problems concerning the acquisition of degrees of freedom without prior knowledge. All evolutionary steps lead to increasing dispersal of energy and increasing randomness, hence increasing degrees of freedom. Simultaneously the increase of organisation leads to the acquisition of degrees of freedom as well. In this way both chaotic and organisational degrees of freedom are acquired simultaneously, in turn leading to high universal utility.

Sources of biodiversity, Utility of a waterfall

What is the universal utility of water falling or more general: what is the utility of the water cycle on earth?

Particles and organisms from the previous level form the building blocks of the larger, more complex particles and organisms of the next: all particles and organisms can be seen as steps on the particle ladder. In this sense organisms can be thought of as particles with similar features on a functional level. The steps of evolution on the particle ladder are roughly: fundamental particles > nuclear particles > atoms > molecules > bacteria > endosymbiotic cells > multicellular plants > multicellular animals.

The Organism as an Energy Vortex

Orderly structures in organisms and particles are the outcome of a self-organising process. ‘In contrast to physical and chemical particles, organisms can only maintain their structure if there is a continuous influx of free energy and building materials from the environment’ [p 38]. Organisms in this sense can be likened to the bathtub vortex, which is maintained as long as there is water in the tub, flowing out. Likewise organisms need energy and material flowing through them. Because in the process they produce low-grade energy, they can be said to be contracted by nature to convert high-grade energy into low-grade energy, and in so doing to reduce the amount of free energy and increase entropy. Thus it is established that organisms contribute to the increase in the degrees of freedom in nature, thereby increasing universal utility. The remaining question is whether this works better when there is more biodiversity, namely when there are more organisms.

A foaming waterfall

Some of the energy of the waterfall is used to produce froth on the surface of the river downstream. Metaphorically this waterfoam, a by-product of falling water, can be likened to biofoam as a by-product of sunlight. However, biofoam can make more biofoam using sunlight, which waterfoam cannot do. Firms, too, can replicate themselves, as biofoam does.

A wellspring of biofoam

Sunlight forces cells to make more cells: ‘a wellspring of biofoam at the bottom of the sunfall’ [p 40]. The wellspring pumps the biofoam under high pressure into the ecosystem. This pressure, combined with competition and selection, drives the river of evolution uphill and automatically leads to increasingly complex life forms. The organisms that do best are those that acquire the most resources; which of them are the most successful parents can only be known after some time, when the performance of the offspring is known as well.

Biofoam creates new waterfalls

When converting high-grade energy to low-grade energy it is useful if new cells are created that eat the primary producers, digest them and excrete the waste products. Utility is created for the eaters of the primary producers and so on.

Food chains

This is a sequence of alternating wellsprings and waterfalls, with ever more species of organisms interspersed, intensifying the process of degrading energy. At each step of degradation there is typically more than one species competing, as a consequence of the mechanism of diversification.

The struggle for existence, Running with the red queen

Species populations flow through the ecosystem, flowing fastest along the path of least resistance. The Red Queen hypothesis applies the waterfall model of the breakdown of free energy in the universe to species.

The Red Queen and the Constructal Law

The Red Queen and the concept of evolution are connected by the constructal law (variation on the original by Adrian Bejan): ‘For a finite-size flow system to persist in time, its configuration must change such that the flow through the system meets less and less resistance’ [p 43]. This law is principally about the direction of the development or evolution of flow patterns. The two types of flow systems are the organism and the environment. At each generation the flow passes through the organism more easily. The constructal law also predicts that the component systems and processes all develop to a stage at which they can all take the same level of stress. The environment and the whole ecosystem (including its biodiversity) is itself a flow system. The constructal law predicts that all organisms evolve together to reduce, from generation to generation, the resistance to the material and energy flows involved in converting sunlight into water vapour and waste products.

Here diversity can help, with each organism filling its own niche in the ecosystem. But the same service can be rendered by one dominant species. It pays to be more active so as to get to the resources before your competitors do. The most active survive, but the overactive and the overspending suffer the consequences as well: biodiversity is dynamic but not chaotic, at the edge of chaos [Kauffman 1993].

People must also run

Comparison with the past and with other people creates desires and motivates people as consumers and as colleagues. People must run hard just to keep their level of satisfaction constant.

On the brink of chaos

The Red Queen hypothesis explains that species can go extinct, but not how many or which. Long periods of constant biodiversity are interrupted by bursts of violent change in the numbers of species. Explanations are external causes and, more importantly, competition between individuals. Competition leads to immediate change in the rate of species extinction: the average fitness of the species shifts to a dynamic balance which fluctuates around a ‘critical value’. [Per Bak, How Nature Works 1996] argues that such critical values occur in many situations in nature. The waves of extinction in this game are similar to the species creation and extinction consistent with the Red Queen hypothesis.
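Per Bak's idea of fitness fluctuating around a critical value can be illustrated with the Bak–Sneppen toy model of co-evolution. This is my illustration, not a model from the book; the parameter values are arbitrary choices for the sketch.

```python
import random

def bak_sneppen(n=100, steps=20000, seed=1):
    """Bak-Sneppen sketch: n species on a ring each carry a random fitness.
    At every step the least fit species goes 'extinct' together with its two
    neighbours, and all three are replaced by random newcomers. The fitness
    values self-organise so that most end up above a critical threshold,
    with extinction 'avalanches' of all sizes, i.e. punctuated equilibrium."""
    random.seed(seed)
    fitness = [random.random() for _ in range(n)]
    for _ in range(steps):
        weakest = min(range(n), key=fitness.__getitem__)
        # The weakest species drags its ecological neighbours along.
        for j in (weakest - 1, weakest, (weakest + 1) % n):
            fitness[j] = random.random()
    return fitness
```

After enough steps the average fitness sits well above the 0.5 one would expect from random draws, which mirrors the ‘dynamic balance around a critical value’ described above.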

Arms races and utility chains

Organisms must compete with others for resources while simultaneously avoiding being used as a resource by another organism. Selection affects body forms through the learning of how to use new abiotic resources and through arms races. More abiotic and biotic resources imply more biodiversity. In the case of biotic resources, the learning of the predator incites learning in the prey to avoid being preyed upon: this can become an arms race. Adaptations can be categorized as interactions based on energy, structure, information, and relocation in space and time.

Resource chains and humans

Reason, tools, industrial processes and foresight have made humans flexible to use almost all resources or their substitutes.

Competition and complexity

If fifty percent of the single-celled algae in a nutrient-rich solution is replaced every twenty minutes with fresh solution, the algae will not evolve towards more complexity but towards a simpler version that is fitter to deal with the rapid sequence of changes in its environment by reproducing faster. This generally occurs when the simpler version of the object of evolution is the fitter one. Therefore, evolution does not necessarily lead to greater complexity of a species.

Big meets big

When shaking a box with small beads and big ones, the small beads end up at the bottom and the big ones at the surface, because the probability of a big space opening up for a big bead is smaller than that of a small space opening up for a small one. Following this metaphor (the example is physical in nature), competition invites more complexity: smaller organisms compete for resources with many others, while larger organisms compete mainly with other large organisms. Because the bigger organisms have a better chance to survive, every organism has the tendency to become larger and hence more complex. In this way complexity depends entirely on the continual pressure of unstoppable reproduction combined with competition. [p 52]

A pile of sand on the table

Lines of descent and organisational levels

Diversification and selection lead to evolution, which in turn leads to diversity. Organisms’ offspring have different heritable characteristics, and the ecosystem then selects which of the offspring take part in the next round. Diversification leads to a pattern of division into different species and a pattern of the emergence of organisms of similar complexity at different locations in the ecosystem.

The origins of genetics and information

The diversity of life on earth is connected with the diversity of ‘information’ locked in the genetic material of organisms. RNA and DNA are mere structures, not information, as are the proteins that RNA and DNA code for. The actual information they represent is revealed by how they contribute to the physiology and the structure of the organism. In this way the information harboured in the structures is conditional on their context. How does this relate to the acquisition of degrees of freedom of information during evolution?

Whispering down the generations

Under competitive circumstances it is hard for a cell to maintain strict control over the copying of its DNA.

Why sloppiness pays

If exact copies are made, the offspring becomes more vulnerable, for instance to viruses. Not only is it cheaper to be somewhat sloppy, it results in better resistance to external influences. A ‘social’ reason exists as well, because the search for a susceptible individual slows down the spreading of a disease. A reasonable balance must exist between the occurrence of changes in DNA and the occurrence of changes in the environment.

Biodiversity and information

The complexity of an organism determines the number of genes required to code for its physiology, structure and behaviour. The information carried by the genes depends on the role of the cell that the genetic information belongs to: information equals data plus meaning [Peter Checkland and Jim Scholes, Soft Systems Methodology in Action 1990].

During evolution nature is looking for new degrees of freedom for the information in organisms: continue to exist in a cell, change their structure through mutation, relocate with the cell. A degree of freedom to copy enabled the passing on of information to the offspring. A degree of freedom of sexual reproduction allowed the exchange of information between individuals.

Over the generations organisms collect a gene pool and are considered to constitute a species population; this gene pool represents the collective memory of the genes in the organisms.

Compulsory sex, rapid adaptation and dumping ‘waste’

Phenotypic characters and biodiversity

Diversification and selection operate mainly via individual phenotypes, but when individuals cooperate the group ‘pays the bill’ instead of the individual. When groups compete, the individuals that work well as a group benefit over the ones that do not. The feature allowing an individual to cooperate is part of the ‘extended phenotype’ of the other individuals that help shield it from external influences. Do humans extend their phenotypes too by cooperating?

Biodiversity is what is left over

Extreme variations resulting in eccentric phenotypes are probably less efficient, fast and smart, and have a lesser chance to survive competition with others. This results in ‘pathways’ or strategies after a species explosion such as the Cambrian fauna. It comes about via a rigorous selection process that points the genes in the direction of the phenotypes most beneficial for their inhabitants. As a consequence the biodiversity currently existing on earth is just a fraction of all the possibilities. Not all variations result in a change in phenotype: some are neutral, but not useless, as they are experiments to build possible new foundations for the emergence of other mutations in the future.

A tree of structures

Organisational levels and biodiversity

‘Organisational levels not only represent a fundamental ranking of structures in nature, they also provide a frame of reference for evaluating biodiversity’ [p 64]

What is the value of an organisational level?

The structure at each level of organisation depends on the structure of the lower levels. The greater the number of transitions needed to get to some level, the higher its rank. This also enables an assessment of the cost of reaching some level: ‘By taking account of the resources and inventions needed to achieve subsequent levels, organisational levels provide a framework for estimating how ‘bad’ it is when a species becomes extinct’ [p 65]. In light of this it is worthwhile to take good care of the current level of human organisation, because it has taken many generations, a lot of energy and many resources to get to where it stands now. Human culture is therefore a valuable invention, although it begs the question how robust it is against large-scale catastrophe, because it was built on resources that have by now become scarce.

Future biodiversity

What is Life, The difference between Life and Living (step 2)

Living is a state of being active. Life is an attribute of being in (or able to be switched to) the state of living. All activities consistent with ‘living’ are therefore not relevant to a definition of ‘life’ [p 67].

‘If something possesses the material organisation corresponding to life and it is active then it is living’ [p 67, Société de Biologie, Paris 1860].

The Material Organisation of Life

[Maturana and Varela 1972] named the distinguishing characteristic autopoiesis, the ability of the organism to maintain itself. This does not say anything about the maintaining mechanism, but it becomes clear that the organism must have a boundary, that allows it to maintain the ‘self’. The type of organisation is spatially defined and cyclical because the molecules act as a group so as to maintain each other.

[Manfred Eigen and Peter Schuster 1977-1978, The Hypercycle: A Principle of Natural Self-Organization]. A material description of life in this sense requires interaction between a chemical hypercycle and an enclosing membrane, which is maintained by the internal processes and in turn supports those processes. Structure and function are two sides of the same coin, and this model can clarify the life of bacteria.

Levels of organisation

Which structural configurations (organisation) of matter must the definition of life include? Nature has three fundamental degrees of freedom to allow complexity to emerge: inward, outward, upward. Complex systems use the structures available at the preceding level of complexity. Making the transition to a next level requires a new form of internal organisation or a new form of interaction. ‘The formation of a more complex particle is always accompanied by the formation of a new type of spatial configuration and a new type of process. Here, ‘new’ means that the new attributes are impossible at the previous level of organisation’ [p 71]. Each time an existing particle forms a building block for a new attribute, a new ‘particle’ is born.

Particles + organisms = operators

The operator theory says that strict and comparable rules for building with forms apply to all operators, both physical particles and organisms. The hierarchy is: quarks, hadrons, atoms, molecules, bacterial cells, endosymbiotic cells, multicellular endosymbiotic organisms, and multicellular endosymbiotic organisms with a neural network. Throughout this entire hierarchy the physical laws on structural configuration place restrictions on the transitions between organisational levels. ‘What it (operator theory DPB) does say is that the levels of complexity are not accidental. This is because nature must use the existing simple forms as the basis for building new, more complex forms, and because each time it does so, nature must follow strict design laws to meet the requirements of the next higher level of organisation in the operator hierarchy’ [p 72]. The main benefit of operator theory is that it allows us to define life in a way that avoids a circular argument; a precise definition of life is crucial because it is the basis for defining biodiversity.

What is Life? (third step)

Defining life by referring to organisms and defining organisms as ‘living beings’ results in a circular argument. This is avoided because the existence of the operators at one level determines the nature of the operators that will arise at the next level, and each transition leads to a higher level of complexity. A transition to a higher level is accompanied by a structural and a functional cyclical process, a ‘closure’. ‘All operators at least as complex as a cell are organisms. Life is a general term for the presence of the typical closures found in organisms. .. The above definition of life also implicitly includes future organisms as life forms and is therefore open-ended in the ‘upward direction’ [p 73-4]. Ecosystems and viruses do not match the definition because they do not appear on the operator ladder. As a consequence neither a virus nor the ecosystem belongs in the definition of biodiversity. This implies that structure determines whether something is alive, rather than its activity or how it is produced.
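The non-circularity of this definition can be made concrete as a toy data structure: rank is just the number of transitions up the operator ladder, and ‘organism’ is anything at least as high as a bacterial cell. This is my sketch, not the book's notation; the function names are mine.

```python
# The operator hierarchy as listed in the text, lowest level first.
OPERATOR_LADDER = [
    "quark",
    "hadron",
    "atom",
    "molecule",
    "bacterial cell",
    "endosymbiotic cell",
    "multicellular endosymbiotic organism",
    "multicellular endosymbiotic organism with neural network",
]

def rank(operator: str) -> int:
    """Number of transitions needed to reach this level from quarks."""
    return OPERATOR_LADDER.index(operator)

def is_organism(operator: str) -> bool:
    """Per the text: all operators at least as complex as a (bacterial) cell
    are organisms. No appeal to 'living beings' is needed, so the
    definition is not circular."""
    return rank(operator) >= rank("bacterial cell")
```

A virus or an ecosystem simply does not appear in `OPERATOR_LADDER`, which mirrors why neither falls under the definition of biodiversity.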

Memes and imitation

The human species can imitate. Memes are units of information that can be imitated. The exchange of memes means that people are not just members of genetic populations, but also members of memetic populations. ‘In this sense, cultures can be seen as complex networks of certain memes, which give rise to generic forms of behaviour. Differences in behavioural patterns distinguish one group of people from another.’

The brain

Information is categorized by the brain, and by making combinations of these categories a small neural network can make a large ‘inner world’. The diversity can be measured in two ways: (1) counting the number of nerve cells, the connections between them and the strength of those connections; (2) deriving the complexity of the network from answers to questions. The brain contains vast numbers of categories and new categories are created all the time. Ideas in this sense can evolve quickly, provided that new ideas are subject to some selection process. ‘The evolutionary potential of brains and memes represents a whole new dimension in the development of biodiversity’ [p 77].

Predicting the future of biodiversity is too big a task to be realistic at this point. The bacteria existing today will almost certainly remain bacteria, and whatever evolves from the existing realms, say a new endosymbiotic cell, will be one of the many already in existence in some shape or form. The same reasoning goes for the other species groups. As a consequence the future depends on the emergence of a new structural configuration that is more complex than the neural networks of humans and animals.

‘Operator theory is the first method for predicting the evolution of new structures’ [p 78]. The operator theory can be used to predict future operators above the level of humans, that is, the level above organisms with neural networks. Such an organism should at least be able to copy itself, including all the information required for its functioning, as a cell does: by replicating the structures of all its molecules on a structural level. This requires structural copying, which is not the same thing as learning: it is the copying of the structure of the neural network. Organisms with powerful brains do not have the ability to read and write their neural patterns so as to make a copy of them for later use. An organism with a programmed brain could in principle make a back-up, and that information could be restored to a subsequent phenotype: knowledge passed on without upbringing. Given that this structural copying of information is an attribute of the next operator, the next operator can only be a technical life form. This makes way for competition between brain-code files for phenotypes, equivalent to the selfish behaviour of genes in phenotypes. Competition between groups of technical organisms can be instrumental in the development of group skills such as collaboration. This scenario opens new avenues for the development of biodiversity.

The Pursuit of Complexity

Degrees of freedom are all the ways in which energy and matter can be distributed throughout the universe. The term ‘acquisition of degrees of freedom’ suggests that the corresponding form of organisation already exists in potential; the acquisition involves its material realisation in the universe. All changes to the configuration of matter and energy result in a change of state, whether leading to more order or more chaos; a change in the degrees of freedom of the system is the consequence. Because the physical laws of conservation must hold true, a local reduction in entropy as a consequence of the acquisition of degrees of freedom towards more order leads to an increase of entropy elsewhere in the universe. The more biodiversity contributes to the acquisition of degrees of freedom, the more universal utility it has.

Individual utility for human beings (in the traditional sense) is associated with the development of their needs and desires. That development is connected with the acquisition of degrees of freedom, such as images and wishes in the brain. Everything that contributes to the satisfaction of needs and desires has utility to human beings. Biodiversity is complicated also because of the ‘bio’ part: a definition of life was still outstanding. This was solved with the use of the organisational ladder of the operator theory: additional degrees of freedom are acquired by nature by following a series of construction steps defined by the laws of nature. The higher a step is on the ladder, the higher the structural complexity of an entity on that step. Life is an attribute of the presence of the closures of an organism as per its position on the organisational ladder. An organism is an operator that is at least as complex, and therefore as high on the ladder, as a bacterium. Biodiversity equals all the differences between organisms.

Continuing biodiversity implies maintaining the minimum conditions for ecosystems, which implies ensuring the minimum population numbers of the species associated with those ecosystems, which implies that species can emerge and go extinct as they would unencumbered, namely with a sufficient platform for their survival; and this implies the survival of humans as one of the species in the ecosystems. Conserving biodiversity implies conserving the human species.

Utility of biodiversity

The acquisition of degrees of freedom drives towards a more efficient conversion of solar energy into low-grade waste. Individual organisms and all of biodiversity contribute to universal utility as much as they can. A small part of universal utility is the utility of biodiversity to individual organisms for meeting their needs. Species have evolved together and they depend on each other for the fulfilment of their needs. Because new species are added by evolutionary processes, many interactions and entire ecosystems have become more robust in the face of the needs of individual species. Humans are the top generalists, able to disconnect from many environmental uncertainties. People are consuming energy at a high rate pro rata and their entropy production is very high. They are creating order and chaos at a high rate, and their acquisition of degrees of freedom is high as a result. When people come into play, the acquisition of degrees of freedom accelerates in both directions. This leads to a reaction of the system – as it would without the intervention of people. ‘Evolution resolves such problems (reaction of the system to change DPB) by always finding paths towards maximum use of existing possibilities’ [p 86]. ‘The consumption of free energy is ‘payment’ for the order humans create’ [p 87]. ‘As the Red Queen goads all organisms into running faster, evolution and biodiversity ensure that new organisms will acquire increasingly complex degrees of freedom at an ever faster pace. In essence, then, the universal utility of biodiversity is the part it plays in the construction of increasingly advanced forms of life’ [p 88].

The contributions of individual species to human well-being cannot easily be understood; a tool to address this issue is ecosystem services (clean air, water, availability of fish, etc). This tool is unreliable because such services tend to fluctuate. For that reason people make an effort to control these ecosystem services, for instance through farming. The tool focuses on the use that ecosystems have for human beings. No clear relation exists between the wealth of people and the biodiversity of the region they inhabit, nor vice versa. The sense of loss when a species goes extinct can be specified as the feeling of a loss of potential (we like to keep our options open) and a sense of responsibility for the event. To be fair: many species can go extinct before the human species is threatened in its existence. Many people have no noticeable relation with nature anyhow, apart from some insects and trees in the park and a pet. While people are responsible for the decline in the numbers of species, they have also introduced new biodiversity into the world, such as new crops, pedigree dogs, fashion clothing, architectural styles and so on. As our societies became more industrialized, our arcadian nature changed into cultivated nature and our love of arcadian nature shifted to a love of wild nature.

Mikhailovsky and Levic: Entropy, Information and Complexity or Which Aims the Arrow of Time?

Below is my summary of a somewhat quirky article by George E. Mikhailovsky and Alexander P. Levic on MDPI. It suggests a mathematical model for the variation of complexity, using conditional local maximum entropy for (hierarchical) interrelated objects or elements in systems. I am not able to verify whether this model makes sense mathematically. However, I find its logic appealing because it establishes a relation between entropy, information and complexity. I need this to be able to assess the complexity of my systems, i.e. businesses. Also, it is based on, or akin to, ‘proven technology’ (i.e. existing mathematical models for these concepts), and it seems to be more than a wild guess. Additionally, it implies relations between the hierarchical levels and objects of a system, using a resources view. Lastly, and connected to this, it addresses the ever-intriguing matter of irreversibility and the concept of time at different scales, and their mutual relation to time at a macroscopic level, i.e. how we experience it here and now.

This quote below from the last paragraph is a clue of why I find it important: “The increase of complexity, according to the general law of complification, leads to the achievement of a local maximum in the evolutionary landscape. This gets a system into a dead end where the material for further evolution is exhausted. Almost everybody is familiar with this, watching how excessive complexity (bureaucratization) of a business or public organization leads to the situation when it begins to serve itself and loses all potential for further development. The result can be either a bankruptcy due to a general economic crisis (external catastrophe) or, for example, self-destruction or decay into several businesses or organizations as a result of the loss of effective governance and, ultimately, competitiveness (internal catastrophe). However, dumping a system with such a local maximum, the catastrophe gives it the opportunity to continue the complification process and potentially achieve a higher peak.”

According to the second law, entropy increases in isolated systems (Carnot, Clausius). Entropy is the first physical quantity that varies in time asymmetrically. The H-theorem of Ludwig Boltzmann shows how the irreversibility of entropy increase is derived from the reversibility of microscopic processes obeying Newtonian mechanics. He deduced the formula:

(1) S = k_B ln W

S is entropy

k_B is the Boltzmann constant, equal to 1.38×10^−23 J/K

W is the number of microstates related to a given macrostate

This equation relates values at different levels or scales in a system hierarchy, yielding an irreversible parameter as a result.
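Equation (1) is simple enough to compute directly. A small sketch (my example, not from the article) also shows the micro/macro link: doubling the number of microstates W adds exactly k_B ln 2 to the entropy, regardless of how large W already is.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (2019 SI exact value)

def boltzmann_entropy(w: int) -> float:
    """S = k_B ln W, with W the number of microstates of a given macrostate."""
    return K_B * math.log(w)

# Doubling the number of microstates adds k_B ln 2, whatever W is.
delta = boltzmann_entropy(2 * 10**6) - boltzmann_entropy(10**6)
```

A single macrostate with W = 1 gives S = 0: with only one way to realise it, the macrostate carries no entropy at all.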

In 1948, Shannon and Weaver (The Mathematical Theory of Communication) suggested a formula for informational entropy:

(2) H = −K Σ p_i log p_i

K is an arbitrary positive constant

p_i is the probability of possible events

If we define the events as microstates, consider them equally probable and choose a nondimensional Boltzmann constant, the Shannon Equation (2) becomes the Boltzmann Equation (1). The Shannon equation is a generalisation of the Boltzmann equation with different probabilities for the letters making up a message (different microstates leading to a macrostate of a system). Shannon says (p 50): “Quantities of the form H = −K Σ p_i log p_i (the constant K merely amounts to a choice of a unit of measure) play a central role in information theory as measures of information, choice and uncertainty. The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics, where p_i is the probability of a system being in cell i of its phase space.” Note that no reference is made to a difference between information and informational entropy. Maximum entropy exists when the probabilities in all locations, p_i, are equal and the information of the system (message) is in maximum disorder. Relative entropy is the ratio of H to the maximum entropy.
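The reduction of (2) to (1) is easy to check numerically. In this sketch (my example, not the article's) W equally probable events give H = log W, i.e. Boltzmann's form with a nondimensional constant, while a strongly skewed distribution stays far below that maximum.

```python
import math

def shannon_entropy(probs, k=1.0):
    """H = -K * sum(p_i * log(p_i)); the constant K chooses the unit of measure.
    Terms with p_i = 0 are skipped, following the convention 0 * log 0 = 0."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# With W equally probable microstates (p_i = 1/W) and K nondimensional,
# Shannon's H reduces to Boltzmann's ln W.
w = 16
h_uniform = shannon_entropy([1 / w] * w)

# A skewed distribution (one dominant event) gives far less entropy.
h_skewed = shannon_entropy([0.97] + [0.001] * 30)
```

The ratio h_skewed / h_uniform is the ‘relative entropy’ mentioned above: H divided by the maximum entropy of an equiprobable distribution.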

The meaning of these values has proven difficult, because the concept of entropy is generally seen as something negative, whereas the concept of information is seen as positive. This is an example by Mikhailovsky and Levic: “A crowd of thousands of American spectators at an international hockey match chants during the game “U-S-A! U-S-A!” We have an extremely ordered, extremely degenerated state with minimal entropy and information. Then, as soon as the period of the hockey game is over, everybody is starting to talk to each other during a break, and a clear slogan is replaced by a muffled roar, in which the “macroscopic” observer finds no meaning. However, for the “microscopic” observer who walks between seats around the arena, each separate conversation makes a lot of sense. If one writes down all of them, it would be a long series of volumes instead of three syllables endlessly repeated during the previous 20 minutes. As a result, chaos replaced order, the system degraded and its entropy sharply increased for the “macro-observer”, but for the “micro-observer” (and for the system itself in its entirety), information fantastically increased, and the system passed from an extremely degraded, simple, ordered and poor information state into a much more chaotic, complex, rich and informative one.” In summary: the level of order depends on the observed level of the hierarchy. Additionally, the value attributed to order has changed over time, and with it may have changed the qualifications ‘bad’ and ‘good’ used for entropy and information respectively.
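The chant of the macro-observer versus the chatter of the micro-observer can be made quantitative with the empirical per-symbol entropy of the two “messages”. A minimal sketch (the chatter string is an invented stand-in for the overheard conversations):

```python
import math
from collections import Counter

def per_symbol_entropy(text):
    """Empirical Shannon entropy (bits per symbol) of a string."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

chant = "USA" * 200                                            # ordered slogan
chatter = "the crowd talks about many different things at once"  # invented stand-in
assert per_symbol_entropy(chant) < per_symbol_entropy(chatter)   # chatter is richer
```

The chant needs only three symbols over and over; the chatter uses many symbols with spread-out frequencies, so its entropy, and information content, is higher.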

A third concept connected to order and chaos is complexity. The algorithmic complexity K(x) of a finite object x is defined as the length of the shortest computer program that prints a full, but not excessive (i.e. minimal), binary description of x and then halts. The equation for Kolmogorov complexity is:

(3) K(x) = lpr + min(lx)

D is the set of all possible descriptions dx of x

L is the set of the lengths lx of the descriptions dx in D

lpr is the binary length of the printing algorithm mentioned above

In case x is not binary, but some other description using n symbols, then:

(4) K(x) = lpr + min((1/n) Σ pi log₂(pi))

Mikhailovsky and Levic conclude that, although Equation (4) for complexity is not completely equivalent to Equations (1) and (2), it can be regarded as their generalization in a broader sense.
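A crude way to get a feel for algorithmic complexity is to use a general-purpose compressor: the length of the compressed form is an upper-bound proxy for the length of the shortest description. A minimal sketch, with zlib as a stand-in for the ideal (and uncomputable) K(x):

```python
import random
import zlib

def complexity_proxy(data: bytes) -> int:
    """Compressed length: an upper-bound proxy for Kolmogorov complexity K(x)."""
    return len(zlib.compress(data, 9))

random.seed(0)
ordered = b"USA" * 500                                        # highly regular message
chaotic = bytes(random.randrange(256) for _ in range(1500))   # incompressible noise
assert complexity_proxy(ordered) < complexity_proxy(chaotic)  # noise needs a longer description
```

The regular message admits a short program ("print 'USA' 500 times"), the random bytes essentially do not, mirroring the low- versus high-complexity states discussed above.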

Now we define an abstract representation of the system as a category that combines a class of objects and a class of morphisms. Objects of the category explicate the system’s states and morphisms define admissible transitions from one state to another. Categories with the same objects but differing morphisms are different and describe different systems. For example, a system whose transformations are arbitrary correspondences differs from a system where the same set of objects transforms only one-to-one. Processes taking place in the first system are richer than in the latter, because the first allows transitions between states with a variable number of elements, while the second requires the same number of elements in different states.

Let us take a system described by category S and the system states X and A, identical to objects X and A from S. The invariant I{X in S}(A) is the number of morphisms from X to A in the category S preserving the structure of objects. In the language of systems theory, the invariant I is the number of transformations of state X into state A that preserve the structure of the system. We interpret the structure of the system as its “macrostate”. Transformations of state X into state A are interpreted as ways of obtaining state A from state X, i.e. as “microstates”. Then the invariant of a state is the number of microstates preserving the macrostate of the system, which is consistent with Boltzmann’s definition of entropy in Equation (1). More strictly: we define the generalized entropy of state A of system S (relative to state X of the same system) as the value:

(5) HX(A) = ln( I{X in Q}(A) / I{X in Q̄}(A) )

I{X in Q}(A) is the number of morphisms from set X into set A in the category Q of structured sets, and I{X in Q̄}(A) is the number of morphisms from set X into set A in the category Q̄ of structureless sets of the same cardinality (number of elements) as in category Q, but with an “erased structure”. In particular cases, generalized entropy takes the usual “Boltzmann” or, if you like, “Shannon” form (example given). The ratio of the number of structure-preserving transformations to the total number of transformations can be interpreted as the probability of the formation of the state with a given structure. Statistical entropy (1), information (2) and algorithmic complexity (4) are only a few possible interpretations of Equation (5). It is important to emphasize that the formula for the generalized entropy is introduced with no statistical or probabilistic assumptions and is valid for any large or small number of elements of the system.
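The idea behind Equation (5), counting structure-preserving maps against all maps between the same sets, can be illustrated with a toy structure: a colouring of elements, used here as an invented stand-in for the partition structures discussed by Mikhailovsky and Levic.

```python
import math
from itertools import product

# Toy "structured sets": each element carries a colour; a structure-preserving
# morphism must send each element to a same-coloured target.
X = ["r", "r", "b"]            # state X: three elements, two red and one blue
A = ["r", "r", "b", "b"]       # state A: four elements, two red and two blue

def all_maps(X, A):
    """Every function from X's indices to A's indices (structureless category)."""
    return list(product(range(len(A)), repeat=len(X)))

def colour_preserving(X, A):
    """Maps sending each element to a same-coloured target (structured category)."""
    return [m for m in all_maps(X, A)
            if all(A[m[i]] == X[i] for i in range(len(X)))]

I_struct = len(colour_preserving(X, A))  # morphisms in the structured category
I_total = len(all_maps(X, A))            # morphisms with the structure "erased"
H = math.log(I_struct / I_total)         # log-ratio in the spirit of Equation (5)
assert I_struct == 8 and I_total == 64   # 2*2*2 colour-respecting maps out of 4^3
```

The ratio 8/64 is exactly the probability that a randomly chosen map lands in the structured category, matching the probability interpretation given above.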

The amount of “consumed” (plus “lost”) resources determines the “reading” of the so-called “metabolic clock” of the system. Constructing this metabolic clock requires the ability to count the number of elements replaced in the system. Therefore, a non-trivial application of the metabolic approach requires the ability to compare one structured set to another. This ability comes from the functorial method of comparing structures, which offers system invariants as a generalization of the concept “number of elements” for structureless sets. Note that a system that consumes several resources exists in several metabolic times. The entropy of the system “averages” these metabolic times, and entropy increases monotonically with the flow of each metabolic time; i.e., the entropy and metabolic times of a system are linked uniquely and monotonically and can be calculated one from the other. This relationship is given by:


(6) H = λ1L1 + λ2L2 + … + λmLm

Here, H is the structural entropy, L ≡ {L1, L2, …, Lm} the set of metabolic times (resources) of the system, and λ ≡ {λ1, λ2, …, λm} the Lagrange multipliers of the variational problem on the conditional maximum of the structural entropy, restricted by the flows of the metabolic times. For the structure of sets with partitions, where morphisms are mappings that preserve the partition (or their dual correspondences), the variational problem takes the corresponding constrained-maximum form.


It was proven that dH/dLk ≥ 0, i.e., structural entropy monotonously increases (or at least does not decrease) in the metabolic time of the system; entropy “production” does not decrease along a system’s trajectory in its state space (the theorem is analogous to Boltzmann’s H-theorem for physical time). Such a relationship between generalized entropy and resources can be considered a heuristic explanation of the origin of the logarithm in the dependence of entropy on the number of transformations: with logarithms, the relationship between entropy and metabolic times becomes a power law rather than an exponential one, which in turn simplifies the formulas that involve both parameterizations of time. Therefore, if the system’s metabolic time is, generally speaking, a multi-component magnitude and level-specific (relating to the hierarchical levels of the system), then entropy time, “averaging” the metabolic times of the levels, parameterizes the system dynamics and returns the notion of time to its usual universality.

The class of objects that explicates a system as a category can be presented as the system’s state space. An alternative to postulating equations of motion, in theoretical physics, biology, economics and other sciences, is to postulate extremal principles that generate the variability laws of the systems studied. What should be extremal in a system? The category-functorial description gives a “natural” answer to this question, because category theory has a systematic method to compare system states. The possibility of comparing states by the strength of their structure allows one to offer an extremal principle for a system’s variation: from a given state, the system goes into the state having the strongest structure. According to the method, the relevant function is the number of transformations admissible by the structure of the system. A more usual formulation of the extremal principle is obtained if we consider a monotonic function of the specific number of admissible transformations, which we defined as the generalized entropy of the state: the system goes into the state for which the generalized entropy is maximal within the limits set by the available resources. A generalized category-theoretic entropy thus allows one not to guess or postulate objective functions, but to calculate them strictly from the number of morphisms (transformations) allowed by the system structure.

Let us illustrate this with an example. Consider a very simple system consisting of a discrete space of 8 × 8 cells (like a chess board without the division into black and white fields) and eight identical objects distributed arbitrarily over these 64 elements of the space (cells). These objects can move freely from cell to cell, each realizing two degrees of freedom. The number of degrees of freedom of the system is twice the number of objects, due to the two-dimensionality of our space. We will consider a particular distribution of the eight objects over the 64 cells as a system state, which in this case is equivalent to a “microstate”. Thus, the number of possible states equals the number of combinations of eight objects out of 64: W8 = 64!/((64−8)! 8!) = 4,426,165,368.

Consider now more specific states, in which seven objects have arbitrary positions while the position of the eighth is completely determined by the positions of one, a few or all of the others. In this case, the number of degrees of freedom is reduced from 16 (eight times two) to 14 (seven times two), and the number of admissible states decreases to the number of combinations of seven objects out of 64: W7 = 64!/((64−7)! 7!) = 621,216,192.

Let us name the set of these states a “macrostate”. Notice that the number of combinations of k elements out of n, calculated by the formula

(9) n! / (k! * (n-k)!)

is the cumulative number of “microstates” for “macrostates” with 16, 14, 12, and so on, degrees of freedom. Therefore, to reveal the number of “microstates” related exclusively to a given “macrostate”, we have to subtract W7 from W8, W6 from W7, etc. These figures make it quite clear that our simple model system, left to itself, will inevitably move into a “macrostate” with more degrees of freedom and a larger number of admissible states, i.e., “microstates”. Two obvious conclusions immediately follow from these considerations:

• It is far more probable to find a system in a complex state than in a simple one.

• If a system came to a simple state, the probability that the next state will be simpler is immeasurably less than the probability that the next state will be more complicated.
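The counts in the example can be checked directly with the binomial coefficient of formula (9). A minimal sketch of the chessboard bookkeeping:

```python
from math import comb

# Eight identical objects on an 8 x 8 board (64 cells): counting "microstates".
W8 = comb(64, 8)   # all eight positions free: 16 degrees of freedom
W7 = comb(64, 7)   # one position determined by the others: 14 degrees of freedom
assert W8 == 4_426_165_368 and W7 == 621_216_192

# "Microstates" belonging exclusively to the 16-degrees-of-freedom "macrostate":
exclusive = W8 - W7

# A randomly chosen state is far more likely to be complex (unconstrained)
# than simple, as the two conclusions above state:
assert exclusive / W8 > 0.8
```

Roughly 86% of all states belong exclusively to the least constrained macrostate, which is the combinatorial content of the two bullet points above.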

This defines a practically irreversible increase of entropy, information and complexity, leading in turn to the irreversibility of time. For a 16 × 16 space we could still speak of practical irreversibility only, reversibility being possible although very improbable; but for real molecular systems, where the number of cells is commensurate with Avogadro’s number (6.02 × 10²³), irreversibility becomes practically absolute. This absolute irreversibility leads to the absoluteness of the entropy extremal principle, which, as shown above, can be interpreted in an information or a complexity sense. This extremal principle implies a monotonic increase of state entropy along the trajectory of the system’s variation (the sequence of its states). Thus, the entropy values parametrize the system’s changes. In other words, the system’s entropy time appears. The interval of entropy time (i.e., the increment of entropy) is the logarithm of the value that shows how many times the number of non-equivalent transformations admissible by the structure of the system has changed.

Injective transformations that order the structures are unambiguous nestings. In other words, the evolution of systems according to the extremal principle flows from sub-objects to objects: in the real world, where the system is limited by resources, the formalism corresponding to the extremal principle is a variational problem on a conditional, rather than global, extremum of the objective function. This type of evolution could be named conservative or causal: the achieved states are not lost (the sub-object “is saved” in the object, as some mutations of Archean prokaryotes are saved in our genomes), and new states occur not in a vacuum but from their “weaker” (in the sense of ordering by strength of structure) predecessors.

Therefore, the irreversible flow of entropy time determines the “arrow of time” as a monotonic increase of entropy, information, complexity and freedom (as the number of realized degrees of freedom), up to the extremum (maximum) defined by resources in the broadest sense, and especially by the size of the system. On the other hand, the available system resources that define a sequence of states can be considered as a resource time which, together with entropy time, explicates the system’s variability as its internal system time.

We formulated and proved a far more general extremal principle applicable to any dynamic system (i.e., described by categories with morphisms), including isolated, closed, open, material, informational, semantic, etc., ones (rare exceptions are static systems without morphisms, hence without dynamics, described exclusively by sets, for example a perfect crystal in a vacuum, a memory chip with a database backup copy, or any system at a temperature of absolute zero). The extremum of this general principle is a maximum, too, while the extremal function can be regarded as either generalized entropy, generalized information, or algorithmic complexity. Therefore, before formulating the law related to our general extremal principle, it is necessary to determine the extremal function itself.

In summary, our generalized extremal principle is the following: the algorithmic complexity of the dynamical system, either being conservative or dissipative, described by categories with morphisms, monotonically and irreversibly increases, tending to a maximum determined by external conditions. Accordingly, the new law, which is a natural generalization of the second law of thermodynamics for any dynamic system described by categories, can be called the general law of complification:

Any natural process in a dynamic system leads to an irreversible and inevitable increase in its algorithmic complexity, together with an increase in its generalized entropy and information.

Three differences between this new law and the existing laws of nature are:

1) It is asymmetric with respect to time;

2) It is statistical: chances are larger that a system becomes more complex than that it will simplify over time. These chances for the increase of complexity grow with the increase of the size of the system, i.e. the number of elements (objects) in it;

The vast majority of forces considered by physics and other scientific disciplines could be determined as horizontal or lateral ones in a hierarchical sense. They act inside a particular level of hierarchy: for instance, quantum mechanics at the micro-level, Newton’s laws at the macro-level and relativity theory at the mega-level. The only obvious exception is thermodynamic forces when the movement of molecules at the micro-level (or at the meso-level if we consider the quantum mechanical one as the micro-level) determines the values of such thermodynamic parameters as temperature, entropy, enthalpy, heat capacity, etc., at the macro-level of the hierarchy. One could name these forces bottom-up hierarchical forces. This results in the third difference:

3) Its close connection with hierarchical rather than lateral forces.

The time scale at different levels of the hierarchy in the real world varies by orders of magnitude, the structure of time moments (the structure of the present) on the upper level leads to the irreversibility on a lower level. On the other hand, the reversibility at the lower level, in conditions of low complexity, leads to irreversibility on the top one (Boltzmann’s H-theorem). In both cases, one of the consequences of the irreversible complification is the emergence of Eddington’s arrow of time. Thus:

4) The general law of complification, leading to an increase in diversity and therefore to an accumulation of material for selection, plays the role of the engine of evolution, while the selection of “viable”, stable variants from all this diversity is a kind of driver of evolution that determines its specific direction. The role of “breeder” in this selection is played by other, usually less general, laws of nature, which remain unchanged.

External catastrophes include unexpected and powerful impacts of free energy to which the system is not adapted. Free energy, as an information killer, drastically simplifies the system and throws it back in its development. However, the complexity and information already accumulated by the system are, as a rule, not destroyed completely, and the system, in line with conservative or causal evolution, continues developing not from scratch but from some already achieved level.

Internal catastrophes are caused by ineffective links within the system, when complexity becomes excessive for a given level of evolution and leads to duplication, triplication, and so on, of relations, circuiting them into loops, nesting loops into one another and, as a result, to the collapse of the system due to loss of coordination between the elements.

The importance of Belousov-Zhabotinsky (BZ) reactions

Some characteristics of complex adaptive systems, the subject of this blog, are elusive, for instance because they have intricate causes or mechanisms, because we were not taught to think that way, or because they run against intuition. That complicates communicating about them. One example is that such systems are far from equilibrium. This is often associated with chaotic phenomena, i.e. disorder. Many people have little experience with that, partly because it is not part of the school curriculum: there, the subject is usually equilibria.

A BZ reaction is an example of a chemical process that is far from equilibrium. It is important because it shows that chemical reactions do not always take place in equilibrium situations. It is an autocatalytic reaction, in which reacting substances produce other substances that in turn promote the production of the first, and so on. The first video below shows that (under the influence of light) patterns moreover emerge in the mixture, and that is remarkable: order arises in a system that is not in equilibrium and from which one would expect disorder. It is astonishing that all those molecules, of their own accord, start to ‘behave’ in an orderly way relative to one another.
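Such autocatalytic, far-from-equilibrium oscillations can be sketched numerically. The BZ mechanism itself is elaborate, so the sketch below uses the Brusselator, a standard minimal model of an autocatalytic oscillating reaction; the parameter values A = 1, B = 3 are assumptions chosen to put the model in its oscillating regime.

```python
def brusselator(steps=60000, dt=0.005, A=1.0, B=3.0):
    """Euler integration of the Brusselator: dx/dt = A + x^2*y - (B+1)x,
    dy/dt = B*x - x^2*y. The x^2*y term is the autocatalytic step."""
    x, y, xs = 1.0, 1.0, []
    for _ in range(steps):
        dx = A + x * x * y - (B + 1.0) * x
        dy = B * x - x * x * y
        x, y = x + dx * dt, y + dy * dt
        xs.append(x)
    return xs

xs = brusselator()
tail = xs[len(xs) // 2:]                 # discard the initial transient
mean = sum(tail) / len(tail)
crossings = sum(1 for a, b in zip(tail, tail[1:]) if (a - mean) * (b - mean) < 0)
assert crossings >= 4                    # sustained oscillation, not a fixed point
```

The concentration x keeps cycling instead of settling into a steady state: order (a regular rhythm, the chemical analogue of the colour changes in the videos) emerging far from equilibrium.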

From Wikipedia, under Belousov–Zhabotinsky reaction: An essential aspect of the BZ reaction is its so-called “excitability”; under the influence of stimuli, patterns develop… Some clock reactions such as Briggs–Rauscher and BZ … can be excited into self-organising activity through the influence of light.

The second video shows another autocatalytic oscillating chemical reaction, the Briggs-Rauscher reaction. This reaction is also far from equilibrium and also produces order, by changing colour in a fixed sequence over a fairly long period of time.


Dialogue from The Counselor

This is the transcript of two dialogues from the film The Counselor, based on the book by Cormac McCarthy. The subject of the conversations is the inevitable and, in this case extreme, consequence of a decision. It is a beautiful and chilling conversation about how a casually made decision will have enormous consequences in the future. Read more: Dialogue from The Counselor

The meaning of time

Below is a link to an article in Business Insider. Its message is that different cultures look at time in different ways: both the understanding of it and its use vary. In the field of complexity theory the concept of time plays an ambiguous role. Many classical physicists were (and are) convinced that time as such is not relevant: all processes are reversible, we simply do not yet know all the rules (laws of nature). There is also a school of thought which assumes that intrinsically irreversible processes do exist and that time is intrinsic to such processes. In terms of complex systems: systems co-evolve or co-adapt with respect to one another.

This does involve interactions, but the question is whether these are necessarily bound to time: one can also think of a kind of ‘clicks’. On an evolutionary (think geological) timescale, time is in fact hardly relevant for describing a development, compared to our average lifespan of 80 years. The point of the article below is different, but in this context it is still interesting to read how our fixed notion of time is interpreted differently in other cultures. This is the link.

The Information

This is a report on the book The Information by James Gleick. He describes the origin of information, its emancipation into language, logic and computability, and the role of information in evolution. I had hoped that he would go all the way, namely by describing how computability, or rather computing, is central to co-evolving systems such as organisations, but that I will have to do myself.
Read more: The Information

Origins of Order

Maxwell’s Demon revisited
The demon separates fast particles from slow ones and admits them to vessel B by opening a valve each time a fast particle heads towards vessel B (vessel A thus contains more and more slow particles). A counter-pressure then builds up in vessel B, because the fast particles in it have a greater chance of colliding with each other. That counter-pressure can only be overcome if the demon is strong: it then remains fast enough to open the valve in time and to keep the books on the trajectories of the particles in the system. If the demon is weak, the increasing counter-pressure becomes an ever larger problem for continuing to regulate access.
Read more: Origins of Order

Chance, Time and Order

Previous posts have discussed the concept of entropy and, in its wake, the concept of the arrow of time. The connection between these concepts is that the arrow of time points in the direction of increasing entropy, that is, of entropy production. Entropy is associated with increasing chaos and decreasing structure, but the correct description is the direction of an irreversible process. Read more: Chance, Time and Order

Entropy and Rising Flow

The ‘working goal’ of this blog is to find an instrument to predict in advance whether a business will be viable in the longer term or not. To do that, the thermometer has to go in: the search is for some parameter that determines, or is at least indicative of, viability; whether, so to speak, ‘there is life in it’. My starting point for this post is that another way to express the same thing is the ‘degree of liveliness’ of the system. If that were known, it would be an indication of viability (apologies for the semantics). Read more: Entropy and Rising Flow