Chemical Organization Theory and Autopoiesis

E-mail communication of Francis Heylighen on 29 May 2018:

Inspired by the notion of autopoiesis (“self-production”) that Maturana and Varela developed as a definition of life, I wanted to generalize the underlying idea of cyclic processes to other ill-understood phenomena, such as mind, consciousness, social systems and ecosystems. The difference between these phenomena and the living organisms analysed by Maturana and Varela is that the former don’t have a clear boundary or closure that gives them a stable identity. Yet, they still exhibit this mechanism of “self-production” in which the components of the system are transformed into other components in such a way that the main components are eventually reconstituted.

This mechanism is neatly formalized in COT’s notion of “self-maintenance” of a network of reactions. I am not going to repeat this here but refer to my paper cited below. Instead, I’ll give a very simple example of such a circular, self-reproducing process:

A -> B

B -> C

C -> A

The components A, B, C are here continuously broken down but then reconstituted, so that the system rebuilds itself, and thus maintains an invariant identity within a flux of endless change.
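This "self-maintenance" criterion can be made concrete in a small sketch (mine, not from the cited paper; the representation of reactions as input/output dictionaries is an illustrative choice): a reaction network is self-maintaining when some strictly positive flux through the reactions leaves no component with a negative net production. For the A -> B -> C -> A cycle, equal flux through all three reactions reconstitutes every component exactly.

```python
# Minimal sketch of COT-style self-maintenance for the cycle A -> B -> C -> A.
# Each reaction is a pair (inputs, outputs) of species -> stoichiometry dicts.
reactions = [
    ({"A": 1}, {"B": 1}),  # A -> B
    ({"B": 1}, {"C": 1}),  # B -> C
    ({"C": 1}, {"A": 1}),  # C -> A
]

def net_production(reactions, flux):
    """Net production rate of each species under the given reaction rates."""
    net = {}
    for (inputs, outputs), v in zip(reactions, flux):
        for s, n in inputs.items():
            net[s] = net.get(s, 0) - n * v
        for s, n in outputs.items():
            net[s] = net.get(s, 0) + n * v
    return net

# With equal flux through all three reactions, every species is consumed
# and reconstituted at exactly the same rate: the cycle maintains itself.
print(net_production(reactions, [1.0, 1.0, 1.0]))
# every species nets to 0.0
```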

A slightly more complex example:

A + X -> B + U

B + Y -> C + V

C + Z -> A + W

Here A, B, and C need the resources (inputs, or “food”) X, Y and Z to be reconstituted, while producing the waste products U, V, and W. This is more typical of an actual organism that needs inputs and outputs while still being “operationally” closed in its network of processes.
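The open variant can be simulated in the same spirit (again my own sketch; unit stoichiometries and the choice of firing each reaction once per round are illustrative assumptions): the components A, B, C remain invariant across rounds, while the food X, Y, Z is depleted and the waste U, V, W accumulates.

```python
# Sketch of the open cycle: A, B, C are operationally closed (perpetually
# reconstituted), while food is consumed and waste is excreted.
state = {"A": 1, "B": 1, "C": 1,
         "X": 10, "Y": 10, "Z": 10,
         "U": 0, "V": 0, "W": 0}
reactions = [
    ({"A": 1, "X": 1}, {"B": 1, "U": 1}),  # A + X -> B + U
    ({"B": 1, "Y": 1}, {"C": 1, "V": 1}),  # B + Y -> C + V
    ({"C": 1, "Z": 1}, {"A": 1, "W": 1}),  # C + Z -> A + W
]

for _ in range(10):  # fire each reaction once per round, if its inputs exist
    for inputs, outputs in reactions:
        if all(state[s] >= n for s, n in inputs.items()):
            for s, n in inputs.items():
                state[s] -= n
            for s, n in outputs.items():
                state[s] += n

# A, B, C are unchanged; X, Y, Z went from 10 to 0; U, V, W from 0 to 10.
print(state)
```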

In more complex processes, several components are simultaneously consumed and produced, but in such a way that the overall mixture of components remains relatively invariant. In this case, the concentrations of the components can vary relative to one another, so that the system never really returns to the same state, only to a state that is qualitatively equivalent (having the same components, but in different amounts).

One more generalization is to allow the state of the system to vary qualitatively as well: some components may (temporarily) disappear, while others are newly added. In this case, we no longer have strict autopoiesis or [closure + self-maintenance], i.e. the criterion for being an “organization” in COT. However, we still have a form of continuity of the organization based on the circulation or recycling of the components.

An illustration would be the circulation of traffic in a city. Most vehicles move to different destinations within the city, but eventually come back to destinations they have visited before. Occasionally, however, vehicles leave the city and may or may not come back, while new vehicles enter the city and may or may not stay. Thus, the distribution of individual vehicles in the city changes quantitatively and qualitatively while remaining relatively continuous, as most vehicle-position pairs are eventually “recycled” or reconstituted. This is what I call circulation.

Most generally, what circulates are not physical things but what I have earlier called challenges. Challenges are phenomena or situations that incite some action. This action transforms the situation into a different situation. Alternative names for such phenomena could be stimuli (phenomena that stimulate an action or process), activations (phenomena that are active, i.e. ready to incite action) or selections (phenomena singled out as being important, valuable or meaningful enough to deserve further processing). The term “selections” is the one used by Luhmann in his autopoietic model of social systems as circulating communications.

I have previously analysed distributed intelligence (and more generally any process of self-organization or evolution) as the propagation of challenges: one challenge produces one or more other challenges, which in turn produce further challenges, and so on. Circulation is a special form of propagation in which the initial challenges are recurrently reactivated, i.e. where the propagation path is circular, coming back to its origins.

This seems to me a better model of society than Luhmann’s autopoietic social systems. The reason is that proper autopoiesis does not really allow the system to evolve, as it needs to rebuild all its components exactly, without producing any new ones. With circulating challenges, the main structure of society is continuously rebuilt, thus ensuring the continuity of its organization, while still allowing gradual changes in which old challenges (distinctions, norms, values…) dissipate and new ones are introduced.

Another application of circulating challenges is ecosystems. Different species and their products (such as CO2, water, organic material, minerals, etc.) are constantly recycled, as one is consumed in order to produce another, but most are eventually reconstituted. Yet, not everything is reproduced: some species may become extinct, while new species invade the ecosystem. Thus the ecosystem undergoes constant evolution, while being relatively stable and resilient against perturbations.

Perhaps the most interesting application of this concept of circulation is consciousness. The “hard problem” of consciousness asks why information processing in the brain does not just function automatically or unconsciously, the way we automatically pull back our hand from a hot surface before we have even become conscious of the pain of burning. The “global workspace” theory of consciousness says that various subconscious stimuli enter the global workspace in the brain (a crossroad of neural connections in the prefrontal cortex), but that only a few are sufficiently amplified to win the competition for workspace domination. The winners are characterized by much stronger activation and their ability to be “broadcast” to all brain modules (instead of remaining restricted to specialized modules functioning subconsciously). These brain modules can then each add their own specific interpretation to the “conscious” thought.

In my interpretation, reaching the level of activation necessary to “flood” the global workspace means that activation does not just propagate from neuron to neuron, but starts to circulate, so that a large array of neurons in the workspace is constantly reactivated. This circulation keeps the signal alive long enough for the different specialized brain modules to process it and add their own inferences to it. Normally, activation cannot stay in place, because of neuronal fatigue: an excited neuron must pass on its “action potential” to connected neurons; it cannot maintain its own activation. To maintain an activation pattern (representing a challenge) long enough for it to be examined and processed by disparate modules, that pattern must be stabilized by circulation.
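The contrast between fatigue and circulation can be sketched in a toy model (entirely illustrative; real neural dynamics are far richer): each "neuron" in a ring must pass its activation on and reset, yet the pattern as a whole survives indefinitely by travelling around the loop.

```python
# Toy model: activation cannot stay in place (fatigue), but a closed loop
# of neurons keeps the total activation alive by circulating it.
n = 5
active = [1, 0, 0, 0, 0]  # an activation pattern entering the loop
history = []

for step in range(10):
    # fatigue: every neuron hands its activation to the next and resets
    active = [active[(i - 1) % n] for i in range(n)]
    history.append(sum(active))

# The total activation is preserved at every step: circulation keeps the
# signal alive long enough for other modules to read and modify it.
print(history)
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```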

But circulation, as noted, does not imply invariance or permanence, merely a relative stability or continuity that undergoes transformations by incoming stimuli or on-going processing. This seems to be the essence of consciousness: on the one hand, the content of our consciousness is constantly changing (the “stream of consciousness”), on the other hand that content must endure sufficiently long for specialized brain processes to consider and process it, putting part of it in episodic memory, evaluating part of it in terms of its importance, deciding to turn part of it into action, or dismissing or vetoing part of it as inappropriate.

This relative stability enables reflection, i.e. considering different options implied by the conscious content, and deciding which ones to follow up and which ones to ignore. This ability to choose is the essence of “free will”. Subconscious processes, on the other hand, just flow automatically and linearly from beginning to end, so that there is no occasion to interrupt the flow and decide to go somewhere else. It is because the flow circulates and returns that the occasion is created to interrupt it after some aspects of that flow have been processed and found to be misdirected.

To make this idea of repetition with changes more concrete, I wish to present a kind of “delayed echo” technique used in music. One of the best implementations is Frippertronics, invented by avant-garde rock guitarist Robert Fripp (of King Crimson): https://en.wikipedia.org/wiki/Frippertronics

The basic implementation consists of an analogue magnetic tape on which the sounds produced by a musician are recorded. However, after having passed the recording head of the tape recorder, the tape continues moving until it reaches another head that reads and plays the recorded sound. Thus, the sound recorded at time t is played back at time t + T, where the interval T depends on the distance between the recording and playback heads. But while the recorded sound is played back, the recording head continues recording all the sound, whether played by the musician(s) or by the playback head, on the same tape. Thus, the sound propagates from musician to recording head, from where it is transported by tape to the playback head, from where it propagates in the form of a sound wave back to the recording head, thus forming a feedback loop.

If T is short, the effect is like an echo, where the initial sound is repeated a number of times until it fades away (under the assumption that the playback is slightly less loud than the original sound). For a longer T, the repeated sound may not be immediately recognized as a copy of what was recorded before given that many other sounds have been produced in the meantime. What makes the technique interesting is that while the recorded sounds are repeated, the musician each time adds another layer of sound to the layers already on the recording. This allows the musician to build up a complex, multilayered, “symphonic” sound, where s/he is being accompanied by her/his previous performance. The resulting music is repetitive, but not strictly so, since each newly added sound creates a new element, and these elements accumulate so that they can steer the composition in a wholly different direction.
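The tape loop described above is, in signal-processing terms, a delay line with feedback. Here is a minimal sketch (the function name, delay length and the 0.6 feedback gain are my own illustrative choices): sound recorded at time t returns, attenuated, at time t + T, and whatever the musician plays is mixed onto the returning echoes.

```python
# The tape loop as a delay line with feedback: the record head writes the
# mix of live input and the (attenuated) playback onto a circular buffer.
T = 4            # delay in samples: distance between record and play heads
feedback = 0.6   # playback slightly quieter than the original, so echoes fade
tape = [0.0] * T

def frippertronics(live_input):
    output = []
    for t, x in enumerate(live_input):
        played_back = tape[t % T]           # what the playback head reads
        mixed = x + feedback * played_back  # live sound plus returning echo
        tape[t % T] = mixed                 # record head overwrites the loop
        output.append(mixed)
    return output

# A single "pluck" at t = 0 returns as a fading echo every T samples.
echoes = frippertronics([1.0] + [0.0] * 11)
print(echoes)
# [1.0, 0.0, 0.0, 0.0, 0.6, 0.0, 0.0, 0.0, 0.36, 0.0, 0.0, 0.0]
```

Playing new notes while the echoes circulate layers them on top of the decaying earlier material, which is exactly the "repetition with changes" the text describes.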

This “tape loop” can be seen as a simplified (linear or one-dimensional) version of what I called circulation, where the looping or recycling maintains a continuity, while the gradual fading of earlier recordings and the addition of new sounds creates an endlessly evolving “stream” of sound. My hypothesis is that consciousness corresponds to a similar circulation of neural activation, with the different brain modules playing the role of the musicians that add new input to the circulating signal. A difference is probably that the removal of outdated input does not just happen by slow “fading” but by active inhibition, given that the workspace can only sustain a certain amount of circulating activation, so that strong new input tends to suppress weaker existing signals. This, together with the complexity of circulation in several directions through a network, may explain why conscious content appears much more dynamic than repetitive music.

Social Systems as Parasites

Seminar 1 December 2017, Francis Heylighen


The power of a social system

1. In an experiment concerning punishment, people obey an instruction to administer electric shocks to others. People tend to be obedient / “God rewards obedience” / “Whom should I obey first?”
2. When asked to point out which symbol is equal to another, people select the one they believe is equal, but when confronted with the choices of the other contestants, they tend to change their selection to match what the others have chosen.
Social systems in this way determine our worldview, namely the social construction of reality, by specifying what is real.

Social systems suppress self-actualization

Social systems don’t ‘want’ you to think for yourself, but to replicate their information instead; social systems suppress non-conformist thought, namely differences in thought, and thereby do not allow the development of unique (human) personalities: they suppress self-actualization. Examples of rules:
1. A Woman Should Be A Housewife >> If someone is a woman then, given that she shows conformist behavior, she will become a housewife and not a mathematician &c. Suppose Anna has a knack for math: if she complies, she becomes a housewife and is likely to become frustrated; if she does not comply, she will become a mathematician (or engineer &c) and is likely to become rebellious and suffer from doubts &c.
2. To Be Gay Is Unacceptable >> If someone is gay then, given that she shows conformist behavior, she will suppress gay behavior and show behavior considered normal instead. Suppose Anna is gay: if she complies, she will be with a man and become frustrated; if she does not comply, she is likely to become rebellious, will exhibit gay behavior, be with a woman, and suffer from doubts &c.

Social Systems Programming

People obey social rules unthinkingly and hence their self-actualization is limited (by themselves). This is the same as saying that social systems have control over people. The emphasis on the lack of thinking is by the authors. The social system consists of rules that assist thinking. Only thinking outside of those rules (thinking while not using those rules) would allow a workaround, or even a replacement of the rules, temporary or ongoing. This requires thinking without using pre-existing patterns, or even thinking sans-image (new to the world).

Reinforcement Learning

1. Behaviorist: action >> reward (rat and shock)
2. Socialization: good behavior and bad behavior (child and smile)
This was a sparse remark: I guess the development of decision-action rules in children by socialization (smiling) is the same as the development of behavioral rules in rats by a behaviorist approach (shock).

Social systems as addictions

Dopamine is a neurotransmitter that produces pleasure. A reward releases dopamine; dopamine is addictive; hence rewards are addictive. Social systems provide (ample) sources of rewards; participating in social systems is a source of dopamine, and hence it generates and maintains addiction.

Narratives

Reinforcement need be NEITHER immediate NOR material (e.g. heaven / hell). Narratives can describe virtual penalties and rewards: myths, movies, stories, scriptures.

Conformist transmission

The more people transmit a particular rule, the more people will transmit it. DPB: this reminds me of the changes in complex systems as a result of small injected changes: many small changes and fewer large ones; the relation between the size of the shifts and their frequency is a power law.

Cognitive Dissonance

Entertaining mutually inconsistent beliefs is painful: a person believes it is bad to kill other people, yet as a soldier he now kills other people. This conflict can be resolved by replacing the picture of the person to be killed with the picture of vermin: the person thinks it is OK to kill vermin.

Co-opting emotions

Emotions are immediate, strong motivators that bypass rational thought. Social systems use emotions to reinforce the motivation to obey their rules.
1. Fear: the anticipation of a particular outcome and the desire to avoid it.
2. Guilt: fear of retribution (wraak) and the desire to redeem oneself (goedmaken); this can be exploited by the social system, because there can be a deviation from its rules without a victim, and it works on imaginary misdeeds: now people want to redeem themselves vis-à-vis the social system.
3. Shame: a perceived deficiency of the self because one is not fulfilling the norms of the social system; one feels weak, vulnerable and small and wishes to hide; the (perceived) negative judgments of others (their norms) are internalized. PS: Guilt refers to a wrong action and implies a change of action; shame refers to a wrong self and implies the wish for a change of (the perception of) self.
4. Disgust: revulsion at (sources of) pollution such as microbes, parasites &c. The Law of Contagion implies that anything associated with contagion is itself contagious.

Social System and disgust

The picture of a social system is that it is clean and pure and should not be breached. Ideas that do not conform to the rules of the social system (up to and including dogma and taboo) are like sources of pollution; these contagious ideas lead to reactions of violent repulsion from those included in the social system.

Vulnerability to these emotions

According to Maslow people who self-actualize are more resistant to these emotions of fear, shame, guilt and disgust.

DPB:
1. How do variations in the sensitivity to neurotransmitters affect the sensitivity to reinforcement? I would speculate that a higher sensitivity to dopamine leads to a more eager reaction to a positive experience, and hence to a stronger reinforcement of the rule in the brain.
2. How does a higher or lower sensitivity to risk (the chance that some particular event occurs and the impact when it does) affect people’s abiding by the rules? I would speculate that sensitivity to risk depends on the power to cognize it and to act in accordance with it. A higher sensitivity to risk leads to attempts to follow (conformist) rules more precisely and more vigorously; conversely, a lower sensitivity to risk leaves space for interpretation of the rule, its conditions, or its enactment.

How Social Systems Program Human Behavior

Heylighen, F., Lenartowicz, M., Kingsbury, K., Beigi, S., & Harmsen, T. Social Systems Programming I: Neural and Behavioral Control Mechanisms.

Abstract

‘Social systems can be defined as autopoietic networks of distinctions and rules that specify which actions should be performed under which conditions. Social systems have an enormous power over human individuals, as they can “program” them, …’ [draft p 1]. DPB: I like the summary ‘distinctions and rules’, but I’m not sure why (maybe it is the definitiveness of this very small list). I also like the phrase ‘which actions .. under which conditions’: this is interesting because social systems are ‘made of’ communication, which in turn is ‘made of’ signals, which in turn are built up from selections of utterances &c., understandings and information. The meaning is that information depends on its frame, namely its environment. And so this phrase makes the link between communication, rule-based systems, and the assigning of meaning by (in) a system. Lastly, these social mechanisms hold a strong influence over humans, even up to the point of their damaging themselves. This paper is about the basic neural and behavioral mechanisms used for programming in social systems. This should be important for my landscape of the mind, and familiarization.

Introduction

Humans experience a large influence from many different social systems on a daily basis: ‘Our beliefs, thoughts and emotions are to an important extent determined by the norms, culture and morals that we acquired via processes of education, socialization and communication’ [p 1]. DPB: this resonates with me, because of the choice of the words ‘beliefs’ and ‘thoughts’: these nicely match the same words in my text, where I explain how these mechanisms operate. In addition I like this phrase because of the concept of acquisition, although I doubt that the word ‘communication’ above is used in the sense of Luhmann. It is not easy to critique these processes, or even to realize that they are ‘social construction’ (the one making a distinction cannot talk about it). Also, what is reality in this sense: is it what would have been without the behavior based on these socialized rules, or the behavior as-is (the latter, I guess)? ‘Social systems can be defined as autopoietic networks of distinctions and rules that govern the interactions between individuals’ (I preferred the version from the abstract: which actions should be performed under which conditions, DPB). ‘The distinctions structure reality into a number of socially sanctioned categories of conditions, while ignoring phenomena that fall outside these categories. The rules specify how the individuals should act under the thus specified conditions. Thus, a social system can be modeled as a network of condition-action rules that directs the behavior of individual agents. These rules have evolved through the repeated reinforcement of certain types of social actions’ [p 2]. DPB: this is a nice summary of how I also believe things work: rule-based systems – distinctions (social categories) – conditions per distinction – behavior as per the condition-action rules – rules evolve through repeated reinforcement of social actions.
‘Such a system of rules tends to self-organize towards a self-perpetuating configuration. This means that the actions or communications abiding by these rules engender other actions that abide by these same general rules. In other words, the network of social actions or communications perpetually reproduces itself. It is closed in the sense that it does not generate actions of a type that are not already part of the system; it is self-maintaining in the sense that all the actions that define parts of the system are eventually produced again (Dittrich & Winter, 2008). This autopoiesis turns the social system into an autonomous, organism-like agent, with its own identity that separates it from its environment. This identity or “self” is preserved by the processes taking place inside the system, and therefore actively defended against outside or “non-self” influences that may endanger it’ [p 2]. DPB: this almost literally explains how cultural evolution takes place. This might be a good quote to include and cut a lot of grass in one go! Social systems wield a powerful influence over people, up to the point of making them act against their own health. The workings of social systems are likened to parasites such as the rabies virus, which ‘motivates’ its host to become aggressive and bite others so as to spread the virus. ‘We examine the simple neural reinforcement mechanism that is the basis for the process of conditioning whilst also ensuring self-organization of social systems’ (emphasis by the author) [p 3]. DPB: very important: this is the pivot where the human mind is conditioned such that it is incited (motivated) to act in a specific way, and where the self-organization of the social system occurs. This is how my bubbles / situations / jobs work! An element of this process is familiarization: the neural reinforcement mechanism.

The Power of Social Systems

In the hunter-gatherer period, humans lived in small groups and individuals could come and go as they wanted to join or form a new group [p 3]. DPB: I question whether free choice was involved in those decisions to stay or leave – or whether they were rather kicked out – and whether it was a smooth transfer to other bands – or whether they lost standing and had to settle for a lower rank in a new group. ‘These first human groupings were “social” in the sense of forming a cooperative, caring community, but they were not yet consolidated into autopoietic systems governed by formal rules, and defined by clear boundaries’ [p 4]. DPB: I have some doubts because it sounds too idealistic / normal; however, if taken at face value then this is a great argument against which to illustrate the developing positions of Kev and Gav. In sharp contrast are the agricultural communities: they set themselves apart from nature and other social systems, everything outside of their domain being fair game for exploitation, hierarchically organized, upheld with a symbolic order: authorities and divinities paid homage to with offerings, rituals, prescriptions and taboos. In the latter society it is dangerous not to live by the rules: ‘Thus, social systems acquired a physical power over life and death. As they evolved and refined their network of rules, this physical power engendered a more indirect moral or symbolic power that could make people obey the norms with increasingly less need for physical coercion’ [p 4]. DPB: I always miss the concept of ‘autopolicing’ in the ECCO texts. Individuation of a social system: 1. a contour forms from first utterances in a context (mbwa!) 2. these are mutually understood and get repeated 3. when outside the distinction (norm) there will be a remark 4. autopolicing.
Our capacity to cognize depends on the words our society offers to describe what we perceive: ‘More fundamentally, what we think and understand is largely dependent on the concepts and categories provided by the social systems, and by the rules that say which category is associated with which other category of expectations or actions’ [p 5]. DPB: this adds to my theory the idea that not only the rules for decision making and for action depend on the belief systems, namely the memeplexes, but also people’s ‘powers of perception’.

How Social Systems Impede Self-actualization

Social rules govern the whole of our worldview, namely our picture of reality and our role within it (emphasis DPB re definition worldview): ‘They tell us which are the major categories of existence (e.g. mind vs. body, duty vs. desire), what properties these categories have (e.g. mind is insubstantial, the body is inert and solid, duty is real and desire is phantasmagoric), and what our attitudes and behaviors towards each of these categories should be (e.g. the body is to be ignored and despised, desire is to be suppressed)’ [p 5]. DPB: I like this because it gives some background to motivations; however, I believe they are more varied than this and that they do not only reflect the major categories but everything one can know (or rather believe). They are just-so in the sense that they can be (seen or perceived as) useful for something like human well-being, or limiting for it. They are generally tacit and believed to be universal, and so it is difficult to know which of the above they are. ‘.. these rules have self-organized out of distributed social interactions. Therefore, there is no individual or authority that has the power to change them or announce them obsolete. This means that in practice we are enslaved by the autopoietic social system: we are programmed to obey its rules without questioning’ [p 6]. DPB: I agree, there is no other valid option than that from a variety of just-so stories a few are selected that fit better with the existing ones. To people it may now appear that these are the more useful ones, but the arguments used serve a mere narrative that explains why people do stuff, lest they appear to do stuff without knowing why. And as a consequence the motivation to do things only if they serve a purpose is itself a meme that tells us to act in this way, especially vis-à-vis others, namely to construct a narrative such that this behavior is explained.
The rules driving behavior can be interpreted more or less strictly: ‘Moreover, some rules (like covering the feet) tend to be enforced much less strictly than others (like covering the genitals)’ [p 6]. DPB: hahaa: Fokke & Sukke. Some of the rules that govern a society are allowed some margin of interpretation, and so a variety of them exist; others are assumed to be generally valid, and hence they are more strictly interpreted, exhibiting less variety, leaving people unaware that they are in fact obeying a rule at all. As a consequence of a particular rule being part of a much larger system, rules cannot be easily changed, especially because the behavior of the person herself is – perhaps unknowingly – steered by that rule or system of rules. In this sense it can be said to hinder or impede people’s self-actualization. ‘The obstruction of societal change and self-actualization is not a mere side effect of the rigidity of social systems; it is an essential part of their identity. An autopoietic system aims at self-maintenance. Therefore, it will counteract any processes that threaten to perturb its organization (Maturana & Varela, 1980; Mingers, 1994). In particular, it will suppress anything that would put into question the rules that define it. This includes self-actualization, which is a condition generally characterized by openness to new ideas, autonomy, and enduring exploration (Heylighen, 1992; Maslow, 1970). Therefore, if we wish to promote self-actualization, we will need to better understand how these mechanisms of suppression used by social systems function’ [p 7]. DPB: I fully agree with the mechanism, and I honestly wonder whether it is at all possible to know one’s state of mind (what one has been familiarized with in one’s life experience so far, framed in the current environment), and hence whether it is possible to self-actualize in a way different from what the actual state of mind (known or not) rules.

Reinforcement: reward and punishment

Conditioning, or reinforcement learning, is a way to induce a particular behavior. Behavior rewarded with a pleasant stimulus tends to be repeated, while behavior punished by an unpleasant stimulus tends to be suppressed. The more often such a combination occurs, the more the relation will be internalized, such that it can take the shape of a condition-action (stimulus-response) rule. This differential or selective reinforcement occurs in a process of socialization; the affirmation need not be a material reward: a simple acknowledgement and confirmation suffices (smile, thumbs up, like!); these signals suffice for the release of dopamine in the brain. ‘Social interaction is a nearly ubiquitous source of such reinforcing stimuli. Therefore, it has a wide-ranging power in shaping our categorizations, associations and behavior. Maintaining this dopamine-releasing and therefore rewarding stimulation requires continuing participation in the social system. That means acting according to the system’s rules. Thus, social systems program individuals in part through the same neural mechanisms that create conditioning and addiction. This ensures not only that these individuals automatically and uncritically follow the rules, but that they would feel unhappy if somehow prevented from participating in this on-going social reinforcement game. Immediate reward and punishment are only the simplest mechanisms of reinforcement and conditioning. Reinforcement can also be achieved through rewards or penalties that are anticipated, but that may never occur in reality’ (emphasis by the author) [p 8].
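The differential-reinforcement mechanism described above can be sketched as a minimal learning rule (my own illustration; the scenario, rule names and the 0.1 learning rate are assumptions, not from the paper): each rewarded repetition strengthens a condition-action rule, each punishment weakens it, and the reward need not be material.

```python
# Minimal sketch of differential reinforcement of condition-action rules:
# repeated reward internalizes a rule; repeated punishment suppresses it.
strength = {}  # (condition, action) -> learned rule strength

def reinforce(condition, action, reward, rate=0.1):
    key = (condition, action)
    w = strength.get(key, 0.0)
    # move the rule's strength a small step toward the reward signal
    strength[key] = w + rate * (reward - w)

# A smile or a "like" (+1) repeated often internalizes the conformist rule;
# a frown (-1) suppresses the alternative.
for _ in range(50):
    reinforce("greeting", "smile back", +1.0)
    reinforce("greeting", "ignore", -1.0)

print(strength)
# the rewarded rule approaches +1, the punished one approaches -1
```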

The power of narratives

People are capable of symbolic cognition and can conceive of situations that have never occurred (to them): ‘These imagined situations can function as “virtual” (but therefore not less effective) rewards that reinforce behavior’ [p 8]. Narratives (for instance tales) feature characters who are punished or rewarded for their specific behavior. Social systems exploit people’s capacity for symbolic cognition using narratives, and hence build on the anticipatory powers of people to maintain and spread themselves. ‘Such narratives have the advantage that they are easy to grasp, remember and communicate, because they embed abstract norms, rules and values into sequences of concrete events experienced by concrete individuals with whom the audience can easily empathize (Bruner, 1991; Heylighen, 2009; Oatley, 2002). In this way, virtual rewards that in practice are unreachably remote (like becoming a superstar, president of the USA, or billionaire) become easy to imagine as realities’ (emphasis by the author) [p 9]. Narratives can become more believable when communicated via media, celebrities, scripture deemed holy, &c.

Conformist transmission

Reinforcement is more effective when it is repeated more often. Given that social systems are self-reproducing networks of communications (Luhmann, 1995), the information they contain will be heard time and again. Conformist transmission means that you are more liable to adopt an idea, behavior or narrative if more other individuals communicate it to you; once you have adopted it, you are more likely to convert others to it and to confirm it when others express it. DPB: I agree, and I never thought of this in this way: once familiarized with an idea, not only can one become more convinced of it, but one can also become more evangelical about it. In that way an idea spreads quicker if it is more familiar to more people, who then talk about it simultaneously. Now it can become a common opinion; and at that point it becomes more difficult to retain other ideas, up to the point that direct observation can be overruled. Sinterklaas and Zwarte Piet exist!

Cognitive dissonance and institutionalized action

People have a preference for coherence in thought and action: ‘When an individual has mutually inconsistent beliefs, this creates an unpleasant tension, known as cognitive dissonance; this can be remedied by rejecting or ignoring some of these thoughts, so that the remaining ones are consistent’ [p 10]. Social systems can use this to suppress non-conformist ideas by having a person act in accordance with the rules of the social system even where these conflict with the person’s own rules: the conformist actions cannot be denied, and so the person must cull the non-conformist ideas to release the tension. ‘This mechanism becomes more effective when the actions that confirm the social norms are formalized, ritualized or institutionalized, so that they are repeatedly and unambiguously reinforced’ [p 10]. DPB: an illustration is given from [Zizek 2010]: by performing the rituals one becomes religious, because the rituals are the religion. This is an example of a meme: an expression of the core idea; conversely, by repeating the expression one also repeats the core idea, and thereby familiarizes oneself with that idea as it becomes reinforced in one’s mind. This reminds me of the idea of the pencil between the lips making a person happier (held sideways) or unhappier (sticking forward). And to top it off: ‘Indeed, the undeniable act of praying to God can only be safeguarded from cognitive dissonance by denying any doubts you may have about the existence of God. This creates a coherence between inner beliefs and socially sanctioned actions, which now come to mutually reinforce each other in an autopoietic closure’ [p 10]. DPB: this is the role of dogma in any belief system: the questions that cannot be asked, the no-go areas, &c.

Distributed Intelligence

Heylighen, F. and Beigi, S. . Mind outside Brain: a Radically Non-dualist Foundation for Distributed Cognition . Socially Extended Epistemology (Eds. Carter, Clark, Kallestrup, Palermos, Pritchard) . Oxford University Press . 2016

Abstract

We approach the problem of the extended mind from a radically non-dualist perspective. The separation between mind and matter is an artefact of the outdated mechanistic worldview, which leaves no room for mental phenomena such as agency, intentionality, or experience. [DPB: the rationale behind this is the determinism argument: if everything is determined by the rules of physics (nature), then nothing can be avoided and the future is determined. There can be no agency because there is nothing to choose; there can be no intentionality because people’s choices are determined by the rules of physics (it appears to be their intention, but it is physics talking); and there can be no personal experience because which events a person encounters is independent of the existence of the (physical) person]. We propose to replace it by an action ontology, which conceives mind and matter as aspects of the same network of processes. By adopting the intentional stance, we interpret the catalysts of elementary reactions as agents exhibiting desires, intentions, and sensations. [DPB: I agree with the idea that mind and body are ‘functions of the same processes’. The intentional stance implies the question: what would I desire, want, feel in its place in this circumstance, and hence what can I be expected to do?] Autopoietic networks of reactions constitute more complex superagents, which moreover exhibit memory, deliberation and sense-making. In the specific case of social networks, individual agents coordinate their actions via the propagation of challenges. [DPB: for the challenges model: see the article Evo mailed]. The distributed cognition that emerges from this interaction cannot be situated in any individual brain. [DPB: this is important; I have discussed this in the section about the Shell operator, who cannot physically be aware of the processes outside his own scope of professional activities].
This non-dualist, holistic view extends and operationalizes process metaphysics and Eastern philosophies. It is supported by both mindfulness experiences and mathematical models of action, self-organization, and cognition. [DPB: I must decide how to apply the concepts of individuation, virtual/real/present, process ontology and/or action ontology, distributed cognition and distributed intelligence (do I need that?), and computation/thinking/information processing in my arguments].

Introduction

Socially extended knowledge is a part of the philosophical theory of the extended mind (Clark & Chalmers, 1998; Palermos & Pritchard, 2013; Pritchard, 2010): mental phenomena such as memory, knowledge and sensation extend outside the individual human brain, and into the material and social environment. DPB: this reminds me of the Shell narrative. The idea is that human cognition is not confined to information processing within the brain, but depends on phenomena external to the brain: ‘These include the body, cognitive tools such as notebooks and computers, the situation, the interactions between agent and environment, communications with other agents, and social systems. We will summarize this broad scale of “extensions” under the header of distributed cognition (Hutchins, 2000), as they all imply that cognitive content and processes are distributed across a variety of agents, objects and actions. Only some of those are located inside the human brain; yet all of them contribute to human decisions by providing part of the information necessary to make these decisions’ [pp. 1-2]. The aim of this paper is to propose a radical resolution to this controversy (over why processes such as belief, desire and intention are considered mental, while others, such as information transmission, processing and storage, are considered mechanical): ‘we assume that mind is a ubiquitous property of all minimally active matter (Heylighen, 2011)’ (emphasis DPB: this statement is analogous to the statement that all processes in nature are computational processes, or that all processes are cognitive and individuating processes) [p 2].

From dualism to action ontology

Descartes argued that people are free to choose: therefore the human mind does not follow physical laws. But since all matter follows such laws, the mind cannot be material. Therefore the mind must be independent, belonging to a separate, non-material realm. This is illustrated by the narrative that the mind leaves the body when a person dies. But a paradox arises: if mind and matter are separate, then how can one affect the other? Most scientists agree that the mind ‘supervenes’ on the matter of the brain and cannot exist without it. But many still reserve some quality that is specific to the mind, which leaves their thinking dualist. An evolutionary worldview explains the increasing complexity: elements and systems are interconnected, and the mind need not be explained as a separate entity: ‘.. mind appears .. as a natural emanation of the way processes and networks self-organize into goal-directed, adaptive agents’ [p 5], a conception known as process metaphysics. The thesis here is that the theory of the mind can be both non-dual AND analytic. To that end the vagueness of process metaphysics is replaced with action ontology: ‘That will allow us to “extend” the mind not just across notebooks and social systems, but across the whole of nature and society’ [p 5].

Agents and the intentional stance

Action ontology is based on reactions as per COT. Probability is a factor, and so determinism does not apply. Reactions or processes are the pivot in action ontology and states are secondary: ‘States can be defined in terms of the reactions that are possible in that state (Heylighen, 2011; Turchin, 1993)’ [p 7]. DPB: this reminds me of the restrictions of Oudemans, the attractors and repellers that raise the probability that some states, and lower the probability that other states, can follow from this particular one. In that sense it also reminds me of the perception systems can give an observer that they are intentional. The list of actions that an agent can perform defines a dynamical system (Beer, 1995, 2000). The states that lead into an attractor define the attractor’s basin, and the process of attaining that position in phase space is called equifinality: different initial states produce the same final state (Bertalanffy, 1973). The attractor, the place the system tends to move towards, is its ‘goal’, and the trajectory towards it, as chosen by the agent at each consecutive state, is its ‘course of action’ in order to reach that ‘goal’. The disturbances that might bring the agent off its course can be seen as challenges, which the agent does not control, but which it might be able to tackle by changing its course of action appropriately. To interpret the dynamics of a system as a goal-directed agent in an environment is the intentional stance (Dennett, 1989).
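Equifinality is easy to show numerically. Below is a minimal sketch, not from the paper: I assume a hypothetical one-dimensional dynamics whose state is pulled halfway toward a fixed point at each tick, so that widely different initial states end up in the same attractor.

```python
def step(x):
    """Hypothetical dynamics: the state moves halfway toward the
    fixed point x* = 4.0 at each tick (a simple point attractor)."""
    return x + 0.5 * (4.0 - x)

def run(x0, ticks=50):
    """Iterate the dynamics from an initial state x0."""
    for _ in range(ticks):
        x0 = step(x0)
    return x0

# Equifinality: very different initial states produce the same final state.
print(run(-100.0), run(0.0), run(1000.0))  # all approximately 4.0
```

Read through the intentional stance: x* = 4.0 is the ‘goal’, the whole real line is its basin, and the trajectory toward it is the ‘course of action’.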

Panpsychism and the Theory of Mind

‘The “sensations” we introduced previously can be seen as rudimentary “beliefs” that an agent has about the conditions it is experiencing’ [p 10]. DPB: conversely, beliefs can be seen as sensations in the sense of internalized I-O rules. ‘The prediction (of the intentional stance DPB) is that the agent will perform those actions that are most likely to realize its desires given its beliefs about the situation it is in’ [p 10]. DPB: and this is applicable to all kinds of systems. Dennett has indeed designed different classes for physical systems, and I agree with the authors that there is no need for that, given that these systems are all considered to be agents (/ computational processes). Action ontology generalizes the application of the intentional stance to all conceivable systems and processes. To view non-human processes and systems in this way is in a sense ‘animistic’: all phenomena are sentient beings.

Organizations

In the action ontology a network of coupled reactions can be modeled: the output of one reaction forms the input for the next, and so on. In this way it can be shown that a new level of coherence emerges. If such a network produces its own components, including the elements required for its own reproduction, it is autopoietic. In spite of ever-changing states, its organization remains invariant. The states are characterized by the current configurations of the system’s elements; the states change as a consequence of perturbations external to the system. Its organization lends the network system its (stable) identity despite the fact that it is in ongoing flux. The organization and its identity render it autonomous, namely independent of the uncertainties in its environment: ‘Still, the autopoietic network A interacts with the environment, by producing the actions Y appropriate to deal with the external challenges X. This defines the autopoietic organism as a higher-order agent: A + X → A + Y. At the abstract level of this overall reaction, there is no difference between a complex agent, such as an animal or a human, and an elementary agent, such as a particle. The difference becomes clear when we zoom in and investigate the changing state of the network of reactions inside the agent’ [p 14]. DPB: this is a kind of definition of the emergence of organization of a multitude of elements into a larger body. This relates to my black-box / transparency narrative. This line of thought is further elaborated in COT, where closure and self-maintenance are introduced to explain the notion of autopoiesis in networks. Closure means that eventually no new elements are produced; self-maintenance means that eventually all the elements are produced again (nothing is lost); together they imply that all the essential parts are eventually recycled. This leads to states on an attractor. Also see COT article Francis. //INTERESTING!!
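The closure and self-maintenance conditions can be sketched as set-level checks. This is my own simplification, not COT’s formal definition: proper COT self-maintenance requires a flux-vector condition over reaction rates, whereas here I only check, at the level of sets, that applicable reactions produce nothing new (closure) and reproduce everything they consume (a coarse stand-in for self-maintenance). The reactions are the cyclic A → B, B → C, C → A example from the notes.

```python
def closed(species, reactions):
    """Closure: no reaction whose inputs lie inside the set produces
    a species outside the set."""
    return all(outs <= species
               for ins, outs in reactions if ins <= species)

def self_maintaining(species, reactions):
    """Coarse set-level check: every species consumed by an applicable
    reaction is also produced by some applicable reaction.
    (Full COT self-maintenance needs a flux-vector argument.)"""
    applicable = [(i, o) for i, o in reactions if i <= species]
    if not applicable:
        return True
    consumed = set().union(*(i for i, _ in applicable))
    produced = set().union(*(o for _, o in applicable))
    return consumed <= produced

# The cyclic example from the notes: A -> B, B -> C, C -> A
reactions = [({'A'}, {'B'}), ({'B'}, {'C'}), ({'C'}, {'A'})]
s = {'A', 'B', 'C'}
print(closed(s, reactions), self_maintaining(s, reactions))  # True True
```

Note that the subset {'A'} alone is not closed (A → B produces an element outside it), which is why the organization exists only at the level of the whole cycle.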
In simple agents the input is directly transformed into an action: there is no internal state and these agents are reactive. In complex networks an input affects the internal state: the agent keeps an internal memory of previous experiences. That memory is determined by the sequence of sensations the agent has undergone. This memory, together with its present sensations (perceptions of the environment), constitutes the agent’s belief system. A state is processed (to the next state) by the system’s network of internal reactions, the design of which depends on its autopoietic organization. A signal may or may not be the result of this processing, and hence this process can be seen as ‘deliberation’ or ‘sense-making’. Given the state of the environment, given the memory of the system resulting from its previous experience, and given its propensity to maintain its autopoiesis, an input is processed (interpreted) to formulate an action to deal with the changed situation. If the action turns out to be appropriate, then the action was justified, the rule leading up to it was true, and the beliefs are knowledge: ‘This is equivalent to the original argument that autopoiesis necessarily entails cognition (Maturana & Varela, 1980), since the autopoietic agent must “know” how to act on a potentially perturbing situation in order to safeguard its autopoiesis’. This is connected to the notion of “virtue reliabilism”, which asserts that beliefs can be seen as knowledge when their reliability is evidenced by the cognitive capabilities (“virtues”) they grant the agent (Palermos, 2015; Pritchard, 2010) [p 15]. UP TO HERE //.
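The distinction between a reactive agent and one with internal state can be sketched in a few lines. This is a toy illustration of the paragraph above, with invented rule names; the ‘deliberation’ is deliberately trivial (flee only after repeated threats), just to show that memory makes the same stimulus yield different actions.

```python
class ReactiveAgent:
    """Maps each input directly to an action: no internal state."""
    def __init__(self, rules):
        self.rules = rules            # stimulus -> action
    def act(self, stimulus):
        return self.rules.get(stimulus, 'ignore')

class StatefulAgent:
    """Keeps a memory of past sensations; the same stimulus can yield
    different actions depending on accumulated experience."""
    def __init__(self):
        self.memory = []              # sequence of past sensations
    def act(self, stimulus):
        self.memory.append(stimulus)
        # toy 'deliberation': react only after repeated threats
        if self.memory.count('threat') >= 2:
            return 'flee'
        return 'monitor'

r = ReactiveAgent({'threat': 'flee'})
s = StatefulAgent()
print(r.act('threat'))                   # flee (immediate, stateless)
print(s.act('threat'), s.act('threat'))  # monitor flee (memory-dependent)
```

The stateful agent’s memory plus its current sensation is its (minimal) belief system; the reactive agent has none.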

Socially distributed cognition

‘In our own approach to social systems, we conceive such processes as a propagation of challenges (Heylighen, 2014a). This can be seen as a generalization of Hutchins’s analysis of socially distributed cognition taking place through the propagation of “state” (Hutchins, 1995, 2000): the state of some agent determines that agent’s action or communication, which in turn affects the state of the next agent receiving that communication or undergoing that action. Since a state is a selection out of a variety of potential states, it carries information. Therefore, the propagation of state from agent to agent is equivalent to the transmission and processing of information. This is an adequate model of distributed cognition if cognition is conceived as merely complex information processing. But if we want to analyze cognition as the functioning of a mind or agency, then we need to also include that agent’s desires, or more broadly its system of values and preferences. .. in how far does a state either help or hinder the agent in realizing its desires? This shifts our view of information from the traditional syntactic perspective of information theory (information as selection among possibilities) (Shannon & Weaver, 1963) to a pragmatic perspective (information as trigger for goal-directed action) (Gernert, 2006)’ (emphasis by DPB) [pp. 17-8]. DPB: this is an important connection to my idea that not only people’s minds process information, but the organization as such processes information too. This can explain how a multitude of people can be autonomous as an entity ‘an sich’. Distributed cognition is the cognition of the whole thing, and in that sense the wording is not good, because the focus is no longer the human individual but the multitude as a single entity; a better word would be ‘integrated cognition’? It is proposed to replace the terms “information” or “state” with “challenge”: a challenge is defined as a situation (i.e. 
a conjunction of conditions sensed by some agent) that stimulates the agent to act. DPB: Heylighen suggests that acting on this challenge brings benefit to the agent; I think it is more prosaic than that. I am not sure that I need the concept of a challenge. Below is an illustration of my Shell example: an individual knows that action A leads to result B, but no one knows that U → Y; yet the employees together know this: the knowledge is not in one person, but in the whole (the organization): John: U → V, Ann: V → W, Barbara: W → X, Tom: X → Y. Each person recognizes the issue, knows only the (partial) answer, but knows (or finds out) who knows the next step; the persons are aware of their position in the organization and of who else is there and (more or less) what they do. ‘Together, the “mental properties” of these human and non-human agents will determine the overall course of action of the organization. This course of action moves towards a certain “attractor”, which defines the collective desire or system of values of the organization’ [p 21]. DPB: if I want to model the organization using COT, then the section above can be a starting point. I’m not sure I do want to, because I find it impracticable to identify the mix of ingredients that should enter the concoction that is the initial condition to evolve into the memeplex that is a firm. How many ‘get a job’ per what amount of ‘the shareholder is king’ should be in it?
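The Shell example above can be sketched directly as a propagation of challenges. The agent names and their local rules (John: U → V, Ann: V → W, Barbara: W → X, Tom: X → Y) come from the notes; the routing logic is my own minimal assumption that a challenge is simply handed to whichever agent can transform its current state.

```python
# Each employee knows only one local transformation; nobody holds
# the composite rule U -> Y.
knowledge = {'John': ('U', 'V'), 'Ann': ('V', 'W'),
             'Barbara': ('W', 'X'), 'Tom': ('X', 'Y')}

def propagate(challenge, knowledge):
    """Pass the challenge to whichever agent can transform the current
    state, until no one can act. The chain as a whole, not any single
    agent, 'knows' the composite transformation."""
    trace = [challenge]
    while True:
        handler = next((who for who, (src, _) in knowledge.items()
                        if src == challenge), None)
        if handler is None:
            return challenge, trace
        challenge = knowledge[handler][1]
        trace.append(f'{handler} -> {challenge}')

result, trace = propagate('U', knowledge)
print(result)  # 'Y': produced by the organization, not by any one agent
print(trace)
```

The composite U → Y is nowhere stored as a rule; it exists only as the emergent effect of the chained local rules, which is the point of distributed (or ‘integrated’) cognition.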

Experiencing non-duality

Using the intentional stance it is possible to conceptualize a variety of processes as mind-like agencies. The mind does not reside in the brain; it sits in all kinds of processes in a distributed way.

Social Systems and Autopoiesis

Lenartowicz, M. . Linking Social Communication to Individual Cognition: Communication Science between Social Constructionism and Radical Constructivism . Constructivist Foundations vol. 12 No 1 . 2016

‘I wish to differentiate between a social species in the organic, animalistic sense and the interconnectivity of social personas in social science’s sense. While the former expresses its sense structures, co-opting language and other available symbolic tools towards its own autopoietic self-perpetuation and survival, the latter (personas) self-organize out of the usages of these tools – and aggregate up into larger self-organizing social constructs’ [p 50]. DPB: I find this important because it adds a category of behavior to the existing ones: the biological (love of kin &c.) and the social (altruism), both of which improve the probability that the organism survives; now added is externally directed behavior that produces self-organization in the aggregate. ‘If we agree to approach social systems as cognitive agents per se, we must assume that there will be instances, or aspects, of human expression that are rather pulled by the “creatures of the semiosphere”, as I call the autopoietic constructs of the social (Lenartowicz 2016), for the sake of their own self-perpetuation, than pushed by the sense-structures of the human self’ [p 50]. DPB: I like this idea of the human mind being attracted by some aspects of social systems (and/or repelled by others); a term much used in ECCO is whether ‘something resonates with someone’. The argument above is that both a push and a pull exist, and that in the case of the social, the semiotic creatures have the upper hand over the proffered biological motivations. ‘The RC (radical constructivist) approach to human consciousness must, then, be balanced by the RC view of the social as an individuated, survival-seeking locus of cognition. The difference between the two kinds of organic and symbolic expressions of sociality, which are here suggested as perpetuating the two distinct autopoietic systems, .. 
has finally settled the long-standing controversy about whether social systems are autopoietic (..), demonstrating that both sides were right. They were simply addressing two angles of the social. Maturana’s objections originated from his understanding of social relatedness as a biological phenomenon (the organic social), whereas the position summarized by Cadenas and Arnold-Cathalifaud was addressing the social as it is conceived by the social sciences (the symbolic social). The difference here is not in the different disciplinary lenses being applied to the same phenomenon. Rather, it is between two kinds of phenomena, stemming from the cognitive operation of two kinds of autopoietic embodiments. For one, the social is an extension, or an expression, of the organic, physical embodiment of a social species. It does not form an operational closure itself. For the other, the social has happened to self-organize and evolve in a manner that has led it to spawn autonomous, autopoietic and individuating cognitive agents – the “social systems” about which Luhmann wrote’ [p 50]. DPB: this is a long quote with some important elements. First, the dichotomy between the social aspects of humans is explained. Second, it explains why Maturana, of all people, was opposed to the applicability of autopoiesis to social systems; now it seems clear why. Third, embodiment is introduced: for the organic social, the social is an extension of the physical embodiment of the individual, but without the autonomy; for the other, the social ís the embodiment, namely it self-organizes and evolves into autonomous systems. I like that: the organization at the scale of the human and the organization at the level of the aggregate of humans.

Darwinian Philosophy

OUDEMANS PLANTAARDIG

[Th. C. W. Oudemans and N. G. J. Peeters, Plantaardig – Vegetatieve Filosofie, KNNV Uitgeverij, 2014]

Find below some original clippings from the above book on the philosophy of Darwinism in general and the perception of plants in ecosystems. Some of them were used in my English book on the concept of the firm.

‘That people regard nature as a domain to be controlled and managed – is that strange, or even avoidable? Not at all, for people are living beings, and there are no living beings that do not multiply. Whatever multiplies will have to try to bend its environment to its will, on pain of extinction. In this respect too, humans do not differ from other life forms. Every living being regards itself as the subject of its own world’ [Oudemans e.a. 2014, p 15].

‘.. the metaphysical classification of nature: there are plants, which grow and wither but do not feel or strive; there are animals, which do feel and strive but do not think; and there are humans, who not only grow, feel and strive, but also think. Because plants stand so low on the semantic ladder they are soulless, and hence hardly fellow creatures of humans. .. Plants rarely move, and when they do, it is usually invisible to the eye. That does not alter the fact that they have a relation to their environment that is as active as it is intelligent – a relation on which, in turn, all animals and all humans parasitize’ [Oudemans e.a. 2014 p 16]

‘Every animal, and hence every human, parasitizes directly or indirectly on plants. Animals and humans also live on stored sunlight, but they can do so only by feeding on plants, even if indirectly, by consuming one another. .. Even man’s ‘independent’ thinking parasitizes on the vegetative. It surrounds everything I think I can say about it. The semantics within which I see myself in my relation to living nature is itself of natural origin – even though I cannot simply place that nature opposite me and discuss it’ [Oudemans e.a. 2014 p 21].

‘Darwin sees life as a bank on which everything that lives is entangled and entwined with everything else. This means – a conclusion Darwin does not draw – that human life, and with it human thinking, are in their own way entangled in and interwoven with that same bank. The bank cannot be surveyed. As a philosopher I reflect on this, while nevertheless remaining inside it’ [Oudemans e.a. 2014 p 23].

‘A plant is not a plant if it does not multiply. Whatever multiplies exists as a series. A series exists as a continuing succession of copies and is therefore never definitively absent or present. If the continuation stops, the organism is dead. If the continuation of a species stops, it is extinct. Wherever something exists as a continuing series of copies, divergent variants will arise, and in such a way that the nature and extent of the variation itself cannot be predicted’ [Oudemans e.a. 2014 p 30].

‘Leibnitz speaks of the succession of things spread across the universe of living beings. Every living being is part of a series that is not finished, in the direction of the past as well as in the direction of the future, series interminata. Leibnitz recognizes that series are not immune to variation. What lives reproduces, but what reproduces has the tendency to generate mutations. He speaks of a tendentia interna ad mutationem. In the worldview of Newton and Descartes there is ultimately one possibility, which is or is not realized, and that is the possibility of the universe as it now looks. That this universe arose this way and not otherwise is causally determined – it could not have turned out differently. With Leibnitz a very different universe comes to the fore, namely a world in which divergent possibilities are realized simultaneously, time and again. But that cannot last: there are too many possibilities demanding realization at the same moment. And because these possibilities all multiply, variants will have to drop out. Again and again struggle (conflictus) arises .. You can never say that the best variant has won. In the world of living copies it is impossible to encounter a truly sufficient ground. The sufficient ground would have to lie outside the succession of copies. .. Whoever does not accept this God will have to accept that in this world there are only and exclusively insufficient grounds. What there is could have been otherwise. Or it could have not been there at all. Or circumstances change, so that what lost before might now have done superbly’ [Oudemans 2014 pp. 31-2].

‘The question of the species or identity is the question of the essence of something, but at the same time also the question of its naming. Can my naming touch the real nature of the thing itself or not?’ [Oudemans 2014 pp. 31-2]. This is the search for, and the naming of, the Aristotelian essence of things. Linnaeus also assumed the existence of essential species. Deviations in appearance were merely the result of particular natural circumstances.

‘Hobbes comes straight to the point: that names are arbitrary can be assumed without further question. Names do pretend to be universal, but in the end that universality is nothing other than the gathering of all sorts of concrete, similar cases (of a daisy, for instance) under an invented denominator. .. Locke realizes: living nature cannot simply be divided into fixed species; it is endlessly transformable. People classify two horses as belonging to the same species, and a horse and a zebra as not. But that is no more than a pragmatic decision, not dictated by any reality whatsoever’ [Oudemans 2014 p. 37]. This approach is called conventionalism or nominalism: essentialism does not apply to nature. It is not the genus that determines the nature of the plant, but the other way around.

‘With Darwin a new possibility has entered the world of meaning that connects man and nature, namely that neither the natural species nor the names for them can be sharply separated from one another, and yet their classification is not arbitrary, because there are relations of kinship that show success in the struggle for existence. The separations between the species do exist, but they are vague and porous, and thanks to the variability of the living and the unpredictable changes in the environment, they are not fixed for eternity. Both exist as variation and subsequent selection of the survivors, without the selection ever leading to a definitive result, for the multiplication, and hence the variation, continues without end’ [Oudemans 2014 p. 41]

Co-evolution of flowering plants and insects (Darwin and de Saporta).

‘What lives multiplies. And it varies. But on the finitely habitable earth all those variants cannot continue to exist at the same time. Some variants survive, others die out. This does not happen just like that: there is a confrontation with the environment, through which one variant proves more suitable than another. That has significance for the way animals and plants must be understood. They are not, as the mathesis universalis assumes, substances or forces that subsequently find themselves in a certain setting; rather, they exist as a relation to their environment. There is not first a living being that then enters into a meaningful relation with other living beings and the rest of nature; those relations are what determine its nature. In this book that is called its monadic character: monads exist as mirrors of their environment. .. To begin with, every living being forms its own perspective on the world. But then it can no longer be absorbed without remainder into human knowledge and control of nature. It will turn out to be stranger still: people think they manipulate plants, but the reverse happens just as much. .. When plants and trees exist as their relations to their environment, they have a boundary all their own: they can partly admit the outside world and partly shut it out. They are marked by being surrounded by membranes. .. It is not I who attach this meaning to this tree; it does so itself, in dialogue with its environment’ [Oudemans 2014 pp. 54-5].

‘.. living beings cannot be understood in the semantics of independent substances and independent subjects. Living beings themselves form perspectives on the world that surrounds them. A substance is not an independent being, but a perspective of its own on the world, which at the same time is a mirror of that same world. This is what Leibnitz calls a monad. Mirroring need not be depiction – it can be a matter of one thing being attuned to another, as the ear of a cup is attuned to the hand of the tea drinker and a tree leaf is attuned to the sunlight’ [Oudemans 2014 p. 57]

‘Living beings form series that multiply and mutate. But in a finitely habitable world they cannot all continue to exist at once. Because there are multiple varied series, survival is differential, depending on the environment. One series multiplies more than another. That is the meaning of the monadic character of living nature. The environment has significance for the survival of the series. One series is ‘more rational’ than another, because it is better adapted to a particular environment. The properties of the environment to which organisms are attuned become internalized in these organisms over time. This happens again and again in the endless row of organisms that are each other’s descendants. This implies that a living being can never be seen apart from its environment, nor from its ancestors in their environment’ [Oudemans 2014 pp. 57-8].

‘The monadic nature of crops is evident from the relation between trees, grasses and humans. Every crop is confronted with the problem: how do I avoid being eaten and being overshadowed by my competitors? .. Grass spreads exceptionally fast. It bets on growth and dispersal, not on permanence, as trees do. Grass keeps growing back from a well-hidden node (just above this node lies a tissue capable of division – intercalary meristem – from which new stem segments grow) that is not easily eaten. It cannot multiply without the large ungulates that eat and disperse it. The ungulates, in turn, have become adapted to grass: from their stomachs to their teeth they are marked by it. Grass and the majority of the ungulates are interwoven with each other – not to be thought of apart. Humans belong to these grass-bound species (since grasses are used and consumed by humans DPB)’ [Oudemans 2014 p. 60]

The formerly fixed identities of living beings turn out to be porous, changeable and unsurveyable. In plants this is even more extreme than in animals: their individuality is uncertain and volatile' [Oudemans 2014 p. 62]

The dominant species become adapted to divergent places in the economy of nature (note 235). Darwin's insight is owed to the semantics of the monad. A living being is only a living being when it finds itself in an environment, in a back-and-forth with it. Variants of plants survive when they find new environments, niches, that are habitable for them and not for the other variant. Put differently: the struggle for existence requires an arena. When the arena within which the struggle takes place mutates, the struggle mutates too. Whoever is fit for one arena may lose in another' [Oudemans 2014 p. 68].

Wherever there is life, semi-permeable boundaries exist, membranes, at all levels. From parts of cells via cells as a whole, via parts of organisms such as leaves to organisms as a whole, from rain forests to the earth as a whole, everywhere membranes maintain the distinction between the inside and the outside, often of an energetic nature… In Leibniz's world of variants and chance, entropy later turned out to play a leading role. It means: let a closed system run its course and the differences in energy existing within it will be levelled out. The order of the system tends towards disorder. Why? Because far more disorderly than orderly possibilities exist for the system. The statistical probability that a system becomes disorderly is enormous' [Oudemans 2014 p. 73]

If a leaf were completely open to the outside world, it would dissolve and merge into its environment. If a leaf were completely closed, it would immediately undergo the fate that it now manages to postpone for a while, namely being dead, in accordance with the principle of entropy' [Oudemans 2014 p. 73].

Nature is a struggle of possibilities, which simply cannot all be realised. This means that the ground for why one thing exists and another does not cannot be limited to efficient causes – things bringing about a change in motion by bumping into other things. There are restrictions that ensure that one possibility is realised and another is not. .. restrictions cannot be understood merely as limitations that choke off possibilities. They exclude possibilities, and precisely thereby new possibilities are realised. Every move (on a chessboard) limits the number of possible countermoves, and precisely because of that, splendid and unprecedented patterns can arise on the board.' [Oudemans 2014 p. 77].

Life on earth is not in equilibrium. Energy must continuously be taken up from the environment and expelled again. An energetic difference must be maintained between a living being and its environment. And yet: living beings, which by their nature are 'far from equilibrium', are nevertheless extremely stable. Many plant and human genes are literally billions of years old. While the wind and the waves of entropy erode everything on earth, life preserves its non-equilibrium stability over cosmic time spans.' [Oudemans 2014 p. 79].

At first sight the mechanical reduction seems to deprive living beings of precisely their life. A plant is reduced to a machine, and that is not what a plant is. .. But ultimately it is not objective reality that is primary but, as Leibniz has shown, the back-and-forth between me and the plant. .. That is the point Heidegger brought forward. You can understand a tree as a machine, but that gives you no view yet of the relation between the vegetal and the human. What a tree is and what I myself am, how the vegetal co-determines my own identity – all of that concerns the way in which the one encounters the other. The nature of this encounter is semantic; it does not lie in the facts and laws on the object side, but in the back-and-forth within which the facts and laws play out.' [Oudemans 2014 p. 87].

Characteristic of the inherited semantics is that living beings somehow act of themselves. They have the origin of their movement within themselves, as Aristotle puts it. But in plants this is the case only to a limited extent. They cannot think, they cannot perceive and therefore cannot strive for anything, and they cannot move from their place, says Aristotle. The only thing that characterises a plant is the kind of movement associated with nutrition, growth and decomposition. .. Plants stand on a low rung of a development that runs from plants via striving and feeling animals up to the thinking human. This semantics dominates modern European thought up to the present day. .. Thanks to Darwinism, Aristotelianism is no longer so self-evident. Plants are by no means in the comatose state ascribed to them. The movements of plants are often so slow that they remain hidden from the human gaze. They live on a different time scale.' [Oudemans 2014 pp. 88-9]. Countless examples are known of activities of plants aimed at influencing their vegetal or animal environment [Oudemans 2014 pp. 89-100].

An automaton is a machine that maintains itself and multiplies itself. Chemical machines can do this, but mechanical ones cannot. Seen this way, humans have never yet manufactured an automaton, whereas all living beings are automata in this sense. .. Human-made machines always need humans to remain in existence and to multiply. They are not truly autarkic, not true automata, as Leibniz has made clear.' [Oudemans 2014 p. 105].

Nature abhors self-fertilisation, nature abhors self-pollination’ [Wallace, Darwin in Oudemans 2014 p. 108]

..the world is not causally determined, but is a struggle between multiplying series of mutants, with selection taking place at every turn. No purpose is involved, and yet in the struggle for multiplication whatever is functional at that moment keeps rising to the top. Functional means nothing other than: under certain circumstances one variant survives in greater numbers than another' [Oudemans 2014 pp. 109-10]

Dawkins has made it clear. Genes manipulate the world. It is as if they have a goal, namely to maximise their survival. But they do not. It is simply the case that the variants with the most survivors survive. Goals and strivings play no part in it. But for me as an individual, as an instrument of the genome, it is no different: individuals do not consciously strive to maximise anything whatsoever; they behave as if they were maximising something. ..

People like to see themselves as beings that are purposeful, goal-directed or efficient. That is an offshoot of the subject-object idea. If it turns out that the world is monadic, a back-and-forth of perspectives and communication, then it is better to speak of attraction. That says something about the relation between one being and another. Attraction readily has significance for both perspectives: x exerts attraction on y (where it matters little whether x is itself aware of it). That can be to the advantage of y, but also of x. What seems to me to be a goal of my own is the attraction of an attractive being' [Oudemans 2014 p. 110].

Human cultivation is by its nature aimed at banishing everything that is impure, in order to secure maximal control against all feralising influences. Humans need hard, impermeable separations, not semi-permeable membranes. This is evident in all kinds of ways, beginning with the language people use: in the preceding it became clear how strongly Linnaeus was driven by the desire for pure and impermeable categorisations.' [Oudemans 2014 p. 124].

Every time an old apple variety drops out of cultivation, a package of genes – that is, a package of qualities of taste, colour and texture, and of resistance to parasites – has disappeared from the face of the earth [M. Pollan, The Botany of Desire: A Plant's Eye View of the World, 2001 p. 57 in Oudemans 2014 p. 130]

We played our part on our side. We multiplied the flowers out of all proportion. We moved their seeds around the planet, we wrote books to spread their fame and secure their happiness. For the flower it was the same old story. Another great evolutionary deal with a willing, credulous animal [M. Pollan, The Botany of Desire: A Plant's Eye View of the World, 2001 p. 119 in Oudemans 2014 p. 137]

Playing Scrabble with only Q's and X's.

No shepherd and one herd. Everyone wants the same, everyone is equal: whoever feels otherwise goes voluntarily into the madhouse.' .. No one worries about bureaucratisation, mutual dependence, the destruction of 'privacy', surrender to social media and, above all, being delivered up to an almighty, all-pervading, all-knowing state, which has swallowed up nearly everything that previously counted as a human existence, without this dawning on its citizens. [Nietzsche, Also Sprach Zarathustra, p. 20 in Oudemans 2014 p. 142]

According to Ten Bos, Bureaucracy is (like) an Octopus

This is a summary of Ten Bos's book: 'Bureaucratie is een inktvis'. The concept of a hyperobject is valuable and was used extensively in my book about the firm.

Characteristics of a bureaucracy are: 1) it has viscosity; 2) it is not confined to some location; 3) it exists in different time dimensions; 4) it is only discernible in phases; 5) it is interobjective.

1) Viscosity. People dealing with bureaucracies know these ethical stances: a) groups, not individuals, are the source of true creativity; b) belonging is not a wish but a moral law with which the individual must comply; c) submitting to the rationality and science of the collective leads to individual and collective benefit. This ethic is omnipresent in bureaucracies: bureaucratic memes.

This is the system by which business firms are directed and controlled via rules, responsibilities for decisions and their procedures. It also involves the way the company objectives are set, the means of attaining them and the monitoring of them. The focus here is on the relation between the shareholders and the management. Institutions can be seen as bodies of rules forming the environment of markets and organisations where trade-offs take place. The nature of these environments can be, for instance, economic, political, social, cultural or institutional. The environment provides the conditions for the creation of coordination mechanisms, for shaping them, and for providing the selection mechanisms that evolve both. The environment of organisations and markets consists of rules shaping human interaction and safeguarding transactions from the risks specific to them. In this sense 'the way the game is played' is shaped by the cultural-institutional environment, which is itself a result of cultural evolution. It is suggested here that this myriad of detailed routines, rules and attitudes evolves via human communication from person to person, and in that way is capable of generating a finite yet large variation of tentative and experimental beliefs and corresponding decisions and actions for people to exhibit in their professional and private lives alike.

The average counts: not to spend money is good but keeps the collective poor, while to spend is sinful but benefits the collective. In that sense mediocrity is a good thing, because it benefits the collective, whereas excelling as an individual damages the collective. As a consequence average performance is beneficial: too much, too big or too deep can never be a good thing. And this hangs in the balance: not to act so as to maximise some things (be a brilliant individual), yet to act so as to maximise other things (consume). Traditional theory of bureaucracy states that the person and the position are separate entities, but starting from the hyperobject theory it becomes clear that this is not possible and that bureaucracy exists in all of people's daily activities. The appropriate term for this phenomenon is 'institutionalism': what is 'done' and 'not done' is institutional, and to go against the grain is unprofessional or dilettante behaviour. The prototypical (and unreliable) illustration: monkeys associate cold water with some action and institutionalise their behaviour. In this sense people become neophobic: they are very hesitant to engage in anything new. Everyone is responsible and no one is accountable; good and bad are annihilated because everything is proceduralised and everybody is responsible. 'Nobody really washes her hands clean but everybody washes them together' [Ten Bos 2015 p. 52].

2) Non locality

In everyday reality we manage to identify objects partly by their locality in space and time. In addition we can use speed and acceleration to find out what they are. People are used to observing the world in a three-dimensional grid in which there is potentially a distance between ourselves and other things, as well as a difference in speed and acceleration. This is useful for our daily survival, but it is also a construct whereby people become separated from their environment, while in fact they are an integrated part of it [Ten Bos 2015 pp. 53-4]. Instead of distinguishing people as entities isolated from others and from their environment (where the wish to communicate something is the cause of the communication, and the subject is separate from her communication), a better alternative is to understand that individuals are not discrete elements but entangled and very hard to distinguish. This is relevant for people dealing with bureaucracies (bureaucrats) also: the person, her position and the context have become so entangled that they are impossible to distinguish; cause and effect have become indistinguishable. As a consequence people can act very differently in different locations and at different times: they are driven by outside forces alone and by no internal forces. In bureaucratic reality cause and effect have come apart: the process becomes indeterminate. Everything touches everything else, everything is connected: it is an endless sequence of paper, conversation, decision and idea. In that sense bureaucracy is also the denial of singularity, and while everybody affects everybody else, they remain at a distance from each other.

3) Waves

When dealing with hyperobjects the observer has no control over the situation. Bureaucracy is the water in which we swim; we don't know much about it, and what we are really doing is surviving. This must be clear: this water is often a subtle, and often a not quite so subtle, form of violence. This violence leads us to the execution of a lot of unnecessary work of the 'bullshit jobs' kind [Graeber in Ten Bos 2015 p. 59]. People dealing with bureaucracies often do not understand this environment or their position in it, because there is no perspective for their actions. Whatever is written does not conform to what is spoken or what is thought; in a bureaucracy nobody is authentic and everybody is to some extent stupid. This condition of stupidity is relevant in this era of late capitalism.

The pivot is shifting from the correct execution of the tasks belonging to the position to the correct handling of the administrative tasks that come with the job. 'This resembles the image of a large ferry boat that, nearly out of control, drives through a sea of drowning people' [Peter Sloterdijk 1995 pp. 13-4 in Ten Bos 2015 p. 61]. The expression of emotion does not help, because it is not seen as solidarity and also because expressing emotions requires something concrete to react to. As a consequence people tend to feel small in relation to these processes within hyperobjects. The reactions of people among themselves (for example evaluations) are filtered and temporised in relation to their context, and so people dealing with hyperobjects tend to be unsure of their performance.

4) Phases

A hyperobject cannot be seen in its entirety but only in parts or in time, as phases. To see it as one, the observer would have to ascend to a higher dimension, but our senses are limited to the dimensions of the reality they are in. Hyperobjects can appear not to exist for some time, but then jump back into view at some point. Hyperobjects are permanently active and never stagnate. Nobody is in control of these processes, including the bureaucrats themselves. There is no mastermind steering these processes; the machine runs by itself; there is no higher authority. And conversely, those considered to be in charge are not effectively in control, or only to a limited extent. Power is not centralised and can be dispersed throughout the organisation or can even be located on the work floor. Often the management has limited power and cannot say much, at the risk of having to execute whatever they have expressed: they too feel observed and controlled. Though hyperobjects are at some times more present or noticeable than at others, they have a tendency to force themselves forward and grab the attention. An important characteristic of bureaucracies is testing: once tested, certified or accredited – all procedures to conform to some standard – doors are opened that were closed before.

This automatically absolves one from reflexivity: having entered some test, it is no longer required to think about the essence of the thing put to the test, but about the essence of the test itself. People believe that to summarise some tested element by highlighting some issues and ignoring others implies really understanding and knowing the element and identifying its causes, in an attempt to improve the global performance of some system by tuning its micro-mechanisms. The thought behind this system is to represent reality in the simplest way and then to organise it. And yet audits and tests are on many occasions no more than an opinion of the person designing the test. As a consequence the acceptability of the test result depends on the trust that the testee has in the tester. And as a result the selection procedure for the most trustworthy testing agency, and not a discussion of the facts, becomes the main issue of the test. The selection of the testing facility and the testing procedure itself have become the authority for trustworthiness.

The test now provides the certainty much sought after: having achieved the required score, the testee feels she can rest assured. But two elements remain unsettling: has the test unveiled facts about the truth or about the testee – what is now known that wasn't known before the test? And for how long does this last, namely when is the next test due? Central to the hyperobject, then, is a feeling of stupidity in the individual caused by the object, the bureaucracy in particular. Whenever testing, a bureaucracy looks in a literal way not at her but right through the individual, in that sense causing a feeling of being stupid and clumsy in the given situation. The proffered support isn't necessarily useful or helpful, and this cannot be known in advance; it is known in advance, however, that the amount of offered support increases over time.

5) Interobjectivity

The essence is that people can use instruments, means and machines to leave marks that will last for weeks, months and years. These marks are symbols of power: whatever their concrete meaning, they have the intention to state something and to hold someone to the statement. When the statement isn't understood, the receiver of the mark pretends that she does understand it. Kafka understood that bureaucracy can be a comedy in which everybody pretends, whether or not intentionally, to understand what everyone else says and does. Bureaucracy cannot work if the people are dumb and cannot understand what the written texts say. People need to be enlightened to just the right level so as to be capable of understanding what the bureaucracy requires.

Bureaucracy requires the existence of tools to register and administer. The marks of power must remain in existence for some time, and the 'continuity of ink' supports this. Importantly, the objects that surround and pervade bureaucracies also shape the decisions and the communication. These are infrastructural conditions and restrictions that are made available or imposed by the objects that surround the people populating bureaucracies.

Individuals exist between the private person, her autonomous self, and the official person, her function in a hierarchy, serving herself as well as the bureaucracy, namely the system that is her environment. 'This perspective on people as employees sheds light on the concept of hyperobjects also. At this point we begin to understand how the hyperobject not only encompasses people but pervades them' [Ten Bos 2015 p. 112]. The confusion is how people's wish to live a normal life as an autonomous human being can be satisfied within the confines of the hyperobject, as is often suggested by the human resources manager.

The Utility of Diversity

Diversity
The main cause of death of firms – their loss of autonomous sales registration – is their being the subject of a merger or an acquisition; only a small portion of firm deaths is caused by bankruptcy. In the previous section the question was asked whether it is a 'bad' thing if an organism disappears, apart from the sense of loss it generates. The same can be asked if a firm disappears: is it a 'bad' thing if a firm disappears through bankruptcy or through a merger or an acquisition? And conversely, is it a 'good' thing if a firm survives to a ripe old age? In order to be able to explain why firms die as they do, consider the argument below about biological diversity and whether a loss of biodiversity is a 'bad' thing, and for whom.

Biodiversity literally means the diversity of life. As a concept per se it holds no value: the total number of species is unknown – let alone the number of organisms – not all species are relevant, their numbers are not equal, some are a nuisance, and the argument can be defended that the extinction of many species is inevitable anyway. What then is the use of the concept of biodiversity and, generalising it, what is its value and hence its utility? People, having co-evolved with other species, their histories intertwined as they are, cannot be considered qualified to assess the utility of some other species, lest the same question be asked about them also. To that end the concept of universal utility is developed below in a larger perspective, with people not at centre stage.

What is life (step 1)?

‘Only organisms, from simple bacteria to complex animals with brains, meet the definition of life’ [Jager 2014 p. 18]. This definition includes a circular reference: organisms are living beings, and life resulted in organisms because of evolution. ‘Individual organisms are descendants of the “first” cell’ [Jager 2014 p. 18]. The ‘first’ cell is some complicated yet badly functioning cell that, importantly, did not require an organism in order to reproduce, but only to be an offspring of its parents. Later the descent from a parent will be abandoned as well. An important consequence is that collectives such as ecosystems cannot be considered alive, because they are not individual descendants of the first cell.

What is diversity?

Because every organism – let alone every species – on earth is unique, the total diversity of all individuals is incalculable. Details depending on differences such as age, gender or location (phenotypic) are lost if their numbers are narrowed down by taxonomic grouping. Genetic categorisation doesn't solve the problem either, because genetic differences do not determine all phenotypic differences and, for instance, disregard the neuron connections in complicated animals' brains. A focus on phenotypes thus resolves that and makes brain diversity part of biodiversity. An ecosystem approach is equally unhelpful, because ecosystems are interlinked and in a sense only one ecosystem exists, one that also includes the abiotic earth.

Diversity in the biological sphere, biodiversity, is defined as: 'Biodiversity consists of all the differences between organisms' [Jager 2014 p. 22]. This is useful because it includes organisms' genetic and phenotypic characteristics as well as the basic elements of ecosystems. It states that there are differences, but it does not attempt to measure diversity or its conservation. It does, however, provide a basis for conservation strategies, because conservation implies maintenance of all the diversity of organisms and therefore also of all the processes upon which this diversity depends [p 23]. These processes include the interactions between the organisms and the interactions between the organisms and their environment, together defining the ecosystem.

This can serve as the umbrella term in the protection of organisms. From this can follow this conservation objective: 'the preservation of a selection of ecosystem elements (and their associated processes) that in principle guarantee that the numbers of individuals of ALL relevant species (a species or its substitute) of the ecosystem remain above the minimum required for viable populations', hence a minimum basis for evolution. In addition: 'Biodiversity is in a constant state of flux as a result of the evolutionary process in which the numbers of the populations of the species in the ecosystem vary' [p 23]. As a consequence, species extinction is a natural part of the evolutionary process, including extinctions that occur as a result of human action.

Individual Utility

People consider something to be useful – to have some utility – if it changes a less satisfactory situation into a more satisfactory one. Neither animals with primitive brains nor plants think about utility (the concept), but they can try to avoid unpleasant stimuli and they can try to find food. For all organisms utility relates to the minimum physiological conditions required to stay alive. For thinking animals and people, utility includes the satisfaction of mental objectives such as needs and desires. In more general terms: utility is the whole of the things and activities that ensure that an organism can function normally or without too much stress [p 24]. The subjects experiencing utility exclude non-living things, nature and ecosystems, because they are not organisms and as a consequence have no normal state of being (that can be improved).

Universal Utility

All processes and forms in the universe result from mechanisms leading to the acquisition of degrees of freedom. 'Acquiring degrees of freedom leads to greater differentiation in the universe. Differentiation can proceed towards more organisation and towards chaos. In both directions nature turns potential into reality. And all the while energy disperses throughout the universe at an increasing rate. Acquiring degrees of freedom means the realisation of potential' [p. 25]. An example of the acquisition of degrees of freedom towards more order is the transition from a single-cell organism to a multi-cell organism. The multi-cell organism is itself more difficult to eat, can eat larger food and can extend itself to higher places where there is more light. Precisely because of such advantages these transitions were invented by evolution: the degree of freedom of the single-cell organism as an autonomous entity was traded for the degrees of freedom of a multicellular entity, which now has access to the advantages of a larger size and a more complex shape.

The principle of degrees of freedom also operates from atoms to molecules, from individual nerve cells to brains, from bees to a beehive and from individual people to an organisation. However, beehives and human communities cannot be organisms, because it was established provisionally earlier that such organisations have not descended from a first cell and can therefore not be alive.

Processes can also be associated with acquisition of degrees of freedom towards less organisation. These processes are connected with the dispersal of energy leading to the production of entropy. An example is the death of a multicellular organism: it no longer eats or breathes and the cells in its body suffocate or starve. Its body rots and the organisation of its cells is traded in for disordered molecules. The orderly degree of freedom of the organisation of molecules of the late body is now replaced with the disorderly degrees of freedom of the individual molecules. Nature in this sense can acquire orderly as well as disorderly degrees of freedom.

The generation of order implies the dispersal of energy. On balance, more energy is dispersed than corresponding order is generated, and entropy always increases. Growth and metabolism are associated with the degradation of free energy (energy that can do mechanical work) from sunlight or food. This degradation thus leads to an increase of diffuse energy (which can do less mechanical work) and an increase of entropy outside the body of the organism. 'The entropy that organisms create is a necessary consequence of the creation and maintenance of order in (a) their bodies and (b) in the environment (their burrows, communities, cities, etc)' [p 26].
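The balance described above – local order bought at the price of a larger entropy export – can be made concrete with a toy calculation. The numbers and the simple heat-over-temperature entropy term are illustrative assumptions, not values from the source.

```python
# Toy energy/entropy bookkeeping for the claim that organisms create order
# locally while increasing entropy overall. All numbers are illustrative.
T_env = 300.0            # environment temperature (K), assumed
Q_food = 1000.0          # free energy degraded by metabolism (J), assumed
S_order = -0.5           # local entropy decrease from building order (J/K), assumed

S_env = Q_food / T_env   # entropy exported to the environment as heat (J/K)
S_total = S_env + S_order

# Second law: the export outweighs the local ordering, net entropy increases
assert S_total > 0
print(f"exported {S_env:.2f} J/K, net change {S_total:.2f} J/K")
```

However the illustrative numbers are chosen, the point of the sketch is that `S_env` must exceed `|S_order|` for the organism's ordering to be thermodynamically possible.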

In this way all processes in nature lead to the acquisition of degrees of freedom. This acquisition is associated with universal utility (not to be confused with individual utility). It can therefore be said that a process that makes a larger contribution to the acquisition of degrees of freedom is more useful for nature – it has a higher universal utility. 'Universal utility is a measure of the relative contribution made by processes to the acquisition of the degrees of freedom. Universal utility does not serve a purpose – though it does have a direction – and does not satisfy any needs or desires.' [p 27].

Biological Evolution

Viruses evolve, but they are molecules with a protein coat, not organisms. In evolution something gets copied; in the case of a virus the copying of the DNA is outsourced. By replacing reproduction with replication, the scope of evolution is widened. But a new definition is required that also includes the evolution of viral DNA, endosymbiotic cells (a bacterium in which another bacterium can live), cells containing DNA, cells containing endosymbionts, complete multicellular organisms, etc. It must explain how existing structures give rise to the formation of new, related structures. The phrase 'give rise to' is used instead of replication or copying, etc., to include the evolution of the above list of special cases as well. 'The evolution algorithm can be described in a generic way as the repetition of two subprocesses: (1) Diversification, in which an original structure gives rise to the formation of related or derived new structures; and (2) Selection, in which the functioning of the new structures depends on their relative capacities to exist in a certain environment and succeed in diversifying in the next round' [Donald Campbell, Psychological Review, 1960; Karl Popper, Objective Knowledge, 1972]. According to this definition, diversification in companies occurs through changes in company culture and selection takes place at the level of the 'newly arisen' group.

The Darwinian algorithm contained reproduction, variation and selection; this is now simplified into diversification and selection. This universal (Darwinian) evolution applies to everything as a framework for the evolution of everything – from particles to stars and organisms – while specific theories are required for specific areas of interest, such as biological evolution.
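The two subprocesses of the generic evolution algorithm – diversification followed by selection – can be sketched as a minimal simulation. The population of numbers, the Gaussian mutation and the distance-to-target selection criterion are illustrative assumptions standing in for 'structures' and an 'environment'; they are not part of the source.

```python
import random

def evolve(population, diversify, select, generations):
    """Generic evolution loop: repeat diversification, then selection."""
    for _ in range(generations):
        # (1) Diversification: each structure gives rise to related new structures
        offspring = [variant for s in population for variant in diversify(s)]
        # (2) Selection: capacity to persist in the environment decides
        #     who enters the next round of diversification
        population = select(offspring)
    return population

# Illustrative instantiation: 'structures' are numbers, and the 'environment'
# favours values close to a target (all names here are assumptions).
TARGET = 42.0

def diversify(x):
    return [x + random.gauss(0, 1) for _ in range(3)]  # related variants

def select(candidates, keep=10):
    return sorted(candidates, key=lambda c: abs(c - TARGET))[:keep]

random.seed(0)
final = evolve([0.0] * 10, diversify, select, generations=200)
# The population now clusters near TARGET without any step 'knowing' the goal.
```

Note that neither subprocess contains a goal: `diversify` is blind, and `select` only compares candidates against the environment, which matches the source's claim that evolution solves problems without prior knowledge.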

The step of selection is associated with the capacity to acquire the next organisational degree of freedom. In organisms selection is based on survival, but in the transition between particle types selection is based on the realisation of a new degree of freedom. (In both cases the degree of freedom increases: the organism becomes more independent of the uncertainties of the environment; the particle gains more randomness.)

Evolution as an algorithm including the above two steps (diversification and selection) from the chapter ‘Replication?’ is able to solve problems concerning the acquisition of degrees of freedom without prior knowledge. All evolutionary steps lead to increasing dispersal of energy and increasing randomness, hence increasing degrees of freedom. Simultaneously, the increase of organisation also leads to the acquisition of degrees of freedom. In this way both chaotic and organisational degrees of freedom are acquired simultaneously, in turn leading to high universal utility.

Sources of biodiversity, Utility of a waterfall

What is the universal utility of falling water or, more generally: what is the utility of the water cycle on earth?

Particles and organisms from the previous level form the building blocks of larger, more complex particles and organisms: all particles and organisms can be seen as steps on the particle ladder. In this sense organisms can be thought of as particles with similar features on a functional level. The steps of evolution on the particle ladder are roughly: fundamental particles > nuclear particles > atoms > molecules > bacteria > endosymbiotic cells > multicellular plants > multicellular animals.

The Organism as an Energy Vortex

Orderly structures in organisms and particles are the outcome of a self-organising process. ‘In contrast to physical and chemical particles, organisms can only maintain their structure if there is a continuous influx of free energy and building materials from the environment’ [p 38]. Organisms in this sense can be likened to the bathtub vortex, which is maintained as long as there is water in the tub, flowing out. Likewise organisms need energy and material flowing through them. Because in the process they produce low-grade energy, they can be said to be contracted by nature to convert high-grade energy to low-grade energy and in so doing to reduce the amount of free energy and increase entropy. Thus it is established that organisms contribute to the increase in the degrees of freedom in nature, thus increasing universal utility. The remaining question is whether this works better when there is more biodiversity, namely when there are more organisms.

A foaming waterfall

Some of the energy of the waterfall is used to produce froth on the surface of the river downstream. Metaphorically this waterfoam, as a by-product of falling water, can be likened to biofoam as a by-product of sunlight. However, biofoam can make more biofoam using sunlight, which waterfoam cannot do: life forms can replicate themselves as biofoam.

A wellspring of biofoam

Sunlight forces cells to make more cells: ‘a wellspring of biofoam at the bottom of the sunfall’ [p 40]. The wellspring pumps the biofoam under high pressure into the ecosystem. This pressure, combined with competition and selection, drives the river of evolution uphill and automatically leads to increasingly complex lifeforms. The organisms that do best are those that acquire the most resources; which are the most successful parents can only be known after some time, when the performance of the offspring is also known.

Biofoam creates new waterfalls

When converting high-grade energy to low-grade energy it is useful if new cells are created that eat the primary producers, digest them and excrete the waste products. Utility is created for the eaters of the primary producers and so on.

Food chains

This is a sequence of alternating wellsprings and waterfalls, with ever more species of organisms interspersed, intensifying the process of degrading energy. At each step of degradation there is typically more than one species competing, as a consequence of the mechanism of diversification.

The struggle for existence, Running with the red queen

Species populations flow through the ecosystem, flowing fastest along the path of least resistance. The Red Queen hypothesis applies the waterfall model of the breakdown of free energy in the universe to species.

The Red Queen and the Constructal Law

The Red Queen and the concept of evolution are connected by the constructal law (a variation on the original by Adrian Bejan): ‘For a finite-size flow system to persist in time, its configuration must change such that the flow through the system meets less and less resistance’ [p 43]. This law is principally about the direction of the development or evolution of the patterns. The two types of flow systems are the organism and the environment. At each generation the flow passes through the organism more easily. The constructal law also predicts that the component systems and processes all develop to a stage at which they can all take the same level of stress. The environment and the whole ecosystem (including biodiversity) is itself a flow system. The constructal law predicts that all organisms evolve together to reduce, from generation to generation, the resistance to the material and energy flows involved in converting sunlight to water vapour and waste products.

Here diversity can help, with each organism filling its own niche in the ecosystem. But the same service can be rendered by one dominant species. It pays to be more active so as to get to the resources before your competitors do. The most active survive, but the overactive and overspending suffer the consequences also: biodiversity is dynamic but not chaotic, at the edge of chaos [Kauffman 1993].

People must also run

Comparison with the past and with other people creates desires and motivates people as consumers and as colleagues. People must run hard to keep their level of satisfaction constant.

On the brink of chaos

The Red Queen hypothesis explains that species can go extinct, but not how many or which. Long periods of constant biodiversity are interrupted by bursts of violent change in the numbers of species. Explanations are external causes and, more importantly, competition between individuals. Competition leads to immediate change in the rate of species extinction: the average fitness of the species shifts to a dynamic balance which fluctuates around a ‘critical value’. [Per Bak, How Nature Works 1996] argues that these critical values occur in many situations in nature. The waves of extinction in this game are similar to the species creation and extinction consistent with the Red Queen hypothesis.

Arms races and utility chains

Organisms must compete with others for resources while simultaneously avoiding being used as a resource by another organism. Selection affects the forms of bodies through the learning of how to use new abiotic resources and through arms races. More abiotic and biotic resources implies more biodiversity. In the case of biotic resources, the learning of the predator incites learning in the prey to avoid being preyed upon: this can become an arms race. Adaptations can be categorized as interactions based on energy, structure, information, and relocation in space and time.

Resource chains and humans

Reason, tools, industrial processes and foresight have made humans flexible to use almost all resources or their substitutes.

Competition and complexity

If fifty percent of the single-celled algae in a nutrient-rich solution are replaced every twenty minutes with fresh solution, then the algae will not evolve towards more complexity but towards a simpler version that is fitter to deal with the rapid sequence of changes in its environment by reproducing faster. This generally occurs when the simpler variant is the faster reproducer. Therefore, evolution does not necessarily lead to greater complexity of the species.

Big meets big

Shaking a box with small beads and big ones, the small beads end up at the bottom and the big ones at the surface, because the probability of a big space opening up for a big bead is smaller than that of a small space opening up for a small one. Following this (physical) metaphor, competition invites more complexity: smaller organisms compete for resources with many others, while larger organisms compete mainly with other large organisms. Because the bigger organisms have a better chance to survive, every organism has the tendency to become larger and hence more complex. In this way complexity depends entirely on the continual pressure of unstoppable reproduction combined with competition [p 52].

A pile of sand on the table

Lines of descent and organisational levels

Diversification and selection lead to evolution, which in turn leads to diversity. Organisms’ offspring have different heritable characteristics, and the ecosystem then selects which of the offspring take part in the next round. Diversification leads to a pattern of division into different species and a pattern of the emergence of organisms of similar complexity at different locations in the ecosystem.

The origins of genetics and information

The diversity of life on earth is connected with the diversity of ‘information’ locked in the genetic material of organisms. RNA and DNA are mere structures, not information, as are the proteins that RNA and DNA code for. The actual information they represent is revealed by how they contribute to the physiology and the structure of the organism. In this way the information harboured in the structures is conditional on their context. How does this relate to the acquisition of degrees of freedom of information during evolution?

Whispering down the generations

Under competitive circumstances it is hard for a cell to maintain strict control over the copying of its DNA.

Why sloppiness pays

If exact copies are made then the offspring becomes more vulnerable, for instance to viruses. Not only is it cheaper to be somewhat sloppy, it results in better resistance to external influences. A ‘social’ reason exists also, because the search for a susceptible individual slows down the spreading of the disease. A reasonable balance must exist between the occurrence of changes in DNA and the occurrence of changes in the environment.

Biodiversity and information

The complexity of an organism determines the number of genes required to code for its physiology, structure and behaviour. The information carried by the genes depends on the role of the cell that the genetic information belongs to: information equals data plus meaning [Peter Checkland and Jim Scholes, Soft Systems Methodology in Action 1990].

During evolution nature is looking for new degrees of freedom for the information in organisms: continue to exist in a cell, change their structure through mutation, relocate with the cell. A degree of freedom to copy enabled the passing on of information to the offspring. A degree of freedom of sexual reproduction allowed the exchange of information between individuals.

Over the generations organisms collect a gene pool and are considered to constitute a species population; this represents the collective memory of the genes in the organisms.

Compulsory sex, rapid adaptation and dumping ‘waste’

Phenotypic characters and biodiversity

Diversification and selection operate mainly via individual phenotypes, but when individuals cooperate the group ‘pays the bill’ instead of the individual. When groups compete, the individuals that work well as a group benefit over the ones that do not. The feature allowing an individual to cooperate is part of the ‘extended phenotype’ of the other individuals that help shield it from external influences. Humans, too, extend their phenotypes by cooperating.

Biodiversity is what is left over

Extreme variations resulting in eccentric phenotypes are probably less efficient, fast and smart, and have a lesser chance to survive in competition with others. This results in ‘pathways’ or strategies after a species explosion such as the Cambrian fauna. This comes about via a rigorous selection process that points the genes in the direction of the phenotypes most beneficial for their inhabitants. As a consequence the biodiversity currently existing on earth is just a fraction of all the possibilities. Not all variations result in a change in phenotype: some are neutral, but not useless, as they are experiments to build possible new foundations for the emergence of other mutations in the future.

A tree of structures

Organisational levels and biodiversity

‘Organisational levels not only represent a fundamental ranking of structures in nature, they also provide a frame of reference for evaluating biodiversity’ [p 64]

What is the value of an organisational level?

The structure at each level of organisation depends on the structure of the lower levels. The greater the number of transitions needed to reach some level, the higher its rank. This also enables an assessment of the cost of reaching that level: ‘By taking account of the resources and inventions needed to achieve subsequent levels, organisational levels provide a framework for estimating how ‘bad’ it is when a species becomes extinct’ [p 65]. In light of this it is worthwhile to take good care of the current level of human organisation, because it has taken many generations, a lot of energy and resources to get to where it stands now. Human culture is therefore a valuable invention, although it begs the question how robust it is against large-scale catastrophe, given that many of the resources once available to build it have by now become scarce.

Future biodiversity

What is Life, The difference between Life and Living (step 2)

Living is a state of being active. Life is an attribute of being in (able to be switched to) the state of living. All activities consistent with ‘living’ are therefore not relevant to a definition of ‘life’ [p 67].

‘If something possesses the material organisation corresponding to life and it is active then it is living’ [p 67, Société de Biologie, Paris 1860].

The Material Organisation of Life

[Maturana and Varela 1972] named the distinguishing characteristic autopoiesis: the ability of the organism to maintain itself. This does not say anything about the maintaining mechanism, but it makes clear that the organism must have a boundary that allows it to maintain the ‘self’. The type of organisation is spatially defined and cyclical, because the molecules act as a group so as to maintain each other.

[Manfred Eigen and Peter Schuster 1977-1978, The Hypercycle: A Principle of Natural Self-Organisation]. A material description of life in this sense requires interaction between a chemical hypercycle and an enclosing membrane, which is maintained by the internal processes and in turn supports those processes. Structure and function are two sides of the same coin, and this model can clarify the life of bacteria.

Levels of organisation

Which structural configurations (organisation) of matter must the definition of life include? Nature has three fundamental degrees of freedom to allow complexity to emerge: inward, outward, upward. Complex systems use the structures available at the preceding level of complexity. Making the transition to a next level requires a new form of internal organisation or a new form of interaction. ‘The formation of a more complex particle is always accompanied by the formation of a new type of spatial configuration and a new type of process. Here, ‘new’ means that the new attributes are impossible at the previous level of organisation’ [p 71]. Each time an existing particle forms a building block for a new attribute, a new ‘particle’ is born.

Particles + organisms = operators

The operator theory says that strict and comparable rules for building with forms apply to all operators, both physical particles and organisms. The hierarchy is quarks, hadrons, atoms, molecules, bacterial cells, endosymbiotic cells, multicellular endosymbiotic organisms and multicellular endosymbiotic organisms with a neural network. Throughout this entire hierarchy the physical laws on structural configuration place restrictions on the transitions between organisational levels. ‘What it (operator theory, DPB) does say is that the levels of complexity are not accidental. This is because nature must use the existing simple forms as the basis for building new, more complex forms, and because each time it does so, nature must follow strict design laws to meet the requirements of the next higher level of organisation in the operator hierarchy’ [p 72]. The main benefit of operator theory is that it allows us to define life in a way that avoids a circular argument; defining life precisely is crucial because it is the basis for defining biodiversity.

What is Life? (third step)

Defining life by referring to organisms and defining organisms as ‘living beings’ results in a circular argument. This is avoided because the existence of the operators at one level determines the nature of the operators that will arise at the next level, and each transition leads to a higher level of complexity. A transition to a higher level is accompanied by a structural and a functional cyclical process, a ‘closure’. ‘All operators at least as complex as a cell are organisms. Life is a general term for the presence of the typical closures found in organisms. .. The above definition of life also implicitly includes future organisms as life forms and is therefore open-ended in the ‘upward direction’ [p 73-4]. Ecosystems and viruses do not match the definition because they do not appear in the operator ladder. As a consequence neither a virus nor the ecosystem belongs in the definition of biodiversity. This implies that the structure determines whether something is alive, rather than its activity or how it is produced.

Memes and imitation

The human species can imitate. Memes are units of information that can be imitated. The exchange of memes means that people are not just members of genetic populations, but also members of memetic populations. ‘In this sense, cultures can be seen as complex networks of certain memes, which give rise to generic forms of behaviour.’ Differences in behavioural patterns distinguish one group of people from another.

The brain

Information is categorized by the brain, and by making combinations of these categories a small neural network can make a large ‘inner world’. The diversity can be measured in two ways: 1) counting the number of nerve cells, the connections between them and the strength of the connections; 2) deriving the complexity of the network from answers to questions. The brain contains vast numbers of categories and new categories are created all the time. Ideas in this sense can evolve quickly, provided that new ideas are subject to some selection process. ‘The evolutionary potential of brains and memes represents a whole new dimension in the development of biodiversity’ [p 77].

Predicting the future of biodiversity is too big a task to be realistic at this point. The bacteria existing today will almost certainly remain bacteria, and whatever evolves from the existing realms, say a new endosymbiotic cell, will resemble one of the many already in existence in some shape or form. The same reasoning goes for the other species groups. As a consequence the future depends on new structural configurations emerging that are more complex than the neural networks in humans and animals.

‘Operator theory is the first method for predicting the evolution of new structures’ [p 78]. The operator theory can be used to predict the future operator above the level of humans, that is, the level above that of the organisms with neural networks. This organism should at least be able to copy itself, including all the information required for its functioning, as a cell does: by replicating the structures of all its molecules on a structural level. This requires structural copying, which is not the same thing as learning: it is the copying of the structure of the neural network. Organisms with powerful brains do not have the ability to read and write their neural patterns so as to make a copy of them for later use. An organism with a programmed brain in principle could make a back-up, and that information could be restored to a subsequent phenotype: knowledge would be passed on without upbringing. Given that this structural copying of information is an attribute of the next operator, the next operator can only be a technical life form. This makes way for competition between brain-code files for phenotypes, equivalent to the selfish behaviour of genes in phenotypes. Competition between groups of technical organisms can be instrumental in the development of group skills such as collaboration. This scenario opens new avenues for the development of biodiversity.

The Pursuit of Complexity

Degrees of freedom are all the ways in which energy and matter can be distributed throughout the universe. The term ‘acquisition of degrees of freedom’ suggests that the corresponding form of organisation already exists in potential; the acquisition involves its material realisation in the universe. All changes to the configuration of matter and energy result in a change of state, whether leading to more order or more chaos; a change in the degrees of freedom of the system is the consequence. Because the physical laws of conservation must hold true, a local reduction in entropy as a consequence of the acquisition of degrees of freedom towards more order leads to an increase of entropy elsewhere in the universe. The more biodiversity contributes to the acquisition of degrees of freedom, the more universal utility it has.

Individual utility for human beings (in the traditional sense) is associated with the development of their needs and desires. That development is connected with the acquisition of degrees of freedom, such as images and wishes in the brain. Everything that contributes to the satisfaction of needs and desires has utility to human beings. Biodiversity is complicated also because of the ‘bio’ part: the definition of life was still outstanding. This was solved with the use of the organisational ladder of the operator theory: additional degrees of freedom are acquired by nature by following a series of construction steps defined by the laws of nature. The higher a step is on the ladder, the higher the structural complexity of an entity on that step. Life is an attribute of the presence of the closures of an organism as per its position on the organisational ladder. An organism is an operator that is at least as complex, and therefore as high on the ladder, as a bacterium. Biodiversity equals all the differences between organisms.

Continuing biodiversity implies maintaining the minimum conditions for ecosystems, which implies ensuring the minimum population numbers of the species associated with those ecosystems, which implies that species can emerge and go extinct as they would unencumbered, namely with a sufficient platform for their survival; and this implies the survival of humans as one of the species in the ecosystems. Conserving biodiversity implies conserving the human species.

Utility of biodiversity

The acquisition of degrees of freedom drives towards a more efficient conversion of solar energy to low-grade waste. Individual organisms and all of biodiversity contribute to universal utility as much as they can. A small part of universal utility is the utility of biodiversity to individual organisms for meeting their needs. Species have evolved together and they depend on each other for the fulfillment of their needs. Because new species are added by evolutionary processes, many interactions and entire ecosystems have become more robust in the face of the needs of individual species. Humans are the top generalists, able to disconnect from many environmental uncertainties. People consume energy at a high rate pro rata and their entropy production is very high. They create order and chaos at a high rate, and the acquisition of degrees of freedom is high as a result. When people come into play, the acquisition of degrees of freedom accelerates in both directions. This leads to a reaction of the system – as it would without the intervention of people. ‘Evolution resolves such problems (reaction of the system to change, DPB) by always finding paths towards maximum use of existing possibilities’ [p 86]. ‘The consumption of free energy is ‘payment’ for the order humans create’ [p 87]. ‘As the Red Queen goads all organisms into running faster, evolution and biodiversity ensure that new organisms will acquire increasingly complex degrees of freedom at an ever faster pace. In essence, then, the universal utility of biodiversity is the part it plays in the construction of increasingly advanced forms of life’ [p 88].

The contributions of individual species to human well-being cannot easily be understood; a tool to address this issue is ecosystem services (clean air, water, availability of fish etc.). This tool is unreliable because it tends to fluctuate. For that reason people make an effort to control these ecosystem services, as in farming etc. The tool focuses on the use that ecosystems have for human beings. No clear relation exists between the wealth of people and the biodiversity of the region they inhabit, nor vice versa. The sense of loss when a species goes extinct can be specified as the feeling of a loss of potential, whereas we like to keep our options open, and a sense of responsibility for the event. To be fair: many species can go extinct before the human species is threatened in its existence. Many people have no noticeable relation with nature anyhow, apart from some insects and trees in the park and a pet. While people are responsible for the decline in the numbers of species, they have also introduced new biodiversity into the world, such as new crops, pedigree dogs, fashion clothing, architectural styles and so on. As our societies became more industrialized, our arcadian nature changed into cultivated nature and our love of arcadian nature shifted to a love of wild nature.

A New Kind of Science

Wolfram concludes that ‘the phenomenon of complexity is quite universal – and quite independent of the details of particular systems’. This complex behaviour does not depend on system features such as the way cellular automata are typically arranged in a rigid array or that they are processed in parallel. Very simple rules of cellular automata generally lead to repetitive behaviour, slightly more complex rules may lead to nested behaviour, and even more complex rules may lead to complex behaviour of the system. Complexity with regard to the underlying rules means that they are intricate or that their assembly or make-up is complicated. Complexity with regard to the behaviour of the overall system means that little or no regularity is observed.

The surprise is that the threshold for the level of complexity of the underlying rules to generate overall system complexity is relatively low. Conversely, above the threshold, there is no requirement for the rules to become more complex for the overall behaviour of the system to become more complex.

And vice versa: even the most complex of rules are capable of producing simple behaviour. Moreover: the kinds of behaviour at a system level are similar for various kinds of underlying rules. They can be categorized as repetitive, nested, random and ‘including localized structures’. This implies that general principles exist that produce the behaviour of a wide range of systems, regardless of the details of the underlying rules. And so, without knowing every detail of the observed system, we can make fundamental statements about its overall behaviour. Another consequence is that in order to study complex behaviour, there is no need to design vastly complicated computer programs in order to generate interesting behaviour: the simple programs will do [Wolfram, 2002, pp. 105 – 113].
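The simple programs Wolfram refers to are easy to reproduce. A minimal sketch of an elementary cellular automaton (my own illustration, using Wolfram's standard rule-numbering convention) shows how a very simple rule such as rule 30 produces random-looking (Class 3) behaviour from a single black cell:

```python
def step(cells, rule):
    """One update of an elementary cellular automaton: each new cell depends
    only on its previous value and its two immediate neighbours."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1  # a single black cell as the simple initial condition
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

for row in run(30):  # rule 30: a very simple rule with random-looking behaviour
    print(''.join('#' if c else '.' for c in row))
```

Swapping `30` for 90 gives nested behaviour, and 110 gives the mixture of order and localized structures characteristic of Class 4 – illustrating the point that the kinds of overall behaviour recur across many different underlying rules.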

Numbers
Systems used in textbooks for complete analysis may have a limited capacity to generate complex behaviour because, given the difficulties of making a complete analysis, they are specifically chosen for their amenability to such analysis, and are hence of a simple kind. If we ignore the need for analysis and look only at the results of computer experiments, even simple ‘number programs’ can lead to complex results.

One difference is that in traditional mathematics, numbers are usually seen as elementary objects, the most important attribute of which is their size. Not so for computers: numbers must be represented explicitly (in their entirety) for any computer to be able to work with them. This means that a computer uses numbers as we do: by reading them or writing them down fully as a sequence of digits. Whereas we humans do this in base 10 (0 to 9), computers typically use base 2 (0 and 1). Operations on these sequences have the effect that the sequences of digits are updated and change shape. In traditional mathematics this effect is disregarded: the effect of an operation on the digit sequence is considered trivial. Yet this effect, amongst others, is by itself capable of introducing complexity. Even when only the size is represented as a base-2 digit sequence, executing a simple operation such as multiplication by fractions or even whole numbers can produce complex behaviour.
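This kind of experiment can be reproduced in a few lines. A small sketch (my own example): repeatedly multiplying by a whole number makes the size of the number grow perfectly smoothly, while its base-2 digit sequence behaves in a complicated way:

```python
# Repeatedly multiply by 3 and look at the base-2 digit sequence:
# the size grows smoothly, but the digits show no obvious regularity.
n = 1
for _ in range(12):
    print(format(n, 'b').rjust(20))  # 'b' formats the number in base 2
    n *= 3
```

From the point of view of size alone nothing interesting happens here; the complexity only becomes visible once the full digit sequence is represented explicitly.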

‘Indeed, in the end, despite some confusing suggestions from traditional mathematics, we will discover that the general behavior of systems based on numbers is very similar to the general behavior of simple programs that we have already discussed’ [Wolfram, 2002, p 117].

The underlying rules for systems like cellular automata are usually different from those for systems based on numbers. The main reason for that is that rules for cellular automata are always local: the new colour of any particular cell depends only on the previous colour of that cell and its immediate neighbours. In systems based on numbers there is usually no such locality. But despite the absence of locality in the underlying rules of systems based on numbers, it is possible to find the localized structures also seen in cellular automata.

When using recursive functions of a form such as f(n) = f(n − f(n−1)), subtraction and addition alone are sufficient for the development of small programs based on numbers that generate behaviour of great complexity.
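A sketch of this idea (my own example): a two-term recurrence of the same nested form, with the assumed initial conditions f(1) = f(2) = 1, uses nothing but subtraction and addition yet already produces an irregular sequence:

```python
from functools import lru_cache

# Nested recurrence built from nothing but subtraction and addition:
#   f(n) = f(n - f(n-1)) + f(n - f(n-2)),  with f(1) = f(2) = 1 (assumed)
@lru_cache(maxsize=None)
def f(n):
    if n <= 2:
        return 1
    return f(n - f(n - 1)) + f(n - f(n - 2))

print([f(n) for n in range(1, 11)])  # -> [1, 1, 2, 3, 3, 4, 5, 5, 6, 6]
```

The single-term form quoted above stays constant for such simple initial conditions; it takes the two-term variant for the irregularity to appear.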

‘And almost by definition, numbers that can be obtained by simple mathematical operations will correspond to simple such (symbolic, DPB) expressions. But the problem is that there is no telling how difficult it may be to compute the actual value of a number from the symbolic expression that is used to represent it’ [Wolfram, 2002, p 143].

Adding more dimensions to a cellular automaton or a Turing machine does not necessarily mean that the complexity increases.

‘But the crucial point that I will discuss more in Chapter 7 is that the presence of sensitive dependence on initial conditions in systems like (a) and (b) in no way implies that it is what is responsible for the randomness and complexity we see in these systems. And indeed, what looking at the shift map in terms of digit sequences shows us is that this phenomenon on its own can make no contribution at all to what we can reasonably consider the ultimate production of randomness’ [Wolfram, 2002, p. 155].

Multiway Systems
The design of this class of systems is such that the systems can have multiple states at any one step. The states at some step generate states at the next step according to the underlying rules. All states thus generated remain in place after they have been generated. Most multiway systems grow very fast or not at all; slow growth is as rare as randomness. The usual behaviour is that repetition occurs, even if it is after a large number of seemingly random states. The threshold seems to be in the rate of growth: if the system is allowed to grow faster, the chances that it will show complex behaviour increase. In the process, however, it generates so many states that it becomes difficult to handle [Wolfram 2002, pp. 204 – 209].
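A multiway system is easy to sketch as string rewriting (a toy illustration with invented rules, not an example from the book): every rule is applied at every possible position in every state, and all states generated so far remain in place:

```python
def multiway_step(states, rules):
    """Apply every rule at every possible position in every state;
    keep all previously generated states (they remain in place)."""
    new_states = set(states)
    for s in states:
        for lhs, rhs in rules:
            start = s.find(lhs)
            while start != -1:
                new_states.add(s[:start] + rhs + s[start + len(lhs):])
                start = s.find(lhs, start + 1)
    return new_states

# Invented toy rules: 'A' -> 'AB' and 'BB' -> 'A'
rules = [('A', 'AB'), ('BB', 'A')]
states = {'A'}
for i in range(5):
    states = multiway_step(states, rules)
    print(i + 1, sorted(states))
```

Even this tiny rule set shows the characteristic growth in the number of states per step that makes larger multiway systems hard to handle.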

Chapter 6: Starting from Randomness
If systems are started with random initial conditions (up to this point they started with very simple initial conditions such as one black or one white cell), they manage to exhibit repetitive, nested as well as complex behaviour. They are capable of generating a pattern that is partially random and partially locally structured. The point is that the initial conditions may be partly, but not solely, responsible for the existence of complex behaviour in the system [Wolfram 2002, pp. 223 – 230].

Class 1 – the behaviour is very simple and almost all initial conditions lead to exactly the same uniform final state

Class 2 – there are many different possible final states, but all of them consist just of a certain set of simple structures that either remain the same forever or repeat every few steps

Class 3 – the behaviour is more complicated, and seems in many respects random, although triangles and other small-scale structures are essentially always seen at some level

Class 4 – this class of systems involves a mixture of order and randomness: localized structures are produced which on their own are fairly simple, but these structures move around and interact with each other in very complicated ways.

‘There is no way of telling into which class a cellular automaton falls by studying its rules. What is needed is to run them and visually ascertain which class it belongs to’ [Wolfram 2002, Chapter 6, p. 235].
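This "run it and look" procedure can be sketched directly; the loop below is a Python illustration (not Wolfram's code) that prints the evolution of any of the 256 elementary rules so its class can be judged by eye:

```python
# Run any of the 256 elementary cellular automaton rules on a ring of
# cells and print the evolution, so its class can be judged visually.
def ca_run(rule, cells, steps):
    table = [(rule >> i) & 1 for i in range(8)]  # outputs for 000..111
    rows = [cells]
    for _ in range(steps):
        n = len(cells)
        cells = [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1)
                       | cells[(i + 1) % n]] for i in range(n)]
        rows.append(cells)
    return rows

width = 31
start = [0] * width
start[width // 2] = 1                   # single black cell
for row in ca_run(30, start, 15):       # rule 30: Class 3 behaviour
    print("".join(".#"[c] for c in row))
```

Swapping in rule 250 (Class 1/2 repetition) or rule 110 (Class 4 localized structures) shows the contrast at a glance.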

One-dimensional cellular automata of Class 4 often sit on the boundary between Class 2 and Class 3, settling in neither; there seems to be some kind of transition. They do have characteristics of their own, notably localized structures, that belong to neither Class 2 nor Class 3 behaviour. This behaviour, including the localized structures, can occur in ordinary discrete cellular automata, in continuous cellular automata and in two-dimensional cellular automata alike.

Sensitivity to Initial Conditions and Handling of Information
Class 1 – changes always die out. Information about a change is always quickly forgotten

Class 2 – changes may persist, but they remain localized, contained in a part of the system. Some information about the change is retained in the final configuration, but it remains local and is therefore not communicated throughout the system

Class 3 – changes spread at a uniform rate throughout the entire system. Change is communicated long-range because local structures travelling around the system are affected by the change

Class 4 – changes spread sporadically, affecting other cells locally. These systems are capable of communicating long-range, but this happens only when localized structures are affected [Wolfram 2002, p. 252].

In Class 2 systems, the logical connection between their eventually repetitive behaviour and the fact that no long-range communication takes place is that the absence of long-range communication forces the system to behave as if its size were limited. Such behaviour follows a general result: any system of limited size, with discrete steps and definite rules, will eventually repeat itself.
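The pigeonhole argument behind this general result can be demonstrated directly: a width-10 automaton has at most 2^10 = 1024 distinct states, so a repeat must occur within 1025 steps. A small sketch (rule and width are arbitrary choices of mine):

```python
# A deterministic system with finitely many states must revisit one and
# then cycle: iterate rule 110 on a width-10 ring until a state recurs.
def rule_step(rule, cells):
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(cells)
    return tuple(table[(cells[(i - 1) % n] << 2) | (cells[i] << 1)
                       | cells[(i + 1) % n]] for i in range(n))

state = (0, 0, 0, 0, 1, 0, 0, 0, 0, 0)
seen = {}
step = 0
while state not in seen:
    seen[state] = step
    state = rule_step(110, state)
    step += 1
print("step", step, "revisits the state of step", seen[state])
```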

In Class 3 systems the possible sources of randomness are the randomness present in the initial conditions (in the case of a cellular automaton the initial cells are chosen at random, versus a single black or white cell for simple initial conditions) and the sensitive dependence on initial conditions of the process itself. Random behaviour in a Class 3 system can occur even if there is no randomness in its initial conditions. There is no a priori difference between the behaviour of most systems generated from random initial conditions and that generated from simple initial conditions1. The dependence of the overall behaviour on the initial conditions is limited in the sense that, although the produced randomness is evident in many cases, only the exact shape of the pattern differs with the initial conditions. This is a form of stability, for whatever initial conditions the system has to deal with, it always produces similar, recognizably random behaviour as a result.

In Class 4 there must be some structures that can persist forever. If a system is capable of showing a sufficiently complicated structure, then eventually, for some initial condition, a moving structure is found as well: moving structures are inevitable in Class 4 systems. It is a general feature of Class 4 cellular automata that with appropriate initial conditions they can mimic the behaviour of all sorts of other systems. The behaviour of Class 4 cellular automata can be diverse and complex even though their underlying rules are very simple (compared to those of other cellular automata). The way that different structures existing in Class 4 systems interact is difficult to predict. The behaviour resulting from the interaction is vastly more complex than the behaviour of the individual structures, and the effects of an interaction may take a long time (many steps) after the collision to become clear.

It is common to be able to design special initial conditions so that one cellular automaton behaves like another. The trick is to design the special initial conditions so that the behaviour of the emulated cellular automaton is contained within the overall behaviour of the emulating one.

Attractors
The behaviour of a cellular automaton depends on the specified initial conditions. The behaviour of the system, the sequences shown, gets progressively more restricted as the system develops. The resulting end-state or final configuration can be thought of as an attractor for that cellular automaton. Usually many different but related initial conditions lead to the same end-state: the basin of attraction leads the system to an attractor, visible to the observer as the final configuration of the system.
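For a small enough automaton the attractors and their basins can be enumerated exhaustively; the sketch below (rule 90 at width 8 is an arbitrary choice of mine) maps each of the 256 initial conditions to the cycle it falls into:

```python
# Map every initial condition of a small cellular automaton to the cycle
# (attractor) it falls into; the initial conditions sharing an attractor
# form its basin of attraction.
from itertools import product

def step(rule, cells):
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(cells)
    return tuple(table[(cells[(i - 1) % n] << 2) | (cells[i] << 1)
                       | cells[(i + 1) % n]] for i in range(n))

def attractor(rule, state):
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(rule, state)
    return frozenset(trajectory[seen[state]:])  # the repeating cycle

basins = {}
for init in product([0, 1], repeat=8):
    basins.setdefault(attractor(90, init), []).append(init)
print(len(basins), "distinct attractors for rule 90 at width 8")
```

Each key of `basins` is one attractor cycle; the length of its value list is the size of its basin.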

Chapter 7 Mechanisms in Programs and Nature
Processes happening in nature are complicated, yet simple programs are capable of producing this complicated behaviour. To what extent is the behaviour of the simple programs of, for instance, cellular automata relevant to phenomena observed in nature? ‘It (the visual similarity of the behaviour of cellular automata and natural processes, DPB) is not, I believe, any kind of coincidence, or trick of perception. And instead what I suspect is that it reflects a deep correspondence between simple programs and systems in nature‘ [Wolfram 2002, p. 298].

Striking similarities exist between the behaviours of many different processes in nature. This suggests a kind of universality in the types of behaviour of these processes, regardless of the underlying rules. Wolfram suggests that this universality of behaviour encompasses both natural systems’ behaviour and that of cellular automata. If that is the case, studying the behaviour of cellular automata can give insight into the behaviour of processes occurring in nature. ‘For it (the observed similarity in systems behaviour, DPB) suggests that the basic mechanisms responsible for phenomena that we see in nature are somehow the same as those responsible for phenomena that we see in simple programs‘ [Wolfram 2002, p. 298].

A feature of the behaviour of many processes in nature is randomness. Three sources of randomness in simple programs such as cellular automata exist:
the environment – randomness is injected into the system from outside, through the interactions of the system with its environment.
initial conditions – the initial conditions are a source of randomness from outside. The randomness in the system’s behaviour is a transcription of the randomness in the initial conditions. Once the system evolves, no new randomness is introduced from interactions with the environment, so the system’s behaviour can be no more random than its initial conditions. In practical terms, isolating a system from any outside interaction is often not realistic, and so the importance of this category is often limited.
intrinsic generation – simple programs often show random behaviour even though no randomness is injected from interactions with outside entities. Assuming that systems in nature behave like the simple programs, it is reasonable to assume that intrinsic generation of randomness occurs in nature also. How random is this internally generated randomness really? Based on tests using existing measures of randomness, it is at least as random as any process seen in nature. It is not random by a much-used definition that classifies behaviour as random only if it can never be generated by a simple procedure such as the simple programs at hand, but this is a conceptual and not a practical definition. One limit to the randomness of numbers generated with a simple program is that the system is bound to repeat itself if it exists in a limited space. Another limit is the set of initial conditions: because the program is deterministic, running a rule twice on the same initial conditions will generate the same sequence, and hence the same random number. Lastly, truncating the generated number will limit its randomness. The clearest sign of intrinsic randomness is its repeatability: in the generated graphs, areas will evolve with similar patterns. This is not possible when starting from different initial conditions or with external randomness injected through interaction. The existence of intrinsic randomness allows a discrete system to behave in seemingly continuous ways, because the randomness at a local level averages out the differences in behaviour of individual simple programs or system elements. Continuous systems are likewise capable of showing discrete behaviour and vice versa.
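The canonical example of intrinsic randomness generation is the centre column of rule 30 started from a single black cell; a sketch:

```python
# Rule 30 run from a single black cell with no outside input: the centre
# column already looks random by simple statistical measures.
def rule30_centre_bits(steps):
    width = 2 * steps + 3               # wide enough that the edges stay 0
    cells = [0] * width
    cells[width // 2] = 1
    bits = [1]                          # the initial centre cell
    for _ in range(steps):
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]  # rule 30: left XOR (centre OR right)
        bits.append(cells[width // 2])
    return bits

bits = rule30_centre_bits(200)
print(bits[:6], "ones:", sum(bits), "of", len(bits))
```

The repeatability mentioned in the text is visible here: the same rule and the same single-cell start always yield exactly the same "random" bit stream.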

Constraints
‘But despite this (capability of constraints to force complex behaviour, DPB) my strong suspicion is that of all of the examples we see in nature almost none can in the end best be explained in terms of constraints‘ [Wolfram 2002, p. 342]. Constraints are a way of making a system behave as the observer wants it to behave. To find out which constraints are required to deliver the desired behaviour of a system in nature is in practical terms far too difficult. The reason for that difficulty is that the number of configurations in any space soon becomes very large, and it seems impossible for systems in nature to work out which configuration satisfies the constraints at hand, especially if this procedure needs to be performed routinely. Even if possible, the procedure to find the rule that actually satisfies a constraint is so cumbersome and computationally intensive that it seems unlikely that nature uses it to evolve its processes. As a consequence nature seems to work not with constraints but with explicit rules to evolve its processes.
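The cost of satisfying even a toy constraint by search can be made concrete. The sketch below (my illustration) brute-forces all 256 elementary rules for those that keep a strictly alternating pattern invariant; with one tiny constraint the search is already exhaustive, and the configuration spaces nature would face are astronomically larger.

```python
# Brute-force search over all 256 elementary rules for those satisfying
# one small constraint: the strictly alternating pattern ...010101...
# (every cell differing from both neighbours) must be left invariant.
# Invariance requires neighbourhood 010 -> 1 and neighbourhood 101 -> 0;
# the other six neighbourhoods are free, so exactly 2**6 = 64 rules qualify.
def keeps_alternating(rule):
    return (rule >> 0b010) & 1 == 1 and (rule >> 0b101) & 1 == 0

matching = [rule for rule in range(256) if keeps_alternating(rule)]
print(len(matching), "of 256 rules satisfy the constraint")  # 64
```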

Implications for everyday systems
Intuitively, from the perspective of traditional science, the more complex the system, the more complex its behaviour. It has turned out that this is not the case: simple programs are quite capable of generating complicated behaviour. In general the explicit (mechanistic) models show behaviour that conforms to the behaviour of the corresponding systems in nature, but often diverges in the details.
The traditional way to use a model to make predictions about the behaviour of an observed system is to input a few numbers from the observed system in your model and then to try and predict the system’s behaviour from the outputs of your model. When the observed behaviour is complex (for example if it exhibits random behaviour) this approach is not feasible.
If the model is represented by a number of abstract equations, then it is unlikely (nor is it intended) that the equations describe the mechanics of the system; they only describe its behaviour in whatever way works to make a prediction about its future behaviour. This usually implies disregarding the details and taking into account only the important factors driving the behaviour of the system.
Using simple programs, there is also no direct relation between the behaviour of the elements of the studied system and the mechanics of the program. ‘.. all any model is supposed to do – whether it is a cellular automaton, a differential equation or anything else – is to provide an abstract representation of effects that are important in determining the behaviour of a system. And below the level of these effects there is no reason that the model should actually operate like the system itself‘ [Wolfram 2002, p. 366].
The approach in the case of the cellular automata is to then visually compare (compare the pictures of) the outcomes of the model with the behaviour of the system and try and draw conclusions about similarities in the behaviour of the observed system and the created system.

Biological Systems
Genetic material can be seen as the program of a life form. Its lines contain rules that determine the morphology of a creature via the process of morphogenesis. Traditional Darwinism suggests that the morphology of a creature determines its fitness. Its fitness in turn determines its chances of survival and thus the survival of its genes: the more individuals of the species survive, the bigger its representation in the gene pool. In this evolutionary process the occurrence of mutations adds some randomness, so that the species continuously searches the genetic space of solutions for the combination of genes with the highest fitness.
‘The problem of maximizing fitness is essentially the same as the problem of satisfying constraints..‘ [Wolfram 2002, p. 386]. Sufficiently simple constraints can be satisfied by iterative random searches, which converge to some solution, but if the constraints are complicated then this is no longer the case.
Biological systems have some tricks to speed up this process, like sexual reproduction to mix up the genetic material at a large scale, and genetic differentiation to allow for localized updating of genetic information for separate organs.
Wolfram however considers it ‘implausible that the trillions or so of generations of organisms since the beginning of life on earth would be sufficient to allow optimal solutions to be found to constraints of any significant complexity‘ [Wolfram 2002, p. 386]. To add insult to injury, the design of many existing organisms is far from optimal and is better described as a make-do, easy and cheap solution that will hopefully not immediately be fatal to its inhabitant.
In that sense not every feature of every creature points at some fitness advantage: many features are hold-overs from elements evolved at some earlier stage. Many features are the way they are because they are fairly straightforward to make with simple programs, and then they are just good enough for the species to survive, no more and no less. It is not the details filled in afterwards but the relatively coarse features that support the survival of the species.
In a short program there is little room for frills: almost any mutation in the program will tend to have an immediate effect on at least some details of the phenotype. If, as a mental exercise, biological evolution is modeled as a sequence of cellular automata, using each others output sequentially as input, then it is easy to see that the final behaviour of the morphogenesis is quite complex.
It is, however, not required that a program be very long or complicated to generate complexity: a short program with some essential mutations suffices. Why, then, is there not vastly more complexity in biological systems, given that it is so easy to come by and that the forms and patterns usually seen in biological systems are fairly simple? ‘My guess is that in essence it (the propensity to exhibit mainly simple patterns, DPB) reflects limitations associated with the process of natural selection .. I suspect that in the end natural selection can only operate in a meaningful way on systems or parts of systems whose behaviour is in some sense quite simple‘ [Wolfram 2002, pp. 391 – 92]. The reasons are:
when behaviour is complex, the number of actual configurations quickly becomes too large to explore
when the layouts of different individuals in a species become very different, the details may carry different weights in their survival skills; if the variety of detail becomes large, then acting consistently and definitively becomes increasingly difficult
when the overall behaviour of a system is more complex than that of any of its subsystems, any change entails a large number of changes to all the subsystems, each with a different effect on the behaviour of the individual subsystems, and natural selection has no way to pick the relevant changes
if changes occur in many directions, it becomes very difficult for changes to cancel out or settle on one direction, and thus for natural selection to understand what to act on
iterative random searches tend to be slow and make very little progress towards a global optimum.

If a feature is to be successfully optimized for different environments, then it must be simple. While it has been claimed that natural selection increases the complexity of organisms, Wolfram suggests that it reduces complexity: ..’it tends to make biological systems avoid complexity, and be more like systems in engineering‘ [Wolfram 2002, p. 393]. The difference is that engineering systems are designed and developed in a goal-oriented way, whereas in evolution this is done by an iterative random search process.

There is evidence from the fossil record that evolution brings smooth change and relative simplicity of features in biological systems. If this evolutionary process points at simple features and smooth changes, then where does the diversity come from? It turns out that a change in the rate of growth changes the shape of an organism dramatically, as well as its mechanical operation.

Fundamental Physics
‘My approach in investigating issues like the Second Law is in effect to use simple programs as metaphors for physical systems. But can such programs in fact be more than that? And for example is it conceivable that at some level physical systems actually operate directly according to the rules of a simple program?‘ [Wolfram 2002, p. 434].

Out of the 256 rules for cellular automata based on two colours and nearest-neighbour interaction, only six exhibit reversible behaviour: the overall behaviour can be reversed if the rules are played backwards. Their behaviour, however, is not very interesting. Out of the 7,500 billion rules based on three colours and next-neighbour interaction, around 1,800 exhibit reversible behaviour, of which a handful show interesting behaviour.

Rules can be designed to show reversible behaviour if their pictured behaviour can be mirrored vertically (the generated graphs usually run from top to bottom, DPB): the future then looks the same as the past. It turns out that the pivotal design feature of reversible rules is that existing rules can be adapted to add dependence on the states of the cells two steps back. Note that this reversibility can also be constructed using the preceding step only if, instead of two states, four are allowed. The overall behaviour shown by these rules is reversible, whether the initial conditions are random or simple. It is shown that a small fraction of the reversible rules exhibit complex behaviour for simple and random initial conditions alike.
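The construction described here, making the new row depend on the usual neighbourhood combined with the row two steps back, can be sketched directly; feeding the last two rows back in reversed order runs time backwards and recovers the past exactly:

```python
# The "R" construction for reversible cellular automata: the new row is
# the ordinary rule applied to the current row, XOR the row two steps
# back.  Applying the same rule to the last two rows in swapped order
# runs the evolution backwards.
import random

def step_r(rule, prev, curr):
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(curr)
    return tuple(table[(curr[(i - 1) % n] << 2) | (curr[i] << 1)
                       | curr[(i + 1) % n]] ^ prev[i] for i in range(n))

random.seed(1)
row0 = tuple(random.randint(0, 1) for _ in range(20))
row1 = tuple(random.randint(0, 1) for _ in range(20))

history = [row0, row1]
for _ in range(30):                     # run rule 30R forwards
    history.append(step_r(30, history[-2], history[-1]))

back = [history[-1], history[-2]]       # swap the final two rows...
for _ in range(30):                     # ...and apply the same rule
    back.append(step_r(30, back[-2], back[-1]))

print(back[-1] == row0 and back[-2] == row1)  # True: the past is recovered
```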

Whether this reversibility actually happens in real life depends on the theoretical definition of the initial conditions and on our ability to set them up so as to exhibit the reversible overall behaviour. If the initial conditions are exactly right, then behaviour that grows increasingly complex towards the future can become simpler when reversed. In practical terms this hardly ever happens, because we tend to design and implement initial conditions that are easy for the experimenter to describe and construct. It seems reasonable that in any meaningful experiment the activities required to set up the experiment should be simpler than the process that the experiment is intended to observe. If we consider these processes as computations, then the computations required to set up the experiment should be simpler than the computations involved in the evolution of the system under review. So if we start with simple initial conditions, trace the system back to more complex ones, and then start the evolution of the system anew from there, we will surely find that the system shows increasingly simple behaviour. Finding these complicated, seemingly random initial conditions in any way other than tracing a reversible process to and from the simple initial conditions seems impossible. This is also the basic argument for the existence of the Second Law of Thermodynamics.

Entropy is defined as the amount of information about a system that is still unknown after measurements on the system. If more measurements are performed over time, then the entropy so defined will tend to decrease. In other words: should the observer be able to know with absolute certainty properties such as the positions and velocities of each particle in the system, then the entropy would be zero. According to this definition, entropy is the information with which it would be possible to pick out the configuration the system is actually in from every possible configuration of the distribution of particles satisfying the outcomes of the measurements on the system. Increasing the number and quality of the measurements involved amounts to the same computational effort as is required for the actual evolution of the system. Once randomness is produced, the actual behaviour of the system becomes independent of the details of its initial conditions. In a reversible system different initial conditions must lead to a different evolution of the system, for otherwise there would be no way of reversing the system's behaviour in a unique way. But even though the outcomes from different initial conditions can be very different, the overall patterns produced by the system can still look much the same. But to identify the initial conditions from the state of a system at any given time implies a computational effort far beyond the effort of a practical and meaningful measurement procedure. If a system generates sufficient randomness, then it evolves towards a unique equilibrium whose properties are for practical purposes independent of its initial conditions. In this way it is possible to characterize many systems by a few typical parameters.
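The definition can be made concrete with a toy system; the example below (my illustration) treats "total number of black cells" as the only measurement on a 4-cell system and computes the entropy as the log of the number of configurations consistent with the measured value:

```python
# Entropy as the information still unknown after a measurement: if the
# only measurement on a 4-cell system is its total number of black
# cells, the entropy is log2 of the number of configurations that are
# consistent with the measured value.
import math
from itertools import product

def entropy_given_count(n_cells, black_count):
    consistent = [c for c in product([0, 1], repeat=n_cells)
                  if sum(c) == black_count]
    return math.log2(len(consistent))

for k in range(5):
    print(k, "black cells ->", round(entropy_given_count(4, k), 3), "bits")
```

A sharper measurement (fewer consistent configurations) gives lower entropy; pinning down the exact configuration gives zero.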

With cellular automata it is possible, using reversible rules and starting from a random set of initial conditions, to generate behaviour that increases order instead of tending towards more randomness, e.g. rule 37R [Wolfram 2002, pp. 452 – 57]. Its behaviour neither completely settles down to order nor generates randomness only. Although it is reversible, it does not obey the Second Law. To be able to reverse this process, however, the experimenter would have to set up the initial conditions exactly so as to be able to reach the ‘earlier’ stages, or else the information generated by the system is lost. But how can there be enough information to reconstruct the past? All the intermediate local structures emitted on the way to the ‘turning point’ would have to be absorbed by the system on its way back in order for it to eventually reach its original state. No local structure emitted on the way to the turning point can escape.

Is the evolution of such systems therefore intrinsically not reversible? And can all forms of self-organisation potentially occur in cellular automata without reversible rules?

For these reasons it is possible for parts of the universe to become more organised than other parts, even with all laws of nature being reversible. What cellular automata such as 37R show is that it is possible even for closed systems not to follow the Second Law. If the system gets partitioned, then within the partitions order might evolve while elsewhere in the system randomness is simultaneously generated. Any closed system will repeat itself at some point in time; until then it must visit every possible configuration, most of which will be or seem to be random. Rule 37R does not show this ergodicity: it visits only a small fraction of all possible states before repeating.

Conserved Quantities and Continuum Phenomena
Examples are quantities of energy and electric charge. Can the amount of information in exchanged messages be a proxy for a quantity to be conserved?

With nearest-neighbour rules, cellular automata do exhibit this principle (shown as the number of cells of equal colour being conserved at each step), but without showing sufficiently complex behaviour. Using next-neighbour rules, they are capable of showing conservation while also exhibiting interesting behaviour. Even more interesting and random behaviour occurs when block cells are used, especially with three colours instead of two. In this setup the total number of black cells must remain equal for the entire system. On a local level, however, the number of black cells does not necessarily remain the same.
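The simplest version of such a block rule, pairing neighbouring cells with an alternating offset and swapping each pair, conserves the number of black cells by construction (though, as the text notes, a simple two-colour variant like this does not yet show complex behaviour):

```python
# A block cellular automaton step: pair up neighbouring cells (the
# pairing offset alternates each step) and swap the contents of each
# pair.  Every swap preserves the total number of black cells.
def block_step(cells, offset):
    cells = list(cells)
    for i in range(offset, len(cells) - 1, 2):
        cells[i], cells[i + 1] = cells[i + 1], cells[i]
    return tuple(cells)

state = (1, 0, 0, 1, 1, 0, 1, 0)
for t in range(6):
    state = block_step(state, t % 2)
    print(state, "black cells:", sum(state))
```

The global count is invariant at every step even though locally cells keep changing colour.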

Multiway systems
In a multiway system all possible replacements are always executed at every step, thereby generating many new strings (i.e. combinations of the applied replacements) at each step. ‘In this way they allow for multiple histories for a system. At every step multiple replacements are possible and so, tracing back the different paths from string to string, different histories of the system are possible. This may appear strange, for our understanding of the universe is that it has only one history, not many. But if the state of the universe is a single string in the multiway system, then we are part of that string and we cannot look into it from the outside. Being on the inside of the string it is our perception that we follow just one unique history and not many. Had we been able to look at it from without, then the path that the system followed would seem arbitrary‘ [Wolfram 2002, p. 505]. If the universe is indeed a multiway system, then another source of randomness is the actual path that its evolution has followed. This randomness component is similar to the outside randomness discussed earlier, but different in the sense that it would occur even if this universe were perfectly isolated from the rest of the universe.

There are sufficient other sources of randomness to explain interesting behaviour in the universe, and randomness by itself is no sufficient reason to assume a multiway system as the basis for the evolution of the universe. What other reasons can there be to underpin the assumption that the underlying mechanism of the universe is a multiway system? For one, multiway systems are quite capable of generating a vast number of different possible strings, and therefore many possible connections between them, meaning different histories.

However, looking at the sequences of those strings it becomes obvious that they cannot be arbitrary. Each path is defined by a sequence of ways in which replacements by the multiway system’s rules are applied. And each such path in turn defines a causal network. Certain underlying rules have the property that the form of this causal network ends up being the same regardless of the order in which the replacements are applied, and thus regardless of the path that is followed in the multiway system. If the multiway system ends up with the same causal network, then it must be possible to apply a replacement to a string already generated and end up at the same final state. Whenever paths always eventually converge, there will be similarities on a sufficiently large scale in the obtained causal networks. The structure of the causal networks may vary a lot at the level of individual events, but at a sufficiently large scale the individual details will be washed out and the structure of the causal network will be essentially the same: at a sufficiently high level the universe will appear to have a unique history, while the histories at local levels differ.

Processes of perception and analysis
The processes that lead to some forms of behaviour in systems are comparable to some processes that are involved in their perception and analysis. Perception relates to the immediate reception of data via sensory input; analysis involves conscious and computational effort. Perception and analysis are an effort to reduce events in our everyday lives to manageable proportions so that we can use them. Reduction of data happens by ignoring whatever is not necessary for our everyday survival and by finding patterns in the remaining data, so that individual elements in the data do not need to be specified. If the data contains regularities then there is some redundancy in the data. The reduction is important for reasons of storage and communication.
This process of perception and analysis is the inverse of the evolving of systems behaviour from simple programs: to identify whatever it is that produces some kind of behaviour. For observed complex behaviour this is not an easy task, for the complex behaviour generated bears no obvious relation to the simple programs or rules that generate them. An important difference is that there are many more ways to generate complex behaviour than there are to recognize the origins of this kind of behaviour. The task of finding the origins of this behaviour is similar to solving problems satisfying a set of constraints.
Randomness is roughly defined as the apparent inability to find a regularity in what we perceive. Absence of randomness implies that redundancies are present in what we see, hence a shorter description can be given of what we see that allows us to reproduce it. In the case of randomness, we would have no choice but to repeat the entire picture, pixel by pixel, to reproduce it. The fact that our usual perceptual abilities do not allow such a description doesn’t mean that no such description exists. It is very much possible that the randomness is generated by the repetition of a simple rule a few times over. Does it, then, follow that the picture is not random? From a perceptual point of view it is, because we are incapable of finding the corresponding rule; from a conceptual point of view this definition is not satisfactory. In the latter case the definition would be that randomness exists only if no such rule exists, and not merely if we cannot immediately discern it. However, finding the short description, i.e. the short program that generates this random behaviour, is not possible in a computationally finite way. Restricting the computational effort allowed for finding out whether something is random also seems unsatisfactory, because it is arbitrary, it still requires a vast amount of computational work, and many systems would not be labelled as random for the wrong reasons. So in the definition of randomness some reference needs to be made to how the short descriptions are to be found. ‘..something could be considered to be random whenever there is essentially no simple program that can succeed in detecting regularities in it‘ [Wolfram 2002, p. 556]. In practical terms this means that if, after comparing the behaviour of a few simple programs with the behaviour of the observed would-be random generator, no regularities are found in it, then the behaviour of the observed system is random.
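This operational definition can be sketched with a crude stand-in for "a few simple programs": here zlib compression plays the role of the regularity detector, and data counts as random when the detector fails to shrink it. The threshold and the choice of detector are my illustrative assumptions:

```python
# A crude operational randomness test: pack the bit sequence into bytes
# and let zlib play the role of a "simple program looking for
# regularities".  If compression barely shrinks the data, no regularity
# was detected and the sequence is (operationally) random.
import random
import zlib

def pack(bits):
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits) - 7, 8))

def looks_random(bits, threshold=0.9):
    data = pack(bits)
    return len(zlib.compress(data, 9)) / len(data) > threshold

random.seed(0)
noise = [random.getrandbits(1) for _ in range(1024)]
periodic = [0, 1] * 512
print(looks_random(noise), looks_random(periodic))  # True False
```

As the text warns, a verdict of "random" here only means this particular detector failed; a different simple program might still find structure.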

Complexity
If we say that something is complex, we say that we have failed to find a simple description for it, hence that our powers of perception and analysis have failed on it. How the behaviour is described depends on what purpose the description serves, or how we perceive the observed behaviour. The assessment of the complexity involved may differ depending on the purpose of the observation. Given this purpose, the shorter the description, the less complex the behaviour. The remaining question is whether it is possible to define complexity independent of the details of the methods of perception and analysis. The traditional opinion was that any complex behaviour stems from a complex system, but this is no longer the case: it takes only a simple program to develop a picture for which our perception can find no simple overall description.
‘So what this means is that, just like every other method of analysis that we have considered, we have little choice but to conclude that traditional mathematics and mathematical formulas cannot in the end realistically be expected to tell us very much about patterns generated by systems like rule 30’ [Wolfram 2002, p 620].

Human Thinking
Human thinking stands out from other methods of perception in its extensive use of memory: the use of the huge amount of data that we have encountered and interacted with previously. Human memory retrieves items based on general notions of similarity rather than exact specifications of whatever memory item we are looking for. Ordinary hashing could not work for this, because similar experiences summarized in different words might end up being stored in completely different locations, and the relevant piece of information might not be retrieved when the search key involved hits a different hash code. What is needed is to identify the information that really sets one piece of information apart from the others, to store that, and to discard the rest. The effect is that pieces of information that are similar enough end up with the same representation, and can thus be retrieved whenever some remote or seemingly remote association with the situation at hand occurs.
This can be achieved with a number of templates that the incoming information is compared with. Only if the signal remaining per layer of nerve cells generates a certain hash code is the information deemed relevant and retrieved. Small variations in the input rarely result in variations in the output; in other words, quick retrieval (based on the hash code) of similar (not necessarily exactly the same) information is possible. The stored information is pattern based only; it is not stored as meaningful or a priori relevant information.
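This template idea, retrieval keyed on a code that is insensitive to small variations of the input, resembles what is now called locality-sensitive hashing. The sketch below is my illustration of that principle, not a model of the brain: a SimHash-style fingerprint, where similar feature sets get fingerprints that differ in only a few bits, unlike an ordinary hash.

```python
import hashlib

def simhash(features, bits=64):
    """SimHash-style fingerprint: each feature votes on every bit,
    so near-identical feature sets yield near-identical fingerprints."""
    votes = [0] * bits
    for f in features:
        h = int.from_bytes(hashlib.md5(f.encode()).digest()[:8], "big")
        for b in range(bits):
            votes[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if votes[b] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

memory = simhash("the cat sat on the mat".split())
similar = simhash("the cat sat on a mat".split())
unrelated = simhash("quarterly revenue grew strongly".split())
# similar experiences land close in code space, unrelated ones far away
print(hamming(memory, similar), "vs", hamming(memory, unrelated))
```

An exact hash of the two cat sentences would give unrelated codes; here the shared pattern dominates the fingerprint, so retrieval by approximate match becomes possible.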

‘But it is my strong suspicion that in fact logic is very far from fundamental, particularly in human thinking’ [Wolfram 2002, p 627]. We retrieve connections from memory without too much effort, but perform logical reasoning cumbersomely, going one step after the next, and it is possible that in that process we are mainly using elements of logic that we have learned from previous experience.

Chapter 11 The Notion of Computation
All sorts of behaviour can be produced by simple rules such as cellular automata, and a framework is needed for thinking about this behaviour. Traditional science provides such a framework only if the observed behaviour is fairly simple. What can we do if the observed behaviour is more complex? The key idea is the notion of computation. If the various kinds of behaviour are generated by simple rules, or simple programs, then the way to think about them is in terms of the computations they can perform: the input is provided by the initial conditions and the output by the state of the system after a number of steps. What happens in between is the computation, in abstract terms and regardless of the details of how it actually works. Such abstraction is useful for discussing the behaviour of systems in a unified way, regardless of the different kinds of underlying rules. For even though the internal workings of systems may differ, the computations they perform may be similar. From this vantage point it may become possible to formulate principles that apply to a variety of different systems, independent of the detailed structures of their rules.
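The input/computation/output view can be made concrete with a minimal elementary-cellular-automaton stepper (a sketch; the rule numbering follows Wolfram’s standard scheme): the rule number is the ‘program’, the initial condition is the input, and the row after a number of steps is the output.

```python
def step(cells, rule):
    """One step of an elementary cellular automaton.
    Bit (4*l + 2*c + r) of the rule number gives the new cell value."""
    padded = [0, 0] + cells + [0, 0]
    return [(rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def run(cells, rule, steps):
    """The 'computation': evolve the input for a number of steps."""
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# rule 30 from a single black cell, output after 2 steps
print(run([1], 30, 2))  # → [1, 1, 0, 0, 1]
```

The same stepper runs any of the 256 rules, which already hints at the later point about universality: one mechanism, different programs.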

‘At some level, any cellular automaton – or for that matter, any system whatsoever – can be viewed as performing a computation that determines what its future behaviour will be’ [Wolfram, 2002, p 641]. And for some of the cellular automata described it so happens that the computations they perform can be described, to a limited extent, in traditional mathematical notions. Answers to the question of the framework come from practical computing.

The Phenomenon of Universality
Based on our experience with mechanical and other devices it might be assumed that we need different underlying constructions for different kinds of tasks. The existence of computers has shown otherwise: a universal system can be made to execute different tasks by being programmed in different ways. The hardware is the same; different software programs the computer for different tasks.
This idea of universality is also the basis for programming languages, where instructions from a fixed set are strung together in different ways to create programs for different tasks. Conversely, a computer programmed with a program written in any language can perform the same set of tasks: any computer system or language can be set up to emulate any other. An analogue is human language: virtually any topic can be discussed in any language, and given two languages it is largely always possible to translate between them.
Are natural systems universal as well? ‘The basic point is that if a system is universal, then it must effectively be capable of emulating any other system, and as a result it must be able to produce behavior that is as complex as the behavior of any other system. So knowing that a particular system is universal thus immediately implies that the system can produce behavior that is in a sense arbitrarily complex‘ [Wolfram 2002, p 643].

Just as the intuition that complex behaviour must be generated by complex rules is wrong, so is the idea that simple rules cannot be universal. It is often assumed that universality is a unique and special quality, but now it becomes clear that it is widespread and occurs in a wide range of systems, including the systems we see in nature.

It is possible to construct a universal cellular automaton and to choose initial conditions for it so that it emulates any other cellular automaton, and thus produces any behaviour that the other cellular automaton can produce. The conclusion is (again) that nothing new is gained by using rules more complex than those of the universal cellular automaton: given it, more complicated rules can always be emulated by its simple rules together with appropriately set-up initial conditions. Universality can occur in simple cellular automata with two colours and nearest-neighbour rules, but their operation is more difficult to follow than that of cellular automata with a more complex set-up.

Emulating other Systems with Cellular Automata
Cellular automata can emulate mobile automata, Turing machines, substitution systems, sequential substitution systems, tag systems, register machines, number systems and simple operators. A cellular automaton can also emulate a practical computer, as it can emulate registers, numbers, logic expressions and data retrieval. Cellular automata can therefore perform the computations that a practical computer can perform.
And so a universal cellular automaton is universal beyond being capable of emulating all other cellular automata: it is capable of emulating a vast array of other systems, including practical computers. Reciprocally, all those other systems can be made to emulate cellular automata, including a universal cellular automaton, and they must therefore themselves be universal, since a universal cellular automaton can emulate a wide array of systems, including all possible mobile automata and symbolic systems. ‘By emulating a universal cellular automaton with a Turing machine, it is possible to construct a universal Turing machine’ [Wolfram 2002, p 665].

‘And indeed the fact that it is possible to set up a universal system using essentially just the operations of ordinary arithmetic is closely related to the proof of Gödel’s Theorem’ [Wolfram 2002, p 673].

Implications of Universality
All of the discussed systems can be made to emulate each other. All of them have certain features in common. And now, thinking in terms of computation, we can begin to see why this might be the case. They have common features just because they can be made to emulate each other. The most important consequence is that from a computational perspective a very wide array of systems with very different underlying structures are at some level fundamentally equivalent. Although the initial thought might have been that the different kinds of systems would have been suitable for different kinds of computations, this is in fact not the case. They are capable of performing exactly the same kinds of computations.
Computation therefore can be discussed in abstract terms, independent of the kind of system that performs the computation: it does not matter what kind of system we use, any kind of system can be programmed to perform any kind of computation. The results of the study of computation at an abstract level are applicable to a wide variety of actual systems.
To be fair: not all cellular automata are capable of all kinds of computations, but some have large computational capabilities, and once past a certain threshold the set of possible computations is always the same. Beyond that threshold of universality, no additional complexity of the underlying rules will increase the computational capabilities of the system. Once the threshold is passed, it does not matter what kind of system it is that we are observing.

The rule 110 Cellular Automaton
The threshold for the complexity of the underlying rules required to produce complex behaviour is remarkably low.

Class 4 Behaviour and Universality
Rule 110 with random initial conditions exhibits many localized structures that move around and interact with each other. This is not unique to that rule: this kind of behaviour is produced by all cellular automata of Class 4, and the suspicion is that any Class 4 system will turn out to have universal computational capabilities. Of the 256 nearest-neighbour rules with two colours, only four more or less comply: rule 110 and rules 124, 137 and 193, which are trivial variants of it. But for rules involving more colours, more dimensions and/or larger neighbourhoods, Class 4 localized structures often emerge. The crux for the existence of Class 4 behaviour is the control of the transmission of information through the system.

Universality in Turing Machines and other Systems
The simplest universal Turing machine currently known has two states and five possible colours. It might not be the simplest universal Turing machine in existence; the simplest must lie somewhere between this machine and those with two states and two colours, none of which is universal. There is some evidence that a Turing machine with two states and three colours is universal, but no proof exists as yet. There is a close connection between the appearance of complexity and universality.

Combinators can emulate rule 110 and have been known to be universal since the 1930s. Other symbolic systems show complex behaviour and may turn out to be universal too.

Chapter 12 The Principle of Computational Equivalence
The Principle of Computational Equivalence applies to any process of any kind, natural or artificial: ‘all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations‘ [Wolfram 2002, p 715]. This means that any process that follows definite rules can be thought of as a computation. For example the process of evolution of a system like a cellular automaton can be viewed as a computation, even though all it does is generate the behaviour of the system. Processes in nature can be thought of as computations, although the rules they follow are defined by the basic laws of nature and all they do is generate their own behaviour.

Outline of the principle
The principle asserts that there is a fundamental equivalence between many kinds of processes in computational terms.

Computation is defined as that which a universal system in this sense can do. It is possible to imagine another system capable of computations beyond those of universal cellular automata or other such systems, but such a system could never be constructed in our universe.

Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. In other words: there is just one level of computational sophistication and it is achieved by almost all processes that do not seem obviously simple. Universality allows the construction of universal systems that can perform any computation and thus they must be capable of exhibiting the highest level of computational sophistication. From a computational view this means that systems with quite different underlying structures will show equivalence in that rules can be found for them that achieve universality and that can thus exhibit the same level of computational sophistication.
The rules need not be more complicated themselves to achieve universality hence a higher level of computational sophistication. On the contrary: many simple though not overly simple rules are capable of achieving universality hence computational sophistication. This property should furthermore be very common and occur in a wide variety of systems, both abstract and natural. ‘And what this suggests is that a fundamental unity exists across a vast range of processes in nature and elsewhere: despite all their detailed differences every process can be viewed as corresponding to a computation that is ultimately equivalent in its sophistication‘ [Wolfram 2002, p 719].

We could identify all existing processes, engineered or natural, and observe their behaviour. In many instances this behaviour will surely turn out to be simply repetitive or nested. Whenever a system shows vastly more complex behaviour, the Principle of Computational Equivalence asserts that the rules underlying it are universal. Conversely, given some rule, it is usually very complicated to find out whether it is universal or not.

If a system is universal then it is possible, by choosing the appropriate initial conditions, to perform computations of any sophistication. No guarantee exists, however, that some large portion of all initial conditions result in behaviour of the system that is more interesting and not merely obviously simple. But even rules that are by themselves not complicated, given simple initial conditions, may produce complex behaviour and may well produce processes of computational sophistication.

Introduction of a new law to the effect that no system can carry out explicit computations more sophisticated than those that can be carried out by systems such as cellular automata or Turing machines. Almost all processes, except those that are obviously simple, achieve this limit of computational equivalence, implying that of all possible systems whose behaviour is not obviously simple, an overwhelming fraction are universal. Every process can in this way be thought of as a ‘lump of computation’.

The Validity of the Principle
The principle is counter-intuitive from the perspective of traditional science and there is no proof for it. Cellular automata are fundamentally discrete, and it appears that systems in nature are more sophisticated than such computer systems because, from a traditional perspective, they should be continuous. But the presumed continuity of these systems is itself an idealization required by traditional methods. As an example: fluids are traditionally described by continuous models, yet they consist of discrete particles, and their computational behaviour must be that of a system of discrete particles.
‘It is my strong suspicion that at a fundamental level absolutely every aspect of our universe will in the end turn out to be discrete. And if this is so, then it immediately implies that there cannot ever ultimately be any form of continuity in our universe that violates the Principle of Computational Equivalence’ [Wolfram 2002, p 730]. In a continuous system the computation is not local, and every digit has in principle infinite length. And in the same vein: ‘.. it is my strong belief that the basic mechanisms of human thinking will in the end turn out to correspond to rather simple computational processes’ [Wolfram 2002, p 733].

Once a system passes a relatively low threshold of complexity, it must exhibit the same level of computational sophistication as any other such system. This means that observers will tend to be computationally equivalent to the systems they observe, and as a consequence they will consider the behaviour of such systems complex.

Computational Irreducibility
Almost all scientific triumphs have in common that they are based on finding ways to reduce the amount of computational work needed to predict how a system will behave. Most of the time the idea is to derive a mathematical formula that allows one to determine the outcome of the evolution of the system without having to trace its every step explicitly. There is, however, a great shortage of such formulas for all sorts of known and common kinds of system behaviour.
Traditional science takes as a starting point that many of the evolutionary steps performed by a system are an unnecessarily large effort, and attempts to shortcut this process and find the outcome with less effort. However, describing the behaviour of systems exhibiting complex behaviour is a difficult task: in general not only the rules of the system are required, but its initial conditions as well. The difficulty is that, even knowing the rules and the initial conditions, it might still take an irreducible amount of time to predict the system’s behaviour. When computational irreducibility exists there is no other way to find out how the system will behave but to go through every evolutionary step up to the required state. A predicting system can only outrun, with less effort, the actual system whose future we are trying to predict if its computations are more sophisticated. That would violate the Principle of Computational Equivalence: every system that shows no obviously simple behaviour is computationally exactly equivalent, so predicting models cannot be more sophisticated than the systems they intend to describe. For many systems, therefore, no systematic predictions can be made; their process of evolution cannot be shortcut, and they are computationally irreducible. If the behaviour of a system is simple, for example repetitive or nested, then the system is computationally reducible. This limits the potential of traditional science to advance in studying systems whose behaviour is not quite simple.
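Computational reducibility can be illustrated with rule 90, a standard example of nested behaviour (the shortcut below is the well-known Pascal’s-triangle-mod-2 property of that rule, my illustration rather than anything specific to this text): the cell values after t steps can be computed directly from binomial coefficients, with no need to evolve the intermediate steps. For rule 30 no such shortcut is known.

```python
from math import comb

def rule90_row(t):
    """Irreducible route: evolve rule 90 (new cell = left XOR right)
    for t steps from a single black cell; the row has width 2*t + 1."""
    cells = [1]
    for _ in range(t):
        padded = [0, 0] + cells + [0, 0]
        cells = [padded[i - 1] ^ padded[i + 1]
                 for i in range(1, len(padded) - 1)]
    return cells

def rule90_shortcut(t):
    """Reducible route: cell 2k of row t is C(t, k) mod 2, odd cells are 0."""
    row = [0] * (2 * t + 1)
    for k in range(t + 1):
        row[2 * k] = comb(t, k) % 2
    return row

print(rule90_row(8) == rule90_shortcut(8))  # → True
```

The formula jumps straight to step t in constant work per cell, which is exactly what computational reducibility means; for irreducible systems the left-hand route is the only one available.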

Using mathematical formulas, for instance, only makes sense if the computation is reducible, hence if the system’s behaviour is relatively simple. Traditional science must therefore constrain itself to the study of relatively simple systems, because only these are computationally reducible. This is not the case for the new kind of science, because it relies less on formulas and more on pictures of the evolution of systems, which may very well be computationally irreducible. These pictures are not a preamble to the actual ‘real’ predictions based on formulas; they are the real thing themselves. A universal system can emulate any other system, including a predictive model. Using shortcuts means trying to outrun the observed system with another system that takes less effort; because the latter can be emulated by the former (as it is universal), this would mean that the predictive model must be able to outrun itself. This is relevant because universality abounds in systems.

As a consequence of computational irreducibility there can be no easy theory for everything; there will be no formula that predicts any and every observable process or behaviour that seems complex to us. Deducing the consequences of the simple rules that generate complex behaviour requires irreducible amounts of computational effort. Any system can be observed, but there can be no guarantee that a model of that system exists that accurately describes or predicts how the observed system will behave.

The Phenomenon of Free Will
Though a system may be governed by definite underlying laws, its behaviour may not be describable by reasonable laws. This involves computational irreducibility, because the only way to find out how the system will behave is to actually evolve the system. There is no other way to work out this behaviour more directly.
Analogous to this is the human brain: although definite laws underpin its workings, because of irreducible computation no way exists to derive an outcome via reasonable laws. The system then seems, even though we know that definite rules underpin it, to behave in a way that does not follow any reasonable law at all in doing this or that. And yet the underpinning rules are definite, without any freedom, while still allowing the system’s behaviour some form of apparent freedom. ‘For if a system is computationally irreducible this means that there is in effect a tangible separation between the underlying rules for the system and its overall behaviour associated with the irreducible amount of computational work needed to go from one to the other. And it is in this separation, I believe, that the basic origin of the apparent freedom we see in all sorts of systems lies – whether those systems are abstract cellular automata or actual living brains’ [Wolfram 2002, p 751].
The main issue is that it is not possible to make predictions about the behaviour of such a system, for if we could, then the behaviour would be determined in a definite way and could not be free. But now we know that definite simple rules can lead to unpredictability: the ensuing behaviour is so complex that it seems free of obvious rules. This occurs as a result of the evolution of the system itself; no external input is required to derive that behaviour.
‘But this is not to say that everything that goes on in our brains has an intrinsic origin. Indeed, as a practical matter what usually seems to happen is that we receive external input that leads to some train of thought which continues for a while, but then dies out until we get more input. And often the actual form of this train of thought is influenced by the memory we have developed from inputs in the past – making it not necessarily repeatable even with exactly the same input’ [Wolfram 2002, p 752–53].

Undecidability and Intractability
Undecidability, as in Gödel’s and Turing’s results on the Entscheidungsproblem, is not a rare case: it can arise with very simple rules and it is very common. For every system that seems to exhibit complex behaviour, questions about its evolution are likely to be undecidable. Finite questions about a system can ultimately be answered by finite computation, but the computations may be so difficult as to be intractable. To assess the difficulty of a computation one assesses the amount of time it takes, how big the program is that runs it, and how much memory it takes. However, it is often not knowable whether the program used for the computation is the most efficient one for the job. Working with very small programs, it becomes possible to assess their efficiency.
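The point that the program at hand may not be the most efficient one can be shown with a toy count (my illustration, not from the book): two programs answer the same finite question, the n-th Fibonacci number, with vastly different amounts of computational work.

```python
def fib_naive(n, counter):
    """Direct recursion: exponentially many calls for the same answer."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iter(n, counter):
    """Iterative program: linearly many steps."""
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1
        a, b = b, a + b
    return a

c1, c2 = [0], [0]
assert fib_naive(20, c1) == fib_iter(20, c2)
print(c1[0], "calls vs", c2[0], "steps")  # same answer, very different effort
```

Both programs decide the same finite question; without further analysis there is no way to tell from the naive program alone that a far cheaper one exists, which is exactly why measured difficulty is only an upper bound.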

Implications for Mathematics and its Foundations
Applications in mathematics. In nature and in mathematics simple laws govern complex behaviour. Mathematics has increasingly distanced itself from correspondence with nature. Universality in an axiom system means that any question about the behaviour of any other universal system can be encoded as a statement in the axiom system, and that if the answer can be established in the other system, then it can also be given by a proof in the axiom system. Every axiom system currently in use in mathematics is universal: it can in a sense emulate every other system.

Intelligence in the Universe
Human beings have no specific or particular position in nature: their computational skills do not vary vastly from the skills of other natural processes.

‘But the question then remains why when human intelligence is involved it tends to create artifacts that look much simpler than objects that just appear in nature. And I believe the basic answer to this has to do with the fact that when we as humans set up artifacts we usually need to be able to foresee what they will do – for otherwise we have no way to tell whether they will achieve the purposes we want. Yet nature presumably operates under no such constraint. And in fact I have argued that among systems that appear in nature a great many exhibit computational irreducibility – so that in a sense it becomes irreducibly difficult to foresee what they will do’ [Wolfram 2002, p 828].

A firm as such is not a complicated thing: it takes one question to know what it is (answer: a firm) and another to find out what it does (answer: ‘we manufacture coffee cups’). More complicated is the answer to the question ‘how do you make coffee cups?’, for this requires some considerable explanation. And yet more complicated is the answer to ‘what makes your firm stand out from other coffee-cup manufacturing firms?’. The answer to that will have to involve statements about ‘how we do things around here’, the intricate details of which might have taken years to learn and practise, and now to explain.

A system might be suspected to be built for a purpose if it is the minimal configuration for that purpose.

‘It would be most satisfying if science were to prove that we as humans are in some fundamental way special, and above everything else in the universe. But if one looks at the history of science many of its greatest advances have come precisely from identifying ways in which we are not special – for this is what allows science to make ever more general statements about the universe and the things in it’ [Wolfram 2002, p 844].

‘So this means that there is in the end no difference between the level of computational sophistication that is achieved by humans and by all sorts of other systems in nature and elsewhere’ [Wolfram 2002, p 844].

Mikhailovsky and Levic: Entropy, Information and Complexity or Which Aims the Arrow of Time?

This below is my summary of a somewhat quirky article by George E. Mikhailovsky and Alexander P. Levic on MDPI. It suggests a mathematical model for the variation of complexity, using conditional local maximum entropy for (hierarchical) interrelated objects or elements in systems. I am not capable of verifying whether this model makes sense mathematically. However, I find its logic appealing, because it establishes a relation between entropy, information and complexity. I need this to be able to assess the complexity of my systems, i.e. businesses. Also, it is based on and akin to ‘proven technology’ (i.e. existing mathematical models of these concepts), and it seems to be more than a wild guess. Additionally, it implies relations between the hierarchical levels and objects of a system, using a resources view. Lastly, and connected to this, it addresses the ever-intriguing matter of irreversibility and the concept of time at different scales, and their relation to time at a macroscopic level, i.e. how we experience it here and now.

This quote below from the last paragraph is a clue of why I find it important: “The increase of complexity, according to the general law of complification, leads to the achievement of a local maximum in the evolutionary landscape. This gets a system into a dead end where the material for further evolution is exhausted. Almost everybody is familiar with this, watching how excessive complexity (bureaucratization) of a business or public organization leads to the situation when it begins to serve itself and loses all potential for further development. The result can be either a bankruptcy due to a general economic crisis (external catastrophe) or, for example, self-destruction or decay into several businesses or organizations as a result of the loss of effective governance and, ultimately, competitiveness (internal catastrophe). However, dumping a system with such a local maximum, the catastrophe gives it the opportunity to continue the complification process and potentially achieve a higher peak.”

According to the second law, entropy increases in isolated systems (Carnot, Clausius). Entropy is the first physical quantity that varies asymmetrically in time. The H-theorem of Ludwig Boltzmann shows how the irreversibility of entropy increase is derived from the reversibility of microscopic processes obeying Newtonian mechanics. He deduced the formula:

(1) S = k_B ln W

S is entropy

k_B is the Boltzmann constant, equal to 1.38×10⁻²³ J/K

W is the number of microstates related to a given macrostate

This equation relates values at different levels or scales in a system hierarchy, yielding an irreversible parameter as a result.

In 1948, Shannon and Weaver (The Mathematical Theory of Communication) suggested a formula for informational entropy:

(2) H = −K Σ p_i log p_i

K is an arbitrary positive constant

p_i is the probability of possible event i

If we define the events as microstates, consider them equally probable and choose the nondimensional Boltzmann constant, the Shannon Equation (2) becomes the Boltzmann Equation (1). The Shannon equation is a generalisation of the Boltzmann equation with different probabilities for the letters making up a message (different microstates leading to a macrostate of a system). Shannon says (p 50): “Quantities of the form H = −K Σ p_i log p_i (the constant K merely amounts to a choice of a unit of measure) play a central role in information theory as measures of information, choice and uncertainty. The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics, where p_i is the probability of a system being in cell i of its phase space.” Note that no distinction is drawn here between information and informational entropy. Entropy is maximal when the probabilities p_i in all locations are equal and the information of the system (message) is in maximum disorder. Relative entropy is the ratio of H to the maximum entropy.
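The reduction of Equation (2) to Equation (1) is easy to verify numerically: with W equally probable microstates, p_i = 1/W, and the dimensionless choice K = 1, Shannon’s H equals ln W, i.e. Boltzmann’s S in units of k_B. A minimal check (using the natural logarithm throughout, consistent with S = k_B ln W):

```python
import math

def shannon_entropy(probs, K=1.0):
    """Equation (2): H = -K * sum(p_i * ln p_i); in nats for K = 1."""
    return -K * sum(p * math.log(p) for p in probs if p > 0)

W = 8                             # number of microstates
H = shannon_entropy([1 / W] * W)  # equally probable events, K = 1
print(H, "=", math.log(W))        # H reduces to Boltzmann's ln W
```

With unequal probabilities H drops below ln W, which is the ‘relative entropy’ comparison mentioned above.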

The meaning of these values has proven difficult, because the concept of entropy is generally seen as something negative, whereas the concept of information is seen as positive. This is an example by Mikhailovsky and Levic: “A crowd of thousands of American spectators at an international hockey match chants during the game “U-S-A! U-S-A!” We have an extremely ordered, extremely degenerated state with minimal entropy and information. Then, as soon as the period of the hockey game is over, everybody is starting to talk to each other during a break, and a clear slogan is replaced by a muffled roar, in which the “macroscopic” observer finds no meaning. However, for the “microscopic” observer who walks between seats around the arena, each separate conversation makes a lot of sense. If one writes down all of them, it would be a long series of volumes instead of three syllables endlessly repeated during the previous 20 minutes. As a result, chaos replaced order, the system degraded and its entropy sharply increased for the “macro-observer”, but for the “micro-observer” (and for the system itself in its entirety), information fantastically increased, and the system passed from an extremely degraded, simple, ordered and poor information state into a much more chaotic, complex, rich and informative one.” In summary: the level of order depends on the observed level of hierarchy. Additionally, the value attributed to order has changed over time, and with it the qualifications ‘bad’ and ‘good’ attached to entropy and information respectively.

A third concept connected to order and chaos is complexity. The algorithmic complexity K(x) of a final object x is defined as the length of the shortest computer program that prints a full but not excessive (i.e. minimal) binary description of x and then halts. The equation for Kolmogorov complexity is:

(3) K(x) = l_pr + Min(l_x)

D is the set of all possible descriptions d_x of x

L is the set of the lengths l_x of the descriptions d_x in D

l_pr is the binary length of the printing algorithm mentioned above

In case the description of x is not binary, but uses n symbols, then:

(4) K(x) = lpr + Min( −(1/n) Σi pi log2(pi) )

Mikhailovsky and Levic conclude that, although Equation (4) for complexity is not completely equivalent to Equations (1) and (2), it can be regarded as their generalization in a broader sense.
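The intuition behind Equations (3) and (4) can be illustrated with a rough proxy of my own (not the paper's formalism): a general-purpose compressor gives an upper bound on the length of one binary description of a string, so the highly ordered chant from the example above should admit a far shorter description than an irregular hum of conversations.

```python
import random
import zlib

# Crude stand-in for the minimal description length Min(lx) in Equation (3):
# the size of a zlib-compressed string bounds the length of one particular
# binary description (an illustration only, not Kolmogorov complexity itself).
ordered = b"U-S-A! " * 200                       # highly ordered, low complexity
random.seed(42)                                  # fixed seed: a deterministic "roar"
irregular = bytes(random.randrange(256) for _ in range(len(ordered)))

len_ordered = len(zlib.compress(ordered, 9))
len_irregular = len(zlib.compress(irregular, 9))

# The ordered string compresses to a tiny fraction of its size;
# the irregular one barely compresses at all.
print(len_ordered, len_irregular)
```

The large gap between the two compressed sizes mirrors the gap between the three chanted syllables and the "long series of volumes" in the hockey example.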

Now we define an abstract representation of the system as a category that combines a class of objects and a class of morphisms. Objects of the category explicate the system’s states, and morphisms define admissible transitions from one state to another. Categories with the same objects but different morphisms are different and describe different systems. For example, a system whose transformations are arbitrary mappings differs from a system where the same set of objects transforms only one-to-one. Processes taking place in the first system are richer than in the second, because the first allows transitions between states with a variable number of elements, while the second requires the same number of elements in different states.
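The difference between these two categories can be made concrete by brute-force counting; the three-element state sets below are a toy illustration of my own, not a construction from the source.

```python
from itertools import permutations, product

# Category 1: morphisms are arbitrary mappings between states.
# Category 2: morphisms are one-to-one mappings only.
S = range(3)

arbitrary_maps = list(product(S, repeat=3))   # every function f: S -> S
one_to_one = list(permutations(S))            # bijections only

# Arbitrary mappings also connect states of different sizes
# (3 elements -> 2 elements); no bijection can do that.
maps_3_to_2 = list(product(range(2), repeat=3))

print(len(arbitrary_maps), len(one_to_one), len(maps_3_to_2))  # 27 6 8
```

The first category admits 27 endomorphisms against 6, and it additionally connects states of different cardinality, which is exactly the sense in which its processes are "richer".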

Let us take a system described by a category S and system states X and A, identical to objects X and A from S. The invariant I{X in S}(A) is the number of morphisms from X to A in the category S preserving the structure of objects. In the language of systems theory, the invariant I is the number of transformations of the state X into the state A preserving the structure of the system. We interpret the structure of the system as its “macrostate”. Transformations of the state X into the state A are interpreted as ways of obtaining the state A from the state X, or as “microstates”. Then the invariant of a state is the number of microstates preserving the macrostate of the system, which is consistent with the Boltzmann definition of entropy in Equation (1). More strictly: we define the generalized entropy of the state A of system S (relative to the state X of the same system) as the value:

(5) Hx(A) = ln( I{X in Q}(A) / I{X in Q∘}(A) )

Here I{X in Q}(A) is the number of morphisms from set X into set A in the category Q of structured sets, and I{X in Q∘}(A) is the number of morphisms from X into A in the category Q∘ of structureless sets with the same cardinality (number of elements) as in category Q, but with an “erased structure”. In particular cases, generalized entropy takes the usual “Boltzmann” or, if you like, “Shannon” form. Equation (5) represents the ratio of the number of transformations preserving the structure to the total number of transformations, which can be interpreted as the probability of the formation of a state with the given structure. Statistical entropy (1), information (2) and algorithmic complexity (4) are only a few possible interpretations of Equation (5). It is important to emphasize that the formula for the generalized entropy is introduced with no statistical or probabilistic assumptions and is valid for any large or small number of elements of the system.
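A minimal brute-force sketch of Equation (5), under the assumption that a structure-preserving morphism between partitioned sets must send all elements of one block into a single block; the four-element sets and their partitions are illustrative choices of mine, not taken from the source.

```python
from itertools import product
from math import log

# X and A are 4-element sets carrying the same partition into two blocks.
block = [0, 0, 1, 1]          # block label of elements 0..3

def preserves_partition(f):
    # elements in the same X-block must land in the same A-block
    return all(block[f[i]] == block[f[j]]
               for i in range(4) for j in range(4)
               if block[i] == block[j])

all_maps = list(product(range(4), repeat=4))                  # structure erased: Q∘
structured = [f for f in all_maps if preserves_partition(f)]  # structured category Q

# Generalized entropy as the log of the ratio in Equation (5)
H = log(len(structured) / len(all_maps))
print(len(structured), len(all_maps), H)  # 64 256 ln(1/4)
```

The ratio 64/256 is exactly the probability, mentioned above, that a random transformation respects the given structure.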

The amount of “consumed” (plus “lost”) resources determines the “reading” of the so-called “metabolic clock” of the system. Construction of this metabolic clock implies the ability to count the number of elements replaced in the system. Therefore, a non-trivial application of the metabolic approach requires the ability to compare one structured set to another. This ability comes from the functorial method of comparing structures, which offers system invariants as a generalization of the concept of “number of elements” for structureless sets. Note that a system that consumes several resources exists in several metabolic times. The entropy of the system “averages” these metabolic times, and entropy increases monotonically with the flow of each metabolic time; i.e., entropy and the metabolic times of a system are linked uniquely and monotonically, and can be calculated one from the other. This relationship is given by:

(7) ∂H/∂Lk = λk, k = 1, …, m

Here, H is the structural entropy, L ≡ {L1, L2, …, Lm} is the set of metabolic times (resources) of the system, and λ ≡ {λ1, λ2, …, λm} are the Lagrange multipliers of the variational problem on the conditional maximum of the structural entropy, restricted by the flows of metabolic times. For the structure of sets with partitions, where morphisms are mappings preserving the partition (or their dual correspondences), the variational problem has the form:

(8)

It was proven that the corresponding Lagrange multipliers are non-negative, i.e., structural entropy monotonously increases (or at least does not decrease) in each metabolic time of the system, or entropy “production” does not decrease along a system’s trajectory in its state space (the theorem is analogous to the Boltzmann H-theorem for physical time). Such a relationship between generalized entropy and resources can be considered a heuristic explanation of the origin of the logarithm in the dependence of entropy on the number of transformations: with logarithms, the relationship between entropy and metabolic times becomes a power law rather than an exponential one, which in turn simplifies the formulas that involve both parameterizations of time. Therefore, if the system’s metabolic time is, generally speaking, a multi-component magnitude and level-specific (relating to the hierarchical levels of the system), then entropy time, by “averaging” the metabolic times of the levels, parameterizes the system dynamics and returns the notion of time to its usual universality.

The class of objects that explicates a system as a category can be presented as the system’s state space. An alternative to postulating equations of motion, in theoretical physics, biology, economics and other sciences, is to postulate extremal principles that generate the variability laws of the systems studied. What should be extremal in a system? The category-functorial description gives a “natural” answer to this question, because category theory has a systematic method to compare system states. The possibility of comparing states by the strength of their structure allows one to offer an extremal principle for a system’s variation: from a given state, the system goes into the state having the strongest structure. According to the method, the objective function is the number of transformations admissible by the structure of the system. However, a more usual formulation of the extremal principle can be obtained if we consider a monotonic function of the specific number of admissible transformations, which we defined above as the generalized entropy of the state: the system goes into the state for which the generalized entropy is maximal within the limits set by the available resources. A generalized category-theoretic entropy allows one not to guess or postulate objective functions, but to calculate them strictly from the number of morphisms (transformations) allowed by the system structure.

Let us illustrate this with an example. Consider a very simple system consisting of a discrete space of 8 × 8 cells (like a chess board without the division into black and white fields) and eight identical objects distributed arbitrarily over these 64 cells. These objects can move freely from cell to cell, each realizing two degrees of freedom. The number of degrees of freedom of the system is twice the number of objects, due to the two-dimensionality of our space. We will consider a particular distribution of the eight objects over the 64 cells as a system state, which in this case is equivalent to a “microstate”. Thus, the number of possible states equals the number of ways to choose eight cells out of 64: W8 = 64!/(64−8)!/8! = 4,426,165,368.

Consider now more specific states, in which seven objects have arbitrary positions while the position of the eighth is completely determined by the positions of one, a few, or all of the others. In this case, the number of degrees of freedom is reduced from 16 (eight times two) to 14 (seven times two), and the number of admissible states decreases to the number of ways to choose seven cells out of 64: W7 = 64!/(64−7)!/7! = 621,216,192.

Let us name the set of these states a “macrostate”. Notice that the number of combinations of k elements from n, calculated by the formula

(9) n! / (k! * (n-k)!)

is the cumulative number of “microstates” for the “macrostates” with 16, 14, 12, and so on, degrees of freedom. Therefore, to reveal the number of “microstates” related exclusively to a given “macrostate”, we have to subtract W7 from W8, W6 from W7, etc. These figures make quite clear that our simple model system, left to itself, will inevitably move into a “macrostate” with more degrees of freedom and a larger number of admissible states, i.e., “microstates”. Two obvious conclusions immediately follow from these considerations:

• It is far more probable to find a system in a complex state than in a simple one.

• If a system came to a simple state, the probability that the next state will be simpler is immeasurably less than the probability that the next state will be more complicated.
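The counts behind these conclusions are easy to verify with Python's standard-library binomial coefficient:

```python
from math import comb

# W_k: number of ways to place k freely positioned objects on 64 cells
W8 = comb(64, 8)   # all eight objects free: 16 degrees of freedom
W7 = comb(64, 7)   # one object's position determined by the rest: 14 degrees

# Microstates belonging exclusively to the 16-degrees-of-freedom macrostate,
# obtained by subtracting W7 from W8 as described above
exclusive = W8 - W7

print(W8, W7, exclusive)  # 4426165368 621216192 3804949176
```

The constrained macrostate accounts for roughly one seventh of the states of the free one, and each further constraint shrinks the count again, which is why the system drifts toward macrostates with more degrees of freedom.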

This defines a practically irreversible increase of entropy, information and complexity, leading in turn to the irreversibility of time. For a space of 16 × 16 cells, we could speak only about practical irreversibility, where reversibility is possible although very improbable; but for real molecular systems, where the number of cells is commensurate with Avogadro’s number (6.02 × 10^23), irreversibility becomes practically absolute. This absolute irreversibility leads to the absoluteness of the entropy extremal principle, which, as shown above, can be interpreted in an information or a complexity sense. This extremal principle implies a monotonic increase of state entropy along the trajectory of the system’s variation (the sequence of its states). Thus, the entropy values parameterize the system’s changes. In other words, the system’s entropy time appears. The interval of entropy time (i.e., the increment of entropy) is the logarithm of the factor by which the number of non-equivalent transformations admissible by the structure of the system has changed.

Injective transformations ordering structures by strength are unambiguous nestings (embeddings). In other words, the evolution of systems, according to the extremal principle, flows from sub-objects to objects. In the real world, where the system is limited by resources, the formalism corresponding to the extremal principle is a variational problem on a conditional, rather than global, extremum of the objective function. This type of evolution could be named conservative or causal: the achieved states are not lost (the sub-object “is saved” in the object, just as some mutations of Archean prokaryotes are saved in our genomes), and new states occur not in a vacuum but from their “weaker” (in the sense of ordering by strength of structure) predecessors.

Therefore, the irreversible flow of entropy time determines the “arrow of time” as a monotonic increase of entropy, information, complexity and freedom (as the number of realized degrees of freedom), up to the extremum (maximum) defined by resources in the broadest sense, and especially by the size of the system. On the other hand, the available system resources that define a sequence of states can be considered as resource time, which, together with entropy time, explicates the system’s variability as its internal system time.

We formulated and proved a far more general extremal principle applicable to any dynamic system (i.e., one described by categories with morphisms), including isolated, closed, open, material, informational, semantic, etc., ones (rare exceptions are static systems without morphisms, hence without dynamics, described by sets alone: for example, a perfect crystal in a vacuum, a memory chip with a database backup copy, or any system at a temperature of absolute zero). The extremum of this general principle is a maximum, too, while the extremal function can be regarded as either generalized entropy, generalized information, or algorithmic complexity. Therefore, before formulating the law related to our general extremal principle, it is necessary to determine the extremal function itself.

In summary, our generalized extremal principle is the following: the algorithmic complexity of a dynamical system, whether conservative or dissipative, described by categories with morphisms, monotonically and irreversibly increases, tending to a maximum determined by external conditions. Accordingly, the new law, which is a natural generalization of the second law of thermodynamics for any dynamic system described by categories, can be called the general law of complification:

Any natural process in a dynamic system leads to an irreversible and inevitable increase in its algorithmic complexity, together with an increase in its generalized entropy and information.

Three differences between this new law and the existing laws of nature are:

1) It is asymmetric with respect to time;

2) It is statistical: the chances that a system becomes more complex are larger than the chances that it simplifies over time. These chances for the increase of complexity grow with the size of the system, i.e. the number of elements (objects) in it;

The vast majority of forces considered in physics and other scientific disciplines can be characterized as horizontal or lateral in a hierarchical sense: they act within a particular level of the hierarchy, for instance quantum mechanics at the micro-level, Newton’s laws at the macro-level and relativity theory at the mega-level. The only obvious exception is thermodynamic forces, where the movement of molecules at the micro-level (or at the meso-level, if we consider the quantum-mechanical one as the micro-level) determines the values of such thermodynamic parameters as temperature, entropy, enthalpy, heat capacity, etc., at the macro-level of the hierarchy. One could name these forces bottom-up hierarchical forces. This results in the third difference:

3) Its close connection with hierarchical rather than lateral forces.

Since the time scale at different levels of the hierarchy in the real world varies by orders of magnitude, the structure of time moments (the structure of the present) at the upper level leads to irreversibility at a lower level. On the other hand, reversibility at the lower level, in conditions of low complexity, leads to irreversibility at the top one (Boltzmann’s H-theorem). In both cases, one of the consequences of irreversible complification is the emergence of Eddington’s arrow of time. Thus:

4) The general law of complification, leading to an increase in diversity and, therefore, to the accumulation of material for selection, plays the role of the engine of evolution, while the selection of “viable”, stable variants from all of this diversity is a kind of driver of evolution that determines its specific direction. The role of “breeder” in this selection is played by other, usually less general, laws of nature, which remain unchanged.

External catastrophes include unexpected and powerful impacts of free energy, to which the system is not adapted. Free energy, as an information killer, drastically simplifies the system and throws it back in its development. However, the complexity and information already accumulated by the system are, as a rule, not destroyed completely, and the system, following conservative or causal evolution, continues developing not from scratch, but from some already achieved level.

Internal catastrophes are caused by ineffective links within the system, when complexity becomes excessive for a given level of evolution and leads to the duplication, triplication, and so on, of relations, closing them into loops, nesting some loops within others and, as a result, to the collapse of the system due to the loss of coordination between its elements.