The Way we are Free

‘The Way we are Free’ . David R. Weinbaum (Weaver) . ECCO . VUB . 2017

Abstract: ‘It traces the experience of choice to an epistemic gap inherent in mental processes due to them being based on physically realized computational processes. This gap weakens the grasp of determinism and allows for an effective kind of freedom. A new meaning of freedom is explored and shown to resolve the fundamental riddles of free will, ..’. The train of thought implied by this summary seems to be:

  1. (Physically realized) computational processes underpin mental processes
  2. These computational processes are deterministic
  3. These computational processes are not part of people’s cognitive domain: there is an epistemic gap between them
  4. The epistemic gap between the deterministic computational processes and the cognitive processes weakens the ‘grasp of determinism’ (this must logically imply that the resulting cognitive processes are to some extent based on stochastic processes)
  5. The weakened grasp leads to an ‘effective kind of freedom’ (but what is an effective kind of freedom? Maybe it is not really freedom, but it has the effect of it: a de facto freedom, or the feeling of freedom?)
  6. We can be free in a particular way (and hence the title).

First off: the concept of an epistemic gap resembles the concept of a moral gap. Is it the same concept?

p 3: ‘This gap, it will be argued, allows for a sense of freedom which is not epiphenomenal,..’ (a kind of by-product). The issue is of course ‘a sense of freedom’: it must be something that can be perceived by the beholder. The question is whether this is real freedom or a mere sense of freedom, if there is a difference between the two.

‘The thesis of determinism about actions is that every action is determined by antecedently sufficient causal conditions. For every action the causal conditions of the action in that context are sufficient to produce that action. Thus, where actions are concerned, nothing could happen differently from the way it does in fact happen. The thesis of free will, sometimes called “libertarianism”, states that some actions, at least, are such that antecedent causal conditions of the action are not causally sufficient to produce the action. Granted that the action did occur, and it did occur for a reason, all the same, the agent could have done something else, given the same antecedents of the action’ [Searle 2001]. In other (my, DPB) words: for all deterministic processes the direction of the causality is dictated by the cause and effect relation. But for choices produced from a state of free will other actions (decisions) are possible, because the causes are not sufficient to produce the action. Causes are typically difficult to deal with in a practical sense, because some outcome must be related to its causes, and this can only be done after the outcome has occurred. Usually the causes for that outcome are very difficult to identify, because the relation is an ‘if and only if’. In addition a cause is usually a kind of scatter of processes within some given contour or pattern, one of which must then ’take the blame’ as the cause.

‘There is no question that we have experiences of the sort that I have been calling experiences of the gap; that is, we experience our own normal voluntary actions in such a way that we sense alternative possibilities of actions open to us, and we sense that the psychological antecedents of the action are not sufficient to fix the action. Notice that on this account the problem of free will arises only for consciousness, and it arises only for volitional or active consciousness; it does not arise for perceptual consciousness‘ [Searle 2001]. This means that a choice is made even though the psychological conditions to make ’the perfect choice’ are not satisfied: information is incomplete, or a frivolous choice is made (‘should I order a pop-soda or chocolate milk?’). ‘The gap is a real psychological phenomenon, but if it is a real phenomenon that makes a difference in the world, it must have a neurobiological correlate’ [Searle 2001]. Our options seem to be equal to us and we can make a choice between various options on a just-so basis (‘god-zegene-de-greep’, roughly: a blind pick). Is it therefore not also possible that when people are aware of these limitations they have a greater sense of freedom to make a choice within the parameters known and available to them?

‘It says that psychological processes of rational decision making do not really matter. The entire system is deterministic at the bottom level, and the idea that the top level has an element of freedom is simply a systematic illusion… If hypothesis 1 is true, then every muscle movement as well as every conscious thought, including the conscious experience of the gap, the experience of “free” decision making, is entirely fixed in advance; and the only thing we can say about psychological indeterminism at the higher level is that it gives us a systematic illusion of free will. The thesis is epiphenomenalistic in this respect: there is a feature of our conscious life, rational decision making and trying to carry out the decision, where we experience the gap and we experience the processes as making a causal difference to our behavior, but they do not in fact make any difference. The bodily movements were going to be exactly the same regardless of how these processes occurred‘ [Searle 2001]. The argument above presupposes a connection between determinism and inevitability, although the environment is not mentioned in the quote. This appears to be flawed, because there is no such connection. I have discussed this (ad nauseam) in the essay Free Will Ltd, borrowing amply from Dennett (i.a. Freedom Evolves). The above quote can be summarized as: if local rules are determined then the whole system is determined; its future must be knowable, its behavior unavoidable and its states and effects inevitable. In that scenario our will is not free, our choices are not serious and the mental processes (computation) are a mere byproduct of deterministic processes. However, consider this relevant argument developed by Dennett:

  • In some deterministic worlds avoiders exist that avoid damage
  • And so in some deterministic worlds some things are avoided
  • What is avoided is avoidable or ‘evitable’ (the opposite of inevitable)
  • And so in some deterministic worlds not everything is inevitable
  • And so determinism does not imply inevitability

‘Maybe this is how it will turn out, but if so, the hypothesis seems to me to run against everything we know about evolution. It would have the consequence that the incredibly elaborate, complex, sensitive, and – above all – biologically expensive system of human and animal conscious rational decision making would actually make no difference whatever to the life and survival of the organisms’ [Searle 2001]. But given Dennett’s argument above, the epiphenomenalist hypothesis cannot logically be true, and as a consequence nothing is wasted so far.

‘In the case that t2 > t1, it can be said that a time interval T = t2 - t1 is necessary for the causal circumstance C to develop (possibly through a chain of intermediate effects) into E. .. The time interval T needed for the process of producing E is therefore an integral part of the causal circumstance that necessitates the eventual effect E. .. We would like to think about C as an event or a compound set of events and conditions. The time interval T is neither an event nor a condition‘ [p 9-10]. This argument turns out to be a bit of a sideline, but I defend the position that time is not an autonomous parameter, but a derivative of ‘clicks’ of changes in relations with neighboring systems; this quote covers it perfectly: ‘Time intervals are measured by counting events‘ [p 9]. And this argues exactly the opposite: ‘Only if interval T is somehow filled by other events such as the displacement of the hands of a clock, or the cyclic motions of heavenly bodies, it can be said to exist‘ [p 9], because here time is the leading parameter and events such as the moving of the hand of a clock are the product. This appears to be the world explained upside down (the intentions seem right): ‘If these events are also regularly occurring and countable, T can even be measured by counting these regular events. If no event whatsoever can be observed to occur between t1 and t2, how can one possibly tell that there is a temporal difference between them, that any time has passed at all? T becoming part of C should mean therefore that a nonzero number N of events must occur in the course of E being produced from C’ [p. 9]. My argument is that if a number of events lead to the irreversible state E from C, then apparently time period T has passed. Else, if nothing irreversible takes place, then no time passes, because time is defined by ‘clicks’ occurring, not the other way around. Note that footnote 2 on page 9 explains the concept of a ‘click’ between systems in different words.

The concepts of Effective T and Neutral T refer, respectively, to a state of a system developing from C to E while conditions from outside the system are injected, and to the system developing to E from its own initial conditions alone. Note that this formulation is different from Weaver’s argument because t is not a term. So Weaver arrives at the right conclusion, namely that this chain of events of Effective T leads to a breakdown of the relation between deterministic rules and predictability [p 10], but apparently for the wrong reasons. Note also that Neutral T is sterile because in practical terms it never occurs. This is probably an argument against the use of the argument of Turing completeness with regard to the modeling of organizations as units of computation: in reality a myriad of signals is injected into (and from) a system; not a single algorithm starting from some set of initial conditions, but a rather messy and diffuse environment.

‘Furthermore, though the deterministic relation (of a computational process DPB) is understood as a general lawful relation, in the case of computational processes, the unique instances are the significant ones. Those particular instances, though being generally determined a priori, cannot be known prior to concluding their particular instance of computation. It follows therefore that in the case of computational processes, determinism is in some deep sense unsatisfactory. The knowledge of (C, P) still leaves us in darkness in regards to E during the time interval T while the computation takes place. This interval represents if so an epistemic gap. A gap during which the fact that E is determined by (C, P) does not imply that E is known or can be known, inferred, implied or predicted in the same manner that fire implies the knowledge of smoke even before smoke appears. It can be said if so that within the epistemic gap, E is determined yet actually it is unknown and cannot be known‘ [p 13]. Why is this problematic? The terms are clear, there is no stochastic element, it takes time to compute, but the solution is determined prior to the finalization of the computation. It only breaks down if the input or the rules change during the computation, rendering the outcome incomputable or irrelevant. In other words: if the outcome E can be avoided then E is avoidable and the future of the system is not determined.
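
A minimal sketch of the point above (my own illustration, not code from the paper): a deterministic procedure whose outcome is fully fixed by the initial condition C and the program P, yet which, as far as we know, cannot be shortcut; while it runs we sit inside the epistemic gap, and only at t2 does E become known.

```python
import hashlib

def deliberate(c: bytes, steps: int) -> int:
    """Deterministic 'computation' P applied to initial condition C.

    The outcome is fully fixed by (c, steps), yet there is no known way to
    obtain it other than grinding through every intermediate state: while
    the loop runs we are inside the 'epistemic gap'.
    """
    state = c
    for _ in range(steps):                      # the interval T = t2 - t1
        state = hashlib.sha256(state).digest()  # one deterministic step
    return state[0] & 1                         # the 'choice' E: 0 or 1

# E is determined at t1, the moment C and P are fixed ...
C = b"perceived input signal"
# ... but it is only known at t2, after the computation has actually been run.
print(deliberate(C, 1_000_000))
```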

‘.. , still it is more than plausible that mental states develop in time in correspondence to the computational processes to which they are correlated. In other words, mental processes can be said to be temporally aligned to the neural processes that realize them‘ [p 14]. What does temporally aligned mean? I agree if it means that these processes develop following, or along, the same sequence of events. I do not agree if it means that time (as a driver of change) has the same effect on either of the processes, computational (physical) and mental (psychological): time has no effect.

During gap T the status of E is determined by conditions C and P, but its specifics remain unknown to anyone during T (suppose it is in my brain; then I of all people would be the one to know, and I don’t). And at t2, T having passed, any freedom of choice is in retrospect, E now being known. In the article, t1 and t2 are defined as the begin state and the end state of some computational system. If t1 is defined as the moment when an external signal is perceived by the system and t2 is defined as the moment at which a response is communicated by the system to Self and to outside, then the epistemic gap is ’the moral gap’. This phrase refers to the lapsed time between the perception of an input signal and the communicating of the decision to Self and others. The ‘moral’ comes from the idea that the message was ‘prepared in draft’ and tested against a moral frame of reference before being communicated. The moral gap exists because the human brain needs time to compute and process the input information and formulate an answer. The Self can be seen as the spokesperson, functionally a layer on top of the other functions of the brain, and it takes time to make the computation and formulate its communication to Self and to external entities.

After t1 the situation unfolds as: ‘Within the time interval T between t1 and t2, the status of the resulting mental event or action is unknown because, as explained, it is within the epistemic gap. This is true in spite of the fact that the determining setup (C, P) is already set at time t1 (ftn 5), and therefore it can be said that E is already determined at t1. Before time t2, however, there can be no knowledge whether E or its opposite or any other event in <E> would be the actual outcome of the process‘ [p 17]. E is determined but not known. But Weaver counter-argues: ‘While in the epistemic gap, the person indeed is going through a change, a computation of a deliberative process is taking place. But as the change unfolds, either E or otherwise can still happen at time t2 and in this sense the outcome is yet to be determined (emphasis by the author). The epistemic gap is a sort of a limbo state where the outcome E of the mental process is both determined (generally) and not determined (particularly)‘ [p 17]. The outcome E is determined but unknown to Self and to God; God knows that it is determined, but Self is not aware of this. In this sense it can also be treated as a change of perspective, from the local observer to a distant, more objective observer.

During the epistemic gap another signal can be input into the system and set up for computation. The second computation can interrupt the one running during the gap, or the first one is paused, or they run in parallel. Whatever the case may be, it is possible that E never in fact takes place: although determined by C at t1, it is not E that takes place at t2 but another outcome, namely that of another computation that replaced the initial one. If C, E and P are specific for C and started by it, then origination is an empty phrase, because a little tunnel of information processing is started and nothing interferes with it. If they are not, then new external input is required which specifies a C1, and so (see the first part of this sentence) a new ’tunnel’ is opened.

This I find interesting: ‘Moreover, we can claim that the knowledge brought forth by the person at t2, be it a mental state or an action, is unique and original. This uniqueness and originality are enough to lend substance to the authorship of the person and therefore to the origination at the core of her choice. Also, at least in some sense, the author carrying out the process can be credited or held responsible for the mental state or action E, him being the agent without whom E could not be brought forth‘ [p 18]. The uniqueness of the computational procedure of an individual makes her the author and she can be held responsible for the outcome. Does this hold even if it is presupposed that her thoughts, namely computational processes, are guided by memes? Is her interpretation of the embedded ideas and her computation of the rules sufficiently personal to mark them as ‘hers’?

This is the summary of the definition of freedom argued here: ‘The kind of freedom argued for here is not rooted in .., but rather in the very mundane process of bringing forth the genuine and unique knowledge inherent in E that was not available otherwise. It can be said that in any such act of freedom a person describes and defines herself anew. When making a choice, any choice, a person may become conscious to how the choice defines who he is at the moment it is made. He may become conscious to the fact that the knowledge of the choice irreversibly changed him. Clearly this moment of coming to know one’s choice is indeed a moment of surprise and wonderment, because it could not be known beforehand what this choice might be. If it was, this wouldn’t be a moment of choice at all and one could have looked backward and find when the actual choice had been made. At the very moment of coming to know the choice that was made, reflections such as ‘I could have chosen otherwise’ are not valid anymore. At that very moment the particular instance of freedom within the gap disappears and responsibility begins. This responsibility reflects the manner by which the person was changed by the choice made‘ [pp. 18-9]. The author claims that it is not a reduced kind of freedom, but a full version, because: ‘First, it is coherent and consistent with the wider understanding we have about the world involving the concept of determinism. Second, it is consistent with our experience of freedom while we are in the process of deliberation. Third, we can now argue that our choices are effective in the world and not epiphenomenal. Furthermore, evolution in general and each person’s unique experience and wisdom are critical factors in shaping the mental processes of deliberation‘ [p 19]. Another critique could be that this is a strictly personal experience of freedom, perhaps even in a psychological sense. What about physical and social elements, in other words: how would Zeus think about it?

This is why it is called freedom: ‘Freedom of the will in its classic sense is a confusion arising from our deeply ingrained need for control. The classic problem of free will is the problem of whether or not we are inherently able to control a given life situation. Origination in the classic sense is the ultimate control status. The sense of freedom argued here leaves behind the need for control. The meaning of being free has to do with (consciously observing) the unfolding of who we are while being in the gap, the transition from a state of not knowing into a state of knowing, that is. It can be said that it is not the choice being originated by me but rather it is I, through choice, who is being continuously originated as the person that I am. The meaning of such freedom is not centered around control but rather around the novelty and uniqueness as they arise within each and every choice as one’s truthful expression of being‘ [p 20]. But in this sense there is no control over the situation, and given that the need for control is relinquished, this fact allows one to be free.

‘An interesting result regarding freedom follows: a person’s choice is free if and only if she is the first to produce E. This is why it is not an unfamiliar experience that when we are in contact with persons that are slower than us in reading the situation and computing proper responses, we experience an expansion of our freedom and genuineness, while when we are in contact with persons that are faster than us, we experience that our freedom diminishes.

Freedom can then be understood as a dynamic property closely related to computation means and distribution of information. A person cannot expect to be free in the same manner in different situations. When one’s mental states and actions are often predicted in advance by others who naturally use these predictions while interacting with him, one’s freedom is diminished to the point where no genuine unfolding of his being is possible at all. The person becomes a subject to a priori determined conditions imposed on him. He will probably experience himself being trapped in a situation that does not allow him any genuine expression. He loses the capacity to originate because somebody or something already knows what will happen. In everyday life, what rescues our freedom is that we are all more or less equally competent in predicting each other’s future states and actions. Furthermore, the computational procedures that implement our theories of mind are far from accurate or complete. They are more like an elaborate guess work with some probability of producing accurate predictions. Within such circumstances, freedom is still often viable. But this may soon radically change by the advent of neural and cognitive technologies. In fact it is already in a process of a profound change.

In simple terms, the combination of all these factors will make persons much more predictable to others and will have the effect of overall diminishing the number of instances of operating within an epistemic gap and therefore the conditions favorable to personal freedom. The implications on freedom as described here are that in the future people able to augment their mental processes to enjoy higher computing resources and more access to information will become freer than others who enjoy less computing resources and access to information. Persons who will succeed to keep sensitive information regarding their minute to minute life happenings and their mental states secured and private will be freer than those who are not. A future digital divide will be translated into a divide in freedom‘ [pp 23-6].

I too believe that our free will is limited, but for additional and different reasons, namely the doings of memes. I do believe that Weaver has a point with his argument of the experience of freedom in the gap (which I had come to know as the ‘Moral Gap’) and the consequences it can have for our dealings with AI. My critique there would be that the AI are assumed to be exactly the same as people, but with two exceptions: 1) the explicit argument that they compute much faster than people, and 2) the implicit argument that people experience their unique make-up such that they are confirmed by it in their every computation; this experience represents their freedom. People would then have a unique experience of freedom that an AI can never attain, providing them a ticket to relevance among AI. I am not sure that, if argument 2 is true, argument 1 can also be valid.

I agree with this, also in the sense of the coevalness between individuals and firms. If firms do their homework such that they prepare their interactions with the people involved, then they will come out better prepared. As a result people will feel small and objectified: the firms are capable of computing the outcome before you do, hence predicting your future and limiting your perceived possibilities. However, this is still the result of a personal and subjective experience and not an objective fact, namely that the outcome is as they say, not as you say.

Time and the Other

Fabian, J. . Time and the Other – How Anthropology Makes its Object . Columbia University Press . New York . 1983 . ISBN 0-231-05590-0

Anthropology is the study of humans and their societies in the past and present. Its main subdivisions are social anthropology and cultural anthropology, which describe the workings of societies around the world, linguistic anthropology, which investigates the influence of language in social life, and biological or physical anthropology, which concerns the long-term development of the human organism.

‘Time, much like language or money, is a carrier of significance, a form through which we define the content of relations between the Self and the Other.. Time may give form to relations of power and inequality under the conditions of capitalist industrial production’ [Preface and Acknowledgements p. IX]. This means that time is an aspect that determines the interface between Self and the Other, and so Time influences our view of the Other.

How does our use of the concept of time influence the construction of the object of study of anthropology? The difficulty is in our understanding of ‘we’ as the subject of anthropology, because in that role we, as the subject of history, cannot be presupposed or left implicit, nor should it be allowed to define the Other in an easy way. The contradiction is that the study of anthropology is conducted by engaging intensively with the object of research, but then, based on the knowledge gained in that field research, pronouncing a discourse that construes the Other in terms of spatial and temporal distance.

Ch 1 Time and the Emerging Other

Knowledge is power, and the claim to power of anthropology stems from its roots: the constituting of its own object of study, the Other (originally the object was the savage). All knowledge of the Other also has a historical, therefore a temporal element. In this sense accumulating knowledge involves a political act, ranging from systematic oppression to anarchic mutual recognition.

Universal time was established in the Renaissance and spread during the Enlightenment. The confusion exists because of the multitude of historical facts. Universal history is a device to distinguish different times by comparing the histories of individual countries with it: in this way it is what a general map is to particular maps [Bossuet 1845: 1, 2]. Universal can have the connotation of total (the entire world at all times) and of general (applicable to many instances). Bossuet does not address the first, but the second: how can history be presented in terms of generally valid principles? This can be done if in the ‘sequence of things, la suite des choses’ one can discern the ‘order of times’, and if that order can be abbreviated to allow an instant view. The ‘epoch’ is proposed as a device, a resting place in time from which to consider everything that happened before that point and everything after it.

Travel gave a new impetus to anthropology and to time. Travel is now a vehicle for self-realization, and the documents produced as a result form a new discourse. The new traveler critiqued the existing philosophes: things seen and experienced while traveling are real, not a reality distorted by preconceived ideas.

The objective of the modern navigators is ’to complete the history of man’ [La Pérouse in Moravia 1967:964 f in Fabian p. 8]. The meaning of ‘complete’ can be to bring to self-realisation, and it can also be understood as to fill out (like a form).

The conceived authenticity of a past, found in ‘savage cultures’, is used to denounce an overly acculturated and urbanized present by presenting counterimages to the pristine wholeness of the authentic life. At this point, in the nineteenth century, Time is secularized.

From history to evolution (from secularization of time to evolutionary temporalizing): 1) time is immanent to the world, nature, the universe, 2) relations between parts of the world can be understood as temporal relations. The theory of Darwinian evolution can only be accepted on the condition that the concept of time that is crucial to it is adapted to the one then in force. Only then can this theory be applied to projects with the objective to show evolutionary laws in society. Darwin had based his concept of time on [Charles Lyell’s Principles of Geology 1830] and he cites, in a section in the Origin of Species named ‘On the lapse of Time’: ‘He who can read Sir Charles Lyell’s grand work on the Principles of Geology, which the future historian will recognize as having produced a revolution in the natural sciences, yet does not admit how incomprehensibly vast have been the past periods of time, may at once close this volume’ [1861 third edition:111]. Lyell suggests the theory of Uniformitarianism: ‘All former changes of the organic and physical creation are referable to one uninterrupted succession of physical events, governed by laws now in operation’ [quoted in Peel 1971:293n9 in Fabian p.13]. Geological time endowed these ideas with a plausibility and scope they did not have before; biblical time was not the right kind of time, because it relays significant events from a Christian perspective and is not a neutral time independent of the events it marks. And so it cannot be part of a Cartesian time-space system.

Darwin states that time has no inner necessity or meaning: ‘The mere lapse of time itself doesn’t do anything either for or against natural selection. I state this because it has been erroneously asserted that the element of time is assumed by me to play an all-important part in natural selection, as if all species were necessarily undergoing slow modification from some innate law’ [Darwin 1861:110 f]. Darwin also hinted at the epistemological status of scientific discovery as a sort of developing language or code. The new naturalized time is a way to order the discontinuous and fragmentary record of the natural history of the world. Evolutionists now ‘spatialized’ time: instead of viewing it as a sequence of events, it now becomes a tree of related events.

By claiming to make sense of society in terms of evolutionary stages, Christian Time was replaced with scientific Time. ‘In fact little more had been done than to replace faith in salvation by faith in progress and industry..’ [Fabian 1981 p17]. In this way the epistemology of anthropology became intellectually linked to colonization and imperialism. All societies past, present and future were placed on a stream of Time. This train of thought implies that the Other is studied in terms of the primitive, Primitive being principally a temporal concept, a category, not an object, of Western thought.

The Use of Time

The use of Time in anthropological field research is different from that in the theoretical discourse. The latter uses Time for different purposes:

  • Physical time used as a parameter to describe sociocultural process.
  • Mundane time used for grand-scale periodizing.
  • Typological time, used to measure the intervals between sociocultural events.
  • Intersubjective time: an emphasis on the action-interaction in human communication.

‘As soon as culture is no longer primarily conceived as a set of rules to be enacted by individual members of distinct groups, but as the specific way in which actors create, and produce beliefs, values, and other means of social life, it has to be recognized that Time is a constitutive dimension of social reality’ [Fabian 1981 P 24].

The naturalization of time defines temporal relations as exclusive and expansive: the pagan is marked for salvation, the savage is not yet ready for civilization. What makes the savage significant for evolutionary time is that he lives in another time. All knowledge acquired by the anthropologist is affected by the historically established relations (of power and domination) between his society and the society of the one he studies; and therefore it is political in nature. The risk however is distancing. Moreover: distancing is often seen as objective by practitioners. Intersubjective time would seem to preclude distancing, as the practitioner and the object are coeval (of the same age, duration or epoch, similar to synchronous, simultaneous, contemporary), namely they share the same time. But for human communication to occur, coevalness has to be created: communication is about creating a shared time. And so in human communication that recognizes intersubjectivity, establishing objectivity is connected with creating distance between the participants, or between object and subject in research. This distancing is implied in the distinction between the sender, the message and the receiver. Even if the coding and decoding of the message is taken out, the TRANSFER of it implies a temporal distance between the sender and the receiver. Distancing devices produce a denial of coevalness: ‘By that I mean a persistent and systematic tendency to place the referent(s) of anthropology in a Time other than the present of the producer of the anthropological discourse‘ [Fabian 1981 p. 31].

Coevalness can be denied by Typological time and by Physical time; intersubjective time may pose the problem described above. If coevalness is a condition for communication, and anthropology is based on ethnography, and ethnography is a form of communication, then the anthropologist is not free to choose coevalness for his interlocutors or not: either he submits to the condition of coevalness and produces ethnographic knowledge, or he doesn’t. An anachronism is a fact or statement that is out of place in a certain timeframe: it is a mistake or an accident. The denial of coevalness as a device, and not a mistake, is named allochronism.

Coevalness is present in the field research and not in the theory development and writing. This latter activity is political in the sense that it is rooted in the early existence of the science and so it is connected with colonialism. At this point hardly more than technological advance and economic exploitation seem available as arguments to explain Western superiority (p. 35).

Ch 3: Time and Writing about the Other

‘Even if (an observer) is in communication with other observers, he can only hear what they have seen in their absolute pasts, at times which are also his absolute past. So whether knowledge originates in the experience of a group of people or of a society, it must always be based on what is past and gone, at the moment when it is under consideration‘ [David Bohm in Fabian 1981 p. 71].

In previous chapters it was argued that the temporal conditions experienced in the field differ from those expressed when writing or teaching. Empirical research can only be productive if the researcher and the researched share time. Usually the interpretation of the research occurs at a (temporal) distance, denying coevalness to the object of inquiry. This is a problem if both activities are part of the same discipline: this was not always so (travelogues versus armchair anthropology). It is also a problem if the practice of coevalness, assumed to be a given in field research, indeed contributes to the quality of the research and should not in fact be distanced even in an ideal world.

‘Now historical discourse introduces two new presuppositions in that it, first, replaces the concept of achronicity with that of temporality. At the same time it assumes that the signifier of the text which is in the present has a signified in the past. Then it reifies its signified semantically and takes it for a referent external to the discourse‘ [Greimas 1976:29 in Fabian pp. 77-8]. The referent being a society or a culture of reference; to reify means ‘to render something concrete’.

The Ethnographic Present as a literary convention means giving an account of other societies and cultures in the present tense. Historical accuracy, if the past tense is used in the accounts, is a matter of the ‘critique of the sources’. Also the comparison with the referents is no longer strict, because that would need to be based on past data of the referent as well. Another problem is that the present tense may freeze the picture of the state of affairs as it is found in a culture, which is by nature dynamic; freezing it does not take this into account. Another issue is with the autobiographical style of reporting of field research: this has a partly etymological and partly practical backdrop.

This is an important foundation for intersubjective knowledge: ‘Somehow we must be able to share each other’s past in order to be knowingly in each other’s present‘ [Fabian 1981 p. 92]. In other words: reflexive experience (reflexion, revealing the researcher) is more important than reflective experience (reflection, neutralised for the researcher’s presence, thus eliminating subjectivity), because if the first were unavailable then the information about the object (the individual and his society) would be unidirectional in time and therefore tangential (irrelevant and beside the point), and therefore another symptom of the denial of coevalness. Additionally, reflexion requires the researcher to ’travel back and forth in time’, and so the researched can know the researcher as well as the converse. The same goes for the storing of data.

The method of observation can also be a source of denial of coevalness: the structure of the observations, the planning, the visual aspects deemed relevant, the representation of the visual data, and the indications of speed included in the observations all presuppose a format stemming from one time, projecting itself onto and/or conditioning the observation. These are criteria brought to the observation process by the researcher and they form the basis for the production of knowledge. In addition, some criteria deemed relevant by the researcher are changed or emphasized, while other criteria are left out at his choice.

Conclusions

‘Anthropology emerged and established itself as an allochronic discourse; it is a science of other men in another Time. It is a discourse whose referent has been removed from the present of the speaking/writing subject. This ‘petrified relation’ is a scandal. Anthropology’s Other is, ultimately, other people who are our contemporaries‘ [Fabian 1985 p. 143].

The Western countries needed Time to accommodate the schemes of a one-way history: progress, development, modernity and their negative mirror images: stagnation, underdevelopment and tradition. The fiction is that interpersonal, intergroup and international time is ‘public time’, there for the taking by anyone interested and as a consequence allotted by the powers that be. The notion of ‘public time’ provided a notion of simultaneity that is natural and independent of ideology and individual consciousness. And as a result coevalness is no longer required.

‘As soon as it was realized that fieldwork is a form of communicative interaction with an Other, one that must be carried out coevally, on the basis of shared intersubjective Time and intersocietal contemporaneity, a contradiction had to appear between research and writing, because anthropological writing had become suffused with the strategies and devices of an allochronic discourse‘ [Fabian 1985 p. 148].

‘they (the sign-theories of culture DPB) have a tendency to reinforce the basic premises of an allochronic discourse in that they consistently align the Here and Now of the signifier (the form, the structure, the meaning) with the Knower, and the There and Then of the signified (the content, the function or event, the symbol or icon) with the Known‘ [Fabian 1985 p. 151].

‘It is expressive of a political cosmology, that is, a kind of myth. Like other myths, allochronism has the tendency to establish a total grip on our (the anthropologists DPB) discourse. It must therefore be met by a ’total’ response, which is not to say that the critical work be accomplished in one fell swoop‘ [Fabian 1985 p 152].

‘The ideal of coevalness must of course also guide the critique of the many forms in which coevalness is denied in anthropological discourse‘ [Fabian 1985 p. 152].

‘Evolutionism established anthropological discourse as allochronic, but was also an attempt to overcome a paralyzing disjunction between the science of nature and the science of man‘ [Fabian 1985 p 153].

‘That which is past enters the dialectics of the present, if it is granted coevalness’ [Fabian 1985 p. 153].

‘The absence of the Other from our Time has been his mode of presence in our discourse – as an object and victim‘ [Fabian 1985 p. 154].

‘Is not the theory of coevalness which is implied (but by no means fully developed) in these arguments a program for ultimate temporal absorption of the Other, just the kind of theory needed to make sense of present history as a ‘world system’, totally dominated by monopoly- and state-capitalism?’ [Fabian 1985 p. 154].

‘Are there, finally, criteria by which to distinguish denial of coevalness as a condition of domination from refusal of coevalness as an act of liberation?‘ [Fabian 1985 p. 154].

‘What are opposed, in conflict in fact, locked in antagonistic struggle, are not the same societies at different stages of development, but different societies facing each other at the same Time‘ [Fabian 1985 p 155].

Point of departure for a theory of coevalness: 1) recuperation of the idea of totality (‘.. we can make sense of another society only to the extent that we grasp it as a whole, an organism, a configuration, a system’ [Fabian 1985 p 156]). This is flawed because a) system rules are imposed from outside and above, and because culture is now a system, a theory of praxis (the process by which a theory, lesson, or skill is enacted, practiced, embodied, or realized) is not provided, and b) if a theory of praxis is not conceived, then anthropology cannot be perceived as an activity that is part of what is studied.

.. the primitive assumption, the root metaphor of knowledge remains that of a difference, and a distance, between thing and image, reality and representation. Inevitably, this establishes and reinforces models of cognition stressing difference and distance between a beholder and an object‘ [Fabian 1985 p 160].

‘A first and fundamental assumption of a materialist theory of knowledge, .. , is to make consciousness, individual and collective, the starting point. Not disembodied consciousness, however, but ‘consciousness with a body’, inextricably bound up with language. A fundamental role for language must be postulated.. Rather, the only way to think of consciousness without separating it from the organism or banning it to some ‘forum internum’ is to insist on its sensuous nature; .. to tie consciousness as an activity to the production of meaningful sound. Inasmuch as the production of meaningful sound involves the transforming, shaping of matter, it may still be possible to distinguish form and content, but the relationship between the two will then be constitutive of consciousness. Only in a secondary, derived sense (one in which the conscious organism is presupposed rather than accounted for) can that relationship be called representational (significative, symbolic), or informative in the sense of being a tool or carrier of information’ [Fabian 1985 p 161].

‘it is wrong to think of the human use of language as characteristically informative, in fact or in intention. Human language can be used to inform or to mislead, to clarify one’s own thoughts or to display one’s cleverness, or simply for play. If I speak with no concern for modifying your behavior or thoughts, I am not using language any less than if I say exactly the same things with such intention. If we hope to understand human language and the psychological capacities on which it rests, we must first ask what it is, not how or for what purpose it is used‘ [Chomsky 1972 p 70 in Fabian p 162]. Chomsky, N. . Language and Mind – Enlarged Edition . New York: Harcourt Brace Jovanovich . 1972

‘Man does not ‘need’ language; man, in the dialectical, transitive understanding of ’to be’, is language (much like he does not need food, shelter, and so on, but is his food and house). Consciousness, realized by the (producing) meaningful sound, is self-conscious. The Self, however, is constituted fully as a speaking and hearing Self. Awareness, if we may thus designate the first stirrings of knowledge beyond the registering of tactile impressions, is fundamentally based on hearing meaningful sounds produced by Self and Others. .. Not solitary perception but social communication is the starting point for a materialist anthropology, provided that we keep in mind that man does not ‘need’ language as a means of communication, or by extension, society as a means of survival, Man is communication and survival. What saves these assumptions from evaporating in the clouds of speculative metaphysics is, I repeat, a dialectical understanding of the verb ’to be’ in these propositions. Language is not predicated on man (nor is the ‘human mind’ or ‘culture’). Language produces man as man produces language. Production is the pivotal concept of materialist anthropology‘ [Fabian 1985 p162].

‘The element of thought itself – the element of thought’s living expression – language – is of a sensuous nature. The social reality of nature, and human natural science, or the natural science about man, are identical terms‘ [Marx 1953:245 f, translation from The Economic and Philosophic Manuscripts of 1844 1964:143 in Fabian 1985 p 163]. [Marx, K. . Die Frühschriften . Siegfried Landshut, ed Stuttgart: A. Kröner – 1964. The Economic and Philosophic Manuscripts of 1844 . Dirk Struik . ed. New York : International] and [Marx, K. and Engels, F. . Marx and Engels: Basic Writings on Politics and Philosophy . Feuer, L. S. . ed. Garden City. New York: Doubleday]

‘Concepts are products of sensuous interaction; they themselves are of a sensuous nature inasmuch as their formation and use is inextricably bound up with language… it is the sensuous nature .. that makes language an eminently temporal phenomenon. Its materiality is based on articulation, on frequencies, pitch, tempo, all of which are realized in the dimension of time… The temporality of speaking .. implies cotemporality of producer and product, speaker and listener, Self and Other’ [Fabian 1985 p. 163-4].

A New Kind of Science

Wolfram concludes that ’the phenomenon of complexity is quite universal – and quite independent of the details of particular systems’. This complex behaviour does not depend on system features such as the way cellular automata are typically arranged in a rigid array, or the fact that they are processed in parallel. Very simple rules of cellular automata generally lead to repetitive behaviour, slightly more complex rules may lead to nested behaviour, and even more complex rules may lead to complex behaviour of the system. Complexity with regard to the underlying rules means that the rules are intricate or that their assembly or make-up is complicated. Complexity with regard to the behaviour of the overall system means that little or no regularity is observed.

The surprise is that the threshold for the level of complexity of the underlying rules to generate overall system complexity is relatively low. Conversely, above the threshold, there is no requirement for the rules to become more complex for the overall behaviour of the system to become more complex.

And vice versa: even the most complex of rules are capable of producing simple behaviour. Moreover: the kinds of behaviour at a system level are similar for various kinds of underlying rules. They can be categorized as repetitive, nested, random and ‘including localized structures’. This implies that general principles exist that produce the behaviour of a wide range of systems, regardless of the details of the underlying rules. And so, without knowing every detail of the observed system, we can make fundamental statements about its overall behaviour. Another consequence is that in order to study complex behaviour there is no need to design vastly complicated computer programs to generate interesting behaviour: the simple programs will do [Wolfram, 2002, pp. 105 – 113].
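
To make this concrete, here is a minimal sketch (in Python, my own rendering rather than Wolfram's code) of the kind of experiment described above: a one-dimensional, two-colour cellular automaton whose rule fits in a single byte, run from a single black cell. Rule 30 produces largely random-looking behaviour and rule 90 a nested pattern; the width and number of steps are arbitrary choices.

```python
def run_ca(rule: int, width: int = 81, steps: int = 40) -> None:
    """Evolve a one-dimensional, two-colour cellular automaton.

    Each new cell depends only on its own previous value and those of its two
    immediate neighbours; 'rule' encodes the 8 possible outcomes as one byte.
    """
    table = [(rule >> i) & 1 for i in range(8)]   # rule number -> lookup table
    row = [0] * width
    row[width // 2] = 1                           # a single black cell to start
    for _ in range(steps):
        print("".join("#" if cell else "." for cell in row))
        row = [table[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
               for i in range(width)]

run_ca(30)   # very simple rule, seemingly random behaviour
run_ca(90)   # equally simple rule, nested (self-similar) behaviour
```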

Numbers
Systems used in textbooks for complete analysis may have a limited capacity to generate complex behaviour because, given the difficulty of making a complete analysis, they are specifically chosen for their amenability to complete analysis and are hence of a simple kind. If we ignore the need for analysis and look only at the results of computer experiments, even simple ‘number programs’ can lead to complex results.

One difference is that in traditional mathematics, numbers are usually seen as elementary objects, the most important attribute of which is their size. Not so for computers: numbers must be represented explicitly (in their entirety) for any computer to be able to work with them. This means that a computer uses numbers as we do: by reading them or writing them down fully as a sequence of digits. Whereas we humans do this in base 10 (0 to 9), computers typically use base 2 (0 and 1). Operations on these sequences have the effect that the sequences of digits are updated and change shape. In traditional mathematics this effect is disregarded: the effect of an operation on the digit sequence is considered trivial. Yet this effect, among others, is by itself capable of introducing complexity. Even when only the size is represented as a base-2 digit sequence, executing a simple operation such as multiplication by a fraction or even by a whole number can produce complex behaviour.
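
A small sketch of this kind of number experiment (repeated multiplication by 3/2 is one of the cases Wolfram shows; the digit-printing helper below is my own): the size of the number grows in a completely regular way, but its base-2 digit sequence quickly looks complicated.

```python
from fractions import Fraction

def binary_digits(x: Fraction, frac_digits: int = 12) -> str:
    """Base-2 digit sequence of a non-negative rational: integer part, then fraction."""
    whole, rest = divmod(x, 1)
    digits = format(int(whole), "b") + "."
    for _ in range(frac_digits):
        rest *= 2
        bit, rest = divmod(rest, 1)
        digits += str(int(bit))
    return digits

n = Fraction(1)
for _ in range(15):
    print(binary_digits(n))
    n *= Fraction(3, 2)   # the operation itself is as simple as it gets
```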

Indeed, in the end, despite some confusing suggestions from traditional mathematics, we will discover that the general behavior of systems based on numbers is very similar to the general behavior of simple programs that we have already discussed‘ [Wolfram, 2002, p 117].

The underlying rules for systems like cellular automata are usually different from those for systems based on numbers. The main reason for that is that rules for cellular automata are always local: the new colour of any particular cell depends only on the previous colour of that cell and its immediate neighbours. But in systems based on numbers there is usually no such locality. Despite the absence of locality in the underlying rules of systems based on numbers, it is possible to find the localized structures also seen in cellular automata.

When using recursive functions of a form such as f(n) = f(n - f(n - 1)), subtraction and addition alone are sufficient for the development of small programs based on numbers that generate behaviour of great complexity.
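
The exact recursive rules Wolfram tabulates are not reproduced here; as a stand-in from the same family, the sketch below uses Hofstadter's Q sequence, Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)), which likewise uses nothing but addition, subtraction and self-reference, yet fluctuates erratically.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def q(n: int) -> int:
    """Hofstadter's Q sequence: Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)), Q(1) = Q(2) = 1."""
    if n <= 2:
        return 1
    return q(n - q(n - 1)) + q(n - q(n - 2))

# Building the values in increasing order keeps every recursive call cached.
values = [q(n) for n in range(1, 201)]
print(values[:40])   # erratic fluctuations around n / 2
```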

And almost by definition, numbers that can be obtained by simple mathematical operations will correspond to simple such (symbolic DPB) expressions. But the problem is that there is no telling how difficult it may be to compute the actual value of a number from the symbolic expression that is used to represent it‘ [Wolfram, 2002, p143].

Adding more dimensions to a cellular automaton or a turing machine does not necessarily mean that the complexity increases.

But the crucial point that I will discuss more in Chapter 7 is that the presence of sensitive dependence on initial conditions in systems like (a) and (b) in no way implies that it is what is responsible for the randomness and complexity we see in these systems. And indeed, what looking at the shift map in terms of digit sequences shows us is that this phenomenon on its own can make no contribution at all to what we can reasonably consider the ultimate production of randomness‘ [Wolfram, 2002, p. 155].

Multiway Systems
The design of this class of systems is such that the systems can have multiple states at any one step. The states at some step generate the states at the next step according to the underlying rules. All states thus generated remain in place after they have been generated. Most Multiway systems grow very fast or not at all; slow growth is as rare as randomness. The usual behaviour is that repetition occurs, even if it is after a large number of seemingly random states. The threshold seems to be in the rate of growth: if the system is allowed to grow faster, then the chance that it will show complex behaviour increases. In the process, however, it generates so many states that it becomes difficult to handle [Wolfram 2002, pp. 204 – 209].
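
A toy multiway system, sketched with string-rewrite rules of my own choosing: at each step every applicable rule is applied at every possible position in every state, all resulting strings are kept, and the set of states grows quickly.

```python
def multiway_step(states: set, rules: list) -> set:
    """Apply every rewrite rule at every matching position of every state."""
    new_states = set(states)                      # states, once generated, remain
    for s in states:
        for lhs, rhs in rules:
            pos = s.find(lhs)
            while pos != -1:
                new_states.add(s[:pos] + rhs + s[pos + len(lhs):])
                pos = s.find(lhs, pos + 1)
    return new_states

states = {"AB"}
rules = [("A", "AB"), ("B", "A")]                 # arbitrary illustrative rules
for step in range(6):
    print(step, len(states), sorted(states)[:4])  # the number of states grows fast
    states = multiway_step(states, rules)
```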

Chapter 6: Starting from Randomness
If systems are started with random initial conditions (up to this point they started with very simple initial conditions such as one black or one white cell), they manage to exhibit repetitive, nested as well as complex behaviour. They are capable of generating a pattern that is partially random and partially locally structured. The point is that the initial conditions may be in part, but not alone, responsible for the existence of complex behaviour of the system [Wolfram 2002, pp. 223 – 230].
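
The same kind of cellular automaton as in the earlier sketch, but now started from a random initial row rather than a single black cell (the rule and sizes are again my own choices); rule 110, for instance, settles into localized structures moving on a regular background.

```python
import random

def run_ca_random(rule: int, width: int = 80, steps: int = 30, seed: int = 1) -> None:
    """Run a one-dimensional cellular automaton from a random initial condition."""
    random.seed(seed)                                   # reproducible 'randomness'
    table = [(rule >> i) & 1 for i in range(8)]
    row = [random.randint(0, 1) for _ in range(width)]  # random initial row
    for _ in range(steps):
        print("".join("#" if cell else "." for cell in row))
        row = [table[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
               for i in range(width)]

run_ca_random(110)   # localized structures against a repetitive background
```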

Class 1 – the behaviour is very simple and almost all initial conditions lead to exactly the same uniform final state

Class 2 – there are many different possible final states, but all of them consist just of a certain set of simple structures that either remain the same forever or repeat every few steps

Class 3 – the behaviour is more complicated, and seems in many respects random, although triangles and other small-scale structures are essentially always on some level seen

Class 4 – this class of systems involves a mixture of order and randomness: localized structures are produced which on their own are fairly simple, but these structures move around and interact with each other in very complicated ways.

‘There is no way of telling into which class a cellular automaton falls by studying its rules. What is needed is to run them and visually ascertain which class it belongs to’ [Wolfram 2002, Chapter 6, pp.235].

One-dimensional cellular automata of Class 4 are often on the boundary between Class 2 and Class 3, but settle in neither of them. There seems to be some kind of transition. They do have characteristics of their own, notably localized structures, that belong to neither Class 2 nor Class 3 behaviour. This behaviour including localized structures can occur in ordinary discrete cellular automata, in continuous cellular automata, and in two-dimensional cellular automata.

Sensitivity to Initial Conditions and Handling of Information
Class 1 – changes always die out. Information about a change is always quickly forgotten

Class 2 – changes may persist, but they remain localized, contained in a part of the system. Some information about the change is retained in the final configuration, but it remains local and is therefore not communicated throughout the system

Class 3 – changes spread at a uniform rate throughout the entire system. Change is communicated long-range given that local structures travelling around the system are affected by the change

Class 4 – changes spread sporadically, affecting other cells locally. These systems are capable of communicating long-range, but this happens only when localized structures are affected [Wolfram 2002, p. 252].
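
These kinds of information handling can be made visible directly: run the same rule twice from initial conditions that differ in a single cell and print only the cells where the two runs disagree. The rules below are my own picks as typical representatives of the classes named in the comments.

```python
import random

def difference_pattern(rule: int, width: int = 80, steps: int = 30, seed: int = 7) -> None:
    """Flip one cell of a random initial row and watch how the change propagates."""
    table = [(rule >> i) & 1 for i in range(8)]

    def step(row):
        return [table[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
                for i in range(width)]

    random.seed(seed)
    a = [random.randint(0, 1) for _ in range(width)]
    b = list(a)
    b[width // 2] ^= 1                              # a single-cell change
    for _ in range(steps):
        print("".join("X" if x != y else "." for x, y in zip(a, b)))
        a, b = step(a), step(b)

difference_pattern(4)     # Class 2: the change stays localized
difference_pattern(30)    # Class 3: the change spreads through the whole system
difference_pattern(110)   # Class 4: the change spreads sporadically, via structures
```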

In Class 2 systems, the logical connection between their eventually repetitive behaviour and the fact that no long-range communication takes place is that the absence of long-range communication forces the system to behave as if its size were limited. This behaviour follows from the general result that any system of limited size, with discrete steps and definite rules, will eventually repeat itself.
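
That 'limited size implies eventual repetition' result is just the pigeonhole principle: a deterministic system with finitely many possible states must revisit one of them and from then on cycle. A small sketch that measures this for a narrow cellular automaton (the rule and width are arbitrary choices of mine):

```python
def transient_and_cycle(rule: int, width: int = 12) -> tuple:
    """Return (step of first repeat, cycle length) for a width-limited cellular automaton."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = tuple([0] * (width - 1) + [1])          # one black cell, periodic boundary
    seen = {}                                     # state -> step at which it first occurred
    step = 0
    while row not in seen:                        # at most 2**width distinct states exist
        seen[row] = step
        row = tuple(table[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
                    for i in range(width))
        step += 1
    return step, step - seen[row]

print(transient_and_cycle(30))   # even the 'random-looking' rule 30 must eventually repeat
print(transient_and_cycle(90))
```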

In Class 3 systems the possible sources of randomness are the randomness present in the initial conditions (in the case of a cellular automaton the initial cells are chosen at random, versus one single black or white cell for simple initial conditions) and the sensitive dependence on initial conditions of the process. Random behaviour in a Class 3 system can occur even if there is no randomness in its initial conditions. There is no a priori difference between the behaviour of most systems generated on the basis of random initial conditions and that based on simple initial conditions (ftn 1). The dependence of the patterns arising in the overall behaviour of the system on the initial conditions is limited, in the sense that although the produced randomness is evident in many cases, the exact shape can differ with the initial conditions. This is a form of stability, for, whatever the initial conditions the system has to deal with, it always produces similar recognizable random behaviour as a result.

In Class 4 there must be some structures that can persist forever. If a system is capable of showing a sufficiently complicated structure, then eventually, for some initial condition, a moving structure is found as well. Moving structures are inevitable in Class 4 systems. It is a general feature of Class 4 cellular automata that with appropriate initial conditions they can mimic the behaviour of all sorts of other systems. The behaviour of Class 4 cellular automata can be diverse and complex even though their underlying rules are very simple (compared to other cellular automata). The way that different structures existing in Class 4 systems interact is difficult to predict. The behaviour resulting from the interaction is vastly more complex than the behaviour of the individual structures, and the effects of the interactions may take a long time (many steps) after the collision to become clear.

It is common to be able to design special initial conditions so that some cellular automaton behaves like another. The trick is that the special initial conditions must then be designed so that the behaviour of the cellular automaton emulated is contained within the overall behaviour of the other cellular automaton.

Attractors
The behaviour of a cellular automaton depends on the specified initial conditions. The behaviour of the system, the sequences shown, gets progressively more restricted as the system develops. The resulting end-state or final configuration can be thought of as an attractor for that cellular automaton. Usually many different but related initial conditions lead to the same end-state: the basin of attraction leads to an attractor, visible to the observer as the final configuration of the system.
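As an illustration of an attractor in this sense, the sketch below (my own construction, on a small periodic lattice) runs a cellular automaton until a configuration repeats; the cycle of configurations that then recurs forever is the attractor reached from that initial condition.

import random

# Run a small, finite cellular automaton until a configuration repeats;
# the repeating cycle of configurations is the attractor for that initial condition.
def step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def find_attractor(rule, cells):
    seen, history = {}, []
    while tuple(cells) not in seen:
        seen[tuple(cells)] = len(history)
        history.append(cells)
        cells = step(cells, rule)
    return history[seen[tuple(cells)]:]        # the configurations that repeat forever

if __name__ == "__main__":
    random.seed(2)
    cells = [random.randint(0, 1) for _ in range(12)]
    cycle = find_attractor(90, cells)          # rule 90 on a 12-cell periodic lattice
    print("attractor cycle length:", len(cycle))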

Chapter 7 Mechanisms in Programs and Nature
Processes happening in nature are complicated. Simple programs are capable of producing this complicated behaviour. To what extent is the behaviour of simple programs such as cellular automata relevant to phenomena observed in nature? ‘It (the visual similarity of the behaviour of cellular automata and natural processes being, DPB) is not, I believe, any kind of coincidence, or trick of perception. And instead what I suspect is that it reflects a deep correspondence between simple programs and systems in nature‘ [Wolfram 2002, p 298].

Striking similarities exist between the behaviours of many different processes in nature. This suggests a kind of universality in the types of behaviour of these processes, regardless of the underlying rules. Wolfram suggests that this universality of behaviour encompasses both natural systems’ behaviour and that of cellular automata. If that is the case, studying the behaviour of cellular automata can give insight into the behaviour of processes occurring in nature. ‘For it (the observed similarity in systems behaviour, DPB) suggests that the basic mechanisms responsible for phenomena that we see in nature are somehow the same as those responsible for phenomena that we see in simple programs‘ [Wolfram 2002, p 298].

A feature of the behaviour of many processes in nature is randomness. Three sources of randomness in simple programs such as cellular automata exist:
the environment – randomness is injected into the system from outside from the interactions of the system with the environment.
initial conditions – the initial conditions are a source of randomness from outside. The randomness in the system’s behaviour is a transcription of the randomness in the initial conditions. Once the system evolves, no new randomness is introduced from interactions with the environment. The system’s behaviour can be no more random than the randomness of the initial conditions. In practice it is often unrealistic to isolate a system from any outside interaction, so the importance of this category is often limited.
intrinsic generation – simple programs often show random behaviour even though no randomness is injected from interactions with outside entities. Assuming that systems in nature behave like the simple programs, it is reasonable to assume that the intrinsic generation of randomness occurs in nature also. How random is this internally generated randomness really? Based on tests using existing measures for randomness, these sequences are at least as random as any process seen in nature. They are not random by a much-used definition that classifies behaviour as random only if it can never be generated by a simple procedure such as the simple programs at hand, but this is a conceptual and not a practical definition. A limit to the randomness of numbers generated with a simple program is that the sequence is bound to repeat itself if it exists in a limited space. Another limit is the set of initial conditions: because the program is deterministic, running a rule twice on the same initial conditions will generate the same sequence and hence the same random number. Lastly, truncating the generated number will limit its randomness. The clearest sign of intrinsic randomness is its repeatability: in the generated graphs, areas will evolve with similar patterns. This is not possible when starting from different initial conditions or when external randomness is injected through interaction. The existence of intrinsic randomness allows a discrete system to behave in seemingly continuous ways, because the randomness at a local level averages out the differences in behaviour of individual simple programs or system elements. Continuous systems are capable of showing discrete behaviour and vice versa.
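A common illustration of intrinsic randomness, and the one Wolfram himself uses, is the centre column of rule 30 started from a single black cell. The sketch below is a crude, finite-width version of that idea (my own code), so the caveats just mentioned about limited space and determinism apply in full.

# Crude bit generator: the centre column of rule 30, started from one black cell.
# Finite width and periodic boundaries mean the sequence must eventually repeat,
# as noted above; this is an illustration, not a serious random number generator.
def step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def rule30_bits(n_bits, width=257):
    cells = [0] * width
    cells[width // 2] = 1
    bits = []
    for _ in range(n_bits):
        bits.append(cells[width // 2])
        cells = step(cells, 30)
    return bits

if __name__ == "__main__":
    print("".join(str(b) for b in rule30_bits(64)))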

Constraints
‘But despite this (capability of constraints to force complex behaviour, DPB) my strong suspicion is that of all of the examples we see in nature almost none can in the end best be explained in terms of constraints‘ [Wolfram 2002, p 342]. Constraints are a way of making a system behave as the observer wants it to behave. To find out which constraints are required to deliver the desired behaviour of a system in nature is in practical terms far too difficult. The reason for that difficulty is that the number of configurations in any space soon becomes very large, and it seems impossible for systems in nature to work out which configuration satisfies the constraints at hand, especially if this procedure needs to be performed routinely. Even if possible, the procedure to find the rule that actually satisfies the constraint is so cumbersome and computationally intensive that it seems unlikely that nature uses it to evolve its processes. As a consequence nature seems to work not with constraints but with explicit rules to evolve its processes.

Implications for everyday systems
Intuitively, from the perspective of traditional science, the more complex the system, the more complex its behaviour. It has turned out that this is not the case: simple programs are quite capable of generating complicated behaviour. In general the explicit (mechanistic) models show behaviour that conforms to the behaviour of the corresponding systems in nature, but often diverges in the details.
The traditional way to use a model to make predictions about the behaviour of an observed system is to input a few numbers from the observed system into the model and then to try to predict the system’s behaviour from the outputs of the model. When the observed behaviour is complex (for example if it exhibits random behaviour) this approach is not feasible.
If the model is represented by a number of abstract equations, then it is unlikely (nor is it intended) that the equations describe the mechanics of the system; they only describe its behaviour in whatever way works to make a prediction about its future behaviour. This usually implies disregarding the details and taking into account only the important factors driving the behaviour of the system.
Using simple programs, there is also no direct relation between the behaviour of the elements of the studied system and the mechanics of the program. ‘.. all any model is supposed to do – whether it is a cellular automaton, a differential equation or anything else – is to provide an abstract representation of effects that are important in determining the behaviour of a system. And below the level of these effects there is no reason that the model should actually operate like the system itself‘ [Wolfram 2002, p 366].
The approach in the case of the cellular automata is then to visually compare (compare the pictures of) the outcomes of the model with the behaviour of the system and to try to draw conclusions about similarities in the behaviour of the observed system and the created system.

Biological Systems
Genetic material can be seen as the programming of a life form. Its lines contain rules that determine the morphology of a creature via the process of morphogenesis. Traditional Darwinism suggests that the morphology of a creature determines its fitness. Its fitness in turn determines its chances of survival and thus the survival of its genes: the more individuals of the species survive, the bigger its representation in the gene pool. In this evolutionary process the occurrence of mutations adds some randomness, so that the species continuously searches the genetic space of solutions for the combination of genes with the highest fitness.
The problem of maximizing fitness is essentially the same as the problem of satisfying constraints..‘ [Wolfram 2002, p386]. Sufficiently simple constraints can be satisfied by iterative random searches and converge to some solution, but if the constraints are complicated then this is no longer the case.
Biological systems have some tricks to speed up this process, like sexual reproduction to mix up the genetic material of offspring on a large scale, and genetic differentiation to allow for localized updating of genetic information for separate organs.
Wolfram however considers it ‘implausible that the trillions or so of generations of organisms since the beginning of life on earth would be sufficient to allow optimal solutions to be found to constraints of any significant complexity‘ [Wolfram 2002, p 386]. To add insult to injury, the design of many existing organisms is far from optimal and is better described as a make-do, easy and cheap solution that will hopefully not immediately be fatal to its inhabitant.
In that sense not every feature of every creature points at some advantage for the fitness of the creature: many features are hold-overs from elements evolved at some earlier stage. Many features are the way they are because they are fairly straightforward to make based on simple programs, and they are just good enough for the species to survive, not more and not less. It is not the details filled in afterwards but the relatively coarse features that support the survival of the species.
In a short program there is little room for frills: almost any mutation in the program will tend to have an immediate effect on at least some details of the phenotype. If, as a mental exercise, biological evolution is modeled as a sequence of cellular automata, using each others output sequentially as input, then it is easy to see that the final behaviour of the morphogenesis is quite complex.
It is, however, not required that the program be very long or complicated to generate complexity. A short program with some essential mutations suffices. Why, then, is there not vastly more complexity in biological systems, given that it is so easy to come by, and why are the forms and patterns usually seen in biological systems fairly simple? ‘My guess is that in essence it (the propensity to exhibit mainly simple patterns DPB) reflects limitations associated with the process of natural selection .. I suspect that in the end natural selection can only operate in a meaningful way on systems or parts of systems whose behaviour is in some sense quite simple‘ [Wolfram 2002, pp. 391 – 92]. The reasons are:
when behaviour is complex, the number of actual configurations quickly becomes too large to explore
when the layout of different individuals in a species becomes very different, the details may carry a different weight in their survival skills; if the variety of detail becomes large, then acting consistently and definitively becomes increasingly difficult
when the overall behaviour of a system is more complex than any of its subsystems, then any change will entail a large number of changes to all the subsystems, each with a different effect on the behaviour of the individual systems, and natural selection has no way to pick the relevant changes
if changes occur in many directions, it becomes very difficult for changes to cancel out or to find one direction, and thus for natural selection to understand what to act on
iterative random searches tend to be slow and make very little progress towards a global optimum.

If a feature is to be successfully optimized for different environments then it must be simple. While it has been claimed that natural selection increases the complexity of organisms, Wolfram suggests that it reduces complexity: ..’it tends to make biological systems avoid complexity, and be more like systems in engineering‘ [Wolfram 2002, p 393]. The difference is that in engineering systems are designed and developed in a goal-oriented way, whereas in evolution it is done by an iterative random search process.

There is evidence from the fossil record that evolution brings smooth change and relative simplicity of features in biological systems. If this evolutionary process points at simple features and smooth changes, then where does the diversity come from? It turns out that a change in the rate of growth changes the shape of the organism dramatically, as well as its mechanical operation.

Fundamental Physics
‘My approach in investigating issues like the Second Law is in effect to use simple programs as metaphors for physical systems. But can such programs in fact be more than that? And for example is it conceivable that at some level physical systems actually operate directly according to the rules of a simple program?‘ [Wolfram 2002, p. 434].

Out of 256 rules for cellular automata based on two colours and nearest neighbour interaction, only six exhibit reversible behaviour. This means that overall behaviour can be reversed if the rules of each automaton are played backwards. Their behaviour, however, is not very interesting. Out of 7,500 billion rules based on three colours and next-neighbour interaction, around 1,800 exhibit reversible behaviour of which a handful shows interesting behaviour.

The rules can be designed to show reversible behaviour if their pictured behaviour can be mirrored vertically (the graphs generated are usually from top to bottom, DPB): the future then looks the same as the past. It turns out that the pivotal design feature of reversible rules is that existing rules can be adapted to add dependence on the states of cells two steps back. Note that this reversibility of rules can also be constructed by using the preceding step only if, instead of two states, four are allowed. The overall behaviour shown by these rules is reversible, whether the initial conditions are random or simple. It is shown that a small fraction of the reversible rules exhibit complex behaviour for simple and random initial conditions alike.
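A minimal sketch of how such a reversible, second-order rule can be set up (my reading of the construction: the ordinary rule is applied to the current neighbourhood and the cell's own value from two steps back is added modulo 2, which makes the update invertible by construction):

# Second-order ("reversible") cellular automaton: the next value of a cell is the
# ordinary rule applied to the current neighbourhood, XOR-ed with the cell's own
# value two steps back. Given two consecutive rows, the evolution can be run backwards.
def step_reversible(prev, curr, rule):
    n = len(curr)
    nxt = []
    for i in range(n):
        idx = curr[(i - 1) % n] * 4 + curr[i] * 2 + curr[(i + 1) % n]
        nxt.append(((rule >> idx) & 1) ^ prev[i])
    return nxt

if __name__ == "__main__":
    width = 63
    prev = [0] * width
    curr = [0] * width
    curr[width // 2] = 1                       # simple initial conditions (two rows)
    for _ in range(25):
        prev, curr = curr, step_reversible(prev, curr, 37)   # a "rule 37R"-style run
        print("".join("#" if c else "." for c in curr))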

Whether this reversibility actually happens in real life depends on the theoretical definition of the initial conditions and on our ability to set them up so as to exhibit the reversible overall behaviour. If the initial conditions are exactly right, then increasingly complex behaviour towards the future can become simpler when reversed. In practical terms this hardly ever happens, because we tend to design and implement initial conditions that are easy for the experimenter to describe and construct. It seems reasonable that in any meaningful experiment the activities to set up the experiment should be simpler than the process that the experiment is intended to observe. If we consider these processes as computations, then the computations required to set up the experiment should be simpler than the computations involved in the evolution of the system under review. So if we start with simple initial conditions, trace the evolution back to more complex ones and then start the evolution of the system anew from there, we will surely find that the system shows increasingly simple behaviour. Finding these complicated, seemingly random initial conditions in any other way than by tracing a reversible process to and from the simple initial conditions seems impossible. This is also the basic argument for the existence of the Second Law of Thermodynamics.

Entropy is defined as the amount of information about a system that is still unknown after measurements on the system. By this definition, if more measurements are performed then the entropy will tend to decrease; the Second Law, by contrast, states that over time the entropy of a system tends to increase. In other words: should the observer be able to know with absolute certainty properties such as the positions and velocities of each particle in the system, then the entropy would be zero. According to the definition, entropy is the information with which it would be possible to pick out the configuration the system is actually in from every possible configuration of the distribution of particles in the system satisfying the outcomes of the measurements on the system. To increase the number and quality of the measurements involved amounts to the same computational effort as is required for the actual evolution of the system. Once randomness is produced, the actual behaviour of the system becomes independent of the details of the initial conditions of the system. In a reversible system different initial conditions must lead to a different evolution of the system, for else there would be no way of reversing the system behaviour in a unique way. But even though the outcomes from different initial conditions can be very different, the overall patterns produced by the system can still look much the same. But to identify the initial conditions from the state of a system at any time implies a computational effort that is far beyond the effort of a practical and meaningful measurement procedure. If a system generates sufficient randomness, then it evolves towards a unique equilibrium whose properties are for practical purposes independent of its initial conditions. In this way it is possible to identify many systems on the basis of a few typical parameters.
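As a toy version of ‘entropy as information still unknown’, the sketch below (mine; a standard Shannon block-entropy estimate, not Wolfram's exact procedure) measures how many bits per block are needed to describe a row of cells: an ordered row needs far fewer than a random-looking one.

import math
import random
from collections import Counter

# Shannon entropy (in bits) of the frequencies of length-k blocks in a row of cells:
# a crude stand-in for the information still unknown about the configuration.
def block_entropy(cells, k=3):
    blocks = [tuple(cells[i:i + k]) for i in range(len(cells) - k + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    random.seed(3)
    ordered = [0, 1] * 32
    noisy = [random.randint(0, 1) for _ in range(64)]
    print("ordered row:", round(block_entropy(ordered), 3), "bits per block")
    print("random row :", round(block_entropy(noisy), 3), "bits per block")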

With cellular automata it is possible, using reversible rules and starting from a random set of initial conditions, to generate behaviour that increases order instead of tending towards more random behaviour, e.g. rule 37R [Wolfram 2002, pp. 452 – 57]. Its behaviour neither completely settles down to order nor does it generate randomness only. Although it is reversible, it does not obey the Second Law. To be able to reverse this process, however, the experimenter would have to set up the initial conditions exactly so as to be able to reach the ‘earlier’ stages, otherwise the information generated by the system is lost. But how can there be enough information to reconstruct the past? All the intermediate local structures that were passed on the way to the ’turning point’ would have to be absorbed by the system on its way back in order, in the end, to reach its original state. No local structure emitted on the way to the turning point can escape.

The evolution in systems is therefore (intrinsically?) not reversible. Can all forms of self-organisation potentially occur in cellular automata without reversible rules?

For these reasons it is possible for parts of the universe to get more organised than other parts, even with all laws of nature being reversible. What cellular automata such as 37R show is that it is even possible for closed systems not to follow the Second Law. If the system gets partitioned, then within the partitions order might evolve while simultaneously elsewhere in the system randomness is generated. Any closed system will repeat itself at some point in time. Until then it must visit every possible configuration. Most of these will be or seem to be random. Rule 37R does not produce this ergodicity: it visits only a small fraction of all possible states before repeating.

Conserved Quantities and Continuum Phenomena
Examples are quantities of energy and electric charge. Can the amount of information in exchanged messages be a proxy for a quantity to be conserved?

With nearest-neighbour rules, cellular automata do exhibit this principle (shown as the number of cells of equal colour being conserved at each step), but without showing sufficiently complex behaviour. Using next-neighbour rules, they are capable of showing conservation while also exhibiting interesting behaviour. Even more interesting and random behaviour occurs when block cells are used, especially using three colours instead of two. In this setup the total number of black cells must remain equal for the entire system. On a local level, however, the number of black cells does not necessarily remain the same.

Multiway systems
In a multiway system all possible replacements are always executed at every step, thereby generating many new strings (i.e. combinations of added-up replacements) at each step. ‘In this way they allow for multiple histories for a system. At every step multiple replacements are possible and so, tracing back the different paths from string to string, different histories of the system are possible. This may appear strange, for our understanding of the universe is that it has only one history, not many. But if the state of the universe is a single string in the multiway system, then we are part of that string and we cannot look into it from the outside. Being on the inside of the string it is our perception that we follow just one unique history and not many. Had we been able to look at it from without, then the path that the system followed would seem arbitrary‘ [Wolfram 2002, p 505]. If the universe is indeed a multiway system, then another source of randomness is the actual path that its evolution has followed. This randomness component is similar to the outside randomness discussed earlier, but different in the sense that it would occur even if this universe were perfectly isolated from the rest of the universe.
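A minimal sketch of a multiway (string substitution) system, with toy rules of my own choosing, showing how the set of reachable strings, and hence the set of possible histories, branches at every step:

# Multiway string-substitution system: at each step every applicable replacement
# is performed at every position, so the set of reachable strings branches out.
def multiway_step(strings, rules):
    out = set()
    for s in strings:
        for lhs, rhs in rules:
            start = s.find(lhs)
            while start != -1:
                out.add(s[:start] + rhs + s[start + len(lhs):])
                start = s.find(lhs, start + 1)
    return out

if __name__ == "__main__":
    rules = [("A", "AB"), ("B", "A")]          # toy rules, not taken from the book
    strings = {"A"}
    for step_no in range(1, 6):
        strings = multiway_step(strings, rules)
        print("step", step_no, ":", sorted(strings))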

There are sufficient other sources of randomness to explain interesting behaviour in the universe, and that by itself is not a sufficient reason to assume the multiway system as a basis for the evolution of the universe. What other reasons can there be to underpin the assumption that the underlying mechanism of the universe is a multiway system? For one, multiway systems are quite capable of generating a vast number of different possible strings and therefore many possible connections between them, meaning different histories.

However, looking at the sequences of those strings it becomes obvious that these cannot be arbitrary. Each path is defined by a sequence of ways in which replacements by the multiway system’s rules are applied. And each such path in turn defines a causal network. Certain underlying rules have the property that the form of this causal network ends up being the same regardless of the order in which the replacements are applied, and thus regardless of the path that is followed in the multiway system. If the multiway system ends up with the same causal network, then it must be possible to apply a replacement to a string already generated and end up at the same final state. Whenever paths always eventually converge, there will be similarities on a sufficiently large scale in the obtained causal networks. And so the structure of the causal networks may vary a lot at the level of individual events. But at a sufficiently large level the individual details will be washed out and the structure of the causal network will be essentially the same: on a sufficiently high level the universe will appear to have a unique history, while the histories on local levels are different.

Processes of perception and analysis
The processes that lead to some forms of behaviour in systems are comparable to some processes that are involved in their perception and analysis. Perception relates to the immediate reception of data via sensory input; analysis involves conscious and computational effort. Perception and analysis are an effort to reduce events in our everyday lives to manageable proportions so that we can use them. Reduction of data happens by ignoring whatever is not necessary for our everyday survival and by finding patterns in the remaining data so that individual elements in the data do not need to be specified. If the data contains regularities then there is some redundancy in the data. The reduction is important for reasons of storage and communication.
This process of perception and analysis is the inverse of the evolving of systems behaviour from simple programs: to identify whatever it is that produces some kind of behaviour. For observed complex behaviour this is not an easy task, for the complex behaviour generated bears no obvious relation to the simple programs or rules that generate them. An important difference is that there are many more ways to generate complex behaviour than there are to recognize the origins of this kind of behaviour. The task of finding the origins of this behaviour is similar to solving problems satisfying a set of constraints.
Randomness is roughly defined as the apparent inability to find a regularity in what we perceive. Absence of randomness implies that redundancies are present in what we see, hence a shorter description can be given of what we see that allows us to reproduce it. In the case of randomness, we would have no choice but to repeat the entire picture, pixel by pixel, to reproduce it. The fact that our usual perceptual abilities do not allow such a description does not mean that no such description exists. It is very much possible that randomness is generated by the repetition of a simple rule a few times over. Does that, then, imply that the picture is not random? From a perceptual point of view it is, because we are incapable of finding the corresponding rule; from a conceptual point of view this definition is not satisfactory. In the latter case the definition would be that randomness exists only if no such rule exists, and not merely if we cannot immediately discern it. However, finding the short description, i.e. the short program that generates this random behaviour, is not possible in a computationally finite way. Restricting the computational effort allowed for finding out whether something is random seems unsatisfactory, because it is arbitrary, it still requires a vast amount of computational work, and many systems would be labelled random or not random for the wrong reasons. So in the definition of randomness some reference needs to be made to how the short descriptions are to be found. ‘..something could be considered to be random whenever there is essentially no simple program that can succeed in detecting regularities in it‘ [Wolfram 2002, p 556]. In practical terms this means that if, after comparing the behaviour of a few simple programs with the behaviour of the observed would-be random generator, no regularities are found in it, then the behaviour of the observed system is random.
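A rough, practical proxy for this definition (my own, and much cruder than comparing against actual simple programs): treat a bit sequence as random when a generic compressor fails to shorten it appreciably, i.e. fails to find regularities in it.

import random
import zlib

# Pack bits into bytes, compress, and call the sequence "random" if compression
# barely shrinks it. The generic compressor stands in here for the "simple programs
# that try to detect regularities"; this is only an illustration of the idea.
def pack(bits):
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

def looks_random(bits, threshold=0.9):
    raw = pack(bits)
    return len(zlib.compress(raw, 9)) / len(raw) > threshold

if __name__ == "__main__":
    random.seed(4)
    periodic = [i % 2 for i in range(4096)]
    noise = [random.randint(0, 1) for _ in range(4096)]
    print("periodic looks random:", looks_random(periodic))   # expected: False
    print("noise looks random   :", looks_random(noise))      # expected: True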

Complexity
If we say that something is complex, we say that we have failed to find a simple description for it hence that our powers of perception and analysis have failed on it. How the behaviour is described depends on what purpose the description serves, or how we perceive the observed behaviour. The assessment of the involved complexity may differ depending on the purpose of the observation. Given this purpose, then the shorter the description the less complex the behaviour. The remaining question is whether it is possible to define complexity independent of the details of the methods of perception and analysis. The common opinion traditionally was that any complex behaviour stems from a complex system, but this is no longer the case. It takes a simple program to develop a picture for which our perception can find no simple overall description.
So what this means is that, just like every other method of analysis that we have considered, we have little choice but to conclude that traditional mathematics and mathematical formulas cannot in the end realistically be expected to tell us very much about patterns generated by systems like rule 30‘ [Wolfram 2002, p 620].

Human Thinking
Human thinking stands out from other methods of perception in its extensive use of memory, the usage of the huge amount of data that we have encountered and interacted with previously. The way human memory does this is by retrieval based on general notions of similarity rather than exact specifications of whatever memory item it is that we are looking for. Hashing could not work, because similar experiences summarized by different words might end up being stored in completely different locations, and the relevant piece of information might not be retrieved on the occasion that the key search word involved hits a different hash code. What is needed is information that really sets one piece of information apart from other pieces, to store that and to discard all the rest. The effect is that the retrieved information is similar enough to have the same representation and thus to be retrieved when some remote or seemingly remote association occurs with the situation at hand.
This can be achieved with a number of templates that the information is compared with. Only if the remaining signal per layer of nerve cells generates a certain hash code is the information deemed relevant and retrieved. It is very rare that a variation in the input results in a variation in the output; in other words, quick retrieval (based on the hash code) of similar (not necessarily exactly the same) information is possible. The stored information is pattern-based only and not stored as meaningful or a priori relevant information.
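A crude sketch of what retrieval by similarity rather than by exact key could look like (entirely my own toy construction, not a model from the book): an item is reduced to a signature of which templates it roughly agrees with, and items sharing a signature land in the same bucket, so a slightly different input can still find the stored item.

import random

# Items are reduced to a signature: one bit per template, saying whether the item
# agrees with that template in most positions. Similar items tend to share a
# signature and therefore land in the same bucket, unlike with an exact hash.
def signature(item, templates):
    return tuple(int(sum(a == b for a, b in zip(item, t)) > len(item) // 2)
                 for t in templates)

if __name__ == "__main__":
    random.seed(5)
    templates = [[random.randint(0, 1) for _ in range(16)] for _ in range(4)]
    stored = [random.randint(0, 1) for _ in range(16)]

    memory = {}
    memory.setdefault(signature(stored, templates), []).append(stored)

    probe = list(stored)
    probe[0] ^= 1                              # a slightly different "experience"
    hits = memory.get(signature(probe, templates), [])
    print("similar items retrieved:", len(hits))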

‘But it is my strong suspicion that in fact logic is very far from fundamental, particularly in human thinking‘ [Wolfram 2002, p 627]. We retrieve connections from memory without too much effort, but perform logical reasoning cumbersomely, going one step after the next, and it is possible that in that process we are mainly using elements of logic that we have learned from previous experience.

Chapter 11 The Notion of Computation
All sorts of behaviour can be produced by simple rules such as cellular automata. There is a need for a framework for thinking about this behaviour. Traditional science provides this framework only if the observed behaviour is fairly simple. What can we do if the observed behaviour is more complex? The key idea is the notion of computation. If the various kinds of behaviour are generated by simple rules, or simple programs, then the way to think about them is in terms of the computations they can perform: the input is provided by the initial conditions and the output by the state of the system after a number of steps. What happens in between is the computation, in abstract terms and regardless of the details of how it actually works. Abstraction is useful when discussing systems’ behaviour in a unified way, regardless of the different kinds of underlying rules. For even though the internal workings of systems may be different, the computations they perform may be similar. At this level of abstraction it may become possible to formulate principles applying to a variety of different systems, independent of the detailed structures of their rules.

At some level, any cellular automaton – or for that matter, any system whatsoever – can be viewed as performing a computation that determines what its future behaviour will be‘ [Wolfram, 2002, p 641]. And for some of the cellular automata described it so happens that the computations they perform can be described to a limited extent in traditional mathematical notions. Answers to the question of the framework come from practical computing.

The Phenomenon of Universality
Based on our experience with mechanical and other devices, it can be assumed that we need different underlying constructions for different kinds of tasks. The existence of computers has shown, however, that a single underlying construction can be universal: it can be made to execute different tasks by being programmed in different ways. The hardware is the same; different software may be used, programming the computer for different tasks.
This idea of universality is also the basis for programming languages, where instructions from a fixed set are strung together in different ways to create programs for different tasks. Conversely, any computer programmed with a program written in any language can perform the same set of tasks: any computer system or language can be set up to emulate any other. An analogue is human language: virtually any topic can be discussed in any language, and given two languages it is largely possible to always translate between them.
Are natural systems universal as well? ‘The basic point is that if a system is universal, then it must effectively be capable of emulating any other system, and as a result it must be able to produce behavior that is as complex as the behavior of any other system. So knowing that a particular system is universal thus immediately implies that the system can produce behavior that is in a sense arbitrarily complex‘ [Wolfram 2002, p 643].

Just as the intuition that complex behaviour must be generated by complex rules is wrong, so the idea that simple rules cannot be universal is wrong. It is often assumed that universality is a unique and special quality, but now it becomes clear that it is widespread and occurs in a wide range of systems, including the systems we see in nature.

It is possible to construct a universal cellular automaton and to input initial conditions so that it emulates any other cellular automaton, and thus to produce any behaviour that the other cellular automaton can produce. The conclusion is (again) that nothing new is gained by using rules that are more complex than the rules of the universal cellular automaton, because, given it, more complicated rules can always be emulated by the simple rules of the universal cellular automaton with appropriately set-up initial conditions. Universality can occur in simple cellular automata with two colours and nearest-neighbour rules, but their operation is more difficult to follow than that of cellular automata with a more complex set-up.

Emulating other Systems with Cellular Automata
Mobile cellular automata, cellular automata that emulate Turing machines, substitution systems, sequential substitution systems, tag systems, register machines, number systems and simple operators. A cellular automaton can emulate a practical computer as it can emulate registers, numbers, logic expressions and data retrieval. Cellular automata can perform the computations that a practical computer can perform.
And so a universal cellular automaton is universal beyond being capable of emulating all other cellular automata: it is capable of emulating a vast array of other systems, including practical computers. Reciprocally, all other automata can be made to emulate cellular automata, including a universal cellular automaton, and they must therefore themselves be universal, because a universal cellular automaton can emulate a wide array of systems including all possible mobile automata and symbolic systems. ‘By emulating a universal cellular automaton with a Turing machine, it is possible to construct a universal Turing machine‘ [Wolfram 2002, p 665].

‘And indeed the fact that it is possible to set up a universal system using essentially just the operations of ordinary arithmetic is closely related to the proof of Gödel’s Theorem‘ [Wolfram 2002, p 673].

Implications of Universality
All of the discussed systems can be made to emulate each other. All of them have certain features in common. And now, thinking in terms of computation, we can begin to see why this might be the case. They have common features just because they can be made to emulate each other. The most important consequence is that from a computational perspective a very wide array of systems with very different underlying structures are at some level fundamentally equivalent. Although the initial thought might have been that the different kinds of systems would have been suitable for different kinds of computations, this is in fact not the case. They are capable of performing exactly the same kinds of computations.
Computation therefore can be discussed in abstract terms, independent of the kind of system that performs the computation: it does not matter what kind of system we use, any kind of system can be programmed to perform any kind of computation. The results of the study of computation at an abstract level are applicable to a wide variety of actual systems.
To be fair: not all cellular automata are capable of all kinds of computations, but some have large computational capabilities: once past a certain threshold, the set of possible computations will always be the same. Beyond that threshold of universality, no additional complexity of the underlying rules will increase the computational capabilities of the system. Once the threshold is passed, it does not matter what kind of system it is that we are observing.

The rule 110 Cellular Automaton
The threshold for the complexity of the underlying rules required to produce complex behaviour is remarkably low.

Class 4 Behaviour and Universality
Rule 110 with random initial conditions exhibits many localized structures that move around and interact with each other. This is not unique to that rule; this kind of behaviour is produced by all cellular automata of Class 4. The suspicion is that any Class 4 system will turn out to have universal computational capabilities. Of the 256 nearest-neighbour rules with two colours, only four more or less comply (besides rule 110 itself, rules 124, 137 and 193, which all require some trivial amendments). But for rules involving more colours, more dimensions and / or next-neighbour rules, Class 4 localized structures often emerge. The crux for the existence of Class 4 behaviour is the control of the transmission of information through the system.
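Rule 110 itself is easy to run; the sketch below (the same minimal simulator as before) starts it from random initial conditions, after which the localized, interacting structures typical of Class 4 appear against a regular background.

import random

# Rule 110 from random initial conditions: localized structures move and collide
# against a regular background, which is typical Class 4 behaviour.
def step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

if __name__ == "__main__":
    random.seed(6)
    cells = [random.randint(0, 1) for _ in range(80)]
    for _ in range(40):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, 110)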

Universality in Turing Machines and other Systems
The simplest Universal Turing Machine currently known has two states and five possible colours. It might not be the simplest Universal Turing Machine in existence, so the simplest lies somewhere between this one and the machines with two states and two colours, none of which are Universal Turing Machines; there is some evidence that a Turing Machine with two states and three colours is universal, but no proof exists as yet. There is a close connection between the appearance of complexity and universality.

Combinators can emulate rule 110 and have been known to be universal since the 1930s. Other symbolic systems show complex behaviour and may turn out to be universal too.

Chapter 12 The Principle of Computational Equivalence
The Principle of Computational Equivalence applies to any process of any kind, natural or artificial: ‘all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations‘ [Wolfram 2002, p 715]. This means that any process that follows definite rules can be thought of as a computation. For example the process of evolution of a system like a cellular automaton can be viewed as a computation, even though all it does is generate the behaviour of the system. Processes in nature can be thought of as computations, although the rules they follow are defined by the basic laws of nature and all they do is generate their own behaviour.

Outline of the principle
The principle asserts that there is a fundamental equivalence between many kinds of processes in computational terms.

Computation is defined as that which a universal system in the sense meant here can do. It is possible to imagine another system capable of computations beyond those of universal cellular automata or other such systems, but such a system can never be constructed in our universe.

Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. In other words: there is just one level of computational sophistication and it is achieved by almost all processes that do not seem obviously simple. Universality allows the construction of universal systems that can perform any computation and thus they must be capable of exhibiting the highest level of computational sophistication. From a computational view this means that systems with quite different underlying structures will show equivalence in that rules can be found for them that achieve universality and that can thus exhibit the same level of computational sophistication.
The rules need not be more complicated themselves to achieve universality hence a higher level of computational sophistication. On the contrary: many simple though not overly simple rules are capable of achieving universality hence computational sophistication. This property should furthermore be very common and occur in a wide variety of systems, both abstract and natural. ‘And what this suggests is that a fundamental unity exists across a vast range of processes in nature and elsewhere: despite all their detailed differences every process can be viewed as corresponding to a computation that is ultimately equivalent in its sophistication‘ [Wolfram 2002, p 719].

We could identify all of the existing processes, engineered or natural, and observe their behaviour. It will surely become clear that in many instances it will be simple repetitive or nested behaviour. Whenever a system shows vastly more complex behaviour, the Principle of Computational Equivalence then asserts that the rules underlying it are universal. Conversely: given some rule it is usually very complicated to find out if it is universal or not.

If a system is universal then it is possible, by choosing the appropriate initial conditions, to perform computations of any sophistication. No guarantee exists, however, that some large portion of all initial conditions result in behaviour of the system that is more interesting and not merely obviously simple. But even rules that are by themselves not complicated, given simple initial conditions, may produce complex behaviour and may well produce processes of computational sophistication.

Introduction of a new law to the effect that no system can carry out explicit computations that are more sophisticated than those that can be carried out by systems such as cellular automata or Turing Machines. Almost all processes except those that are obviously simple achieve this limit of computational equivalence, implying that of all possible systems with behaviour that is not obviously simple, an overwhelming fraction are universal. Every process can in this way be thought of as a ‘lump of computation’.

The Validity of the Principle
The principle is counter-intuitive from the perspective of traditional science and there is no proof for it. Cellular automata are fundamentally discrete. It appears that systems in nature are more sophisticated than these computer systems because, from a traditional perspective, they are supposed to be continuous. But the presumed continuity of these systems is itself an idealization required by traditional methods. As an example: fluids are traditionally described by continuous models. However, they consist of discrete particles, and their computational behaviour must be that of a system of discrete particles.
It is my strong suspicion that at a fundamental level absolutely every aspect of our universe will in the end turn out to be discrete. And if this is so, then it immediately implies that there cannot ever ultimately be any form of continuity in our universe that violates the Principle of Computational Equivalence’ [Wolfram 2002, p 730]. In a continuous system, the computation is not local and every digit has in principle infinite length. And in the same vein: ‘.. it is my strong belief that the basic mechanisms of human thinking will in the end turn out to correspond to rather simple computational processes’ [Wolfram 2002, p 733].

Once a system reaches a relatively low threshold of complexity, any real system must exhibit the same level of computational sophistication. This means that observers will tend to be computationally equivalent to the observed systems. As a consequence they will consider the behaviour of such systems complex.

Computational Irreducibility
Scientific triumphs have in common that almost all of them are based on finding ways to reduce the amount of computational work needed to predict how a system will behave. Most of the time the idea is to derive a mathematical formula that allows one to determine what the outcome of the evolution of the system will be without having to trace its every step explicitly. There is a great shortage of formulas describing the behaviour of all sorts of known and common systems.
Traditional science takes as a starting point that many of the evolutionary steps performed by a system are an unnecessarily large effort. It is attempted to shortcut this process and find the outcome with less effort. However, describing the behaviour of systems exhibiting complex behaviour is a difficult task. In general not only the rules for the system are required to do that, but its initial conditions as well. The difficulty is that, knowing the rules and the initial conditions, it might still take an irreducible amount of time to predict its behaviour. When computational irreducibility exists there is no other way to find out how the system will behave but to go through its every evolutionary step up to the required state. The predicting system can only outrun, with less effort, the actual system whose future we are trying to predict if its computations are more sophisticated. This idea violates the Principle of Computational Equivalence: every system that shows no obviously simple behaviour is computationally exactly equivalent. So predicting models cannot be more sophisticated than the systems they intend to describe. And so for many systems no systematic predictions can be made, their process of evolution cannot be shortcut and they are computationally irreducible. If the behaviour of a system is simple, for example repetitive or nested, then the system is computationally reducible. This reduces the potential of traditional science to advance in studying systems whose behaviour is not quite simple.

Making use of mathematical formulas, for instance, only makes sense if the computation is reducible, hence if the system’s behaviour is relatively simple. Science must constrain itself to the study of relatively easy systems, because only these are computationally reducible. This is not the case for the new kind of science, because it uses few formulas and works with pictures of the evolution of systems instead. The observed systems may very well be computationally irreducible. They are not a preamble to the actual ‘real’ predictions based on formulas; they are the real thing themselves. A universal system can emulate any other system, including the predictive model. Using shortcuts means trying to outrun the observed system with another that takes less effort. Because the latter can be emulated by the former (as it is universal), this means that the predictive model must be able to outrun itself. This is relevant because universality abounds in systems.

As a consequence of computational irreducibility there can be no easy theory for everything; there will be no formula that predicts any and every observable process or behaviour that seems complex to us. To deduce the consequences of the simple rules that generate complex behaviour will require irreducible amounts of computational effort. Any system can be observed, but there cannot be a guarantee that a model of that system exists that accurately describes or predicts how the observed system will behave.

The Phenomenon of Free Will
Though a system may be governed by definite underlying laws, its behaviour may not be describable by reasonable laws. This involves computational irreducibility, because the only way to find out how the system will behave is to actually evolve the system. There is no other way to work out this behaviour more directly.
Analogous to this is the human brain: although definite laws underpin its workings, because of irreducible computation no way exists to derive an outcome via reasonable laws. It then seems that, even though we know that definite rules underpin it, the system behaves in a way that does not seem to follow any reasonable law at all in doing this or that. And yet the underpinning rules are definite and allow no freedom, while still allowing the system’s behaviour some form of apparent freedom. ‘For if a system is computationally irreducible this means that there is in effect a tangible separation between the underlying rules for the system and its overall behaviour associated with the irreducible amount of computational work needed to go from one to the other. And it is in this separation, I believe, that the basic origin of the apparent freedom we see in all sorts of systems lies – whether those systems are abstract cellular automata or actual living brains‘ [Wolfram 2002, p 751].
The main issue is that it is not possible to make predictions about the behaviour of the system, for if we could, then the behaviour would be determined in a definite way and could not be free. But now we know that definite simple rules can lead to unpredictability: the ensuing behaviour is so complex that it seems free of obvious rules. This occurs as a result of the evolution of the system itself, and no external input is required to derive that behaviour.
‘But this is not to say that everything that goes on in our brains has an intrinsic origin. Indeed, as a practical matter what usually seems to happen is that we receive external input that leads to some train of thought which continues for a while, but then dies out until we get more input. And often the actual form of this train of thought is influenced by the memory we have developed from inputs in the past – making it not necessarily repeatable even with exactly the same input‘ [Wolfram 2002, pp. 752 – 53].

Undecidability and Intractability
Undecidability (as in Gödel’s theorem and the Entscheidungsproblem) is not a rare case: it can be achieved with very simple rules and it is very common. For every system that seems to exhibit complex behaviour, its evolution is likely to be undecidable. Finite questions about a system can ultimately be answered by finite computation, but the computations may involve an amount of difficulty that makes them intractable. To assess the difficulty of a computation, one assesses the amount of time it takes, how big the program is that runs it and how much memory it takes. However, it is often not knowable whether the program that is used for the computation is the most efficient for the job. Working with very small programs, it becomes possible to assess their efficiency.

Implications for Mathematics and its Foundations
Applications in mathematics. In nature and in mathematics simple laws govern complex behaviour. Mathematics has increasingly distanced itself from correspondence with nature. Universality in an axiom system means that any question about the behaviour of any other universal system can be encoded as a statement in the axiom system, and that if the answer can be established in the other system then it can also be given by a proof in the axiom system. Every axiom system currently in use in mathematics is universal: it can in a sense emulate every other system.

Intelligence in the Universe
Human beings have no specific or particular position in nature: their computational skills do not vary vastly from the skills of other natural processes.

‘But the question then remains why when human intelligence is involved it tends to create artifacts that look much simpler than objects that just appear in nature. And I believe the basic answer to this has to do with the fact that when we as humans set up artifacts we usually need to be able to foresee what they will do – for otherwise we have no way to tell whether they will achieve the purposes we want. Yet nature presumably operates under no such constraint. And in fact I have argued that among systems that appear in nature a great many exhibit computational irreducibility – so that in a sense it becomes irreducibly difficult to foresee what they will do‘ [Wolfram 2002, p 828].

A firm as such is not a complicated thing: it takes one question to know what it is (answer: a firm) and another to find out what it does (answer: ‘we manufacture coffee cups’). More complicated is the answer to the question ‘how do you make coffee cups?’, for this requires some considerable explanation. And yet more complicated is the answer to: ‘what makes your firm stand out from other coffee cup manufacturing firms?’. The answer to that will have to involve statements about ‘how we do things around here’, the intricate details of which might have taken you years to learn and practise and now to explain.

A system might be suspected to be built for a purpose if it is the minimal configuration for that purpose.

It would be most satisfying if science were to prove that we as humans are in some fundamental way special, and above everything else in the universe. But if one looks at the history of science many of its greatest advances have come precisely from identifying ways in which we are not special – for this is what allows science to make ever more general statements about the universe and the things in it‘ [Wolfram 2002, p 844].

‘So this means that there is in the end no difference between the level of computational sophistication that is achieved by humans and by all sorts of other systems in nature and elsewhere’ [Wolfram 2002, p 844].

Information, Entropy, Complexity

Original question

If information is defined as ’the amount of newness introduced’ or ’the amount of surprise involved’, then chaotic behaviour implies maximum information and ‘newness’. Systems showing periodic or oscillating behaviour are said to ‘freeze’ and nothing new emerges from them. New structure or patterns emerge from systems showing behaviour just shy of chaos (the edge of chaos) and not from systems showing either chaotic or oscillating behaviour. What is the, for lack of a better word, role of information in this emergent behaviour of complex adaptive systems (cas)?

Characterizing cas

One aspect characterizing cas is that they are generally associated with complex behaviour. This behaviour in turn is associated with emergent behaviour, the forming of patterns that are new to the system, that are not programmed into its constituent parts and that are observable. The mechanics of a cas are also associated with large systems of a complicated make-up, consisting of a large number of hierarchically organised components whose interconnections are non-linear. These ‘architectural’ conditions are no guarantee that a system demonstrates complex behaviour: such systems may very well not show behaviour as per the above, and they may for that reason not be categorised as cas. They might become one if their parameter space is adapted via an event at some point in time. Lastly, systems behaviour is associated with energy usage (or cost), with entropy production and with information. However, confusion exists as to how to perform the measuring and how to interpret the outcomes of measurements. No conclusive definition exists for the meaning of any of the above. In other words: to date, to my knowledge, none of these properties when derived from a cas gives a decisive answer to the question whether the system at hand is in fact complex.

The above statements are obviously self-referencing, unclear and inconclusive. It would be useful to have an objective principle by which it is possible to know whether a given system shows complex behaviour and is therefore to be classified as a cas. The same goes for clear definitions of the meaning of the terms energy, entropy (production) and information in this context. It is useful to have a clear definition of the relationships of such properties between themselves and between them and the presumed system characteristics. This enables an observer to identify characteristics such as newness, surprise, reduction of uncertainty, meaning, information content and their change.

Entropy and information

It appears to me (no more than that) that entropy and information are two sides of the same coin, or in my words: though not separated within the system (or rather, aspects of the same system at the same time), they are so to speak back-to-back, simultaneously influencing the mechanics (the interrelations of the constituent parts) and the dynamics (the interactions of the parts leading up to overall behavioural change of the system in time) of the system. What is the ‘role’ of information when a cas changes, and how does it relate to the properties mentioned?

The relation between information and entropy might then be: structures/patterns/algorithms distributed in a cas enable it in the long run to increase its relative fitness by reducing the cost of energy used in its ‘daily activities’. The cost of energy is part of the fitness function of the agent, and stored information allows it to act ‘fit’. Structures and information in a cas are distributed: the patterns are properties of the system and not of its individual parts. Measurements must therefore lead to some system characteristic (i.e. overall, not stopping at individual agents) to get a picture of the learning/informational capacity of the entire CAS as a ‘hive’. This requires correlation between the interactions of the parts to allow the system to ‘organize itself’.

CAS as a TM

I suspect (no more than that) that it is in general possible to treat a cas as a Turing Machine (TM) ‘disguised’ in any shape or, conversely, to treat complex adaptive systems as instances of a TM. That approach makes the logic corresponding to TMs available to the observer. An example of a system for which this classification is proven is a 2-dimensional cellular automaton of Wolfram class 4. This limited proof decreases the general applicability, because complex adaptive systems, unlike TMs, are parallel, open and asynchronous.

Illustration

Perhaps illustrative of a possible outcome is to 'walk the process' by changing the parameter mu of the logistic map (misusing it, because no real complexity lives there). Start at the right, in the chaotic region: newness (or reduction of uncertainty / surprise / information) is large, the bits are very many, and meaning (as in emerging patterns) is small. Travel left to any oscillating region: newness is small, the bits are very few, and meaning is small. In between, where there is complex behaviour: newness is high, the bits are fewer than in the chaotic region, and meaning is very high.
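A rough way to make this 'walk' concrete (my own illustration; the mu values, the coarse-graining at x = 1/2 and the block length are assumptions, not given above) is to iterate the map x -> mu*x*(1-x), turn each orbit into bits, and use the Shannon entropy of short bit-blocks as a crude stand-in for 'newness':

    import math

    def logistic_orbit(mu, x0=0.4, n=5000, burn_in=500):
        x = x0
        for _ in range(burn_in):          # discard the transient
            x = mu * x * (1 - x)
        orbit = []
        for _ in range(n):
            x = mu * x * (1 - x)
            orbit.append(x)
        return orbit

    def block_entropy(bits, k=4):
        """Shannon entropy (bits per block) of length-k blocks of a 0/1 sequence."""
        counts = {}
        for i in range(len(bits) - k + 1):
            block = tuple(bits[i:i + k])
            counts[block] = counts.get(block, 0) + 1
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Oscillating region, near the onset of chaos, and fully chaotic.
    for mu in (3.2, 3.57, 4.0):
        bits = [1 if x > 0.5 else 0 for x in logistic_orbit(mu)]
        print(mu, round(block_entropy(bits), 3))

With these choices the block entropy is lowest in the oscillating region, intermediate near the onset of chaos and highest in the fully chaotic region, which mirrors the 'many bits, little meaning' versus 'few bits' contrast described above.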

The logical underpinning of 'newness' or 'surprise' is this: if no bit in a sequence can be predicted from the rest of that sequence, the sequence is random. Each bit is then a 'surprise', is 'new', and the amount of information is highest. If one bit can be predicted, there is a pattern: an algorithm can be designed and, provided its description is shorter than what it predicts (this is theoretical), the surprise is smaller, as is the amount of information. The more pattern a sequence holds, the less surprise it contains and the more information appears to be stored 'for later use', such as the processing of a new external signal the system has to deal with. What we observe in a cas is patterns, and hence a limitation of this 'surprise'.
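A small sketch of this point (my own; a general-purpose compressor is only a crude, imperfect proxy for the shortest possible describing algorithm, and the sequence length is an arbitrary choice): a patterned bit sequence compresses far more than a random one.

    import random
    import zlib

    random.seed(0)
    patterned = bytes([i % 2 for i in range(1000)])                # 0101... pattern
    rand_bits = bytes(random.randint(0, 1) for _ in range(1000))   # no predictable bit

    for name, seq in (("patterned", patterned), ("random", rand_bits)):
        print(name, "raw:", len(seq), "compressed:", len(zlib.compress(seq)))

The patterned sequence shrinks to a handful of bytes (a short 'algorithm' suffices), while the random sequence resists compression: every bit remains a surprise.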

A research project

I suggest that the objective of such a project is to design and test meaningful measurements of entropy production, energy cost and information processing in a complex adaptive system, so as to relate them to each other and to the system properties of a cas, in order to better recognise and understand them.

The suggested approach is to use a 2-dimensional CA parameterised to show complex behaviour as per Wolfram class 4, as described in 'A New Kind of Science' by Stephen Wolfram.

The actual experiment is then to use this system to solve well-defined problems. As the theoretical characteristics of a TM (its processing and its storage) are known, this approach provides a reference for the information-processing and information-storage requirements, which can be compared to the actual processing and storage capacities of the system at hand.

Promising measurements are:

Measurement: Entropy
Description: Standard: the current state related to the possible states.
Using: Gibbs entropy or similar.

Measurement: Energy cost
Description: The theoretical energy cost required to solve a particular problem versus the energy the complex adaptive system at hand actually uses.
Using: See the slide below, from the presentation e-mailed earlier: https://www.youtube.com/watch?v=9_TMMKeNxO0#t=649

[Screenshot of the slide from the linked presentation, 2015-06-09]

Measurement: Information
Description: From an earlier discussion: 'Using this approach, we could experimentally compute the bits of information that agents have learned resulting from the introduction of new information into the system.' I suggest adding: compute the bits of information that agents have learned relating to the system, i.e. the subset of information stored distributed across the system that represents its collective aspect (distributed collective information), and the amount of information contained in the co-evolving interfaces of the agents or parts of the system, equivalent to the labels suggested by Holland.
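To make the first row of the table concrete, here is a minimal sketch (my own; the choice of 2x2 local blocks is an assumption, not prescribed above) of a Gibbs/Shannon-style entropy estimate over the empirical distribution of local configurations of a 2-D 0/1 grid. It can be combined with the life_step() sketch above to track entropy over a run.

    import math

    def block_distribution(grid, size=2):
        """Empirical distribution of size x size blocks in a 2-D 0/1 grid."""
        rows, cols = len(grid), len(grid[0])
        counts = {}
        for r in range(rows - size + 1):
            for c in range(cols - size + 1):
                block = tuple(grid[r + dr][c + dc]
                              for dr in range(size) for dc in range(size))
                counts[block] = counts.get(block, 0) + 1
        total = sum(counts.values())
        return {b: n / total for b, n in counts.items()}

    def shannon_entropy(dist):
        """Entropy in bits of a distribution given as {outcome: probability}."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # Usage on a small example grid; with the life_step() sketch above one could
    # print shannon_entropy(block_distribution(state)) after every update step.
    example = [[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]]
    print(round(shannon_entropy(block_distribution(example)), 3))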

Kapitalisme en Vooruitgang

This post is based on the book Kapitalisme en Vooruitgang by Bob Goudzwaard, professor of philosophy at the VU Amsterdam and co-founder of the CDA on behalf of its reformed (gereformeerde) contingent. I was curious whether economic growth is a 'law of nature' or something we impose on ourselves. That question has been partly answered: it is the latter. What came 'for free' is a description of the society that Western man has developed in his striving for freedom and control, only to make himself dependent on that societal arrangement.

The first sentence of the book is: 'As the gospels state, every faith, however small, carries within it the capacity to move mountains.' Without comment, I park that sentence here for later use.

The Renaissance detached the significance of the church for daily life, including economic life, from God. In place of the vertical structures of the Middle Ages, such as the guilds, the possibility of horizontal development arose: the enterprise becomes independent. Labour, land and capital are detached from the existing structures and are to be deployed rationally. Conduct is governed by 'no moral rule above the letter of the law' (Hobbes).

The providence of the Middle Ages, which placed daily life under God's direction and moreover condemned man's reliance on his own capacities, was no longer relevant. The end of divine determination of fate was brought about by deism: self-absolution and self-revelation. God creates the world and everything on it, and after that his role is played out. Moreover, what remains of providence yields only 'good outcomes' for those who are willing to live by the prescribed order. Alongside Christianity, humanism arose as a worldview, with the ideals of striving for freedom and control.

Adam Smith's mechanistic 'invisible hand' was a deistic expression of providence. The 'invisible hand' leads people to serve the general welfare, even while they believe they are pursuing their own interests. In this way everyone who takes part contributes to 'that great purpose of human life which we call bettering our condition'. Through far-reaching division of labour, production is deployed more efficiently, yielding greater prosperity, including the 'wealth of nations'.

The classical economic worldview

1) Man versus nature: economic life is typified by an individual who, through his own human labour in a given natural environment, tries to attain the greatest possible prosperity. His principal instrument in this is his own rational insight. The meaning of life lies in dealing with the things of this world, with nature. The individual proves himself through the things he shapes by his own thinking and creating. The market is not a meeting of people but a meeting of each individual separately with a price that is given to him. The market is a disinterested mechanism.

2) Natural law in a serving role: Smith makes the flourishing of prosperity conditional on respecting the natural order, that is, what natural law requires. The price an individual receives for his effort is a price that comes about under free competition in the market, and natural law is the right to free competition. It is up to the government to protect those rights, such as property, contract and freedom of establishment. The law has become subservient to the economy.

3) Equilibrium as harmony: the new guise of providence is an equilibrium in the market resulting from the working of the 'invisible hand'. Harmony arises when the markets are in equilibrium: that is where human interests are weighed. That harmony is achieved even when everyone pursues his own interest to the maximum; the latter is thus not merely an adage, it also contributes to the harmony. Here lies a paradox (Mandeville): one cannot simultaneously pursue the maximisation of material things and maintain that one has human morality in view. In the end a choice has to be made.

4) The link between material prosperity and morality: this is utilitarianism (utility, Bentham). Man decides by weighing utility, namely consumer goods, against burden, namely the labour to be performed (together 'utilities and disutilities'), in order to arrive at a maximisation of utility. Bentham holds that this is what individuals and institutions ought to do. It is the morality of 'the greatest happiness for the greatest number'. Morality has been adapted to the economy.

Belief in progress

During the Enlightenment came the insight that one progresses by acting purposefully, guided by rational considerations, in order to free oneself. Belief in progress took root in Western culture. Like providence and humanity, it cannot be proven; it is a faith. Belief in progress means the belief that the situation at a later moment is better in every respect than at an earlier one. In this faith, progress can only be made by improving oneself in every aspect of knowledge.

Only if man lets himself be guided by this faith is there certainty that progress will be made. For that, the perfecting of man himself is required. The process of progress does not bypass the individual but takes place within the individual, who himself grows in his struggle with nature. With that capstone the lost paradise can still be reached, and this ideology has become practical: improving oneself is the programme that leads to progress.

The Enlightenment laid the foundation for the great revolutions in Europe. The logic is as follows: man is in principle good. The evil in this world therefore does not come from within, but from those social structures that force man into injustice. The conclusion is that the enemies of the reform-minded individual are those who have identified themselves with the established social order, because they thereby stand in the way of the required changes. The logical final step is then that their elimination is the way to make society's salvation available again.

The view of life during the industrial revolution (in England) was a synthesis of an individualistic-puritan work ethic, a deistic-utilitarian view of society and an uncomplicated belief in progress. In France it was rather a revolt against existing institutions and the promotion of the salvation of the nation. Where the industrial revolution went wrong was in the absolute priority given, in the development of the culture, to technology and industrial production, while socio-ethical norms and norms of justice received little attention. Utility was a priori given moral primacy: everything else was made subservient, so that a simultaneous weighing of economic and non-economic norms was not possible.

A programme of improvements

The factors that make economic growth possible are: deploying human labour more efficiently, bringing in tools, and setting labour aside for the systematic improvement of tools and working methods.

Because of the belief in progress these factors are exploited to the utmost, since the arrival of complete human happiness in society depends on their deployment. The belief in progress is directly coupled to a practical and concrete programme of improvements. If these growth factors are deployed in isolation and absolutely, however, then the individual initially does gain control over nature, but in the end he becomes dependent on the very factors he himself has placed on the throne.

After 1850 the belief in progress changes from an ideology into practical and measurable progress in technical, economic and scientific respects, and progressing becomes a goal in itself, more important than the final goal. It becomes clear that man is one step in the evolutionary chain of succession. Because the evolutionary process already existed, man is 'merely' a link and has become, instead of the subject of progress, an object of it. Another influence from evolutionary theory is that developments must come about through a selection process: competitive struggle and adaptation to the environment offer the best chances of final victory.

The development of capitalism after 1850

1) Enterprises grow larger; entrepreneurship and ownership become separated. The division of labour is perfected. For the sake of their independent continued existence, enterprises begin to claim management and capital: not the will of the entrepreneur but the law of social evolution has become the standard. The entrepreneurial function becomes bound to the firm: the firm as a system needs continuous management, and the enterprise absorbs the entrepreneurial function as part of its own system. Instead of the entrepreneur owning the enterprise, the enterprise now seems to become the owner of the entrepreneur. The same holds for technical innovation. Through these developments the provider of capital is no longer the only principal, and the goal of the enterprise shifts from return to continuity. In the new environment, where economic and social progress have become a constant, this amounts to the enterprise adapting itself to that progress.

2) The relation to competitors and consumers. The position vis-à-vis competitors is increasingly oriented toward rivalry. Through price competition under free competition, a monopoly may fall to the survivor / winner. Competition increasingly shifts towards technological competition and competition by influence: by steering the taste of the consumer, a loyal customer base is built up, securing the continuity of the enterprise. The tastes of customers are, as it were, incorporated into the planning of the enterprise. The consumer loses his sovereignty.

3) The relation of the enterprise to the government. Because laissez-faire was taken as the economic exponent of progress, with perfect competition as the ideal, the government had to involve itself actively in the economy. The government, too, did not escape the consequences of progress as a system. Another aspect was the fight against unemployment: to enable the individual to develop himself, employment was important, and there a role was reserved for the government.

Thus sovereignty is sacrificed to the ideal of progress by the enterprise, the entrepreneur, the consumer, the individual and the government. To keep continual progress going, a societal system is needed that serves it. Behaviour of people aimed at preventing an undesired end result of the prevailing system is a survival ethic. That offers nothing new, because it is derived from that same system, albeit in order to prevent an undesired outcome.

The adapted human

Quotation: 'In a society in which progress sets the tone and all human institutions and relations have become ever more attuned to it, it is only logical that the members of that society are also fundamentally touched and influenced by it in their own thinking and acting. Why would entities and institutions such as technology, the government, the price and monetary system and the entrepreneurial function be exposed to forces of internalisation while man himself is not? He too will be drawn, almost irresistibly, into the force field of progress.' Examples are:

  • The entrepreneur or manager must constantly think and act in terms of keeping up economically and staying ahead technically. This demands a deep mental effort and an intervention in one's attitude to life
  • For the employee, much work is highly fragmented and thereby stripped of its humanity. That affects his thoughts and his dealings with others. If that way of working is appreciated at all, it seems to be more the consequence of the individual having adapted to the machine than the other way around
  • Relations between people give way to relations to things. Even though this is clear, there is no change in this approach; it seems as if Western man has become paralysed within this social order and as if the maker of progress grows ever more powerless with respect to that progress
  • The choice for the individual is now between turning away from this society in a revolutionary manner or continuously and without limit adapting to all the demands of the system

The seat of power

Western man is strongly bound to his belief in progress, and that belief has religious aspects. A characteristic of every faith and every form of religion is that it never leaves its adherents unchanged: it puts its stamp on them, on their thinking and on their relations with others. The sense of powerlessness is connected with the faith dimension of the progress motive: one's own power has been delegated, and the belief in progress invites precisely that transfer of power. The question is no longer how we see the economy, technology and science, but how they see us.

The humanist ideals of control and of freedom are related to each other, because personal freedom is best expressed through control over the world. There is a tension between them, for both lay claim to the whole person and the whole world. You cannot want to know all the processes in the world yet renounce that as soon as your personal freedom is threatened, nor impose your free will on the whole world yet stop as soon as you see that your control of the world is thereby compromised. The belief in progress, which arose from humanism, carries that tension within it as well. It can do nothing other than subordinate the whole world and everything in it to itself.

Western society has ordered and arranged itself into a goal-directed system for the promotion of economic and technical progress, and it exerts continuous pressure on individuals to adapt. This objectification of people is connected with the urge for control of those same people and their belief in progress. Thompson: 'We started with the law of the survival of the fittest, but now end with the law of the fitting of the survivors'. At this point I recall the first sentence of the book: through the belief in progress, every individual does everything possible to turn this model of progress into reality.

Goudzwaard calls this society a closed or tunnel society: a tight organisation combined with all-dominating goals. Within it, everything is put in service of reaching the end of the tunnel as quickly as possible. In social relations, nothing has value that does not contribute to moving forward in that tunnel; everything else is pointless or worthless. This is an exaggerated model: most Western societies are less closed. The key to escaping this tunnel must be sought at the root of the humanist ideal, namely control versus freedom. That transcends all political or institutional forces: the structure of the progress-based society is so strong that its workings carry on, like a flywheel. A change must aim at breaking the autonomous role of the forces of progress in society. Moreover, for a change, progress must no longer set the standard for society. Progress thinking started roughly 250 years ago and is present down to the smallest details and across the full breadth of Western society; it is to be expected that this will not change in the short term.

Levitt on Public Opinion

The topics in the book Freakonomics are everyday ones: it describes mechanisms in everyday situations. The approach is practical and not an attempt at an all-encompassing economic theory. The definition of economics I was taught myself is 'the behaviour of people at the intersection of supply and demand'. Levitt uses this variant: 'explaining how people get what they want'. His starting point is that what happens between people is knowable. The outcomes are often intriguing because they are surprising and counter-intuitive. The researcher, that is the author, does not let himself be carried along by the 'communis opinio', public opinion.

Levitt quotes J.K. Galbraith on public opinion, in English 'conventional wisdom', as follows: 'We associate truth with convenience, with what most closely accords with self-interest and personal well-being, or promises best to avoid awkward effort or unwelcome dislocation of life. We also find highly acceptable what contributes most to self-esteem. Economic and social behaviors are complex, and to comprehend their character is mentally tiring. Therefore we adhere, as though to a raft, to those ideas which represent our understanding'.

I devote this post to the book because the approach appeals to me: the outcomes of any research, model or theory should at a minimum describe, explain and preferably predict something about the world around us in a practical sense. Below are a number of abbreviated, recognisable examples; the book also contains the statistical underpinning. For although I have a lot of confidence in the 'wisdom of the crowd', I do not have it in public opinion.

Examples

What causes the drop in city crime after 1995?
People think: gun laws, a strong economy, a new approach by the police, better prisons, an ageing population.
The incentive: the legalisation of abortion.
The facts: over time, fewer births into poor single-parent families, hence fewer potential criminals.

Does an estate agent get the highest price for your house?
People think: the agent knows the market, knows the value of the house, knows the behaviour of buyers; in short, has better knowledge and is better informed.
The incentive: the agent earns a percentage of the total sale price.
The facts: for example 6%, split equally with the buyer's agent and then equally with the office = 1.5%. A difference of 10k yields 150 euros. Selling faster is better than getting a higher price.

Do teachers adjust their pupils' marks?
People think: a good teacher => high marks => the children do well, and so does the teacher.
The incentive: honour and glory, (financial) advantage.
The facts: yes: the teacher wants to score better by covering up that he himself performs poorly.

Is the sumo competition fair?
People think: an honourable and ancient sport: wins are not bought, losses are not tolerated.
The incentive: tournaments (66 wrestlers) consist of 15 bouts; win 8 and you stay on the circuit. Staying there, plus a 100K bonus for the tournament winner.
The facts: on the last tournament day, wrestlers at 7-7 often win against those at 8-6 or 9-5. The latter can no longer win the tournament, so they let the 7-7 wrestler win; the next tournament they win themselves.

Are people trustworthy?
People think: people faithfully pay for what they take from an unattended counter.
The incentive: eat the bagel anyway, without paying.
The facts: poor payers are employees of large companies, people in relatively bad weather, on emotionally 'charged' holidays (Christmas), in unpleasant companies, and of higher rank.

Why is the KKK hard to fight?
People think: an impenetrable bastion with its own code; it seems dangerous because of its secrets.
The incentive: a secret society with its own (odd) rituals, and status for members.
The facts: once the secret information becomes public, the 'élan' disappears.

Why do drug dealers live with their mothers?
People think: crack dealers earn a lot of money; in exchange they take big risks.
The incentive: the chance to earn a lot of money, and the prospect of success and honour. There is only a small chance of becoming successful legally, and in poor neighbourhoods the supply of youths with few prospects is large.
The facts: as in all sectors, the income of the 'employees' in drug gangs is a fraction of that of the top.

Does a child's success depend on its upbringing?
People think: a better (and more intensive) upbringing leads to a higher chance of social success for the children.
The incentive: more action and activities by the parents lead to successful children.
The facts: it is not what the parents do but what they are that leads to the children's success. A 'tiger mom' is not effective.

Does your name determine your future?
People think: your name influences your future success.
The incentive: the parent gives the child a name that embodies his expectations about the child's future.
The facts: your name does not determine your future.

Conclusions and relevance

  • Incentives determine the behaviour of the person who receives them, not necessarily of the person for whom the benefit was intended.
  • Public opinion is often wrong. It, too, has its own reasons for holding a view, but those reasons are not (necessarily) reality.
  • Large effects often have a small, subtle origin.
  • Experts use their informational and knowledge advantage for their own gain.
  • If X and Y behave the same, is that correlation or causation? In the former case: is X the reason for Y, Y the reason for X, or is Z the reason for both X and Y?
  • People are not good at estimating risks. Things that happen now are weighted more heavily. Things that evoke horror are weighted more heavily.
  • The approach of this research is relevant to my own research, as described in the first paragraph.
  • The outcomes give meaning to the interactions between people: the incentive determines the direction of the recipient's behaviour and thus affects the recipient's autonomy. She no longer decides autonomously what seems good to her, but follows the incentive of someone who pays her for a particular outcome.

Based on the book Freakonomics – A Rogue Economist Explores the Hidden Side of Everything, by Steven D. Levitt and Stephen J. Dubner.

Luyendijk's Pilots

This post is a response to the recent and valuable book by Joris Luyendijk: Dit Kan Niet Waar Zijn. As a 'journalist trained as an anthropologist', and without prior knowledge of financial markets, Luyendijk analyses the behaviour of people in their professional habitat: the financial sector in London. His first interview question is roughly this: 'how can you live with yourself after what you have done to humanity in the crisis of 2008?'. His picture after about two years of research and 200 interviews: an aeroplane in trouble and an empty cockpit. With the knowledge I have gathered so far about complex adaptive systems, I go looking for Luyendijk's missing pilots.

North on Institutional Change

This post is based on the article 'Some Fundamental Puzzles in Economic History/Development' by D.C. North, published in the SFI Proceedings 'The Economy as an Evolving Complex System'.

North introduced the term Institutional Economics; see the previous post: Economische modellen. In this article he asks why the historical development of countries differs so much from one country to another, and how economic change can be incorporated into models.

Padgett on self-organisation

This post is largely based on the article 'The Emergence of Simple Ecologies of Skill: A Hypercycle Approach to Economic Organisation' by John F. Padgett, included in the Santa Fe Proceedings 'The Economy as an Evolving Complex System'.

This article is one of the keys to my research, because it answers the question of how coherence can arise among activities in which that coherence is not explicit. The model presented also proposes a mechanism by which local actions propagate to global behaviour. The model connects with my 'fields of activities' (see the post Simplexity en Complicity), Dennett's Concepts, Dawkins's Memes, Holland's Bucket Brigade algorithm (see the post Inductie) and proposals by Kauffman. The model is embedded in evolutionary theory and within it gives a foundation to the concept of organisation. Finally, there is a hint of a natural morality arising from the form of the process, and I will devote another post to that.