Ashby Principles of the self-organizing system

Ashby W.R. . Principles of the Self-Organizing System . Principles of Self-Organization: Transactions of the University of Illinois Symposium, H. Von Foerster and G.W. Zopf, Jr. (editors) . Pergamon Press, London, UK, pp. 255-278 . 1962

What is organization?

‘The hard core of the concept (of organization DPB) is, in my opinion, that of ‘conditionality’. As soon as the relation between two entities A and B becomes conditional on C’s value or state then a necessary component of ‘organization’ is present. Thus the theory of organization is partly co-extensive with the theory of functions of more than one variable’ [Ashby 1962 p 256, emphasis of the author]. DPB: this is my example of the chess board FIND CHESS and, apparently, how the pieces are organized by the conditions of the others. Refer to this text there. The converse of ‘conditional on’ is ‘not conditional on’: the converse of ‘organization’ is separability or reducibility. See below. In a mathematical sense this means that some parts of a function of many variables do not depend on some other parts of it. In a mechanical sense it means that some components of a machine work independently of other components of that machine. DPB: the outcome of the function or the machine depends on the workings of the reducible variables in a simple way. The converse of conditionality is reducibility. DPB: conditionality implies organization. Reducibility implies a lack of organization. This is the opposite of what I thought, because whatever is organized is repetitive, a pattern, and it can be reduced away, because it can be summarized in a rule.
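A minimal sketch of this distinction in Python (my own toy functions, not from Ashby): in a separable (reducible) function the effect of one variable never depends on another, while in a conditional function it does.

```python
# A reducible (separable) function: the effect of x never depends on y.
def separable(x, y):
    return 2 * x + 3 * y

# A conditional function: the effect of x on the output depends on y,
# so a necessary component of 'organization' is present.
def conditional(x, y):
    return x * y

def effect_of_x(f, y):
    """Change in f when x goes from 0 to 1, at a fixed y."""
    return f(1, y) - f(0, y)

# For the separable function the effect of x is the same whatever y is:
assert effect_of_x(separable, 0) == effect_of_x(separable, 5) == 2
# For the conditional function it is not:
assert effect_of_x(conditional, 0) != effect_of_x(conditional, 5)
```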

In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently (if it existed) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. “Harder” means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, expensive need for extra hardware processor cores for a parallel solution compared to a single-threaded solution, etc.). We write A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used (m : mapping reduction, p : polynomial reduction). First, we find ourselves trying to solve a problem that is similar to a problem we’ve already solved. In these cases, often a quick way of solving the new problem is to transform each instance of the new problem into instances of the old problem, solve these using our existing solution, and then use these to obtain our final solution. This is perhaps the most obvious use of reductions. Second: suppose we have a problem that we’ve proven is hard to solve, and we have a similar new problem. We might suspect that it is also hard to solve. We argue by contradiction: suppose the new problem is easy to solve. Then, if we can show that every instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard. In mathematics, a topological space is called separable if it contains a countable, dense subset [Wikipedia].
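The reduction idea can be sketched with a deliberately trivial toy example (mine, not from the quoted text): the problem "find the maximum" reduces to the already-solved problem "find the minimum" by transforming the instance and mapping the answer back.

```python
# Toy mapping reduction: 'find the maximum' (A) reduces to
# 'find the minimum' (B).
def solve_min(xs):
    # the problem we already know how to solve (B)
    best = xs[0]
    for x in xs:
        if x < best:
            best = x
    return best

def solve_max(xs):
    # A <= B: transform the instance (negate), solve B, map the answer back
    return -solve_min([-x for x in xs])

assert solve_max([3, 1, 4, 1, 5]) == 5
# Solving A this way is no harder than solving B plus the cheap transform.
```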

‘The treatment of ‘conditionality’ (whether by functions of many variables, by correlation analysis, by uncertainty analysis, or by other ways) makes us realize that the essential idea is that there is first a product space – that of the possibilities – within which some sub-set of points indicates the actualities. This way of looking at ‘conditionality’ makes us realize that it is related to that of ‘communication’; and it is, of course, quite plausible that we should define parts as being ‘organized’ when ‘communication’ (in some generalized sense) occurs between them. (Again the natural converse is that of independence, which represents non-communication.)’ [Ashby 1962 p 257 emphasis of the author]. DPB: the first sentence bears a relation to the virtual-actual-real. The second sentence can be read as the existence of some sort of a relation between the organized parts. And hence a kind of communication takes place between them. When there is no communication, then A and B can be wherever on the chess board, and there is no constraint between them, and hence no organization: ‘Thus the presence of ‘organization’ between variables is equivalent to the existence of a constraint in the product-space of the possibilities. I stress this point, because while, in the past, biologists have tended to think of organization as something extra, something added to the elementary variables, the modern theory, based on the logic of communication, regards organization as a restriction or constraint’ [Ashby p 257 emphasis of the author].
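The constraint-in-product-space idea can be made concrete with the chess board (a toy sketch of my own): enumerate the product space of positions of two pieces, then cut it down by a constraint.

```python
from itertools import product

# The 64 squares of the chess board; a 'state' of two pieces A and B is a
# point in the product space of their individual possibilities.
squares = [(f, r) for f in range(8) for r in range(8)]
possibilities = list(product(squares, squares))

# 'Organization' as constraint: say A and B may not occupy the same square.
actualities = [(a, b) for a, b in possibilities if a != b]

assert len(possibilities) == 64 * 64       # the product space: 4096 points
assert len(actualities) == 64 * 64 - 64    # the constrained subset: 4032
```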

DPB: This is much like the chess example: Organization comes from the elements, and it is not imposed from somewhere else. The product space of a system is its Idea. ‘Whence comes this product space? Its chief peculiarity is that it contains more than actually exists in the real physical world, for it is the latter that gives us the actual, constrained subset’ [Ashby p 257]. DPB: I have explained this in terms of individuation: the virtual+actual makes the real. Refer to this quote above at the chess game section!

‘The real world gives the subset of what is; the product space represents the uncertainty of the observer’ [Ashby 1962 p 258]. DPB: this is relevant too, because it relates to the virtual: everything it could be in the focus of the observer, its space of possibilities. The space changes when the observer changes and two observers can have different spaces: ‘The ‘constraint’ is thus a relation between observer and thing; the properties of any particular constraint will depend on both the real thing and on the observer. It follows that a substantial part of the theory of organization will be concerned with properties that are not intrinsic to the thing but are relational between observer and thing’ [Ashby p 258]. Re: OBSERVER SUBJECT / OBJECT

Whole and Parts

As regards the concept of ‘organization’ it is assumed that there is a whole that is composed of parts: a) f(x) = x1 + x2 + .. + xn means that there are n parts in this system. b) S1, S2, .. means that there are states of a system S without mention of its parts, if any. The point is that a system can show dynamics without reference to parts, and that therefore does not refer to the concept of organization: the concepts are independent. This emphasizes the idea that organization is in the eye of the observer: ‘..I will state the proposition that: given a whole with arbitrarily given behavior, a great variety of arbitrary ‘parts’ can be seen in it; for all that is necessary, when the arbitrary part is proposed, is that we assume the given part to be coupled to another suitably related part, so that the two together form a whole isomorphic with the whole that was given’ [Ashby 1962 p 259]. DPB: isomorphic means an invertible structure-preserving mathematical mapping. Does this mean that A and B are the structure that forms C which is the whole under a set of relations between A and B? ‘Thus, subject only to certain requirements (e.g. that equilibria map into equilibria) any dynamic system can be made to display a variety of arbitrarily assigned ‘parts’, simply by a change in the observer’s view point’ [Ashby 1962 p 260 emphasis of the author]. DPB: this is an important remark that fits the Deleuze / Luhmann story about the observer. Also the pattern ‘versus’ coherence section. Re OBSERVER

Machines in general

The question is whether general systems theory deals with mathematical systems (in which case they need only be internally consistent) or with physical systems also, in which case they are tied to what the real world offers. Machines need not be material and reference to energy is irrelevant. ‘A ‘machine’ is that which behaves in a machine-like way, namely, that its internal state, and the state of its surroundings, defines uniquely the next state it will go to’ [Ashby 1962 p 261]. This definition was originally proposed in [Ashby W.R. . The Physical origin of adaptation by trial and error . J. Gen. Psychol., 32, pp. 13-25 . 1945]. DPB: this is very applicable to FIND INDIVIDUATION. See how to incorporate it there as a quote. I is the set of input states, S is the set of internal states, f is a mapping of I x S into S. The ‘organization’ of a machine is f: change f and the organization changes. ‘In other words, the possible organizations between the parts can be set into one-one correspondence with the set of possible mappings of I x S into S’. ‘Thus ‘organization’ and ‘mapping’ are two ways of looking at the same thing – the organization being noticed by the observer of the actual system, and the mapping being recorded by the person who represents the behavior in mathematical or other symbolism’ [Ashby p 262]. DPB: I referred to the organization as per Ashby observed as a pattern, which is the result of a coherence of the system in focus; Ashby says the actual system. Re COHERENCE PATTERN
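Ashby's machine can be sketched directly as the mapping f: I x S → S (the states, inputs and transitions below are made up for illustration):

```python
# A minimal Ashby 'machine': f maps (input, state) pairs into next states.
# The 'organization' of the machine is f itself: change the dict, change
# the organization. States and inputs here are arbitrary illustrations.
f = {
    ("a", "s0"): "s1",
    ("a", "s1"): "s1",
    ("b", "s0"): "s0",
    ("b", "s1"): "s0",
}

def step(inp, state):
    # input and internal state uniquely define the next state
    return f[(inp, state)]

state = "s0"
for inp in ["a", "a", "b"]:
    state = step(inp, state)
assert state == "s0"
```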

‘Good’ organization

Whether an ‘organization’ is good depends on its usefulness. Biological systems have often come to be useful (DPB: preserving something, rendering it irreversible) under the pressure of natural selection. Engineered systems are often not useful: a) most organizations are bad ones b) the good ones have to be sought for c) what is meant by ‘good’ must be clearly defined, explicitly if necessary, in every case. What is meant by a ‘good’ organization of a brain? In the case of organisms this is the case if it supports its survival. In general: an organization can be considered ‘good’ if it keeps the values of a set of (essential) variables within their particular limits. These are mechanisms for homeostasis: the organization is ‘good’ if it makes the system stable around an equilibrium. The essence of the idea is that a number of variables so interact as to achieve some given ‘focal condition’. But: ‘.. what I want to say here – there is no such thing as ‘good organization’ in any absolute sense. Always it is relative; and an organization that is good in one context or under one criterion may be bad under another’ [Ashby 1962 p 263 emphasis of the author]. DPB: the OUTBOARD ENGINE is good at producing exhaust fumes and consuming toxic fossil materials and not good at driving boats. Every faculty of a brain is conditional because it can be handicapped in at least one environment by precisely that faculty: ’.. whatever that faculty or organization achieves, let that be not in the focal conditions’ [p 264 emphasis of the author]. There is no faculty (property, organization) of the brain that cannot be (become) undesirable, even harmful under certain circumstances. ‘Is it not good that a brain should have memory? Not at all, I reply – only when the environment is of a type in which the future often copies the past; should the future often be the inverse of the past, memory is actually disadvantageous. .. 
Is it not good that a brain should have its parts in rich functional connection? I say NO – not in general; only when the environment is itself richly connected. When the environment’s parts are not richly connected (when it is highly reducible in other words), adaptation will go faster if the brain is also highly reducible, i.e. if its connectivity is small (Ashby 1960, d)’ [Ashby 1962 pp. 264-5]. DPB: this is relevant for the holes that Vid can observe where others are. re VID Ashby refers to Sommerhoff: a set of disturbances must be given as well as a focal condition. The disturbances threaten to drive the outcome outside of the focal condition. The ‘good’ organization is the relation between the set of disturbances and the goal (the focal condition): change the circumstances and the outcome will no longer meet the goal and the organization will be evaluated as ‘bad’.
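A toy sketch (my own, not from Ashby) of a ‘good’ organization as one that keeps an essential variable within limits under disturbances, and of the relativity of ‘good’ to the chosen criterion:

```python
# Toy homeostat: negative feedback keeps an 'essential variable' near 0;
# whether the organization counts as 'good' depends on the focal condition.
def regulate(value, disturbance, gain=0.5):
    # negative feedback: push the essential variable back towards 0
    return value + disturbance - gain * value

disturbances = [1, -1, 1, 1, -1, 1, -1, -1, 1, 1] * 20
v, trajectory = 0.0, []
for d in disturbances:
    v = regulate(v, d)
    trajectory.append(v)

def is_good(traj, limit):
    # 'good' relative to a focal condition: stay within the limits
    return all(abs(x) <= limit for x in traj)

assert is_good(trajectory, limit=3.0)       # good under this criterion...
assert not is_good(trajectory, limit=0.5)   # ...bad under a stricter one
```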

Self-Organizing Systems

Two meanings of the concept: a) Changing from parts separated to parts joined (‘Changing from unorganized to organized’), and this concept can also be covered with the concept of self-connecting b) ‘Changing from a ‘bad’ organization to a ‘good’ one’ [Ashby 1962 p 267]. DPB: do I address this somewhere as regards self-organization? I guess I talk only about the first meaning. The last one refers to the case where the organization changes itself from showing bad behavior to showing good behavior. ‘..no machine can be self-organizing in this sense’ [Ashby 1962 p 267]. f: I x S → S. f is defined as a set of couples such that si leads to sj by the internal drive of the system. To allow f to be a function of the state is to make nonsense of the whole concept. DPB: but this is exactly what individuation does! ‘Were f in the machines to be some function of the state S, we would have to redefine our machine’ [Ashby 1962 p 268]. DPB: the function does not depend on the set S, because then all of the states, past and present, could be occurring simultaneously, hence the reference to the new machine. But, given the concept of individuation, it should depend on the present in S? ‘We start with the set S of states, and assume that f changes, to g say. So we really have a variable, a(t) say, a function of time that had at first the value f and later the value g. This change, as we have just seen, cannot be ascribed to any cause in the set S; so it must have come from some outside agent, acting on the system S as input. If the system is to be in some sense ‘self-organizing’, the ‘self’ must be enlarged to include this variable a, and, to keep the whole bounded, the cause of a’s change must be in S (or a). Thus the appearance of being ‘self-organizing’ can be given only by the machine S being coupled to another machine (of one part)..’ [p 269]. DPB: Big surprise. How to deal with this? Through individuation, and I feel the use of time t as an independent variable is confusing. 
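Ashby's conclusion can be sketched as two coupled machines (a toy example of my own): S cannot change its own mapping, but a coupled part a can select between mappings f and g, so the pair appears ‘self-organizing’.

```python
# S cannot change its own f, but a coupled one-part machine 'a' can
# select which mapping S uses. The mappings are arbitrary illustrations.
f = {0: 1, 1: 0}          # one organization: oscillate
g = {0: 0, 1: 1}          # another organization: hold still

def step(state, a):
    mapping = f if a == "f" else g
    return mapping[state]

def step_a(state, a):
    # a's change is caused by S's state, so the coupled whole (S, a)
    # appears 'self-organizing' even though neither part changes itself
    return "g" if state == 1 else a

state, a = 0, "f"
history = []
for _ in range(4):
    state, a = step(state, a), step_a(state, a)
    history.append((state, a))

# S oscillates under f, then a switches the organization to g and S holds:
assert history == [(1, "f"), (0, "g"), (0, "g"), (0, "g")]
```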
So what happens is that a is in the milieu. Therefore a is not in S. Therefore the Monad can only exist in the Nomad &c. Re INDIVIDUATION, MILIEU

The spontaneous generation of organization

‘.. every isolated determinate dynamic system obeying unchanging laws will develop ‘organisms’ that are adapted to their ‘environments’. The argument is simple enough in principle. We start with the fact that systems in general go to equilibrium. Now most of a system’s states are non-equilibrial (if we exclude the extreme case of the systems in neutral equilibrium). So in going from any state to one of the equilibria, the system is going from a larger number of states to a smaller. In this way it is performing a selection, in the purely objective sense that it rejects some states, by leaving them, and retains some other state, by sticking to it. Thus, as every determinate system goes to equilibrium, so does it select. ## up to here? We have heard ad nauseam the dictum that a machine cannot select; the truth is just the opposite: every machine, as it goes to equilibrium, performs the corresponding act of selecting ##. Now, equilibrium in simple systems is usually trivial and uninteresting … when the system is more complex and dynamic, equilibrium, and the stability around it, can be much more interesting. .. What makes the change, from trivial to interesting, is simply the scale of the events. ‘Going to equilibrium’ is trivial in the simple pendulum, for the equilibrium is no more than a single point. But when the system is more complex; when, say, a country’s economy goes back from wartime to normal methods then the stable region is vast, and much more interesting activity can occur within it’ [Ashby 1962 pp. 270-1]. DPB: this is useful in regards the selective mechanisms of individuation re machines.
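The ‘going to equilibrium is selecting’ argument can be sketched with a small determinate system (an arbitrary mapping of my own): eight states funnel into two equilibria.

```python
# A determinate system 'selects' by going to equilibrium: iterate a fixed
# single-valued mapping and watch many initial states funnel into few.
next_state = {0: 1, 1: 2, 2: 2, 3: 2, 4: 5, 5: 5, 6: 4, 7: 0}

def equilibrium(s, steps=20):
    for _ in range(steps):
        s = next_state[s]
    return s

finals = {equilibrium(s) for s in next_state}
assert finals == {2, 5}   # 8 states selected down to 2 equilibria
```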

Competition

So the answer to the question: How can we generate intelligence synthetically? is as follows. Take a dynamic system whose laws are unchanging and single-valued, and whose size is so large that after it has gone to an equilibrium that involves only a small fraction of its total states, this small fraction is still large enough to allow room for a good deal of change and behavior. Let it go on for a long enough time to get to such an equilibrium. Then examine the equilibrium in detail. You will find that the states or forms now in being are peculiarly able to survive against the disturbances induced by the laws. Split the equilibrium in two, call one part ‘organism’ and the other part ‘environment’: you will find that this ‘organism’ is peculiarly able to survive the disturbances from this ‘environment’. The degree of adaptation and complexity that this organism can develop is bounded only by the size of the whole dynamic system and by the time over which it is allowed to progress towards equilibrium. Thus, as I said, every isolated determinate dynamic system will develop organisms that are adapted to their environments. .. In this sense, then, every machine can be thought of as ‘self-organizing’, for it will develop, to such a degree as its size and complexity allow, some functional structure homologous with an ‘adapted organism’ [Ashby 1962 p 272]. DPB: I know this argument and I’ve quoted it before, I seem to remember in Design for a Brain or else the article about Requisite Variety. FIND NOMAD MONAD The point seems to be that the environment serves as the a, but it is not an extension of the machine in the sense that it belongs to it, because it belongs to its environment and is by definition not a part of it. ‘To itself, its own organization will always, by definition, be good. .. But these criteria come after the organization for survival; having seen what survives we then see what is ‘good’ for that form. 
What emerges depends simply on what are the system’s laws and from what state it started; there is no implication that the organization developed will be ‘good’ in any absolute sense, or according to the criterion of any outside body such as ourselves’ [p 273]. DPB: this is the point of Wolfram that the outcome is only defined by the rules and the initial conditions.

Chemical Organization Theory and Autopoiesis

E-mail communication of Francis Heylighen on 29 May 2018:

Inspired by the notion of autopoiesis (“self-production”) that Maturana and Varela developed as a definition of life, I wanted to generalize the underlying idea of cyclic processes to other ill-understood phenomena, such as mind, consciousness, social systems and ecosystems. The difference between these phenomena and the living organisms analysed by Maturana and Varela is that the former don’t have a clear boundary or closure that gives them a stable identity. Yet, they still exhibit this mechanism of “self-production” in which the components of the system are transformed into other components in such a way that the main components are eventually reconstituted.

This mechanism is neatly formalized in COT’s notion of “self-maintenance” of a network of reactions. I am not going to repeat this here but refer to my paper cited below. Instead, I’ll give a very simple example of such a circular, self-reproducing process:

A -> B,

B -> C,

C -> A

The components A, B, C are here continuously broken down but then reconstituted, so that the system rebuilds itself, and thus maintains an invariant identity within a flux of endless change.
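This cycle can be simulated directly (a sketch assuming nothing beyond the three reactions above, with one unit of each component):

```python
from collections import Counter

# The cycle A -> B, B -> C, C -> A: components are consumed but
# reconstituted, so the overall mixture is invariant.
reactions = [({"A": 1}, {"B": 1}), ({"B": 1}, {"C": 1}), ({"C": 1}, {"A": 1})]

def fire(state, reaction):
    inputs, outputs = reaction
    return state - Counter(inputs) + Counter(outputs)

state = Counter({"A": 1, "B": 1, "C": 1})
for _ in range(3):                 # three full turns of the cycle
    for r in reactions:
        state = fire(state, r)

# Identity maintained within a flux of endless change:
assert state == Counter({"A": 1, "B": 1, "C": 1})
```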

A slightly more complex example:

A + X -> B + U

B + Y -> C + V

C + Z -> A + W

Here A, B, and C need the resources (inputs, or “food”) X, Y and Z to be reconstituted, while producing the waste products U, V, and W. This is more typical of an actual organism that needs inputs and outputs while still being “operationally” closed in its network of processes.
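The same simulation for this metabolism-like cycle (a sketch of only the three reactions above, with single units of each resource): A, B, C are reconstituted while food is consumed and waste produced.

```python
from collections import Counter

# A + X -> B + U,  B + Y -> C + V,  C + Z -> A + W
reactions = [
    ({"A": 1, "X": 1}, {"B": 1, "U": 1}),
    ({"B": 1, "Y": 1}, {"C": 1, "V": 1}),
    ({"C": 1, "Z": 1}, {"A": 1, "W": 1}),
]

state = Counter({"A": 1, "B": 1, "C": 1, "X": 1, "Y": 1, "Z": 1})
for inputs, outputs in reactions:
    state = state - Counter(inputs) + Counter(outputs)

# The organization (A, B, C) is reconstituted; food became waste:
assert state["A"] == state["B"] == state["C"] == 1
assert state["X"] == state["Y"] == state["Z"] == 0
assert state["U"] == state["V"] == state["W"] == 1
```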

In more complex processes, several components are being simultaneously consumed and produced, but so that the overall mixture of components remains relatively invariant. In this case, the concentration of the components can vary the one relative to the other, so that the system never really returns to the same state, only to a state that is qualitatively equivalent (having the same components but in different amounts).

One more generalization is to allow the state of the system to also vary qualitatively: some components may (temporarily) disappear, while others are newly added. In this case, we  no longer have strict autopoiesis or [closure + self-maintenance], i.e. the criterion for being an “organization” in COT. However, we still have a form of continuity of the organization based on the circulation or recycling of the components.

An illustration would be the circulation of traffic in a city. Most vehicles move to different destinations within the city, but eventually come back to destinations they have visited before. However, occasionally vehicles leave the city that may or may not come back, while new vehicles enter the city that may or may not stay within. Thus, the distribution of individual vehicles in the city changes quantitatively and qualitatively while remaining relatively continuous, as most vehicle-position pairs are “recycled” or reconstituted eventually. This is what I call circulation.

Most generally, what circulates are not physical things but what I have earlier called challenges. Challenges are phenomena or situations that incite some action. This action transforms the situation into a different situation. Alternative names for such phenomena could be stimuli (phenomena that stimulate an action or process), activations (phenomena that are active, i.e. ready to incite action) or selections (phenomena singled out as being important, valuable or meaningful enough to deserve further processing). The term “selections” is the one used by Luhmann in his autopoietic model of social systems as circulating communications.

I have previously analysed distributed intelligence (and more generally any process of self-organization or evolution) as the propagation of challenges: one challenge produces one or more other challenges,  which in turn produce further challenges, and so on. Circulation is a special form of propagation in which the initial challenges are recurrently reactivated, i.e. where the propagation path is circular, coming back to its origins.

This to me seems a better model of society than Luhmann’s autopoietic social systems. The reason is that proper autopoiesis does not really allow the system to evolve, as it needs to exactly rebuild all its components, without producing any new ones. With circulating challenges, the main structure of society is continuously rebuilt, thus ensuring the continuity of its organization, however while allowing gradual changes in which old challenges (distinctions, norms, values…) dissipate and new ones are introduced.

Another application of circulating challenges are ecosystems. Different species and their products (such as CO2, water, organic material, minerals, etc.) are constantly recycled, as the one is consumed in order to produce the other, but most are eventually reconstituted. Yet, not everything is reproduced: some species may become extinct, while new species invade the ecosystem. Thus the ecosystem undergoes constant evolution, while being relatively stable and resilient against perturbations.

Perhaps the most interesting application of this concept of circulation is consciousness. The “hard problem” of consciousness asks why information processing in the brain does not just function automatically or unconsciously, the way we automatically pull back our hand from a hot surface, before we even have become conscious of the pain of burning. The “global workspace” theory of consciousness says that various subconscious stimuli enter the global workspace in the brain (a crossroad of neural connections in the prefrontal cortex), but that only a few are sufficiently amplified to win the competition for workspace domination. The winners are characterized by much stronger activation and their ability to be “broadcasted” to all brain modules (instead of remaining restricted to specialized modules functioning subconsciously). These brain modules can then each add their own specific interpretation to the “conscious” thought.

In my interpretation, reaching the level of activation necessary to “flood” the global workspace means that activation does not just propagate from neuron to neuron, but starts to circulate so that a large array of neurons in the workspace are constantly reactivated. This circulation keeps the signal alive long enough for the different specialized brain modules to process it, and add their own inferences to it. Normally, activation cannot stay in place, because of neuronal fatigue: an excited neuron must pass on its “action potential” to connected neurons, it cannot maintain activation. To maintain an activation pattern (representing a challenge) long enough so that it can be examined and processed by disparate modules that pattern must be stabilized by circulation.

But circulation, as noted, does not imply invariance or permanence, merely a relative stability or continuity that undergoes transformations by incoming stimuli or on-going processing. This seems to be the essence of consciousness: on the one hand, the content of our consciousness is constantly changing (the “stream of consciousness”), on the other hand that content must endure sufficiently long for specialized brain processes to consider and process it, putting part of it in episodic memory, evaluating part of it in terms of its importance, deciding to turn part of it into action, or dismissing or vetoing part of it as inappropriate.

This relative stability enables reflection, i.e. considering different options implied by the conscious content, and deciding which ones to follow up, and which ones to ignore. This ability to choose is the essence of “free will”. Subconscious processes, on the other hand, just flow automatically and linearly from beginning to end, so that there is no occasion to interrupt the flow and decide to go somewhere else. It is because the flow circulates and returns that the occasion is created to interrupt it after some aspects of that flow have been processed and found to be misdirected.

To make this idea of repetition with changes more concrete, I wish to present a kind of “delayed echo” technique used in music. One of the best implementations is Frippertronics, invented by avant-garde rock guitarist Robert Fripp (of King Crimson): https://en.wikipedia.org/wiki/Frippertronics

The basic implementation consists of an analogue magnetic tape on which the sounds produced by a musician are recorded. However, after having passed the recording head of the tape recorder, the tape continues moving until it is read by another head that reads and plays the recorded sound. Thus, the sound recorded at time t is played back at time t + T, where the interval T depends on the distance between the recording and playback heads. But while the recorded sound is played back, the recording head continues recording all the sound, played either by the musician(s) or by the playback head, on the same tape. Thus, the sound propagates from musician to recording head, from where it is transported by tape to the playback head, from where it is propagated in the form of a sound wave back to the recording head, thus forming a feedback loop.
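The tape loop can be sketched as a delayed feedback line (my own toy parameters, not from Fripp's setup): the signal recorded at time t is played back, attenuated, at time t + T and re-recorded.

```python
# Toy tape loop: what is recorded now is the current input plus the
# attenuated recording from T steps ago (the playback head's signal).
def tape_loop(inputs, T=3, decay=0.5):
    recorded = []
    for t, x in enumerate(inputs):
        playback = decay * recorded[t - T] if t >= T else 0.0
        recorded.append(x + playback)
    return recorded

# A single pulse at t=0 keeps returning every T steps, fading each time:
out = tape_loop([1.0, 0, 0, 0, 0, 0, 0], T=3, decay=0.5)
assert out == [1.0, 0, 0, 0.5, 0, 0, 0.25]
```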

If T is short, the effect is like an echo, where the initial sound is repeated a number of times until it fades away (under the assumption that the playback is slightly less loud than the original sound). For a longer T, the repeated sound may not be immediately recognized as a copy of what was recorded before given that many other sounds have been produced in the meantime. What makes the technique interesting is that while the recorded sounds are repeated, the musician each time adds another layer of sound to the layers already on the recording. This allows the musician to build up a complex, multilayered, “symphonic” sound, where s/he is being accompanied by her/his previous performance. The resulting music is repetitive, but not strictly so, since each newly added sound creates a new element, and these elements accumulate so that they can steer the composition in a wholly different direction.

This “tape loop” can be seen as a simplified (linear or one-dimensional) version of what I called circulation, where the looping or recycling maintains a continuity, while the gradual fading of earlier recordings and the addition of new sounds creates an endlessly evolving “stream” of sound. My hypothesis is that consciousness corresponds to a similar circulation of neural activation, with the different brain modules playing the role of the musicians that add new input to the circulating signal. A difference is probably that the removal of outdated input does not just happen by slow “fading” but by active inhibition, given that the workspace can only sustain a certain amount of circulating activation, so that strong new input tends to suppress weaker existing signals. This and the complexity of circulating in several directions of a network may explain why conscious content appears much more dynamic than repetitive music.

Chemical Organization Theory as a Modeling Tool

Heylighen, F., Beigi, S. and Veloz, T. . Chemical Organization Theory as a modeling framework for self-organization, autopoiesis and resilience . Paper to be submitted based on working paper 2015-01.

Introduction

Complex systems consist of many interacting elements that self-organize: coherent patterns of organization or form emerge from their interactions. There is a need for a theoretical understanding of self-organization and adaptation: our mathematical and conceptual tools are limited for the description of emergence and interaction. The reductionist approach analyzes a system into its constituent static parts and their variable properties; the state of the system is determined by the values of these variable properties and processes are transitions between states; the different possible states determine an a priori predefined state-space; only after introducing all these static elements and setting up a set of conditions for the state-space can we study the evolution of the system in that state-space. This approach makes it difficult to understand a system property such as emergent behavior. Process metaphysics and action ontology assume that reality is not constituted from things but from processes or actions; the difficulty is to represent these processes in a precise, simple, and concrete way. This paper aims to formalize these processes as reaction networks of chemical organization theory; here the reactions are the fundamental elements, the processes are primary; states take the second place as the changing of the ingredients as the processes go on; the molecules are not static objects but raw materials that are produced and consumed by the reactions. COT is a process ontology; it can describe processes in any sphere and hence in any scientific discipline; ‘.. method to define and construct organizations, i.e. self-sustaining networks of interactions within a larger network of potential interactions. .. suited to describe self-organization, autopoiesis, individuation, sustainability, resilience, and the emergence of complex, adaptive systems out of simpler components’ [p 2]. DPB: this reminds me of the landscape of Jobs; all the relevant aspects are there. 
It is hoped that this approach helps to answer the question: How does a system self-organize; how are complex wholes constructed out of simpler elements?

Reaction Networks

A reaction network consists of resources and reactions. The resources are distinguishable phenomena in some shared space, a reaction vessel, called the medium. The reactions are elementary processes that create or destroy resources. RN = <M,R>, where RN is a reaction network, M is the set of resources and R the set of reactions: M = {a,b,c,…} and R is a subset of P(M) x P(M), where P(M) is the power set (set of all subsets) of M. Each reaction transforms a subset I (Input) of M into a subset O (Output) of M; the resources in I are the reactants and the resources in O are the products; I and O are multisets, meaning that resources can occur more than once. r: x1+x2+x3+..→y1+y2+… The + in the left term means a conjunction of necessary resources x: if all are simultaneously present in I(r) then the reaction takes place and produces the products y.
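A minimal sketch of this definition (my own illustration, with Python's Counter playing the role of the multiset): a reaction fires only when all of its reactants are simultaneously present.

```python
from collections import Counter

# RN = <M, R>: a reaction maps an input multiset I(r) to an output
# multiset O(r). Here a single illustrative reaction r: a + b -> 2c.
R = [(Counter({"a": 1, "b": 1}), Counter({"c": 2}))]

def applicable(state, reaction):
    # the reaction fires only if all reactants are simultaneously present
    inputs, _ = reaction
    return all(state[x] >= n for x, n in inputs.items())

def fire(state, reaction):
    inputs, outputs = reaction
    return state - inputs + outputs

state = Counter({"a": 1, "b": 1})
assert applicable(state, R[0])
state = fire(state, R[0])
assert state == Counter({"c": 2})     # a and b consumed, 2 c produced
assert not applicable(state, R[0])    # reactants no longer available
```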

Reaction Networks vs. Traditional Networks

The system <M,R> forms a network because the resources in M are linked by the reactions in R transforming one resource into another. What is specific to COT is that a reaction represents the transformation of a multiplicity of resources into another multiplicity of them: a set I transforms into a set O. DPB: this reminds me of category theory. My principal question at this point is whether the problem of where organization is produced is not merely relocated: first the question was how to tweak static objects into self-organization, now it is which molecules, in which quantities and combinations, to bring together so as to get them to produce other resources and to exhibit patterns in doing so. In RN theory the transformation of resources can occur through a disjunction or a conjunction: the disjunction is represented by juxtaposed reaction formulae, the conjunction by the + within a reaction formula.

Reaction Networks and Propositional Logic

Conjunction: AND: &; Disjunction: OR: new reaction line; Implication: FOLLOWS: →; Negation: NOT: −. For instance: a&b&c&… → x. But the resources on the I side are not destroyed by the process, so formally a&b&… → a&b&x&… Logic is static because no propositions are destroyed: new implications can be found, but nothing new is created. Negation can be thought of as the production of the absence of a resource: a+b → c+d is the same as a → c+d−b. I and O can be empty, so a resource can be created from nothing (affirmation, → a) or a resource can produce nothing (elimination, a →, or → −a). Another example is → a+(−a), and hence a−a → and → a−a: the idea is that a particle and its anti-particle annihilate one another, but they can also be created together from nothing.

Competition and cooperation

The concept of negative resources allows the expression of conflict, contradiction or inhibition: a → −b, which is the same as a+b → ∅ (the empty set): the more a is produced, the less of b is present: the causal relation is negative. The relation “a inhibits b” holds if a is required to consume but not produce b. The opposite, “a promotes b”, means that a is required to produce but not to consume b. The inhibiting and promoting relations can be symmetrical, so that a and b inhibit each other (a and b are competitors) or promote each other (a and b are cooperators), but they do not need to be. Inhibition is a negative causal influence and promotion a positive one. If a cycle contains only positive influences or an even number of negative influences, then positive feedback occurs; when the number of negative influences is odd, negative feedback occurs. Negative feedback leads to stabilization or oscillation, positive feedback leads to exponential growth. In a social network a particular message can be promoted, suppressed or inhibited by another. Interactions in the network occur through their shared resources.
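The sign rule for feedback loops is simply the product of the signs of the influences along the cycle. A minimal sketch, with my own function name and encoding (+1 for "promotes", −1 for "inhibits"):

```python
# An even number of negative links along a cycle yields positive
# (reinforcing) feedback; an odd number yields negative (balancing)
# feedback. The loop sign is the product of the link signs.

def feedback_sign(influences):
    """influences: list of +1 (promotes) / -1 (inhibits) along a cycle."""
    sign = 1
    for s in influences:
        sign *= s
    return 'positive' if sign > 0 else 'negative'

print(feedback_sign([+1, -1, -1]))  # two negatives -> 'positive'
print(feedback_sign([+1, -1, +1]))  # one negative  -> 'negative'
```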

Organizations

In COT an organization is defined as a self-sustaining reaction system: produced and consumed resources are the same: ‘This means that although the system is intrinsically dynamic or process-based, constantly creating or destroying its own components, the complete set of its components (resources) remains invariant, because what disappears in one reaction is recreated by another one, while no qualitatively new components are added’ [p 8]. DPB: I find this an appealing idea. But I find it also hard to think of the basic components that would make up a particular memeplex, even using the connotations. What in other words would the resources have to be, and what the reactions, to construct a memeplex from them? If the resource is an idea then one idea leads to another, which matches my theory. But this method would have to cater for reinforcement: the idea itself does not much change, yet it does get reinforced as it is repeated. And in addition how would the connotation be attached to them: or must it be seen as an ‘envelope’ that contains the address &c, and that ‘arms’ the connoted idea (meme) to react (compare) with others such that the ranking order in the mind of the person is established? And such that a stable network of memes is established such that they form a memeplex. The property of organization above is central to the theory of autopoiesis, but, as stated in the text, without the boundary of a living system. But I don’t agree with this: the RC church has a very strong boundary that separates it from everything that is not the RC church. And so the RN model should cater for more complexity than only the forming of molecules (‘prior to the first cell’). The organization of a sub-RN <M’,R> of a larger RN <M,R> is defined by these characteristics:
1. closure: when I(r) is a subset of M’ then O(r) is a subset of M’, for all reactions r;
2. semi-self-maintenance: no existing resource is removed: each resource consumed by some reaction is produced again by some other reaction working on the same starting set; and
3. self-maintenance: each consumed resource x element of M’ is produced by some reaction in <M’,R> in at least the same amount as the amount consumed (this is a difficult one, because a ledger is required over the existence of the system to account for the quantities of each resource).
‘We are now able to define the crucial concept of organization: a subset of resources and reactions <M’,R> is an organization when it is closed and self-maintaining. This basically means that while the reactions in R are processing the resources in set M’, they leave the set M’ invariant: no resources are added (closure) and no resources are removed (self-maintenance)’ (emphasis of the author) [p 9]. The difference with other models is that the basic assumption is that everything changes; this concept of organization means that stability can arise while everything changes continually; in fact this is the definition of autopoiesis.
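The closure and semi-self-maintenance conditions can be checked mechanically. A rough sketch, my own formalization of the definitions above: reactions are (inputs, outputs) frozensets, quantities are ignored, so only the qualitative conditions are tested; full self-maintenance would additionally need a quantitative flux check (e.g. via linear programming), which this sketch omits.

```python
# Check whether a resource subset M1 is closed and semi-self-maintaining
# under a list of reactions given as (inputs, outputs) frozensets.

def applicable(reactions, M1):
    # The reactions whose reactants all lie within M1.
    return [(I, O) for I, O in reactions if I <= M1]

def is_closed(reactions, M1):
    # Closure: no applicable reaction produces a resource outside M1.
    return all(O <= M1 for I, O in applicable(reactions, M1))

def is_semi_self_maintaining(reactions, M1):
    # Every resource consumed by some applicable reaction is also
    # produced by some applicable reaction.
    app = applicable(reactions, M1)
    consumed = set().union(*(I for I, O in app)) if app else set()
    produced = set().union(*(O for I, O in app)) if app else set()
    return consumed <= produced

# Example: an autocatalytic pair  a + b -> b  and  b -> a
reactions = [(frozenset({'a', 'b'}), frozenset({'b'})),
             (frozenset({'b'}), frozenset({'a'}))]
M1 = {'a', 'b'}
print(is_closed(reactions, M1), is_semi_self_maintaining(reactions, M1))  # True True
```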

Some examples

If a resource appears in both the I and the O then it is a catalyst.

Extending the model

A quantitative shortcoming, and hence a possible extension, is the absence of the relative proportions of resources and of the relative speeds of the reactions. To extend it quantitatively, the model can be detailed to encompass all the processes that make up some particular ecology of reactions.

Self-organization

If we apply the rules for closure and maintenance we can understand how organization emerges. If a reaction is added, either a source for some resource is added, which breaks closure, or a sink is added, which breaks self-maintenance. In general a starting set of resources will not be closed; their reactions will lead to new resources and so on; but the production of new resources will stop when no new resources are possible given the resources in the system; at that point closure is reached: ‘Thus, closure can be seen as an attractor of the dynamics defined by resource addition: it is the end point of the evolution, where further evolution stops’ [p 12]. As regards self-maintenance, starting at the closed set, some of the resources will be consumed but not produced in sufficient amounts to replace the used amounts; these will disappear from the set; this does not affect closure because the loss of resources cannot add new resources; resources now start to disappear one by one from the set; this process stops when the remaining resources only depend on the remaining ones (and not on the disappeared ones): ‘Thus, self-maintenance too can be seen as an attractor of the dynamics defined by resource removal. The combination of resource addition ending in closure followed by resource removal ending in self-maintenance produces an invariant set of resources and reactions. This unchanging reaction network is by definition an organization’ [p 12]. Every dynamic system will end up in an attractor, namely a stationary regime that the system cannot leave: ‘In the attractor regime the different components of the system have mutually adapted, in the sense that the one no longer threatens to extinguish the other: they have co-evolved to a “symbiotic” state, where they either peacefully live next to each other, or actively help one another to be produced, thus sustaining their overall interaction’ [p 12].
DPB: from the push and pull of these different attractors emerges (or is selected) an attractor that manages the behavior of the system.
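The two attractor dynamics described above, resource addition ending in closure followed by resource removal ending in self-maintenance, can be sketched directly. This is my own speculative code, again qualitative (quantities ignored), with reactions as (inputs, outputs) frozensets:

```python
# First expand a starting set with everything the reactions can produce
# until closure is reached; then repeatedly drop resources that are
# consumed but not (re)produced, until the set is invariant.

def close(reactions, seed):
    M1 = set(seed)
    changed = True
    while changed:
        changed = False
        for I, O in reactions:
            if I <= M1 and not O <= M1:  # reaction fires, adds new resources
                M1 |= O
                changed = True
    return M1

def prune(reactions, M1):
    M1 = set(M1)
    while True:
        app = [(I, O) for I, O in reactions if I <= M1]
        produced = set().union(*(O for I, O in app)) if app else set()
        consumed = set().union(*(I for I, O in app)) if app else set()
        dying = (consumed - produced) & M1   # consumed but never replaced
        if not dying:
            return M1
        M1 -= dying

def organization(reactions, seed):
    # Closure followed by pruning yields an invariant set: an organization.
    return prune(reactions, close(reactions, seed))

# Example: a -> b, b -> c, c -> b. Starting from {a}, closure adds b and c;
# pruning then removes a (consumed, never produced), leaving the cycle.
reactions = [(frozenset({'a'}), frozenset({'b'})),
             (frozenset({'b'}), frozenset({'c'})),
             (frozenset({'c'}), frozenset({'b'}))]
print(sorted(close(reactions, {'a'})))        # ['a', 'b', 'c']
print(sorted(organization(reactions, {'a'}))) # ['b', 'c']
```

The invariant set {b, c} is exactly the self-sustaining cycle, illustrating how the organization is an attractor of these addition/removal dynamics.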

Sustainability and resilience

An organization in the above sense is by definition self-maintaining and therefore sustainable. Many organizations grow because they produce more resources than they consume (the resources are overproduced, e.g. through positive feedback). Sustainability means the ability of an organization to grow without outside interference. Resilience means the ability to maintain the essential organization in the face of outside disturbances; a disturbance can be represented by the injection or removal of a resource that reacts with others in the system. Processes of control are: buffering, negative feedback, and feedforward (neutralizing the disturbance before it has taken effect). The larger the variety of controls the system sports, the more disturbances it can handle, an implementation of Ashby’s law of requisite variety. Arbitrary networks of reactions will self-organize to produce sustainable organizations, because an organization is an attractor of their dynamics. DPB: this attractor issue, bearing in mind the difficulties of change management, reminds me of the text about the limited room an attracted system takes up in state-space (containment); it explains why a system, once it is ‘attracted’, will not change to another state without an effort of galactic proportions. ‘However, evolutionary reasoning shows that resilient outcomes are more likely in the long run than fragile ones. First, any evolutionary process starts from some arbitrary point in the state space of the system, while eventually reaching some attractor region within that space. Attractors are surrounded by basins, from which all states lead into the attractor (Heylighen, 2001). The larger the basin of an attractor, the larger the probability that the starting point is in that basin. Therefore, the system is more likely to end up in an attractor with a large basin than in one with a small basin. The larger the basin, the smaller the probability that a disturbance pushing the system out of its attractor would also push it out of the basin, and therefore the more resilient the organization corresponding to the attractor. Large basins normally represent stable systems characterized by negative feedback, since the deviation from the attractor is automatically counteracted by the descent back into the attractor. .. However, these unstable attractors will normally not survive long, as nearly any perturbation will push the system out of that attractor’s basin into the basin of a different attractor. .. This very general, abstract reasoning makes it plausible that systems that are regularly perturbed will eventually settle down in a stable, resilient organization’ [p 15].

Metasystem transitions and topological structures

A metasystem transition = a major evolutionary transition = the emergence of a higher-order organization from lower-order organizations. COT can describe this if an organization S (itself a system of elements, albeit organized ones) behaves like a resource of the catalyst type: invariant under reactions, but with an input of resources it consumes, I(S), and an output of resources it produces, O(S), resulting in the higher-order reaction I(S) + S → S + O(S). Assume that I(S) = {a,b} and O(S) = {c,d,e}; then this can be rewritten as a+b+S → S+c+d+e. S itself consists of organized elements and behaves like a black box processing some input into an output. If S is resilient it can even respond to changes in its input with a changed output. Now the design space of metasystems can be widened to include catalyst resources of the type S: organizations that are self-maintaining and closed.
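The lifting of an organization to a catalyst-type resource is mechanical. A small illustration, with my own function name `lift` and made-up resource names, matching the a+b+S → S+c+d+e example above:

```python
# An organization S with input set I(S) and output set O(S) appears at
# the higher level as a catalyst: it occurs on both sides of the
# higher-order reaction I(S) + S -> S + O(S).

def lift(name, inputs, outputs):
    """Build the higher-order reaction in which organization `name`
    acts as a catalyst resource."""
    return (frozenset(inputs) | {name}, frozenset(outputs) | {name})

I_r, O_r = lift('S', {'a', 'b'}, {'c', 'd', 'e'})
print(sorted(I_r), '->', sorted(O_r))  # ['S', 'a', 'b'] -> ['S', 'c', 'd', 'e']
```

Because S is on both sides, it is invariant under the reaction, which is exactly the catalyst condition noted under "Some examples".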

Concrete applications

It is possible to mix different kinds of resources; this enables the modeling of complex environments, and it is likely to make the ensuing systems’ organizations more stable. ‘Like all living systems, the goal or intention of an organization is to maintain and grow. To achieve this, it needs to produce the right actions for the right conditions (e.g. produce the right resource to neutralize a particular disturbance). This means that it implicitly contains a series of “condition-action rules” that play the role of the organization’s “knowledge” on how to act in its environment. The capability of selecting the right (sequence of) action(s) to solve a given problem constitutes the organization’s “intelligence”. To do this, it needs to perceive what is going on in its environment, i.e. to sense particular conditions (the presence or absence of certain resources) that are relevant to its goals. Thus, an organization can be seen as a rudimentary “intelligence” or “mind”’ [p 20]. DPB: I find this interesting because of the explanation of how such a model would work: the resources are the rules that the organization needs to sort out and put in place at the right occasion.