Ashby, W.R., ‘Principles of the Self-Organizing System’, in: Principles of Self-Organization: Transactions of the University of Illinois Symposium, H. von Foerster and G.W. Zopf, Jr. (eds.), Pergamon Press, London, UK, pp. 255-278, 1962
What is organization?
‘The hard core of the concept (of organization DPB) is, in my opinion, that of ‘conditionality’. As soon as the relation between two entities A and B becomes conditional on C’s value or state then a necessary component of ‘organization’ is present. Thus the theory of organization is partly co-extensive with the theory of functions of more than one variable’ [Ashby 1962 p 256, emphasis of the author]. DPB: this is my example of the chess board FIND CHESS: the pieces are, apparently, organized by the conditions imposed by the others. Refer to this text there. The converse of ‘conditional on’ is ‘not conditional on’: the converse of ‘organization’ is separability or reducibility. See below. In a mathematical sense this means that some parts of a function of many variables do not depend on some other parts of it. In a mechanical sense it means that some components of a machine work independently of other components of that machine. DPB: the outcome of the function or the machine depends on the workings of the reducible variables in a simple way. The converse of conditionality is reducibility. DPB: conditionality implies organization; reducibility implies a lack of organization. This is the opposite of what I thought, because whatever is organized is repetitive, a pattern, and it can be reduced away, because it can be summarized in a rule.
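DPB: the distinction can be sketched minimally in code (my own illustration, not Ashby’s; the functions are invented for the example). In the first function the relation between a and b is conditional on c, so ‘organization’ is present; in the second, no term couples two variables, so the function is separable:

```python
# Conditionality: how a and b combine depends on the value of c.
def conditional(a, b, c):
    # The relation between a and b is conditional on c: organization.
    return a + b if c > 0 else a - b

# Reducibility: each variable contributes independently of the others.
def reducible(a, b, c):
    # No term couples two variables: separable, no organization.
    return (a * a) + (b + 1) + (c * 2)
```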
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently (if it existed) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. “Harder” means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, or the need for extra hardware such as processor cores for a parallel solution compared to a single-threaded solution). We write A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used (m: mapping reduction, p: polynomial reduction). Reductions are used in two ways. First, we may find ourselves trying to solve a problem that is similar to a problem we have already solved. In such cases a quick way of solving the new problem is often to transform each instance of the new problem into instances of the old problem, solve those using our existing solution, and use the results to obtain our final solution. This is perhaps the most obvious use of reductions. Second, suppose we have a problem that we have proven is hard to solve, and we have a similar new problem that we suspect is also hard to solve. We argue by contradiction: suppose the new problem is easy to solve. Then, if we can show that every instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard. In mathematics, a topological space is called separable if it contains a countable, dense subset [Wikipedia].
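DPB: a toy sketch of a mapping reduction (my own illustration; the two problems are invented for the example). Problem A (is n even?) is reduced to problem B (is n odd?) by the instance mapping n → n + 1, so a solver for B doubles as a subroutine for A:

```python
# Problem B: is n odd?
def is_odd(n):
    return n % 2 == 1

# Problem A: is n even?  Reduce A to B via the mapping n -> n + 1,
# since n is even exactly when n + 1 is odd.
def is_even(n):
    return is_odd(n + 1)  # A solved as a subroutine call to B
```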
‘The treatment of ‘conditionality’ (whether by functions of many variables, by correlation analysis, by uncertainty analysis, or by other ways) makes us realize that the essential idea is that there is first a product space – that of the possibilities – within which some sub-set of points indicates the actualities. This way of looking at ‘conditionality’ makes us realize that it is related to that of ‘communication’; and it is, of course, quite plausible that we should define parts as being ‘organized’ when ‘communication’ (in some generalized sense) occurs between them. (Again the natural converse is that of independence, which represents non-communication.)’ [Ashby 1962 p 257 emphasis of the author]. DPB: the first sentence bears a relation to the virtual-actual-real. The second sentence can be read as the existence of some sort of relation between the organized parts, and hence a kind of communication takes place between them. When there is no communication, then A and B can be anywhere on the chess board; there is no constraint between them, and hence no organization: ‘Thus the presence of ‘organization’ between variables is equivalent to the existence of a constraint in the product-space of the possibilities. I stress this point, because while, in the past, biologists have tended to think of organization as something extra, something added to the elementary variables, the modern theory, based on the logic of communication, regards organization as a restriction or constraint’ [Ashby 1962 p 257 emphasis of the author]
DPB: This is much like the chess example: Organization comes from the elements, and it is not imposed from somewhere else. The product space of a system is its Idea. ‘Whence comes this product space? Its chief peculiarity is that it contains more than actually exists in the real physical world, for it is the latter that gives us the actual, constrained subset’ [Ashby 1962 p 257]. DPB: I have explained this in terms of individuation: the virtual+actual makes the real. Refer to this quote above at the chess game section!
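DPB: the product space versus the constrained subset can be sketched minimally (my own illustration, not Ashby’s; the variables and the constraint are invented). The product space holds all possibilities; the constraint cuts out the actual:

```python
from itertools import product

# Product space: every combination of states of A and B (3 states each).
possibilities = set(product(range(3), range(3)))  # 9 points: what could be

# Organization as constraint: B is conditional on A (here: B must equal A).
actual = {(a, b) for (a, b) in possibilities if b == a}  # what is

# The product space contains more than the actual subset; the surplus
# is the observer's uncertainty.
assert actual < possibilities
```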
‘The real world gives the subset of what is; the product space represents the uncertainty of the observer’ [Ashby 1962 p 258]. DPB: this is relevant too, because it relates to the virtual: everything it could be in the focus of the observer, its space of possibilities. The space changes when the observer changes, and two observers can have different spaces: ‘The ‘constraint’ is thus a relation between observer and thing; the properties of any particular constraint will depend on both the real thing and on the observer. It follows that a substantial part of the theory of organization will be concerned with properties that are not intrinsic to the thing but are relational between observer and thing’ [Ashby 1962 p 258]. Re: OBSERVER SUBJECT / OBJECT
Whole and Parts
As regards the concept of ‘organization’ it is assumed that there is a whole that is composed of parts: a) f(x) = x1 + x2 + .. + xn means that there are n parts in this system; b) S1, S2, .. means that there are states of a system S without mention of its parts, if any. The point is that a system can show dynamics without reference to parts, and that does therefore not refer to the concept of organization: the concepts are independent. This emphasizes the idea that organization is in the eye of the observer: ‘..I will state the proposition that: given a whole with arbitrarily given behavior, a great variety of arbitrary ‘parts’ can be seen in it; for all that is necessary, when the arbitrary part is proposed, is that we assume the given part to be coupled to another suitably related part, so that the two together form a whole isomorphic with the whole that was given’ [Ashby 1962 p 259]. DPB: an isomorphism is an invertible structure-preserving mapping. Does this mean that A and B are the structure that forms C, the whole, under a set of relations between A and B? ‘Thus, subject only to certain requirements (e.g. that equilibria map into equilibria) any dynamic system can be made to display a variety of arbitrarily assigned ‘parts’, simply by a change in the observer’s view point’ [Ashby 1962 p 260 emphasis of the author]. DPB: this is an important remark, which fits the Deleuze / Luhmann story about the observer. Also the pattern ‘versus’ coherence section. Re OBSERVER
Machines in general
The question is whether general systems theory deals with mathematical systems (in which case they need only be internally consistent) or with physical systems also, in which case they are tied to what the real world offers. Machines need not be material, and reference to energy is irrelevant. ‘A ‘machine’ is that which behaves in a machine-like way, namely, that its internal state, and the state of its surroundings, defines uniquely the next state it will go to’ [Ashby 1962 p 261]. This definition was originally proposed in [Ashby, W.R., ‘The Physical Origin of Adaptation by Trial and Error’, J. Gen. Psychol., 32, pp. 13-25, 1945]. DPB: this is very applicable to FIND INDIVIDUATION. See how to incorporate it there as a quote. I is the set of input states, S is the set of internal states, f is a mapping of I × S into S. The ‘organization’ of a machine is f: change f and the organization changes. ‘In other words, the possible organizations between the parts can be set into one-one correspondence with the set of possible mappings of I × S into S. Thus ‘organization’ and ‘mapping’ are two ways of looking at the same thing – the organization being noticed by the observer of the actual system, and the mapping being recorded by the person who represents the behavior in mathematical or other symbolism’ [Ashby 1962 p 262]. DPB: I referred to the organization as per Ashby observed as a pattern, which is the result of a coherence of the system in focus; Ashby says the actual system. Re COHERENCE PATTERN
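DPB: Ashby’s machine can be sketched as a transition table (my own encoding; the states and the two mappings are invented for the example). Two different mappings f and g of I × S into S over the same parts are two different organizations, and they produce different behavior:

```python
I = {0, 1}        # input states
S = {'a', 'b'}    # internal states

# One possible organization: a mapping f : I x S -> S.
f = {(0, 'a'): 'a', (1, 'a'): 'b',
     (0, 'b'): 'b', (1, 'b'): 'a'}

# A different mapping g: a different organization of the same parts.
g = {(0, 'a'): 'b', (1, 'a'): 'a',
     (0, 'b'): 'a', (1, 'b'): 'b'}

def run(mapping, inputs, s0):
    # Input and internal state uniquely determine the next state.
    s = s0
    for i in inputs:
        s = mapping[(i, s)]
    return s
```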
Whether an ‘organization’ is good depends on its usefulness. Biological systems have often come to be useful (DPB: preserving something, rendering it irreversible) under the pressure of natural selection. Engineered systems are often not useful: a) most organizations are bad ones; b) the good ones have to be sought for; c) what is meant by ‘good’ must be clearly defined, explicitly if necessary, in every case. What is meant by a ‘good’ organization of a brain? In the case of organisms this is the case if it supports their survival. In general: an organization can be considered ‘good’ if it keeps the values of a set of (essential) variables within their particular limits. These are mechanisms for homeostasis: the organization is ‘good’ if it makes the system stable around an equilibrium. The essence of the idea is that a number of variables so interact as to achieve some given ‘focal condition’. But: ‘.. what I want to say here – there is no such thing as ‘good organization’ in any absolute sense. Always it is relative; and an organization that is good in one context or under one criterion may be bad under another’ [Ashby 1962 p 263 emphasis of the author]. DPB: the OUTBOARD ENGINE is good at producing exhaust fumes and consuming toxic fossil materials, and not good at driving boats. Every faculty of a brain is conditional, because it can be handicapped in at least one environment by precisely that faculty: ‘.. whatever that faculty or organization achieves, let that be not in the focal conditions’ [p 264 emphasis of the author]. There is no faculty (property, organization) of the brain that cannot be (or become) undesirable, even harmful, under certain circumstances. ‘Is it not good that a brain should have memory? Not at all, I reply – only when the environment is of a type in which the future often copies the past; should the future often be the inverse of the past, memory is actually disadvantageous. .. 
Is it not good that a brain should have its parts in rich functional connection? I say NO – not in general; only when the environment is itself richly connected. When the environment’s parts are not richly connected (when it is highly reducible, in other words), adaptation will go faster if the brain is also highly reducible, i.e. if its connectivity is small (Ashby 1960, d)’ [Ashby 1962 pp. 264-5]. DPB: this is relevant for the holes that Vid can observe where others are. Re VID. Ashby refers to Sommerhoff: a set of disturbances must be given as well as a focal condition. The disturbances threaten to drive the outcome outside of the focal condition. The ‘good’ organization is the relation between the set of disturbances and the goal (the focal condition): change the circumstances and the outcome will no longer meet the goal, and the organization will be evaluated as ‘bad’.
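DPB: a minimal sketch of ‘good’ as keeping an essential variable within limits (my own illustration; the feedback rule and the limits are assumptions). The same organization is ‘good’ under small disturbances and ‘bad’ under large ones, which shows the relativity Ashby stresses:

```python
LIMITS = (-1.0, 1.0)  # the focal condition for the essential variable

def regulated_step(x, disturbance):
    # A simple negative-feedback rule pulling x back toward 0.
    return 0.5 * x + disturbance

def is_good(disturbances, x0=0.0):
    # 'Good' relative to this set of disturbances: the essential
    # variable stays within its limits throughout.
    x = x0
    for d in disturbances:
        x = regulated_step(x, d)
        if not (LIMITS[0] <= x <= LIMITS[1]):
            return False  # focal condition violated: 'bad' here
    return True
```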
Two meanings of the concept of ‘self-organization’: a) changing from parts separated to parts joined (‘changing from unorganized to organized’), a concept that can also be covered by ‘self-connecting’; b) ‘changing from a ‘bad’ organization to a ‘good’ one’ [Ashby 1962 p 267]. DPB: do I address this somewhere in regard to self-organization? I guess I talk only about the first meaning. The second one refers to the case where the organization changes itself from showing bad behavior to showing good behavior. ‘..no machine can be self-organizing in this sense’ [Ashby 1962 p 267]. f: I × S → S. f is defined as a set of couples such that si leads to sj by the internal drive of the system. To allow f to be a function of the state is to make nonsense of the whole concept. DPB: but this is exactly what individuation does! ‘Were f in the machine to be some function of the state S, we would have to redefine our machine’ [Ashby 1962 p 268]. DPB: the function does not depend on the set S, because then all of the states, past and present, could be occurring simultaneously; hence the reference to the new machine. But, given the concept of individuation, should it not depend on the present state in S? ‘We start with the set S of states, and assume that f changes, to g say. So we really have a variable, a(t) say, a function of time that had at first the value f and later the value g. This change, as we have just seen, cannot be ascribed to any cause in the set S; so it must have come from some outside agent, acting on the system S as input. If the system is to be in some sense ‘self-organizing’, the ‘self’ must be enlarged to include this variable a, and, to keep the whole bounded, the cause of a’s change must be in S (or a). Thus the appearance of being ‘self-organizing’ can be given only by the machine S being coupled to another machine (of one part)..’ [Ashby 1962 p 269]. DPB: Big surprise. How to deal with this? Through individuation; and I feel the use of time t as an independent variable is confusing. 
So what happens is that a is in the milieu. Therefore a is not in S. Therefore the Monad can only exist in the Nomad &c. Re INDIVIDUATION, MILIEU
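DPB: Ashby’s conclusion can be sketched as follows (my own encoding; the mappings and the switching time are invented for the example). The apparent ‘self-organization’ of S is produced by a second machine supplying the variable a(t), which selects whether f or g is in force:

```python
f = lambda s: (s + 1) % 4  # the organization in force while a == 0
g = lambda s: (s + 3) % 4  # a different organization, in force while a == 1

def machine_A(t):
    # The outside agent: a(t) has at first the value selecting f,
    # and later the value selecting g. Its cause is not in S.
    return 0 if t < 3 else 1

def coupled_run(s0, steps):
    # S appears to reorganize itself, but only because it is coupled
    # to machine_A, which supplies a.
    s = s0
    for t in range(steps):
        mapping = f if machine_A(t) == 0 else g
        s = mapping(s)
    return s
```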
The spontaneous generation of organization
‘.. every isolated determinate dynamic system obeying unchanging laws will develop ‘organisms’ that are adapted to their ‘environments’. The argument is simple enough in principle. We start with the fact that systems in general go to equilibrium. Now most of a system’s states are non-equilibrial (if we exclude the extreme case of the systems in neutral equilibrium). So in going from any state to one of the equilibria, the system is going from a larger number of states to a smaller. In this way it is performing a selection, in the purely objective sense that it rejects some states, by leaving them, and retains some other state, by sticking to it. Thus, as every determinate system goes to equilibrium, so does it select. ## up to here? We have heard ad nauseam the dictum that a machine cannot select; the truth is just the opposite: every machine, as it goes to equilibrium, performs the corresponding act of selecting ##. Now, equilibrium in simple systems is usually trivial and uninteresting .. when the system is more complex and dynamic, equilibrium, and the stability around it, can be much more interesting. .. What makes the change, from trivial to interesting, is simply the scale of the events. ‘Going to equilibrium’ is trivial in the simple pendulum, for the equilibrium is no more than a single point. But when the system is more complex; when, say, a country’s economy goes back from wartime to normal methods, then the stable region is vast, and much more interesting activity can occur within it’ [Ashby 1962 pp. 270-1]. DPB: this is useful in regards to the selective mechanisms of individuation. Re MACHINES
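DPB: this ‘selection by going to equilibrium’ can be sketched minimally (my own illustration; the law is an arbitrary deterministic map, not Ashby’s). From any starting state the system rejects states by leaving them and retains the equilibrium by sticking to it:

```python
def step(s):
    # An unchanging, single-valued law; 0 is its equilibrium (step(0) == 0).
    return s // 2

def trajectory(s0):
    # Follow the law until the system sticks to a state: the states
    # passed through are rejected, the final one is selected.
    seen = [s0]
    while step(seen[-1]) != seen[-1]:
        seen.append(step(seen[-1]))
    return seen
```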
‘So the answer to the question: How can we generate intelligence synthetically? is as follows. Take a dynamic system whose laws are unchanging and single-valued, and whose size is so large that after it has gone to an equilibrium that involves only a small fraction of its total states, this small fraction is still large enough to allow room for a good deal of change and behavior. Let it go on for a long enough time to get to such an equilibrium. Then examine the equilibrium in detail. You will find that the states or forms now in being are peculiarly able to survive against the disturbances induced by the laws. Split the equilibrium in two, call one part ‘organism’ and the other part ‘environment’: you will find that this ‘organism’ is peculiarly able to survive the disturbances from this ‘environment’. The degree of adaptation and complexity that this organism can develop is bounded only by the size of the whole dynamic system and by the time over which it is allowed to progress towards equilibrium. Thus, as I said, every isolated determinate dynamic system will develop organisms that are adapted to their environments. .. In this sense, then, every machine can be thought of as ‘self-organizing’, for it will develop, to such a degree as its size and complexity allow, some functional structure homologous with an ‘adapted organism’ [Ashby 1962 p 272]. DPB: I know this argument and I have quoted it before, I seem to remember in Design for a Brain or else the article about Requisite Variety. FIND NOMAD MONAD. The point seems to be that the environment serves as the a, but it is not an extension of the machine in the sense that it belongs to it, because it belongs to the machine’s environment and is by definition not a part of the machine. ‘To itself, its own organization will always, by definition, be good. .. But these criteria come after the organization for survival; having seen what survives we then see what is ‘good’ for that form. What emerges depends simply on what are the system’s laws and from what state it started; there is no implication that the organization developed will be ‘good’ in any absolute sense, or according to the criterion of any outside body such as ourselves’ [Ashby 1962 p 273]. DPB: this is Wolfram’s point that the outcome is determined only by the rules and the initial conditions.