Alchian – Uncertainty, Evolution, and Economic Theory

Alchian, A.A. . Uncertainty, Evolution, and Economic Theory . The Journal of Political Economy Vol 58, No 3 pp. 211-221 . The University of Chicago Press . 1950

DPB: firms are exposed to the scrutiny of their environment and they can be adopted by it to a varying extent. The development of the elements in the environment is largely random. Firms are more inert and they develop less. But firms are not passive: they can be adaptive and devise methods to achieve success. One method of adaptation is to imitate the organized behavior of a successful firm, in the hope that success comes with the imitated behavior. Another method is to perform trial-and-error at any point in the development. Whether these experiments are successful can only be established in practice. Predicting success from them is impossible because the relations between the firm and its environment are too complex, too variable and too numerous. This is useful: it corroborates the idea that the organization of a firm is just-so. The starting point is that chance plays a pivotal role in the development and that personal brilliance has only a limited effect on the success of the firm.

Incorporate incomplete information and uncertain foresight as axioms in economic theory. Profit maximization is dispensed with, as is predictable individual behavior. This approach embodies the concepts of biological evolution and natural selection. The economic system is seen as an adoptive mechanism that chooses among exploratory actions adaptively, pursuing ‘success’ or ‘profit’. DPB: this sounds like ideas for profit being explored by the economic system through a selection mechanism. The approach applies not only to the standard situation but also to what are considered aberrations by the existing theory. The postulates of accurate anticipation and fixed states of knowledge are removed. Structure of the article: 1) where foresight is uncertain, the principle of ‘profit maximization’ is meaningless as a guide to action. 2) Construction begins with the introduction of the concept of environmental adoption: the a posteriori most appropriate action based on the criterion of ‘realized positive profits’. The concept of environmental adoption is then fused with individually motivated behavior based on uncertainty and incomplete information: ‘Adaptive, imitative, and trial-and-error behavior in the pursuit of ‘positive profits’ is utilized rather than its sharp contrast, the pursuit of ‘maximized profits’’ [Alchian 1950 p 211]. DPB: this is very interesting, because it allows for just-so elements, namely trial and error of economic practice. 3) Conclusions and conjectures.

1 Profit maximization not a guide to action

Economic agents are assumed to use demand and supply curves, but their positions and slopes are uncertain. Under uncertainty one action can have various results, according to a distribution. After the action, the result will come to the fore. But the distributions of the results of different actions overlap. Maximizing a distribution is not a meaningful operation. Selecting the action that generates maximum profit is only possible if the distributions do not overlap. If they do overlap, then no single action is pointed out as the best one. The task is therefore not to maximize profit, but to choose an action that leads to an optimal distribution, which leads to a goal defined in terms of positive profits.
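A minimal numeric sketch (my own illustration, all numbers assumed) of why overlapping outcome distributions make ‘maximum profit’ ill-defined as a guide to action: neither hypothetical action wins in every trial, so only a distribution, such as the probability of a positive profit, can be chosen ex ante.

```python
# Hypothetical illustration: two actions whose profit outcomes are distributions,
# not points. Because the distributions overlap, no action 'maximizes profit'
# ex ante; only realized outcomes can be ranked after the fact.
import random

random.seed(1)

def outcome_a():
    return random.gauss(10, 5)      # action A: modest mean, narrow spread

def outcome_b():
    return random.gauss(12, 20)     # action B: higher mean, wide spread

trials = 10_000
a_wins = sum(outcome_a() > outcome_b() for _ in range(trials)) / trials
p_pos_a = sum(outcome_a() > 0 for _ in range(trials)) / trials
p_pos_b = sum(outcome_b() > 0 for _ in range(trials)) / trials
print(f"A beats B in {a_wins:.0%} of trials")                 # neither dominates
print(f"P(positive profit): A = {p_pos_a:.2f}, B = {p_pos_b:.2f}")
```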

2 Success is based on results not motivation

‘Realized positive profits, not maximum profits, are the mark of success and viability. It does not matter through what process of reasoning or motivation such success was achieved. The fact of its accomplishment is sufficient. This is the criterion by which the economic system selects survivors: those who realize positive profits are the survivors; those who suffer losses disappear’ [Alchian 1950 p 213 (emphases by the author)]. Positive profits accrue to those who are better relative to their actual competitors (not to a hypothetical ideal one). The larger the uncertainty, the more likely profit goes to the venturesome and lucky rather than to the logical and well-informed. Concluding: a) success indicates relative superiority and b) not motivation but circumstance may lead to the positive profit. Competitors with the most appropriate conditions will be selected by the environment for testing and adoption.

3 Chance or luck is one method of achieving success

Determination of the situation and of what is appropriate in it depends on chance. The ability to adapt oneself to the situation is another element. The survivors may appear to have adapted themselves to the environment, or the environment has adopted the survivors (emphasis DPB). A useful example is presented: travelers have to choose a path from one city to another. Petrol is not available on all of the paths. The travelers do not know on which paths petrol is available and on which it is not. Only the travelers on a path with petrol can travel; the others cannot. The former are considered smart, the others are not. When the petrol supply is moved to another path, the travelers on that path move on and the others have to stop. Now these are considered the smart travelers and the others the foolish ones. The environment, namely the path infrastructure and the petrol supply, adopts the travelers. Their traveling skills can only be applied when the environment enables them to; they are ‘adoptable’. They travel when the environment ‘adopts’ them, and in that case they can show ‘their best traveling’, but whether they get the opportunity is decided by chance. DPB: can this be translated into a situation of attraction and repulsion? A path with petrol attracts travelers: they are given the chance to travel, and in a particular direction. They are restricted by the availability of petrol: purposeful action is attracted to where it is available and repelled from where it is lacking. The ‘correct’ direction of travel can be established if the availability of petrol on particular paths is certain. By determining the environment, the success of the travelers can be determined, as well as the conditions conducive to it.

4 Chance does not imply non-directed, random allocation of resources

It might seem that the facts of life deny chance as the deciding factor for the adoption principle in the economic system. The size of firms and their heritage seem to indicate wisdom and foresight. The mathematician Borel has shown that these examples provide no evidence against luck. If a million pairs play coin toss for 8 hours a day, one toss takes 1 second, and a pair stops playing as soon as the winner of the first toss has been drawn even again, then about 100 pairs are still in play after 10 years. And, if the game is inherited, statistically about a dozen pairs are still playing after a thousand years. So chance is likely to play a part in the survival of a 100-year-old firm. There are not too many but too few old firms to contradict this analysis. Note that a) if all economic actions were random, the variety would be large and therefore the probability is large that the path of perfect foresight would turn out to be one of the survivors without anyone ever having intended it. b) if some or even all of the participating firms behave non-randomly, then the set of their behaviors is indistinguishable from a random set in terms of variety. c) A chance-dominated model does not mean that the behavior cannot be predicted or explained. ‘It is sufficient if all firms are slightly different so that in the new environmental situation those who have their fixed internal conditions closer to the new, but unknown, optimum position now have a greater probability of survival and growth’ [Alchian 1950 p 216]. DPB: this matches very well the logic of process metaphysics. Where there are differences there is a chance that something will change. In that case an attractor can emerge from the changing environment to which a kind of firm, because of its ’internal conditions’, and knowingly or not, is attracted or repelled. This occurs because there are repeated trials and because there are more firms with a similar characteristic that have an elevated chance of landing in that basin of attraction and on that attractor. d) Not the characteristics of the firms change, but the characteristics of the set of firms that survives the new environmental circumstance. e) Individual motivations are a sufficient but not a necessary condition. What is necessarily required instead is the set of requirements of the economic circumstance.
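A back-of-the-envelope check (mine, not Alchian’s or Borel’s own derivation) using the standard first-return asymptotics of a fair random walk: the probability that a coin-tossing pair has never been even again after n tosses is roughly sqrt(2/(pi·n)).

```python
# Independent check of Borel's example, under the assumptions stated in the text:
# 1 toss per second, 8 hours per day, one million pairs. The survival probability
# after n tosses is approximated by sqrt(2 / (pi * n)).
import math

pairs = 1_000_000
tosses_per_year = 8 * 3600 * 365

for years in (10, 1000):
    n = years * tosses_per_year
    p_still_playing = math.sqrt(2 / (math.pi * n))
    print(f"after {years:>4} years: ~{pairs * p_still_playing:.0f} pairs still in play")
# Of the order of 100 pairs after 10 years and ~10 after 1000 years, in line with
# the figures quoted above: long survival alone is weak evidence against pure chance.
```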

5 Individual adapting via imitation and trial and error

Purposive motivation and foresight are added to the extreme model of adoption (not in order to merge it with perfect foresight &c. and profit maximization). It is assumed here that the objective is the sufficient condition of realized positive profit. That is the condition for survival (not profit maximization). The fulfillment of the pursuit of profit is rewarded with survival. Even perfect knowledge of past results and awareness of the present do not guarantee perfect foresight: chance remains a determining factor. As a consequence, modes of conscious adaptive behavior replace this knowledge: a) the common elements of behavior associated with the successes of successful enterprises are imitated. This is motivated by the absence of clear-cut criteria, the very large number of them, their variability, lack of room for trial and error, &c. Imitation also relieves one of the need to really innovate and of the responsibility for the outcome if it fails. ‘Unfortunately, failure or success often reflects the willingness to depart from rules, when conditions have changed; what counts, then, is not only imitative behavior but the willingness to abandon it at the ‘right’ time and circumstances. Those who are different and successful ‘become’ innovators, while those who fail ‘become’ reckless violators of tried-and-true rules’ [Alchian 1950 p 218]. DPB: behavior associated with success is replicated: perceive success and behavior, define which behavioral elements determine success and how, define the rules for one’s own behavior, mimic them as long as required. b) trial and error is a second type of adaptive behavior. A trial is made; with ensuing success the action is continued, with a lack of success the action is changed. But firstly a trial must be recognizable as a success or not (a local optimum). Secondly there can be no intermediate descent, or the approach will be abandoned. Neither condition is likely to hold in economic life. A changing environment prevents one from comparing a course of action with a predefined conception of success. These elements frustrate a trial-and-error process, because it becomes a survival-and-death situation, not a personal optimization approach. ‘Success is discovered by the economic system through a blanketing shotgun process, not by the individual through a converging search’ [Alchian 1950 p 219]. DPB: just-so, nomad/monad, individuation. Variation is achieved because imitations are imperfect. ‘All the preceding arguments leave the individual economic participant with imitative, venturesome, innovative, trial-and-error adaptive behavior. Most conventional economic tools and concepts are still useful, although in a vastly different framework – one which is closely akin to the theory of biological evolution. The economic counterparts of genetic heredity, mutations, and natural selection are imitation, innovation, and positive profits’ [p 220].

6 Conclusions and summary

First some behavior (organization) must be submitted to the economic system (mutation) and then tried for its viability (natural selection). These appear to be interrelated: if the probability of viability is higher then the probability of the action being taken is higher also, but that is not necessarily so, because there is no ‘inner directed urge toward perfection’. What counts is not the plan for perfect action but the trial of promising action, because it is from these trials that success is selected. That proven success can then lead to ensuing action. The economist can know the effects of changes in the environment on the economic participant, even if he does not know how the participant takes his decisions, by inferring the requirements of the environment. In other words: which organization is adopted by the conditions of that environment.

PS: exaptation (the original term pre-adaptation was replaced because it seemed to suppose intentionality) is the assigning of a new function to an existing trait. For instance, the feathers of birds initially served insulation and only later supported flight.

Simon – The Architecture of Complexity

Simon, H.A. . The Architecture of Complexity . Proceedings of the American Philosophical Society, Vol 106, No 6, pp. 467 – 482 . 1962

Development of a general systems theory to find out which properties, abstracted from particular systems, apply to all kinds of systems. Do diverse systems have anything non-trivial in common? This is addressed by ideas under the umbrella of cybernetics (if not a theory, then at least an interesting point of view). The goal is to cast some light on the ways complexity exhibits itself wherever it is found in nature. The rough description of a complex system used here is: a system made up of many parts which interact in a non-simple way. In such systems the whole may be more than the parts in a pragmatic sense: ‘In the face of complexity, an in-principle reductionist may at the same time be a pragmatic holist’ [Simon 1962 p 468]. How complexity frequently takes the form of hierarchy is discussed in four sections: 1) the frequency of the occurrence of hierarchy in complex systems 2) hierarchic systems evolve more quickly than non-hierarchic systems 3) the dynamic properties of complex systems and how they can be decomposed into subsystems 4) the relation between complex systems and their descriptions.

>HIERARCHIC SYSTEMS

A hierarchic system is a system that is composed of other interrelated hierarchic systems. DPB: a hierarchic system integrates other hierarchic systems until some lower, elementary level of subsystems is arrived at. What that level is, is somewhat arbitrary, and how it can be chosen is a subject of this article. ‘Hierarchy’ often refers to the structure in which subsystems are subordinated by a relation of authority to the system they belong to. This means the existence of a boss and subordinate subsystems: each subsystem has a boss who is in turn subordinate to the boss of the system it belongs to. This is the formal approach to hierarchy. ‘I shall use hierarchy in the broader sense introduced in the previous paragraphs, to refer to all complex systems analyzable into successive sets of subsystems, and speak of ‘formal hierarchy’ when I want to refer to the more specialized concept’ [Simon 1962 p 468].

>>Social Systems

One kind of hierarchy in social sciences is the formal organization of businesses &c. Another is families, tribes, clans, &c.

>>Biological and physical systems

Cell-up: cell>tissue>organ>system. Cell-down: cell>nucleus>mitochondria>membrane>microsomes.

Elementary particles, Planetary systems

A gas is seen as a random distribution of complex systems, namely particles.

Hierarchy here refers to a system with a moderate number of subsystems, each with their own subsystems (a diamond is a flat hierarchy with very many subsystems, and atypical). The number of subsystems subordinated to a system is the span of that system. If the span of a (sub)system is wide, the hierarchy is flat at that location. A diamond has a wide span / is flat at the crystal level, but not at the molecular level. Biological and physical systems differ from social systems in that the former are described in spatial terms and the latter by defining who interacts with whom. This can be reconciled by defining hierarchy by the intensity of interactions.

>>Symbolic systems: Books>Chapters>Sections>Paragraphs>Words>Letters, &c.

>THE EVOLUTION OF COMPLEX SYSTEMS

Watchmaker 1: one monolithic system. When assembly is interrupted, the entire partly assembled watch falls apart. Watchmaker 2: works in subassemblies of 10 parts each. When assembly is interrupted, only the subassembly at hand falls apart. The latter is far more likely to finish his watches and survive.
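A minimal sketch of the watchmaker arithmetic, using the standard expected-work formula for completing n consecutive uninterrupted steps; the parameter values (1000 parts, subassemblies of 10, interruption probability 0.01 per part) are illustrative assumptions, not Simon’s only numbers.

```python
# Expected number of Bernoulli trials until n consecutive successes with
# success probability q is (q**-n - 1) / (1 - q); here a 'trial' is placing
# one part, and an interruption destroys the unstable partial assembly.

def expected_placements(n_steps: int, p_interrupt: float) -> float:
    """Expected part-placements to finish an assembly of n_steps parts."""
    q = 1.0 - p_interrupt
    return (q ** -n_steps - 1) / (1 - q)

P = 0.01
flat = expected_placements(1000, P)                      # one monolithic watch
# hierarchical: 100 subassemblies of 10, then 10 assemblies of those, then 1
# final assembly -> 111 stable ten-part assemblies in total
hierarchical = (100 + 10 + 1) * expected_placements(10, P)

print(f"flat assembly:         ~{flat:,.0f} expected placements")
print(f"hierarchical assembly: ~{hierarchical:,.0f} expected placements")
print(f"advantage of stable intermediate forms: ~{flat / hierarchical:,.0f}x")
```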

>>Biological evolution

‘The time required for the evolution of a complex form from simple elements depends critically on the numbers and distribution of potential intermediate stable forms’ [Simon 1962 p 471]. Comments: a) no teleology is suggested and the structure can come from random processes. Complex forms, once they exist and are stable, give direction. But this is survival of the fittest, namely survival of the stable. b) not all large systems appear hierarchical. c) the evolution of complex systems from simple elements implies nothing concerning the change of entropy: free energy can be taken up or generated by the evolutionary process.

>>Problem solving as natural selection

‘Problem solving requires selective trial-and-error. .. In problem solving, a partial result that represents recognizable progress toward the goal plays the role of a stable sub-assembly’ [Simon 1962 p 472]. Human problem solving involves nothing more than trial-and-error and selectivity. The selectivity derives from heuristics that suggest which paths to try first.

>>The sources of selectivity

‘When we examine the sources from which the problem-solving system, or the evolving system, as the case may be, derives its selectivity, we discover that selectivity can always be equated with some kind of feedback of information from the environment’ [Simon 1962 p 473]. DPB: the approach to modeling evolution is the same as that to modeling problem solving. There are two sources of selectivity in problem solving: a) various paths are tried out, the results are noted, and this information is used to guide further search, and b) previous experience is used: retrying the paths that led to an earlier solution. In this way trial-and-error is reduced or eliminated. The closest analogue of this in organic evolution is reproduction.

>>On empires and empire building

When an empire breaks up, it doesn’t tend to fall apart into its smallest elements but into the next scale of subsystems.

>>Conclusion: the evolutionary explanation of hierarchy

Complex systems will evolve from stable intermediate forms much faster than from basic elements; the resulting systems are hierarchies whose subsystems are those intermediate forms. Among complex forms, hierarchies are the ones that have the time to evolve.

>NEARLY DECOMPOSABLE SYSTEMS

A distinction can be made between the interactions within subsystems and the interactions between them. Their intensity and frequency differ by orders of magnitude. Employees within the formal organization of a department have more frequent and more intensive contacts than employees of different departments. The fully decomposable case can be used as a limit over a wide range. In the nearly decomposable case the interactions between the subsystems are weak but not negligible. For the latter case these propositions hold: a) the short-run behavior of each subsystem is approximately independent of that of the other subsystems, and b) in the long run the behavior of a subsystem depends on the behavior of the others only in an aggregate way. This is illustrated with an insulated house within which there are somewhat insulated rooms, within which there are hardly insulated cubicles. Temperatures equalize rapidly between the cubicles within a room, but only slowly between the rooms. If a complex system can be described with a nearly decomposable matrix, then the system has the properties a) and b) above.
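A minimal simulation (coupling constants are assumed, not Simon’s) of the insulation example: two rooms of two cubicles each, with fast heat exchange within a room and slow exchange between rooms, showing the short-run/long-run split of propositions a) and b).

```python
# Near-decomposability sketch: intra-room coupling is strong, inter-room weak.
FAST, SLOW = 0.3, 0.01   # assumed exchange rates per time step

# temperatures of cubicles: [room1_a, room1_b, room2_a, room2_b]
T = [30.0, 10.0, 80.0, 60.0]

def step(T):
    t1a, t1b, t2a, t2b = T
    r1, r2 = (t1a + t1b) / 2, (t2a + t2b) / 2   # room averages
    return [
        t1a + FAST * (t1b - t1a) + SLOW * (r2 - r1),
        t1b + FAST * (t1a - t1b) + SLOW * (r2 - r1),
        t2a + FAST * (t2b - t2a) + SLOW * (r1 - r2),
        t2b + FAST * (t2a - t2b) + SLOW * (r1 - r2),
    ]

for n in range(1, 201):
    T = step(T)
    if n in (5, 50, 200):
        print(f"t={n:3}: {[round(x, 1) for x in T]}")
# Within a few steps each room's cubicles share one temperature (short-run
# independence of the subsystems); only over many steps do the room averages
# converge (long-run dependence on the others in the aggregate).
```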

>>Near decomposability of social systems

Most of the communication channels in formal organizations run between an employee and a very limited number of other employees. The departmental boundaries are assumed to play the same role as the walls in the thermal insulation example.

>>Physico-Chemical examples

‘The theory of the thermodynamics of irreversible processes, for example, requires the assumption of macroscopic disequilibrium and microscopic equilibrium, exactly the situation described in our heat-exchange example’ [p 476]. DPB: how does this work?

>>Some observations on hierarchic span

Suppose that the elements of a system can form stronger bonds and weaker bonds, and that the stronger bonds are exhausted as bonding proceeds. Subsystems form through the strong bonds until these are exhausted. Then the subsystems are linked by the weaker, second-order bonds into larger systems. In social systems the number of interactions is limited by the serial character of human communication (one at a time) and by the time consumed by a role, and hence by the number of roles one can handle (one can have a group of friends consisting of a dozen, but not of hundreds).

>>Summary: Hierarchies tend to be near-decomposable.

>THE DESCRIPTION OF COMPLEXITY

People draw complex objects in a hierarchical way. The information about the object is arranged hierarchically in memory, like a topical outline. DPB: re active association. When information is presented in this way, the relations between one subpart and another can be represented, as can the relations between subsubparts within each. Information about relations between subsubparts of different subparts is lost.

>>Near decomposability and comprehensibility

By representing parts hierarchically little information is lost (re b, the aggregate effect above). Many complex systems have a near-decomposable, hierarchical structure. That is what enables us to understand them. If complex systems exist that are not so structured, then they escape our observation and understanding. ‘I shall not try to settle which is chicken and which is egg: whether we are able to understand the world because it is hierarchic, or whether it appears hierarchic because those aspects which are not elude our understanding and observation’ [Simon 1962 p 478]. DPB: the processes that brought forth our powers of perception and the processes in nature are fundamentally the same.

>>Simple descriptions of complex systems

There is no conservation law that prescribes that the description of a complex system should be as complex as the system itself. An example is given of how such a system can be described economically, or, in other words, how it can be reduced. This is only possible if there are redundancies in the system. If it is completely unredundant, then the system is its own simplest (most economical) description and it cannot be reduced. DPB: this notion of reduction is exactly the opposite of the notion used by Ashby. He uses reduction to indicate the opposite of organization. That which is not organized can be reduced (away) until organization remains. This is the mathematical definition of reduction. Here it is the opposite: whatever is redundant leaves room for, or in other words can be reduced to, a rule. Three forms of redundancy are: a) a hierarchic system is often assembled from a few kinds of different subsystems in various arrangements. DPB: this is a form of repetition of the components used. b) ‘Hierarchic systems are often nearly decomposable. Hence only aggregative properties of their parts enter into the description of the interaction of those parts. The lower-level properties of the composing elements of the parts do not play a part in the interactions of the components at the higher levels. A generalization of the notion of near decomposability might be called the ‘empty world hypothesis’: most things are only weakly connected with most other things’ [Simon 1962 p 478]. DPB: this means that some properties of the subcomponents of a complex hierarchical system, which are themselves built of subcomponents, enable interaction with other subcomponents, and so form the complex hierarchical system that they are a subcomponent of. But those enabling properties of the subcomponents are properties of, or based on properties of, the subcomponents of the subcomponents that form the complex hierarchical system. ‘The children are not allowed to participate in the discussion between the family elders.’ Given that emptiness can be described by the absence of a description, it can be described economically. DPB: how is this a form of repetition? The aggregative properties of the subcomponents of the system repeat, and they are based on repeating or comparable properties of the sub-subcomponents. c) Redundancy can originate in a constant relation between the state of a system and a later state of it. DPB: this is a form of repetition of the behavior of the system. It can be a literal repetition or the lingering of a system like a kind of after-image of the previous state. In any case the current state can be compared with the previous one and also with the next. Cognition is the application of the power to compare, the identification of redundancy, so as to perform a (re)cognition of recurring coherence, namely pattern. On a devised continuum of cognition, ‘a suspicion’ is at one extreme, where a pattern merely reminds one of something, such that it cannot be predicted what it ‘is’ or whether it will occur with any certainty. ‘Knowing’ is at the other extreme, where the pattern is known and its occurrence can be predicted with a high level of certainty.

>>State description and process description

State description: a circle is an object of which all points are equidistant from one point. Process description: hold one arm of the compass in place, rotate the other arm until it is back at the initial point. ‘These two modes of experience are the warp and weft of our experience. .. The former characterize the world as sensed; they provide the criteria for identifying objects, often by modeling the objects themselves. The latter characterize the world as acted upon; they provide the means for producing or generating objects having the desired characteristics. The distinction between the world as sensed and the world as acted upon defines the basic condition for the survival of adaptive organisms. The organism must develop correlations between goals in the sensed world and actions in the world of process. When they are made conscious and verbalized, these correlations correspond to what we usually call means-end analysis. Given a desired state of affairs and an existing state of affairs, the task of an adaptive organism is to find the difference between these two states, and then to find the correlating process that will erase the difference. Thus, problem solving requires continual translation between the state and process descriptions of the same complex reality’ [Simon 1962 p 479]. DPB: this is my equalizing of differences. It refers to adaptive organisms, that is, autopoietic systems. The translation between the state and the process descriptions is then the same as the recurring consequence of structure and operations: the description of what it is and the description of what it does, &c. Refer to this in the main theory. ‘We pose a problem by giving the state description of the solution. The task is to discover a sequence of processes that will produce the goal state from an initial state. Translation from the process description to the state description enables us to recognize when we have succeeded’ [Simon 1962 p 479]. DPB: can this be coupled to challenge propagation?

>>Ontogeny recapitulates phylogeny

If genetic material is seen as a program, it: a) is self-reproducing, b) was developed by Darwinian evolution. A human embryo develops gill slits and then puts them to other purposes. Instruct a 20th-century workman to build a car by what he knows: he starts with a cart, removes the singletree, then builds a motor onto it, then a transmission, &c. DPB: this does not necessarily apply to the memetic instruction set of a firm. Or does it: sometimes routines are in place that stem from previous versions of work instructions that are no longer in place: ‘The generalization that in evolving systems whose descriptions are stored in a process language, we might expect ontogeny partially to recapitulate phylogeny has applications outside the realm of biology. It can be applied as readily, for example, to the transmission of knowledge in the educational process. In most subjects, particularly in the rapidly advancing sciences, the progress from elementary to advanced courses is to a considerable extent a progress through the conceptual history of the science itself. Fortunately, the recapitulation is seldom literal – any more than it is in the biological case. .. But curriculum revisions that rid us of the accumulations of the past are infrequent and painful’ [Simon 1962 p 481]. DPB: this is an important thought concerning execution, namely the enactment of memes and how they are restricted by the actual state of affairs when the firm is operational.

Ashby – Principles of the Self-Organizing System

Ashby WR . Principles of the Self-Organizing System . Principles of Self-Organization: Transactions of the University of Illinois Symposium, H. Von Foerster and G.W. Zopf, jr editors . Pergamon Press London UK pp. 255-278 . 1962

What is organization?

‘The hard core of the concept (of organization DPB) is, in my opinion, that of ‘conditionality’. As soon as the relation between two entities A and B becomes conditional on C’s value or state then a necessary component of ‘organization’ is present. Thus the theory of organization is partly co-extensive with the theory of functions of more than one variable’ [Ashby 1962 p 256, emphasis of the author]. DPB: this is my example of the chess board FIND CHESS and, apparently, how the pieces are organized by the conditions of the others. Refer to this text there. The converse of ‘conditional on’ is ‘not conditional on’: the converse of ‘organization’ is separability or reducibility. See below. In a mathematical sense this means that some parts of a function of many variables do not depend on some other parts of it. In a mechanical sense it means that some components of a machine work independently of other components of that machine. DPB: the outcome of the function or the machine depends on the workings of the reducible variables in a simple way. The converse of conditionality is reducibility. DPB: conditionality implies organization. Reducibility implies a lack of organization. This is the opposite of what I thought, because whatever is organized is repetitive, a pattern, and it can be reduced away, because it can be summarized in a rule.

In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently (if it existed) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. “Harder” means having a higher estimate of the required computational resources in a given context (e.g., higher time complexity, greater memory requirement, expensive need for extra hardware processor cores for a parallel solution compared to a single-threaded solution, etc.). We write A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used (m : mapping reduction, p : polynomial reduction). First, we find ourselves trying to solve a problem that is similar to a problem we’ve already solved. In these cases, often a quick way of solving the new problem is to transform each instance of the new problem into instances of the old problem, solve these using our existing solution, and then use these to obtain our final solution. This is perhaps the most obvious use of reductions. Second: suppose we have a problem that we’ve proven is hard to solve, and we have a similar new problem. We might suspect that it is also hard to solve. We argue by contradiction: suppose the new problem is easy to solve. Then, if we can show that every instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard. In mathematics, a topological space is called separable if it contains a countable, dense subset [Wikipedia].
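A tiny illustration (my example, not from the excerpt above) of the first use of reduction mentioned there: a new problem, “does a list contain duplicates?”, is solved by transforming each instance into an instance of an already-solved problem, sorting, so the new problem is no harder than sorting plus a linear scan.

```python
# Reduction in the practical sense: reuse an existing solution (sorting) to
# solve a new problem (duplicate detection). After sorting, any duplicates are
# adjacent, so one pass over the ordered list answers the question.

def has_duplicates(items: list[int]) -> bool:
    ordered = sorted(items)                 # call the already-solved problem
    return any(a == b for a, b in zip(ordered, ordered[1:]))

print(has_duplicates([3, 1, 4, 1, 5]))      # True
print(has_duplicates([2, 7, 1, 8]))         # False
```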

‘The treatment of ‘conditionality’ (whether by functions of many variables, by correlation analysis, by uncertainty analysis, or by other ways) makes us realize that the essential idea is that there is first a product space – that of the possibilities – within which some sub-set of points indicates the actualities. This way of looking at ‘conditionality’ makes us realize that it is related to that of ‘communication’; and it is, of course, quite plausible that we should define parts as being ‘organized’ when ‘communication’ (in some generalized sense) occurs between them. (Again the natural converse is that of independence, which represents non-communication.)’ [Ashby 1962 p 257 emphasis of the author]. DPB: the first sentence bears a relation to the virtual-actual-real. The second sentence can be read as the existence of some sort of a relation between the organized parts, and hence a kind of communication takes place between them. When there is no communication, then A and B can be wherever on the chess board, and there is no constraint between them, and hence no organization: ‘Thus the presence of ‘organization’ between variables is equivalent to the existence of a constraint in the product-space of the possibilities. I stress this point, because while, in the past, biologists have tended to think of organization as something extra, something added to the elementary variables, the modern theory, based on the logic of communication, regards organization as a restriction or constraint’ [Ashby p 257 emphasis of the author].

DPB: This is much like the chess example: Organization comes from the elements, and it is not imposed from somewhere else. The product space of a system is its Idea. ‘Whence comes this product space? Its chief peculiarity is that it contains more than actually exists in the real physical world, for it is the latter that gives us the actual, constrained subset’ [Ashby p 257]. DPB: I have explained this in terms of individuation: the virtual+actual makes the real. Refer to this quote above at the chess game section!

‘The real world gives the subset of what is; the product space represents the uncertainty of the observer’ [Ashby 1962 p 258]. DPB: this is relevant too, because it relates to the virtual: everything it could be in the focus of the observer, its space of possibilities. The space changes when the observer changes, and two observers can have different spaces: ‘The ‘constraint’ is thus a relation between observer and thing; the properties of any particular constraint will depend on both the real thing and on the observer. It follows that a substantial part of the theory of organization will be concerned with properties that are not intrinsic to the thing but are relational between observer and thing’ [Ashby p 258]. Re: OBSERVER SUBJECT / OBJECT

Whole and Parts

As regards the concept of ‘organization’ it is assumed that there is a whole that is composed of parts: a) f(x) = x1 + x2 + .. + xn means that there are n parts in this system. b) S1, S2, .. means that there are states of a system S without mention of its parts, if any. The point is that a system can show dynamics without reference to parts, and therefore this does not refer to the concept of organization: the concepts are independent. This emphasizes the idea that organization is in the eye of the observer: ‘..I will state the proposition that: given a whole with arbitrarily given behavior, a great variety of arbitrary ‘parts’ can be seen in it; for all that is necessary, when the arbitrary part is proposed, is that we assume the given part to be coupled to another suitably related part, so that the two together form a whole isomorphic with the whole that was given’ [Ashby 1962 p 259]. DPB: isomorphic means an invertible structure-preserving mapping. Does this mean that A and B are the structure that forms C, which is the whole, under a set of relations between A and B? ‘Thus, subject only to certain requirements (e.g. that equilibria map into equilibria) any dynamic system can be made to display a variety of arbitrarily assigned ‘parts’, simply by a change in the observer’s view point’ [Ashby 1962 p 260 emphasis of the author]. DPB: this is an important remark that fits the Deleuze / Luhmann story about the observer. Also the pattern ‘versus’ coherence section. Re OBSERVER

Machines in general

The question is whether general systems theory deals with mathematical systems (in which case they need only be internally consistent) or with physical systems also, in which case they are tied to what the real world offers. Machines need not be material, and reference to energy is irrelevant. ‘A ‘machine’ is that which behaves in a machine-like way, namely, that its internal state, and the state of its surroundings, defines uniquely the next state it will go to’ [Ashby 1962 p 261]. This definition was originally proposed in [Ashby W.R. . The Physical Origin of Adaptation by Trial and Error . J. Gen. Psychol., 32, pp. 13-25 . 1945]. DPB: this is very applicable to FIND INDIVIDUATION. See how to incorporate it there as a quote. I is the set of input states, S is the set of internal states, f is a mapping of I×S into S. The ‘organization’ of a machine is f: change f and the organization changes. In other words, the possible organizations between the parts can be set into one-one correspondence with the set of possible mappings of I×S into S. ‘Thus ‘organization’ and ‘mapping’ are two ways of looking at the same thing – the organization being noticed by the observer of the actual system, and the mapping being recorded by the person who represents the behavior in mathematical or other symbolism’ [Ashby p 262]. DPB: I referred to the organization as per Ashby, observed as a pattern, which is the result of a coherence of the system in focus; Ashby says the actual system. Re COHERENCE PATTERN
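A minimal sketch of Ashby’s machine as a mapping f: I×S → S; the particular transition table is an arbitrary assumption, and changing that table is what changes the ‘organization’.

```python
# A machine in Ashby's sense: internal state plus input uniquely determine the
# next internal state. The table f below is an arbitrary illustration.
from itertools import product

I = {"lo", "hi"}            # input states
S = {"a", "b", "c"}         # internal states

f = {
    ("lo", "a"): "a", ("hi", "a"): "b",
    ("lo", "b"): "a", ("hi", "b"): "c",
    ("lo", "c"): "b", ("hi", "c"): "c",
}
assert set(f) == set(product(I, S))   # f is total on I x S: machine-like behavior

def run(f, inputs, state):
    """Given f, an input sequence and an initial state, every next state is
    uniquely determined; a different table f is a different organization."""
    for i in inputs:
        state = f[(i, state)]
    return state

print(run(f, ["hi", "hi", "lo", "hi"], "a"))   # deterministic trajectory -> 'c'
```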

‘Good’ organization

Whether an ‘organization’ is good depends on its usefulness. Biological systems have often come to be useful (DPB: preserving something, rendering it irreversible) under the pressure of natural selection. Engineered systems are often not useful: a) most organizations are bad ones, b) the good ones have to be sought for, c) what is meant by ‘good’ must be clearly defined, explicitly if necessary, in every case. What is meant by a ‘good’ organization of a brain? In the case of organisms the organization is good if it supports their survival. In general: an organization can be considered ‘good’ if it keeps the values of a set of (essential) variables within their particular limits. These are mechanisms for homeostasis: the organization is ‘good’ if it makes the system stable around an equilibrium. The essence of the idea is that a number of variables so interacts as to achieve some given ‘focal condition’. But: ‘.. what I want to say here – there is no such thing as ‘good organization’ in any absolute sense. Always it is relative; and an organization that is good in one context or under one criterion may be bad under another’ [Ashby 1962 p 263 emphasis of the author]. DPB: the OUTBOARD ENGINE is good at producing exhaust fumes and consuming toxic fossil materials and not good at driving boats. Every faculty of a brain is conditional, because it can be a handicap in at least one environment: ’.. whatever that faculty or organization achieves, let that be not in the focal conditions’ [p 264 emphasis of the author]. There is no faculty (property, organization) of the brain that cannot be, or become, undesirable, even harmful, under certain circumstances. ‘Is it not good that a brain should have memory? Not at all, I reply – only when the environment is of a type in which the future often copies the past; should the future often be the inverse of the past, memory is actually disadvantageous. .. Is it not good that a brain should have its parts in rich functional connection? I say NO – not in general; only when the environment is itself richly connected. When the environment’s parts are not richly connected (when it is highly reducible, in other words), adaptation will go faster if the brain is also highly reducible, i.e. if its connectivity is small (Ashby 1960, d)’ [Ashby 1962 pp. 264-5]. DPB: this is relevant for the holes that Vid can observe where others are. re VID Ashby refers to Sommerhoff: a set of disturbances must be given as well as a focal condition. The disturbances threaten to drive the outcome outside of the focal condition. The ‘good’ organization is the relation between the set of disturbances and the goal (the focal condition): change the circumstances and the outcome will no longer reach the goal, and the organization will be evaluated as ‘bad’.

Self-Organizing Systems

Two meanings of the concept: a) changing from parts separated to parts joined (‘changing from unorganized to organized’), a concept that can also be covered by the notion of self-connecting; b) ‘changing from a ‘bad’ organization to a ‘good’ one’ [Ashby 1962 p 267]. DPB: do I address this somewhere with regard to self-organization? I guess I talk only about the first meaning. The second one refers to the case where the organization changes itself from showing bad behavior to showing good behavior. ‘..no machine can be self-organizing in this sense’ [Ashby 1962 p 267]. f: I × S → S. f is defined as a set of couples such that si leads to sj by the internal drive of the system. To allow f to be a function of the state is to make nonsense of the whole concept. DPB: but this is exactly what individuation does! ‘Were f in the machines to be some function of the state S, we would have to redefine our machine’ [Ashby 1962 p 268]. DPB: the function does not depend on the set S, because then all of the states, past and present, could be occurring simultaneously, hence the reference to the new machine. But, given the concept of individuation, it should depend on the present in S? ‘We start with the set S of states, and assume that f changes, to g say. So we really have a variable, a(t) say, a function of time that had at first the value f and later the value g. This change, as we have just seen, cannot be ascribed to any cause in the set S; so it must have come from some outside agent, acting on the system S as input. If the system is to be in some sense ‘self-organizing’, the ‘self’ must be enlarged to include this variable a, and, to keep the whole bounded, the cause of a’s change must be in S (or a). Thus the appearance of being ‘self-organizing’ can be given only by the machine S being coupled to another machine (of one part)..’ [p 269]. DPB: Big surprise. How to deal with this? Through individuation, and I feel the use of time t as an independent variable is confusing. So what happens is that a is in the milieu. Therefore a is not in S. Therefore the Monad can only exist in the Nomad &c. Re INDIVIDUATION, MILIEU

The spontaneous generation of organization

‘.. every isolated determinate dynamic system obeying unchanging laws will develop ‘organisms’ that are adapted to their ‘environments’. The argument is simple enough in principle. We start with the fact that systems in general go to equilibrium. Now most of a system’s states are non-equilibrial (if we exclude the extreme case of the systems in neutral equilibrium). So in going from any state to one of the equilibria, the system is going from a larger number of states to a smaller. In this way it is performing a selection, in the purely objective sense that it rejects some states, by leaving them, and retains some other state, by sticking to it. Thus, as every determinate system goes to equilibrium, so does it select. ## up to here? We have heard ad nauseam the dictum that a machine cannot select; the truth is just the opposite: every machine, as it goes to equilibrium, performs the corresponding act of selecting ##. Now, equilibrium in simple systems is usually trivial and uninteresting … when the system is more complex and dynamic, equilibrium, and the stability around it, can be much more interesting. .. What makes the change, from trivial to interesting, is simply the scale of the events. ‘Going to equilibrium’ is trivial in the simple pendulum, for the equilibrium is no more than a single point. But when the system is more complex; when, say, a country’s economy goes back from wartime to normal methods, then the stable region is vast, and much more interesting activity can occur within it’ [Ashby 1962 pp. 270-1]. DPB: this is useful with regard to the selective mechanisms of individuation re machines.
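A minimal sketch of ‘going to equilibrium is selecting’: iterating an arbitrary fixed single-valued mapping on a finite state set shrinks the set of occupied states step by step; the mapping below is a random illustration, not Ashby’s.

```python
# Every determinate system 'selects' as it goes to equilibrium: start from all
# possible states, apply the unchanging law, and watch the occupied set shrink
# to the states the system 'retains' (its attractor states and cycles).
import random

random.seed(3)
N = 30
f = {s: random.randrange(N) for s in range(N)}   # unchanging, single-valued law

states = set(range(N))          # start from every possible state
for t in range(1, 9):
    states = {f[s] for s in states}
    print(f"step {t}: {len(states)} states still occupied")
# The system goes from many states to few: by 'sticking to' the states it ends
# up in and 'leaving' the rest, the deterministic law performs a selection.
```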

Competition

‘So the answer to the question: How can we generate intelligence synthetically? is as follows. Take a dynamic system whose laws are unchanging and single-valued, and whose size is so large that after it has gone to an equilibrium that involves only a small fraction of its total states, this small fraction is still large enough to allow room for a good deal of change and behavior. Let it go on for a long enough time to get to such an equilibrium. Then examine the equilibrium in detail. You will find that the states or forms now in being are peculiarly able to survive against the disturbances induced by the laws. Split the equilibrium in two, call one part ‘organism’ and the other part ‘environment’: you will find that this ‘organism’ is peculiarly able to survive the disturbances from this ‘environment’. The degree of adaptation and complexity that this organism can develop is bounded only by the size of the whole dynamic system and by the time over which it is allowed to progress towards equilibrium. Thus, as I said, every isolated determinate dynamic system will develop organisms that are adapted to their environments. .. In this sense, then, every machine can be thought of as ‘self-organizing’, for it will develop, to such a degree as its size and complexity allow, some functional structure homologous with an ‘adapted organism’’ [Ashby 1962 p 272]. DPB: I know this argument and I have quoted it before, I seem to remember in Design for a Brain or else the article about requisite variety. FIND NOMAD MONAD The point seems to be that the environment serves as the a, but it is not an extension of the machine in the sense that it belongs to it, because it belongs to the environment and is by definition not a part of the machine. ‘To itself, its own organization will always, by definition, be good. .. But these criteria come after the organization for survival; having seen what survives we then see what is ‘good’ for that form. What emerges depends simply on what are the system’s laws and from what state it started; there is no implication that the organization developed will be ‘good’ in any absolute sense, or according to the criterion of any outside body such as ourselves’ [p 273]. DPB: this is the point of Wolfram that the outcome is only defined by the rules and the initial conditions.

Chemical Organization Theory and Autopoiesis

E-mail communication of Francis Heylighen on 29 May 2018:

Inspired by the notion of autopoiesis (“self-production”) that Maturana and Varela developed as a definition of life, I wanted to generalize the underlying idea of cyclic processes to other ill-understood phenomena, such as mind, consciousness, social systems and ecosystems. The difference between these phenomena and the living organisms analysed by Maturana and Varela is that the former don’t have a clear boundary or closure that gives them a stable identity. Yet, they still exhibit this mechanism of “self-production” in which the components of the system are transformed into other components in such a way that the main components are eventually reconstituted.

This mechanism is neatly formalized in COT’s notion of “self-maintenance” of a network of reactions. I am not going to repeat this here but refer to my paper cited below. Instead, I’ll give a very simple example of such a circular, self-reproducing process:

A -> B,

B -> C,

C -> A

The components A, B, C are here continuously broken down but then reconstituted, so that the system rebuilds itself, and thus maintains an invariant identity within a flux of endless change.

A slightly more complex example:

A + X -> B + U

B + Y -> C + V

C + Z -> A + W

Here A, B, and C need the resources (inputs, or “food”) X, Y and Z to be reconstituted, while producing the waste products U, V, and W. This is more typical of an actual organism that needs inputs and outputs while still being “operationally” closed in its network of processes.
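A minimal stoichiometric sketch (my own, with assumed quantities) of this second example: A, B and C keep being reconstituted as long as the food X, Y, Z lasts, while waste U, V, W accumulates.

```python
# Self-maintenance of {A, B, C} on a food set {X, Y, Z}: one reaction fires per
# step if its reactants are present; the cycle stops when the food is exhausted.
reactions = [
    ({"A": 1, "X": 1}, {"B": 1, "U": 1}),
    ({"B": 1, "Y": 1}, {"C": 1, "V": 1}),
    ({"C": 1, "Z": 1}, {"A": 1, "W": 1}),
]

state = {"A": 1, "B": 0, "C": 0, "X": 5, "Y": 5, "Z": 5, "U": 0, "V": 0, "W": 0}

def fire(state, reactants, products):
    if all(state.get(m, 0) >= n for m, n in reactants.items()):
        for m, n in reactants.items():
            state[m] -= n
        for m, n in products.items():
            state[m] = state.get(m, 0) + n
        return True
    return False

for _ in range(15):
    if not any(fire(state, r, p) for r, p in reactions):
        break                      # food exhausted: the cycle can no longer run
print(state)
# A + B + C stays at 1 throughout: the organization {A, B, C} maintains itself
# as long as the food {X, Y, Z} keeps being supplied, while U, V, W pile up.
```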

In more complex processes, several components are being simultaneously consumed and produced, but so that the overall mixture of components remains relatively invariant. In this case, the concentration of the components can vary the one relative to the other, so that the system never really returns to the same state, only to a state that is qualitatively equivalent (having the same components but in different amounts).

One more generalization is to allow the state of the system to also vary qualitatively: some components may (temporarily) disappear, while others are newly added. In this case, we  no longer have strict autopoiesis or [closure + self-maintenance], i.e. the criterion for being an “organization” in COT. However, we still have a form of continuity of the organization based on the circulation or recycling of the components.

An illustration would be the circulation of traffic in a city. Most vehicles move to different destinations within the city, but eventually come back to destinations they have visited before. However, occasionally vehicles leave the city that may or may not come back, while new vehicles enter the city that may or may not stay within. Thus, the distribution of individual vehicles in the city changes quantitatively and qualitatively while remaining relatively continuous, as most vehicle-position pairs are “recycled” or reconstituted eventually. This is what I call circulation.

Most generally, what circulates are not physical things but what I have earlier called challenges. Challenges are phenomena or situations that incite some action. This action transforms the situation into a different situation. Alternative names for such phenomena could be stimuli (phenomena that stimulate an action or process), activations (phenomena that are active, i.e. ready to incite action) or selections (phenomena singled out as being important, valuable or meaningful enough to deserve further processing). The term “selections” is the one used by Luhmann in his autopoietic model of social systems as circulating communications.

I have previously analysed distributed intelligence (and more generally any process of self-organization or evolution) as the propagation of challenges: one challenge produces one or more other challenges,  which in turn produce further challenges, and so on. Circulation is a special form of propagation in which the initial challenges are recurrently reactivated, i.e. where the propagation path is circular, coming back to its origins.

This to me seems a better model of society than Luhmann’s autopoietic social systems. The reason is that proper autopoiesis does not really allow the system to evolve, as it needs to exactly rebuild all its components, without producing any new ones. With circulating challenges, the main structure of society is continuously rebuilt, thus ensuring the continuity of its organization, however while allowing gradual changes in which old challenges (distinctions, norms, values…) dissipate and new ones are introduced.

Another application of circulating challenges are ecosystems. Different species and their products (such as CO2, water, organic material, minerals, etc.) are constantly recycled, as the one is consumed in order to produce the other, but most are eventually reconstituted. Yet, not everything is reproduced: some species may become extinct, while new species invade the ecosystem. Thus the ecosystem undergoes constant evolution, while being relatively stable and resilient against perturbations.

Perhaps the most interesting application of this concept of circulation is consciousness. The “hard problem” of consciousness asks why information processing in the brain does not just function automatically or unconsciously, the way we automatically pull back our hand from a hot surface, before we even have become conscious of the pain of burning. The “global workspace” theory of consciousness says that various subconscious stimuli enter the global workspace in the brain (a crossroads of neural connections in the prefrontal cortex), but that only a few are sufficiently amplified to win the competition for workspace domination. The winners are characterized by much stronger activation and their ability to be “broadcasted” to all brain modules (instead of remaining restricted to specialized modules functioning subconsciously). These brain modules can then each add their own specific interpretation to the “conscious” thought.

In my interpretation, reaching the level of activation necessary to “flood” the global workspace means that activation does not just propagate from neuron to neuron, but starts to circulate so that a large array of neurons in the workspace is constantly reactivated. This circulation keeps the signal alive long enough for the different specialized brain modules to process it, and add their own inferences to it. Normally, activation cannot stay in place, because of neuronal fatigue: an excited neuron must pass on its “action potential” to connected neurons, it cannot maintain activation. To maintain an activation pattern (representing a challenge) long enough so that it can be examined and processed by disparate modules, that pattern must be stabilized by circulation.

But circulation, as noted, does not imply invariance or permanence, merely a relative stability or continuity that undergoes transformations by incoming stimuli or on-going processing. This seems to be the essence of consciousness: on the one hand, the content of our consciousness is constantly changing (the “stream of consciousness”), on the other hand that content must endure sufficiently long for specialized brain processes to consider and process it, putting part of it in episodic memory, evaluating part of it in terms of its importance, deciding to turn part of it into action, or dismissing or vetoing part of it as inappropriate.

This relative stability enables reflection, i.e. considering different options implied by the conscious content, and deciding which ones to follow up, and which ones to ignore. This ability to choose is the essence of “free will“. Subconscious processes, on the other hand, just flow automatically and linearly from beginning to end, so that there is no occasion to interrupt the flow and decide to go somewhere else. It is because the flow circulates and returns that the occasion is created to interrupt it after some aspects of that flow have been processed and found to be misdirected.

To make this idea of repetition with changes more concrete, I wish to present a kind of “delayed echo” technique used in music. One of the best implementations is Frippertronics, invented by avant-garde rock guitarist Robert Fripp (of King Crimson): https://en.wikipedia.org/wiki/Frippertronics

The basic implementation consists of an analogue magnetic tape on which the sounds produced by a musician are recorded. However, after having passed the recording head of the tape recorder, the tape continues moving until it is read by another head that reads and plays the recorded sound. Thus, the sound recorded at time t is played back at time t + T, where the interval T depends on the distance between the recording and playback heads. But while the recorded sound is played back, the recording head continues recording all the sound, played by either the musician(s) or the playback head, on the same tape. Thus, the sound propagates from musician to recording head, from where it is transported by tape to the playback head, from where it is propagated in the form of a sound wave back to the recording head, thus forming a feedback loop.

If T is short, the effect is like an echo, where the initial sound is repeated a number of times until it fades away (under the assumption that the playback is slightly less loud than the original sound). For a longer T, the repeated sound may not be immediately recognized as a copy of what was recorded before given that many other sounds have been produced in the meantime. What makes the technique interesting is that while the recorded sounds are repeated, the musician each time adds another layer of sound to the layers already on the recording. This allows the musician to build up a complex, multilayered, “symphonic” sound, where s/he is being accompanied by her/his previous performance. The resulting music is repetitive, but not strictly so, since each newly added sound creates a new element, and these elements accumulate so that they can steer the composition in a wholly different direction.
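A minimal numeric sketch (delay and gain values assumed) of the tape loop as a delayed feedback line: what is heard at time t is the new input plus an attenuated copy of what was heard T steps earlier, so early material recurs while slowly fading under new layers.

```python
# Delayed-echo feedback line: heard[t] = new_input[t] + G * heard[t - T].
T = 4          # delay between record and playback head, in time steps
G = 0.6        # playback slightly quieter than the original sound

new_input = [1.0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0]   # the musician's notes
heard = []
for t, x in enumerate(new_input):
    echo = G * heard[t - T] if t >= T else 0.0          # feedback from the tape
    heard.append(x + echo)

print([round(v, 2) for v in heard])
# The first note returns at t=4 (1.1 = 0.5 + 0.6*1.0) and again at t=8 (0.66):
# earlier layers keep circulating, fading by G per pass, while new notes add to them.
```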

This “tape loop” can be seen as a simplified (linear or one-dimensional) version of what I called circulation, where the looping or recycling maintains a continuity, while the gradual fading of earlier recordings and the addition of new sounds creates an endlessly evolving “stream” of sound. My hypothesis is that consciousness corresponds to a similar circulation of neural activation, with the different brain modules playing the role of the musicians that add new input to the circulating signal. A difference is probably that the removal of outdated input does not just happen by slow “fading” but by active inhibition, given that the workspace can only sustain a certain amount of circulating activation, so that strong new input tends to suppress weaker existing signals. This, and the complexity of circulating in several directions of a network, may explain why conscious content appears much more dynamic than repetitive music.

Chemical Organization Theory as a Modeling Tool

Heylighen, F., Beigi, S. and Veloz, T. . Chemical Organization Theory as a modeling framework for self-organization, autopoiesis and resilience . Paper to be submitted based on working paper 2015-01.

Introduction

Complex systems consist of many interacting elements that self-organize: coherent patterns of organization or form emerge from their interactions. There is a need for a theoretical understanding of self-organization and adaptation: our mathematical and conceptual tools are limited for the description of emergence and interaction. The reductionist approach analyzes a system into its constituent static parts and their variable properties; the state of the system is determined by the values of these variable properties and processes are transitions between states; the different possible states determine an a priori predefined state-space; only after introducing all these static elements and setting up a set of conditions for the state-space can we study the evolution of the system in that state-space. This approach makes it difficult to understand a system property such as emergent behavior. Process metaphysics and action ontology assume that reality is not constituted of things but of processes or actions; the difficulty is to represent these processes in a precise, simple, and concrete way. This paper aims to formalize these processes as the reaction networks of chemical organization theory; here the reactions are the fundamental elements and the processes are primary; states take second place as the changing mix of ingredients as the processes go on; the molecules are not static objects but raw materials that are produced and consumed by the reactions. COT is a process ontology; it can describe processes in any sphere and hence in any scientific discipline; ‘.. method to define and construct organizations, i.e. self-sustaining networks of interactions within a larger network of potential interactions. .. suited to describe self-organization, autopoiesis, individuation, sustainability, resilience, and the emergence of complex, adaptive systems out of simpler components’ [p 2]. DPB: this reminds me of the landscape of Jobs; all the relevant aspects are there. It is hoped that this approach helps to answer the question: how does a system self-organize; how are complex wholes constructed out of simpler elements?

Reaction Networks

A reaction network consists of resources and reactions. The resources are distinguishable phenomena in some shared space, a reaction vessel, called the medium. The reactions are elementary processes that create or destroy resources. RN = <M,R>, where RN is a reaction network, M is the set of resources and R the set of reactions: M = {a,b,c,…} and R is a subset of P(M) x P(M), where P(M) is the power set (set of all subsets) of M. Each reaction r transforms a subset Input I(r) of M into a subset Output O(r) of M; the resources in I are the reactants and the resources in O are the products; I and O are multisets, meaning that resources can occur more than once. r: x1 + x2 + x3 + … → y1 + y2 + … The + in the left term means a conjunction of necessary resources x: if all are simultaneously present in I(r) then the reaction takes place and produces the products y.
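A minimal Python sketch of this formalism (my own illustration, not from the paper): resources are symbols, a reaction is a pair of multisets (input I, output O), and a reaction can only fire when all of its reactants are simultaneously present in the current state of the medium. The toy reaction is hypothetical.

# Resources as symbols, a reaction as a pair (Counter of reactants I, Counter of
# products O): the reaction fires only when all reactants are present, consuming
# I and producing O, as in the definition above.
from collections import Counter

Reaction = tuple  # (Counter of reactants I, Counter of products O)

def can_fire(reaction: Reaction, state: Counter) -> bool:
    """All reactants in I must be simultaneously present in the state, in sufficient number."""
    reactants, _ = reaction
    return all(state[x] >= n for x, n in reactants.items())

def fire(reaction: Reaction, state: Counter) -> Counter:
    """Consume the reactants and add the products: I is destroyed, O is produced."""
    reactants, products = reaction
    new_state = state.copy()
    new_state.subtract(reactants)
    new_state.update(products)
    return +new_state  # drop zero counts

if __name__ == "__main__":
    # r: a + b -> c + 2 d
    r = (Counter({"a": 1, "b": 1}), Counter({"c": 1, "d": 2}))
    state = Counter({"a": 2, "b": 1})
    if can_fire(r, state):
        print(fire(r, state))   # one a remains, plus one c and two d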

Reaction Networks vs. Traditional Networks

The system <M,R> forms a network because the resources in M are linked by the reactions in R transforming one resource into another. What is specific for COT is that a reaction represents the transformation of a multiplicity of resources into another multiplicity of them: a set I transforms into a set O. DPB: this reminds me of category theory. My principal question at this point is whether the problem of where organization is produced is not merely relocated: first the question was how to coax static objects into self-organization, now it is which molecules, in which quantities and combinations, must be brought together to get them to produce other resources and show patterns in doing so. In RN theory the transformation of resources can occur through a disjunction or a conjunction: the disjunction is represented by juxtaposed reaction formulae, the conjunction by the + within a reaction formula.

Reaction Networks and Propositional Logic

Conjunction: AND: &; Disjunction: OR: new reaction line; Implication: FOLLOWS: →; Negation: NOT: −. For instance: a&b&c&.. → x. But in logic the resources on the I side are not destroyed by the process, so formally a&b&.. → a&b&x&… Logic is static because no propositions are destroyed: new implications can be found, but nothing new is created. Negation can be thought of as the production of the absence of a resource: a + b → c + d is equivalent to a → c + d − b. I and O can be empty, so a resource can be created from nothing (affirmation, → a) or a resource can produce nothing (elimination, a →, or a → −a). Hence a + (−a) → ∅ and ∅ → a + (−a): the idea is that a particle and its anti-particle annihilate one another, but they can also be created together from nothing.

Competition and cooperation

The concept of negative resources allows the expression of conflict, contradiction or inhibition: a → −b, which is the same as a + b → ∅ (the empty set): the more a is produced, the less of b is present: the causal relation is negative. The relation “a inhibits b” holds if a is required to consume but not produce b. The opposite, “a promotes b”, means that a is required to produce but not to consume b. When the inhibiting and promoting relations are symmetrical, a and b inhibit (a and b are competitors) or promote (a and b are cooperators) each other, but the relations need not be symmetrical. Inhibition is a negative causal influence and promotion is a positive one. If a cycle contains only positive influences, or an even number of negative influences, then positive feedback occurs; when the number of negative influences is uneven, negative feedback occurs. Negative feedback leads to stabilization or oscillation, positive feedback leads to exponential growth. In a social network a particular message can be promoted, suppressed or inhibited by another. Interactions in the network occur through their shared resources.
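The inhibition and promotion relations above can be read off directly from the reactions. A small Python sketch (my own illustration, reusing the reactant/product representation from the earlier sketch; the toy reactions are hypothetical):

# "a inhibits b": some reaction requires a and consumes b without reproducing it.
# "a promotes b": some reaction requires a and produces b without consuming it.
from collections import Counter

def inhibits(a, b, reactions):
    return any(r[a] > 0 and r[b] > 0 and p[b] == 0 for r, p in reactions)

def promotes(a, b, reactions):
    return any(r[a] > 0 and r[b] == 0 and p[b] > 0 for r, p in reactions)

if __name__ == "__main__":
    reactions = [
        (Counter({"a": 1, "b": 1}), Counter({"c": 1})),  # a + b -> c : a inhibits b
        (Counter({"c": 1}), Counter({"b": 1})),          # c -> b     : c promotes b
    ]
    print(inhibits("a", "b", reactions))  # True
    print(promotes("c", "b", reactions))  # True
    print(promotes("a", "b", reactions))  # False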

Organizations

In COT an organization is defined as a self-sustaining reaction system: the produced and consumed resources are the same: ‘This means that although the system is intrinsically dynamic or process-based, constantly creating or destroying its own components, the complete set of its components (resources) remains invariant, because what disappears in one reaction is recreated by another one, while no qualitatively new components are added’ [p 8]. DPB: I find this an appealing idea. But I find it also hard to think of the basic components that would make up a particular memeplex, even using the connotations. What, in other words, would the resources have to be, and what the reactions, to construct a memeplex from them? If the resource is an idea then one idea leads to another, which matches my theory. But this method would have to cater for reinforcement: the idea itself does not change much, yet it does get reinforced as it is repeated. And in addition how would the connotation be attached to them: or must it be seen as an ‘envelope’ that contains the address &c, and that ‘arms’ the connoted idea (meme) to react (compare) with others such that the ranking order in the mind of the person is established? And such that a stable network of memes is established, forming a memeplex. The property of organization above is central to the theory of autopoiesis, but, as stated in the text, without the boundary of a living system. But I don’t agree with this: the RC church has a very strong boundary that separates it from everything that is not the RC church. And so the RN model should cater for more complexity than only the forming of molecules (‘prior to the first cell’). The organization of a sub-RN <M’,R> of a larger RN <M,R> is defined by these characteristics: 1. closure: for every reaction r, if I(r) is part of M’ then O(r) is part of M’ 2. semi-self-maintenance: no existing resource is removed, i.e. each resource consumed by some reaction is produced again by some other reaction working on the same starting set and 3. self-maintenance: each consumed resource x in M’ is produced by some reaction in <M’,R> in at least the same amount as the amount consumed (this is a difficult one, because a ledger is required over the existence of the system to account for the quantities of each resource). ‘We are now able to define the crucial concept of organization: a subset of resources and reactions <M’,R> is an organization when it is closed and self-maintaining. This basically means that while the reactions in R are processing the resources in set M’, they leave the set M’ invariant: no resources are added (closure) and no resources are removed (self-maintenance)’ (emphasis of the author) [p 9]. The difference with other models is that the basic assumption is that everything changes, but this concept of organization means that stability can arise while everything changes continually; in fact this is the definition of autopoiesis.
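A minimal Python sketch of the two qualitative conditions just listed (my own illustration; it ignores stoichiometry, so it checks closure and semi-self-maintenance only, not the full quantitative self-maintenance, which would require bookkeeping of amounts):

# Reactions as (frozenset of inputs, frozenset of outputs), ignoring quantities.
# Closure: every reaction applicable inside M' only produces resources already in M'.
# Semi-self-maintenance: every resource consumed by an applicable reaction is also
# produced by some applicable reaction.

def applicable(reactions, subset):
    """Reactions whose entire input is contained in the resource subset M'."""
    return [(i, o) for i, o in reactions if i <= subset]

def is_closed(reactions, subset):
    return all(o <= subset for _, o in applicable(reactions, subset))

def is_semi_self_maintaining(reactions, subset):
    active = applicable(reactions, subset)
    consumed = set().union(*(i for i, _ in active)) if active else set()
    produced = set().union(*(o for _, o in active)) if active else set()
    return consumed <= produced

if __name__ == "__main__":
    reactions = [
        (frozenset({"a", "b"}), frozenset({"c"})),
        (frozenset({"c"}), frozenset({"a", "b"})),
    ]
    M1 = {"a", "b", "c"}
    print(is_closed(reactions, M1), is_semi_self_maintaining(reactions, M1))  # True True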

Some examples

If a resource appears in both the I and the O then it is a catalyst.

Extending the model

A quantitative shortcoming, and hence a possible extension, is the absence of the relative proportions of resources and of the relative speeds of the reactions. To extend it quantitatively, the model can be detailed to encompass all the processes that make up some particular ecology of reactions.

Self-organization

If we apply the rules for closure and maintenance we can see how organization emerges. If a reaction is added, a source for some resource may be added, which interrupts closure, or a sink may be added, which interrupts self-maintenance. In general a starting set of resources will not be closed; their reactions will lead to new resources and so on; but the production of new ones will stop when no new resources are possible given the resources in the system; at that point closure is reached: ‘Thus, closure can be seen as an attractor of the dynamics defined by resource addition: it is the end point of the evolution, where further evolution stops’ [p 12]. As regards self-maintenance, starting at the closed set, some of the resources will be consumed but not produced in sufficient amounts to replace the used amounts; these will disappear from the set; this does not affect closure because the loss of resources cannot add new resources; resources now start to disappear one by one from the set; this process stops when the remaining resources depend only on other remaining resources (and not on the disappeared ones): ‘Thus, self-maintenance too can be seen as an attractor of the dynamics defined by resource removal. The combination of resource addition ending in closure followed by resource removal ending in self-maintenance produces an invariant set of resources and reactions. This unchanging reaction network is by definition an organization’ [p 12]. Every dynamic system will end up in an attractor, namely a stationary regime that the system cannot leave: ‘In the attractor regime the different components of the system have mutually adapted, in the sense that the one no longer threatens to extinguish the other; they have co-evolved to a “symbiotic” state, where they either peacefully live next to each other, or actively help one another to be produced, thus sustaining their overall interaction’ [p 12]. DPB: from the push and pull of these different attractors emerges (or is selected) an attractor that manages the behavior of the system.
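The two attractor dynamics described here can be sketched directly in Python (my own illustration, qualitative only; reactions and resource names are hypothetical): first resources are added until nothing new can be produced (closure), then resources that are consumed but no longer produced are removed until the remainder sustains itself.

# close(): keep adding everything producible from the current set until nothing new appears.
# prune(): keep removing resources that are consumed but not produced by the applicable reactions.
# The result of close() followed by prune() is an invariant set, i.e. an organization.

def close(reactions, resources):
    current = set(resources)
    while True:
        new = set()
        for i, o in reactions:
            if i <= current:
                new |= o - current
        if not new:
            return current
        current |= new

def prune(reactions, resources):
    current = set(resources)
    while True:
        active = [(i, o) for i, o in reactions if i <= current]
        produced = set().union(*(o for _, o in active)) if active else set()
        consumed = set().union(*(i for i, _ in active)) if active else set()
        doomed = consumed - produced
        if not doomed:
            return current
        current -= doomed

def organization(reactions, seed):
    return prune(reactions, close(reactions, seed))

if __name__ == "__main__":
    reactions = [
        ({"a"}, {"b"}),        # a -> b : a is consumed but never produced
        ({"b"}, {"b", "c"}),   # b -> b + c
        ({"c"}, {"b"}),        # c -> b
    ]
    print(organization(reactions, {"a"}))  # {'b', 'c'}: a disappears, b and c sustain each other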

Sustainability and resilience

An organization in the above sense is by definition self-maintaining and therefore sustainable. Many organizations grow because they produce more resources than they consume (i.e. some resources are overproduced, a positive feedback). Sustainability means the ability of an organization to grow without outside interference. Resilience means the ability to maintain the essential organization in the face of outside disturbances; a disturbance can be represented by the injection or the removal of a resource that reacts with others in the system. Processes of control are: buffering, negative feedback, and feedforward (neutralizing the disturbance before it has taken effect). The larger the variety of controls the system sports, the more disturbances it can handle, an implementation of Ashby’s law of requisite variety. Arbitrary networks of reactions will self-organize to produce sustainable organizations, because an organization is an attractor of their dynamics. DPB: this attractor issue, bearing in mind the difficulties with change management, reminds me of the text about the limited room an attracted system takes up in state-space (containment): it explains why a system, once it is ‘attracted’, will not change to another state without an effort of galactic proportions. ‘However, evolutionary reasoning shows that resilient outcomes are more likely in the long run than fragile ones. First, any evolutionary process starts from some arbitrary point in the state space of the system, while eventually reaching some attractor region within that space. Attractors are surrounded by basins, from which all states lead into the attractor (Heylighen, 2001). The larger the basin of an attractor, the larger the probability that the starting point is in that basin. Therefore, the system is more likely to end up in an attractor with a large basin than in one with a small basin. The larger the basin, the smaller the probability that a disturbance pushing the system out of its attractor would also push it out of the basin, and therefore the more resilient the organization corresponding to the attractor. Large basins normally represent stable systems characterized by negative feedback, since the deviation from the attractor is automatically counteracted by the descent back into the attractor. .. However, these unstable attractors will normally not survive long, as nearly any perturbation will push the system out of that attractor’s basin into the basin of a different attractor. .. This very general, abstract reasoning makes it plausible that systems that are regularly perturbed will eventually settle down in a stable, resilient organization’ [p 15].

Metasystem transitions and topological structures

A metasystem transition = a major evolutionary transition = the emergence of a higher-order organization from lower-order organizations. COT can be understood in this way if an organization S (itself a system of elements, albeit organized) behaves like a resource of the catalyst type: invariant under reactions, but with an input of resources it consumes, I(S), and an output of resources it produces, O(S), resulting in this higher-order reaction: I(S) + S → S + O(S). Assume that I(S) = {a,b} and O(S) = {c,d,e}; then this can be rewritten as a + b + S → S + c + d + e. S itself consists of organized elements and it behaves like a black box processing some input into an output. If S is resilient it can even respond to changes in its input with a changed output. Now the design space of metasystems can be widened to include catalyst resources of the type S, organizations that are self-maintaining and closed.

Concrete applications

It is possible to mix different kinds of resources; this enables the modeling of complex environments; this is likely to make the ensuing systems’ organizations more stable. ‘Like all living systems, the goal or intention of an organization is to maintain and grow. To achieve this, it needs to produce the right actions for the right conditions (e.g. produce the right resource to neutralize a particular disturbance). This means that it implicitly contains a series of “condition-action rules” that play the role of the organization’s “knowledge” on how to act in its environment. The capability of selecting the right (sequence of) action(s) to solve a given problem constitutes the organization’s “intelligence”. To do this, it needs to perceive what is going on in its environment, i.e. to sense particular conditions (the presence or absence of certain resources) that are relevant to its goals. Thus, an organization can be seen as a rudimentary “intelligence” or “mind”’ [p 20]. DPB: I find this interesting because of the explanation of how such a model would work: the resources are the rules that the organization needs to sort out and to put in place at the right occasion.

Stigmergy as a universal Coordination Mechanism (II)

Heylighen, F. . Stigmergy as a universal coordination mechanism II: Varieties and Evolution . Cognitive Systems Research (Elsevier) 38 . pp. 50-59. 2016

Abstract

‘One application is cognition, which can be viewed as an interiorization of the individual stigmergy that helps an agent to [carry out] a complex project by registering the state of the work in the trace, thus providing an external memory’ [p 50]. DPB: I understand this as: according to this hypothesis, stigmergy exists prior to cognition; this means that natural but non-living processes use stigmergy on an external medium; once systems are alive they are (in addition) capable of internalizing stigmergy, namely by internalizing the medium. The process of internalization of individual stigmergy is the same as (the development of?) cognition. This is another way of saying that the scope of a system changes so as to encompass the (previously external) medium on which the stigmergy takes place. The self-organization is now internalized. Cognition is now internalized. How does this view on the concept of cognition relate to the concept of individuation as a view on cognition?

1. Introduction

To bring some order to these phenomena, the present paper will develop a classification scheme for the different varieties of stigmergy. We will do this by defining fundamental dimensions or aspects, i.e. independent parameters along which stigmergic systems can vary. The fact that these aspects are continuous (“more or less”) rather than dichotomous (“present or absent”) may serve to remind us that the domain of stigmergic mechanisms is essentially connected: however different its instances may appear, it is not a collection of distinct classes, but a space of continuous variations on a single theme – the stimulation of actions by their prior results’ [p 50]. DPB: this reminds me of the landscape of Jobs: at the connection of the memes and the minds, there is a trace of the meme left on the brain and a trace of the brain is added to the meme, leaving the meme and the brain damaged. This means that from the viewpoint of the brain the memeplex is the medium and from the viewpoint of the meme the brain is the medium. The latter is more obvious to see: traces can be left in individuals’ brains. The former implies that changes are imposed on the memeplex; but the memeplex is represented by the expression of ideas in the real and in the mind; the real is an external medium, accessible through first order observations; the expression of the memeplex existing in the mind is an external medium, because it exists in other persons’ minds and in versions of the Self, both accessible through second-order observations. Back to the landscape: it is there anyhow, the difference in states is how the Jobs are connected and as a consequence how they are bounded and how they individuate.

2. Individual vs. collective stigmergy

Ants do not require a memory, because the present stage of the work is directly discernible by the same ant, and also by a different ant. Because the state of the work resides in the work itself rather than in memory, the work can be continued by the same ant, but by another just as well.

3. Sematectonic vs. marker-based stigmergy

Sematectonic means that the results of the work itself are the traces that signify the input for the next ant and the next state (Wilson, Sociobiology, 1975). Marker-based means that the stigmergic stimulation occurs through traces in the shape of markers, such as pheromones, left by other individuals (ants, termites!) before them, and not through traces of the work itself indicating a particular stage (Parunak, H.V.D., A survey of environments and mechanisms for human-human stigmergy, In Environments for multi-agent systems II (Weyns, Parunak, Michel (Eds.)), 2006). Marker signals represent symbols, while sematectonic signals represent the concrete thing itself. But this is not straightforward: a territory boundary indicated with urine markers is an indication of the fact that there is an animal claiming this territory, while the urine contains additional information specific to that animal. To spread urine evenly around the claimed area and to interpret the information contained in it is useful for both the defender and the visitor in order to manage a potential conflict, and hence to reduce the uncertainties from the environment for both. The point is that the urine represents information both about the object and about the context.

4. Transient vs. persistent traces

‘We have conceptualized the medium as the passive component of the stigmergic system, which undergoes shaping by the actions, but does not participate in the activity itself’ (emphasis of the author) [p 52]. But a medium is bound to dissipate and decay, unless the information is actively maintained and reconstructed; without ongoing updates it will become obsolete, especially as the situation changes. No sharp distinction can be made between transient and persistent traces; they are the extremes of a continuum. A persistent trace does not require the simultaneous presence of the agents, while a purely transient trace does require their simultaneous presence. Synchronous stigmergy amounts to broadcasting a signal, releasing information not directed at anyone in particular. ‘A human example would be the self-organization of traffic, where drivers continuously react to the traffic conditions they perceive’ [p 53]. DPB: the gist of this example is that the behavior of the drivers is the signal: it is synchronous, not directed at anyone in particular, and it is sematectonic, because it represents the state of the system.

5. Quantitative vs. qualitative stigmergy

Quantitative stigmergy means that stronger conditions imply a more forceful subsequent action or, in practical terms, a higher probability of action. Qualitative stigmergy refers to conditions and actions that differ in kind rather than in degree: thís trace leads to thát action. There is no sharp distinction between these two categories.

6. Extending the mind

‘Traditionally, cognition has been viewed as the processing of information inside the brain. More recent approaches, however, note that both the information and the processing often reside in the outside world (Clark, 1998; Dror & Harnad, 2008; Hollan, Hutchins & Kirsh, 2000) – or what we have called the medium. .. Thus the human mind extends into the environment (Clark & Chalmers, 1998), “outsourcing” some of its functions to external support systems. .. In fact, our mental capabilities can be seen as an interiorization of what were initially stigmergic interactions with the environment’ (emphasis of the author) [p 54]. DPB: a somewhat garbled quote. This reminds me of the idea that a brain would not have been required if the environment were purely random. Precisely because it is not, and hence patterns can be cognized, it pays to have an instrument for just that: a brain embodying a set of condition-action rules to generate an action from the state of the environment it senses. Purely stigmergic activity lacks an internal memory: the state of the medium serves as its memory, as it reflects its every experience. But then the system is dependent on the contingencies of the part of the environment that is the medium: in order to detach itself from the uncertainties of the environment it internalizes memory and information processing.

7. The evolution of cooperation

In a stigmergic situation the defector does not weaken the cooperator: the cost of leaving a trace is sunk anyway, so a defector who exploits the trace takes nothing away from the cooperator who left it.

Stigmergy as a Universal Coordination Mechanism (I)

Heylighen, F. . Stigmergy as a universal coordination mechanism I: Definition and components . Cognitive Systems Research (Elsevier) 38 . pp. 4-13. 2016

1. Past, present and future of the “stigmergy” concept

The concept was introduced by Pierre-Paul Grassé (1959) to describe a coordination mechanism used by insects: the work of one leaves traces in the environment that stimulate subsequent work by that insect or by others: ‘This mediation via the environment ensures that tasks are executed in the right order, without any need for planning, control, or direct interaction between agents’ [p 4]. DPB: how can execution in the right order be assured: it is not certain in what order the other agents will encounter the traces and hence in what order they will be motivated to act? From the examples in the text it appears that the stage in which work is left by the previous worker is input for the decision rules of a later worker; this implies that the stage of the work can be recognized. This is not the same as the agents assessing the stage of the work in the sense of attributing a meaning to it, or as in distinguishing this earlier stage from that later stage, because in that case the agent would have to have an idea of the finalized work and of the extent to which it would have to be complete in relation to the finalized work. Another example is pheromone trails left by insects that are then followed by others. These ideas can in some cases explain self-organization in social systems, aka swarm intelligence (Deneubourg 1977). Conceptually a next step is computer-supported collaboration between human agents, in particular via the www; another example is the establishing of a price on a market: a price emerges from the myriad of interactions between people and then serves as a reference for their decisions thereafter. DPB: anchoring means that once one has become used to some mark, it serves as a frame of reference thereafter; priming means that once a reference price has been given, it serves as a frame of reference thereafter; are these stigmergic effects of a Luhmannian communication on the human mind? Is spoken human language an example also, because it damages the direct environment and it only lasts as a damage in the minds of the people involved in the conversation? Is written language an example in a kind of slow and long-lasting way: once written, its damaging effects remain forever; in that way, language (spoken or written) can be deframed and reframed and be assigned a new meaning. Understood in this sense stigmergy is ubiquitous and it can clarify many things: ‘Stigmergy in the most general sense does not require either markers or quantities. Another, even more common misunderstanding is that stigmergy only concerns groups or swarms consisting of many agents. As we will show, stigmergy is just as important for understanding the behavior of a single individual’ [p 5]. The notion of an unintentional trace in a passive medium is far removed from the notion of a direct influence of the behavior of one agent on the behavior of another agent.

2 From etymology to definition

Stigmergy is derived from the Greek stigma, which means mark or puncture, and ergon, which means work, the product of work, or action: as a joint concept it originally meant a goad, prod or spur (prikkel: a stimulus): ‘Thus (Grassé, 1959) defined stigmergy as ‘the stimulation of workers by the very performances they have achieved’ (from the original English abstract)’ [p 6]. More recently it was understood as follows: ‘if we understand stigma as “mark” or “sign” and ergon as “action”, then stigmergy is “the notion that an agent’s actions leave signs in the environment, signs that it and other agents sense and that determine their subsequent actions”’ [p 6]. DPB: the understanding of Grassé is that stigmergy means motivation by the work (of others) and the understanding of Parunak is motivation by marks left by the work. Suppose an uttered word already leaves a mark on the minds of some people in a network that is the environment of someone; then the difference between the two is that in the notion of Grassé one has to be present and in the notion of Parunak one does not. DPB: the expression of a meme leads to other expressions of it. ‘Stigmergy is an indirect, mediated mechanism of coordination between actions, in which the trace of an action left on a medium stimulates the performance of a subsequent action’ [p 6]. Also the picture is interesting:

In the medium: a mark: which stimulates >>

In the agent: an action: which produces >> [p 6].

DPB: this is my Logistical Model exactly! Using memes it is: an expression of a meme produces a mark in a medium and a perception of that mark stimulates an action in an agent. But what I find missing here is the effect of a meme in the internal, the mark that is left within the agent. That is a difference; let’s see how the stigmergy is defined later on, and whether it includes the mind of the agent when it is included in a social system.

3. Basic components of stigmergy

Action is defined as a causal process that produces a change in the world (real). Agent is defined as a goal-directed autonomous system; this concept is not strictly necessary, because the actions of a single unspecified agent can also be coordinated by stigmergy (but it is useful if more than one agent is involved, with different kinds of actions): stigmergy can even coordinate actions that are merely events or (agentless) processes. This can be represented by a condition-action rule: the condition specifies the state of the world inducing the action, and the action specifies the subsequent transformation of that state. This can also be written as: a+b+c+.. >> x+y+.., where the + indicates the conjunction of the conditions and of the actions. Chemical Organization Theory (Dittrich & Winter, 2008) shows how collections of these simple reactions tend to become coordinated by acting on a shared medium (reaction vessel), where they produce an evolving trace expressed by the concentrations of the different ‘molecules’ (a, b, ..). This coordinated pattern of activity defines an organization: a self-sustaining, dynamic network of interacting ‘molecules’. The relation is causal but not deterministic: the probability that an action takes place increases if the conditions are met (P(action | condition) > P(action)). DPB: the medium is the whole of the environment that can contain (be damaged so as to show) data in the sense of a signal, whether fast or slow to disappear and widely or narrowly distributed, e.g. a tombstone (in the real) or a change in the state of mind of one’s interlocutor caused by the irritation of one’s words (in the virtual of Simondon). In the latter case the minds of the interlocutors are a part of the environment of the person: ‘The medium is that part of the world that undergoes changes through the actions and whose states are sensed as conditions for further actions’ [p 7]. The medium is an aspect of the environment: ‘First, .. , the environment is not in general perceivable and controllable. Second, the environment normally denotes everything outside the system or agent under consideration. However, stigmergy can also make use of an internal medium’ (emphasis by the author) [p 7]. DPB: duly noted! As a consequence aspects of the agent system are controllable by elements in the environment and hence they belong to the medium. The environment is that part of the world with which an agent interacts; the phenomena that are perceivable and controllable are different for each agent and hence every agent has a different environment; ‘When we consider stigmergic coordination between different agents, we need to define the medium as that part of the world that is controllable and perceivable by all of them’ [p 7]. DPB: this reminds me of the discourse / population idea, where a multitude of people included by a communication (the discourse) is defined as a population. This is different, because in the discourse people are included that find themselves attracted as a result of their life experience and because of the selections of the communication. The medium is a broader and wider concept because it is determined by what people can perceive and control, but that does not necessarily attract them because of their life experience so far. The role of the medium is to allow interaction between different actions to take place, and thus, indirectly, between different agents; this mediating function is the true power of stigmergy.
A final component of a stigmergic system is a trace or a mark; it is the result of an action and as such it contains information about the action that produced it: ’We might see the trace as a message, deposited in a medium, through which the pattern of activity communicates with itself, while maintaining a continuously updated “memory” of its achievements. From the point of view of an individual agent, on the other hand, the trace is a challenge: a situation that incites action, in order to remedy a perceived problem or shortcoming, or to exploit an opportunity for advancement (Heylighen, 2012)’ (emphasis by the author) [p 8]. DPB: I think in the Logistical Model the medium is the mind of the person as well as the communication: both are simultaneously and differently damaged through their mutual irritations.
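To make the condition-action formulation a+b+c+.. >> x+y+.. concrete, here is a minimal Python sketch (my own illustration, not the paper's model; the termite-like rules and the firing probability are hypothetical) of rules reading from and writing to a shared medium, so that each action's trace becomes the condition for the next.

# One time step: the first rule whose condition is present in the medium may fire
# (with some probability, since the relation is causal but not deterministic),
# consuming its condition and leaving the trace of its action in the same medium.
import random

def step(medium, rules, p_fire=0.9):
    for condition, action in rules:
        if condition <= medium and random.random() < p_fire:
            medium = (medium - condition) | action
            break
    return medium

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical termite-like rules: a pile of mud stimulates building a pillar,
    # a pillar stimulates building an arch, without any agent-to-agent communication.
    rules = [
        (frozenset({"mud pile"}), frozenset({"pillar"})),
        (frozenset({"pillar"}), frozenset({"arch"})),
    ]
    medium = {"mud pile"}
    for _ in range(3):
        medium = step(medium, rules)
        print(medium)   # the work advances stage by stage via the shared medium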

4. Coordination

‘According to the Oxford Dictionary, coordination can be defined as the organization of the different elements of a complex body or activity so as to enable them to work together effectively’ (emphasis by the author) [p 8]. In the case of stigmergy the ‘elements’ are actions or agents; ‘effectively’ means that a goal is pursued; ‘working together’ means that the agents or actions are harmonious or synergetic, ‘the one rather helping than hindering the other’ [p 8]. ‘Organization’ means a structure with a function, where ‘function’ is the achievement of the intended effect and ‘structure’ is the way agents or actions are connected such that they form a coherent whole. ‘This brings the focus on the connections that integrate the actions into a synergetic, goal-oriented whole’ (emphasis of the author) [p 8]. DPB: this reminds me of autopoietic systems: the properties of the elements of a system determine the relations between them. The goal-orientation and the synergy (or harmony) of the elements (or rather of the body they form) is by definition dedicated to their autopoiesis.

5. The benefits of stigmergy

Stigmergic organization limits the gap between planning, instructions and reality; it is robust to contingencies and shocks; it is less prone to errors of communication and errors of control than traditional forms of organization; and it is less dependent on the number of agents or actions involved or the dependencies between them. The only requirement is that the agents have access to the medium and that they can recognize the conditions to start their actions. There is no need for: planning, memory, direct communication, mutual awareness, simultaneous presence, imposed sequence, imposed division of labor, commitment, or centralized control.

6. Self-organization through negative feedback

Error-controlled regulation means that a deviation from the goal of an agent implies a change of behavior of the agent such that a compensatory action suppresses the effect of the deviation, the error. The agent must be capable of sensing the error and of executing a compensatory action. As regards the establishment of effective collective action, the only additional assumption is that the goals of the agents are not contradictory; the goals need not be the same for it. ‘We may assume that agents have acquired their condition-action rules (and thus their implicit goals) through natural selection of instinctual behavior or differential reinforcement of learned behavior. This means that their condition-action rules are generally appropriate to the local environment, including the other agents with which they regularly interact’ [p 11]. DPB: the entire system maintains its autopoiesis and its parts maintain theirs; the entire system develops (evolutionarily) in its environment of other systems and its parts develop in their environments of other parts; the parts develop autopoietically within the conditions of the autopoiesis of the entire system. Their ‘goals’ are their autopoiesis as it is trained to the requirements of their (local) context.

7. Self-organization through positive feedback

This is the amplification of movements toward a goal; such movements can be called diversions because they divert action from its ongoing course.

8. Conclusion

‘Virtually all evolved processes that require coordination between actions seem to rely at some level on stigmergy, in the sense that subsequent actions are stimulated by the trace left by previous actions in some observable and manipulable medium. The trace functions like a registry and map, indicating which actions have been performed and which still need to be performed. It is shared by all agents that have access to the medium, thus allowing them to coordinate their actions without need for agent-to-agent communication. It even allows the coordination of “agentless” actions, as investigated e.g. by Chemical Organization Theory (Dittrich & Fenizio, 2007)’ [p 12]. DPB: I disagree with the ‘that require coordination’ phrase: what about a wandering discussion, where the medium involves the brains of the other participants? This does not require coordination as such, but it is coordinated.

Social Systems as Parasites

Seminar 1 December 2017, Francis Heylighen

Social Systems as Parasites

The power of a social system

1. In an experiment concerning punishment, people obey an instruction to administer electric shocks to others. People tend to be obedient / “God rewards obedience” / “Whom should I obey first?” 2. When asked to point out which symbol is equal to another, people select the one they believe is equal, but when they are confronted with the choices of the other contestants, they tend to change their selection to what the others have chosen. Social systems in this way determine our worldview, namely the social construction of reality, by specifying what is real.

Social systems suppress self-actualization

Social systems don’t ‘want’ you to think for yourself, but to replicate their information instead; social systems suppress non-conformist thought, namely they suppress differences in thought, and thereby they do not allow the development of unique (human) personalities: they suppress self-actualization. Examples of rules: 1. A Woman Should Be A Housewife >> If someone is a woman then, given that she shows conformist behavior, she will become a housewife and not a mathematician &c. Suppose Anna has a knack for math: if she complies then she becomes a housewife and she is likely to become frustrated; if she does not comply then she will become a mathematician (or engineer &c) and she is likely to become rebellious and suffer from doubts &c. 2. To Be Gay Is Unacceptable >> If someone is gay then, given that she shows conformist behavior, she will suppress gay behavior and show a behavior considered normal instead. Suppose Anna is gay: if she complies she will be with a man and become frustrated; if she does not comply then she is likely to become rebellious, she will exhibit gay behavior, be with a woman, and suffer from doubts &c.

Social Systems Programming

People obey social rules unthinkingly and hence their self-actualization is limited (by them). This is the same as to say that social systems have control over people. The emphasis on the lack of thinking is by the authors. The social system consists of rules that assist thinking. Only thinking outside of those rules (thinking while not using those rules) would allow a workaround, or even a replacement of the rules, temporary or ongoing. This requires thinking without using pre-existing patterns, or even thinking sans-image (new to the world).

Reinforcement Learning

1. Behaviorist: action >> reward (rat and shock) 2. socialization: good behavior and bad behavior (child and smile). This was a sparse remark: I guess the development of decision-action rules in children by socialization (smiling) is the same as the development of behavioral rules in rats by a behaviorist approach (shock).

Social systems as addictions

Dopamine is a neurotransmitter producing pleasure. A reward releases dopamine; Dopamine is addictive; Rewards are addictive. Social systems provide (ample) sources for rewards; Participating in social systems is a source of dopamine and hence it is addictive (generates addiction) and it maintains the addiction.

Narratives

Reinforcement need NOT be immediate NOR material (e.g. heaven / hell). Narratives can describe virtual penalties and rewards: myth, movies, stories, scriptures.

Conformist transmission

The more people transmit a particular rule, the more people will adopt and transmit it in turn. DPB: this reminds me of the changes in complex systems as a result of small injected changes: many small changes and fewer large ones: the relation between the size of the shifts and their frequency is a power law.

Cognitive Dissonance

Entertaining mutually inconsistent beliefs is painful: for instance, a person believes it is bad to kill other people, but as a soldier he now kills other people. This conflict can be resolved by replacing the picture of a person to be killed with the picture of vermin: the person thinks it is OK to kill vermin.

Co-opting emotions

Emotions are immediate strong motivators that bypass rational thought. Social systems use emotions to reinforce the motivation to obey their rules. 1. Fear: the anticipation of a particular outcome and the desire to avoid it 2. Guilt: fear of retribution and the desire to redeem (make amends); this can be exploited by the social system because there can be a deviation from its rules without a victim and it works on imaginary misdeeds: now people want to redeem themselves vis-a-vis the social system 3. Shame: a perceived deficiency of the self because one is not fulfilling the norms of the social system: one feels weak, vulnerable and small and wishes to hide; the (perceived) negative judgments of others (their norms) are internalized. PS: Guilt refers to a wrong action and implies a change of action; shame refers to a wrong self and implies the wish for a change of (the perception of) self 4. Disgust: revulsion at (sources of) pollution such as microbes, parasites &c. The Law of Contagion implies that anything associated with contagion is itself contagious.

Social System and disgust

The picture of a social system is that it is clean and pure and that it should not be breached. Ideas that do not conform to the rules of the social system (up to and including dogma and taboo) are like sources of pollution; these contagious ideas lead to reactions of violent repulsion by the ones included by the social system.

Vulnerability to these emotions

According to Maslow people who self-actualize are more resistant to these emotions of fear, shame, guilt and disgust.

DPB: 1. how do variations in the sensitivity to neurotransmitters affect the sensitivity to reinforcement? I would speculate that a higher sensitivity to dopamine leads to a more eager reaction to a positive experience, hence leading to a stronger reinforcement of the rule in the brain 2. how does a higher or lower sensitivity to risk (the chance that some particular event occurs and the impact when it does) affect people’s abiding by the rules? I would speculate that sensitivity to risk depends on the power to cognize it and to act in accordance with it. A higher sensitivity to risk leads to attempting to follow (conformist) rules more precisely and more vigorously; conversely, a lower sensitivity to risk leaves room for interpretation of the rule, its condition or its enactment.

How Social Systems Program Human Behavior

Heylighen, F., Lenartowicz, M., Kingsbury, K., Beigi, S., Harmsen, T. . Social Systems Programming I: neural and behavioral control mechanisms

Abstract

‘Social systems can be defined as autopoietic networks of distinctions and rules that specify which actions should be performed under which conditions. Social systems have an enormous power over human individuals, as they can “program” them, ..’ [draft p 1]. DPB: I like the summary ‘distinctions and rules’, but I’m not sure why (maybe it is the definitiveness of this very small list). I also like the phrase ‘which actions .. under which conditions’: this is interesting because social systems are ‘made of’ communication, which in turn is ‘made of’ signals, which in turn are built up from selections of utterances &c., understandings and information. The meaning is that information depends on its frame, namely its environment. And so this phrase makes the link between communication, rule-based systems and the assigning of meaning by (in) a system. Lastly these social mechanisms hold a strong influence over humans, even up to the point of making them damage themselves. This paper is about the basic neural and behavioral mechanisms used for programming in social systems. This should be important for my landscape of the mind, and for familiarization.

Introduction

Humans experience a large influence from many different social systems on a daily basis: ‘Our beliefs, thoughts and emotions are to an important extent determined by the norms, culture and morals that we acquired via processes of education, socialization and communication’ [p 1]. DPB: this resonates with me, because of the choice of the words ‘beliefs’ and ‘thoughts’: these nicely match the same words in my text, where I explain how these mechanisms operate. In addition I like this phrase because of the concept of acquisition, although I doubt that the word ‘communication’ above is used in the sense of Luhmann. It is not easy to critique these processes, or even to realize that they are ‘social construction’, and it is difficult to understand them to be so (the one making a distinction cannot talk about it). Also, what is reality in this sense: is it what would have been without the behavior based on these socialized rules, or the behavior as-is (the latter, I guess)? ‘Social systems can be defined as autopoietic networks of distinctions and rules that govern the interactions between individuals’ (I preferred the version from the abstract: which actions should be performed under which conditions, DPB). ‘The distinctions structure reality into a number of socially sanctioned categories of conditions, while ignoring phenomena that fall outside these categories. The rules specify how the individuals should act under the thus specified conditions. Thus, a social system can be modeled as a network of condition-action rules that directs the behavior of individual agents. These rules have evolved through the repeated reinforcement of certain types of social actions’ [p 2]. DPB: this is a nice summary of how I also believe things work: rule-based systems – distinctions (social categories) – conditions per distinction – behavior as per the condition-action rules – rules evolve through repeated reinforcement of social actions. ‘Such a system of rules tends to self-organize towards a self-perpetuating configuration. This means that the actions or communications abiding by these rules engender other actions that abide by these same general rules. In other words, the network of social actions or communications perpetually reproduces itself. It is closed in the sense that it does not generate actions of a type that are not already part of the system; it is self-maintaining in the sense that all the actions that define parts of the system are eventually produced again (Dittrich & Winter, 2008). This autopoiesis turns the social system into an autonomous, organism-like agent, with its own identity that separates it from its environment. This identity or “self” is preserved by the processes taking place inside the system, and therefore actively defended against outside or “non-self” influences that may endanger it’ [p 2]. DPB: this almost literally explains how cultural evolution takes place. This might be a good quote to include and cut a lot of grass in one go! Social systems wield a powerful influence over people, up to the point of making them act against their own health. The workings of social systems are likened to parasites such as the rabies virus, which ‘motivates’ its host to become aggressive and bite others so as to spread the virus. ‘We examine the simple neural reinforcement mechanism that is the basis for the process of conditioning whilst also ensuring self-organization of social systems’ (emphasis by the author) [p 3].
DPB: very important: this is at the pivot where the human mind is conditioned such that it incites (motivates) it to act in a specific way and where the self-organization of the social system occurs. This is how my bubbles / situations / jobs work! An element of this process is familiarization: the neural reinforcement mechanism.

The Power of Social Systems

In the hunter-gatherer period, humans lived in small groups and individuals could come and go as they wanted, to join or form a new group [p 3]. DPB: I question whether free choice was involved in those decisions to stay or leave – or whether they were rather kicked out – and whether it was a smooth transfer to other bands – or whether they lost standing and had to settle for a lower rank in a new group. ‘These first human groupings were “social” in the sense of forming a cooperative, caring community, but they were not yet consolidated into autopoietic systems governed by formal rules, and defined by clear boundaries’ [p 4]. DPB: I have some doubts because it sounds too idealistic / normal; however, if taken at face value then this is a great argument to illustrate the developing positions of Kev and Gav against. In sharp contrast are the agricultural communities: they set themselves apart from nature and other social systems, with everything outside their domain fair game for exploitation, hierarchically organized, upheld with a symbolic order: authorities and divinities paid homage to with offerings, rituals, prescriptions and taboos. In the latter society it is dangerous not to live by the rules: ‘Thus, social systems acquired a physical power over life and death. As they evolved and refined their network of rules, this physical power engendered a more indirect moral or symbolic power that could make people obey the norms with increasingly less need for physical coercion’ [p 4]. DPB: I always miss the concept of ‘autopolicing’ in the ECCO texts. Individuation of a social system: 1. a contour forms from first utterances in a context (mbwa!) 2. these are mutually understood and get repeated 3. when outside the distinction (norm) there will be a remark 4. autopolicing. Our capacity to cognize depends on the words our society offers to describe what we perceive: ‘More fundamentally, what we think and understand is largely dependent on the concepts and categories provided by the social systems, and by the rules that say which category is associated with which other category of expectations or actions’ [p 5]. DPB: this adds to my theory the idea that not only the rules for decision making and for action depend on the belief systems, namely the memeplexes, but also people’s ‘powers of perception’.

How Social Systems Impede Self-actualization

Social rules govern the whole of our worldview, namely our picture of reality and our role within it (emphasis DPB re definition worldview): ‘They tell us which are the major categories of existence (e.g. mind vs. body, duty vs. desire), what properties these categories have (e.g. mind is insubstantial, the body is inert and solid, duty is real and desire is phantasmagoric), and what our attitudes and behaviors towards each of these categories should be (e.g. the body is to be ignored and despised, desire is to be suppressed)’ [p 5]. DPB: I like this because it gives some background to motivations; however, I believe they are more varied than this and that they do not only reflect the major categories but everything one can know (or rather believe). They are just-so in the sense that they can be (seen or perceived as) useful for something like human well-being or limiting for it. They are generally tacit and believed to be universal, and so it is difficult to know which of the above they are. ‘.. these rules have self-organized out of distributed social interactions. Therefore, there is no individual or authority that has the power to change them or announce them obsolete. This means that in practice we are enslaved by the autopoietic social system: we are programmed to obey its rules without questioning’ [p 6]. DPB: I agree, there is no other valid option than that from a variety of just-so stories a few are selected that are more fitting with the existing ones. For people it may now appear that these are the more useful ones, but the used arguments serve a mere narrative that explains why people do stuff, lest they appear to do stuff without knowing why. And as a consequence the motivation to do things only if they serve a purpose is itself a meme that tells us to act in this way, especially vis-a-vis others, namely to construct a narrative such that this behavior is explained. The rules driving behavior can be interpreted more or less strictly: ‘Moreover, some rules (like covering the feet) tend to be enforced much less strictly than others (like covering the genitals)’ [p 6]. DPB: hahaa: Fokke & Sukke. Some of the rules that govern a society are allowed some margin of interpretation and so a variety of them exist; others are assumed to be generally valid, and hence they are more strictly interpreted, exhibiting less variety, leaving people unaware that they are in fact obeying a rule at all. As a consequence of a particular rule being part of a much larger system it cannot be easily changed, especially because the behavior of the person herself is – perhaps unknowingly – steered by that rule or system of rules. In this sense it can be said to hinder or impede people’s self-actualization. ‘The obstruction of societal change and self-actualization is not a mere side effect of the rigidity of social systems; it is an essential part of their identity. An autopoietic system aims at self-maintenance. Therefore, it will counteract any processes that threaten to perturb its organization (Maturana & Varela, 1980; Mingers, 1994). In particular, it will suppress anything that would put into question the rules that define it. This includes self-actualization, which is a condition generally characterized by openness to new ideas, autonomy, and enduring exploration (Heylighen, 1992; Maslow, 1970). Therefore, if we wish to promote self-actualization, we will need to better understand how these mechanisms of suppression used by social systems function’ [p 7].
DPB: I fully agree with the mechanism and I honestly wonder if it is at all possible to know one’s state of mind (what one has been familiarized with in one’s life experience so far, framed in the current environment), and hence if it is possible to self-actualize in a different way from what the actual state of mind (known or not) rules.

Reinforcement: reward and punishment

Conditioning, or reinforcement learning, is a way to induce a particular behavior. Behavior rewarded with a pleasant stimulus tends to be repeated, while behavior punished by an unpleasant stimulus tends to be suppressed. The more often a combination of the above occurs, the more the relation will be internalized, such that it can take the shape of a condition-action (stimulus-response) rule. This differential or selective reinforcement occurs in a process of socialization; the affirmation need not be a material reward, a simple acknowledgement and confirmation suffices (a smile, a thumbs up, a like!); these signals suffice for the release of dopamine in the brain. ‘Social interaction is a nearly ubiquitous source of such reinforcing stimuli. Therefore, it has a wide-ranging power in shaping our categorizations, associations and behavior. Maintaining this dopamine-releasing and therefore rewarding stimulation requires continuing participation in the social system. That means acting according to the system’s rules. Thus, social systems program individuals in part through the same neural mechanisms that create conditioning and addiction. This ensures not only that these individuals automatically and uncritically follow the rules, but that they would feel unhappy if somehow prevented from participating in this on-going social reinforcement game. Immediate reward and punishment are only the simplest mechanisms of reinforcement and conditioning. Reinforcement can also be achieved through rewards or penalties that are anticipated, but that may never occur in reality’ (emphasis by the author) [p 8].
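A minimal Python sketch of differential reinforcement (my own illustration, not the authors' model; the rule, reward values and learning rate are hypothetical): the strength of a condition-action rule is pulled up after a rewarding outcome (a smile, a like) and pulled down after punishment, so that rewarded behavior becomes progressively more probable.

# A condition-action rule whose probability of firing (its "strength") is updated
# after each rewarded or punished occurrence, mimicking dopamine-based conditioning.
import random

class ConditionActionRule:
    def __init__(self, condition, action, strength=0.5, learning_rate=0.2):
        self.condition, self.action = condition, action
        self.strength = strength              # probability of acting when the condition holds
        self.learning_rate = learning_rate

    def maybe_act(self, situation):
        return self.condition in situation and random.random() < self.strength

    def reinforce(self, reward):
        """reward = +1 (approval) pulls the strength up, reward = -1 (disapproval) pulls it down."""
        target = 1.0 if reward > 0 else 0.0
        self.strength += self.learning_rate * (target - self.strength)

if __name__ == "__main__":
    random.seed(0)
    rule = ConditionActionRule("greeted", "smile back")
    for _ in range(10):                       # repeated social approval
        if rule.maybe_act({"greeted"}):
            rule.reinforce(+1)
    print(round(rule.strength, 2))            # the strength has grown well above its initial 0.5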

The power of narratives

People are capable of symbolic cognition and can conceive of situations that have never occurred (to them): ‘These imagined situations can function as “virtual” (but therefore not less effective) rewards that reinforce behavior’ [p 8]. Narratives (fairy tales, for instance) feature characters who are punished or rewarded for their specific behavior. Social systems exploit people’s capacity for symbolic cognition through such narratives, and thus build on people’s anticipatory powers in order to maintain and spread themselves. ‘Such narratives have the advantage that they are easy to grasp, remember and communicate, because they embed abstract norms, rules and values into sequences of concrete events experienced by concrete individuals with whom the audience can easily empathize (Bruner, 1991; Heylighen, 2009; Oatley, 2002). In this way, virtual rewards that in practice are unreachably remote (like becoming a superstar, president of the USA, or billionaire) become easy to imagine as realities’ (emphasis by the author) [p 9]. Narratives become more believable when communicated via media, celebrities, scripture deemed holy, &c.

Conformist transmission

Reinforcement is more effective the more often it is repeated. Given that social systems are self-reproducing networks of communications (Luhmann, 1995), the information they contain will be heard time and again. Conformist transmission means that you are more liable to adopt an idea, behavior or narrative the more individuals communicate it to you; once you have adopted it, you are more likely to convert others to it and to confirm it when others express it. DPB: I agree, and I never thought of it this way: once familiarized with an idea, one not only becomes more convinced of it but also more evangelical about it. In that way an idea spreads faster the more familiar it is to more people who then talk about it simultaneously. It can then become a common opinion, and at that point it becomes more difficult to retain other ideas, up to the point that direct observation can be overruled. Sinterklaas and Zwarte Piet exist!
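
A minimal sketch of conformist transmission (my own illustration; the population size, contact number and convex adoption curve are assumptions): the chance of adopting the idea rises disproportionately with the number of contacts who already express it, and every adopter becomes a new transmitter, so a small group can grow into a common opinion.

    # Minimal sketch (illustration only): adoption probability grows with how many of
    # one's contacts already express the idea; adopters then transmit it themselves.
    import random

    def step(adopted, contacts_per_person=5, population=1000):
        """One round of communication; returns the new number of adopters."""
        share = adopted / population
        new_adopted = adopted
        for _ in range(population - adopted):
            # hear the idea from k randomly met people; adopt with a probability that
            # rises disproportionately with how many of them express it (conformist bias)
            heard = sum(random.random() < share for _ in range(contacts_per_person))
            if random.random() < (heard / contacts_per_person) ** 2:
                new_adopted += 1
        return new_adopted

    adopted = 50  # a small initial group holds the idea
    for _ in range(20):
        adopted = step(adopted)
    print(adopted)  # typically close to the whole population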

Cognitive dissonance and institutionalized action

People have a preference for coherence in thought and action: ‘When an individual has mutually inconsistent beliefs, this creates an unpleasant tension, known as cognitive dissonance; this can be remedied by rejecting or ignoring some of these thoughts, so that the remaining ones are consistent’. This can be used by social systems to suppress non-conformist ideas by having a person act in accordance with the rules of the social system while these conflict with the person’s own rules: the conformist actions cannot be denied, and so the person must cull the non-conformist ideas to release the tension [p 10]. ‘This mechanism becomes more effective when the actions that confirm the social norms are formalized, ritualized or institutionalized, so that they are repeatedly and unambiguously reinforced’ [p 10]. DPB: an illustration is given from [Zizek 2010]: by performing the rituals one becomes religious, because the rituals are the religion. This is an example of a meme: an expression of the core idea; conversely, by repeating the expression one also repeats the core idea, and thereby familiarizes oneself with that idea as it becomes reinforced in one’s mind. It reminds me of the idea of the pencil held in the mouth making a person happier (when held crosswise) or unhappier (when pointing forward between the lips). And to top it off: ‘Indeed, the undeniable act of praying to God can only be safeguarded from cognitive dissonance by denying any doubts you may have about the existence of God. This creates a coherence between inner beliefs and socially sanctioned actions, which now come to mutually reinforce each other in an autopoietic closure’ [p 10]. DPB: this is the role of dogma in any belief system: the questions that cannot be asked, the no-go areas, &c.
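
A minimal sketch of the dissonance-reduction step described above (my own illustration; the beliefs and the inconsistency relation are hypothetical): once the socially enforced action has been performed and cannot be denied, the beliefs inconsistent with it are culled.

    # Minimal sketch (illustration only): the performed action is undeniable,
    # so beliefs inconsistent with it are dropped to restore coherence.
    beliefs = {"praying is pointless", "I should fit in", "ritual X is meaningful"}

    # hypothetical inconsistency relation between a performed action and beliefs
    inconsistent_with = {
        "perform ritual X": {"praying is pointless"},
    }

    def reduce_dissonance(beliefs, performed_action):
        """Keep only the beliefs that are consistent with the undeniable action."""
        return beliefs - inconsistent_with.get(performed_action, set())

    beliefs = reduce_dissonance(beliefs, "perform ritual X")
    print(beliefs)  # the non-conformist belief has been culled; the remaining set is coherent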

Distributed Intelligence

Heylighen, F. and Beigi, S. . mind outside brain: a radically non-dualist foundation for distributed cognition . Socially Extended Epistemology (Eds. Carter, Clark, Kallestrup, Palermos, Pritchard) . Oxford University Press . 2016

Abstract

We approach the problem of the extended mind from a radically non-dualist perspective. The separation between mind and matter is an artefact of the outdated mechanistic worldview, which leaves no room for mental phenomena such as agency, intentionality, or experience. [DPB: the rationale behind this is the determinism argument: if everything is determined by the rules of physics (nature), then nothing can be avoided and the future is determined. There can be no agency because there is nothing to choose, there can be no intentionality because people’s choices are determined by the rules of physics (it appears to be their intention but it is physics talking), and there can be no personal experience because which events a person encounters is indifferent to the existence of the (physical) person]. We propose to replace it by an action ontology, which conceives mind and matter as aspects of the same network of processes. By adopting the intentional stance, we interpret the catalysts of elementary reactions as agents exhibiting desires, intentions, and sensations. [DPB: I agree with the idea that mind and body are ‘functions of the same processes’. The intentional stance implies the question: what would I desire, want, feel in its place in this circumstance, and hence what can I be expected to do?] Autopoietic networks of reactions constitute more complex superagents, which moreover exhibit memory, deliberation and sense-making. In the specific case of social networks, individual agents coordinate their actions via the propagation of challenges. [DPB: for the challenges model: see the article Evo mailed]. The distributed cognition that emerges from this interaction cannot be situated in any individual brain. [DPB: this is important, and I have discussed it in the section about the Shell operator, who cannot physically be aware of the processes outside his own scope of professional activities]. This non-dualist, holistic view extends and operationalizes process metaphysics and Eastern philosophies. It is supported by both mindfulness experiences and mathematical models of action, self-organization, and cognition. [DPB: I must decide how to apply the concepts of individuation, virtual/real/present, process ontology and/or action ontology, distributed cognition and distributed intelligence (do I need that?), and computation/thinking/information processing in my arguments].

Introduction

Socially extended knowledge is a part of the philosophical theory of the extended mind (Clark & Chalmers, 1998; Palermos & Pritchard, 2013; Pritchard, 2010): mental phenomena such as memory, knowledge and sensation extend outside the individual human brain, into the material and social environment. DPB: this reminds me of the Shell narrative. The idea is that human cognition is not confined to information processing within the brain, but depends on phenomena external to the brain: ‘These include the body, cognitive tools such as notebooks and computers, the situation, the interactions between agent and environment, communications with other agents, and social systems. We will summarize this broad scale of “extensions” under the header of distributed cognition (Hutchins, 2000), as they all imply that cognitive content and processes are distributed across a variety of agents, objects and actions. Only some of those are located inside the human brain; yet all of them contribute to human decisions by providing part of the information necessary to make these decisions’ [pp. 1-2]. The aim of the paper is to propose a radical resolution to the controversy between processes such as belief, desire and intention, which are considered mental, and others such as information transmission, processing and storage, which are considered mechanical: ‘we assume that mind is a ubiquitous property of all minimally active matter (Heylighen, 2011)’ (emphasis DPB: this statement is similar to (analogous to?) the statement that all processes in nature are computational processes, or that all processes are cognitive and individuating processes) [p 2].

From dualism to action ontology

Descartes argued that people are free to choose: therefore the human mind does not follow physical laws. But since all matter follows such laws, the mind cannot be material; it must be independent, belonging to a separate, non-material realm. This is illustrated by the narrative that the mind leaves the body when a person dies. But a paradox arises: if mind and matter are separate, then how can one affect the other? Most scientists agree that the mind ‘supervenes’ on the matter of the brain and cannot exist without it. But many still reserve some quality that is specific to the mind, which leaves their thinking dualist. An evolutionary worldview explains the increase in complexity: elements and systems are interconnected, and the mind does not need to be explained as a separate entity; rather, ‘.. mind appears .. as a natural emanation of the way processes and networks self-organize into goal-directed, adaptive agents’ [p 5], a conception known as process metaphysics. The thesis here is that the theory of the mind can be both non-dual AND analytic. To that end the vagueness of process metaphysics is replaced with an action ontology: ‘That will allow us to “extend” the mind not just across notebooks and social systems, but across the whole of nature and society’ [p 5].

Agents and the intentional stance

Action ontology is based on reactions, as per COT. Probability is a factor, and so determinism does not apply. Reactions or processes are the pivot in action ontology and states are secondary: ‘States can be defined in terms of the reactions that are possible in that state (Heylighen, 2011; Turchin, 1993)’ [p 7]. DPB: this reminds me of the restrictions of Oudemans, the attractors and repellers that raise the probability that some states, and lower the probability that other states, follow from this particular one. In that sense it also reminds me of the impression systems can give to an observer that they are intentional. The list of actions that an agent can perform defines a dynamical system (Beer, 1995, 2000). The states that lead into an attractor define the attractor’s basin, and the property that different initial states produce the same final state is called equifinality (Bertalanffy, 1973). The attractor, the place the system tends to move towards, is its ‘goal’, and the trajectory towards it, as chosen by the agent at each consecutive state, is its ‘course of action’ to reach that ‘goal’. The disturbances that might throw the agent off its course can be seen as challenges, which the agent does not control, but which it may be able to tackle by changing its course of action appropriately. To interpret the dynamics of a system as a goal-directed agent in an environment is the intentional stance (Dennett, 1989).
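
A minimal sketch of equifinality and the intentional stance (my own illustration; the map and its fixed point are arbitrary): several initial states within the basin all converge on the same attractor, which an observer adopting the intentional stance reads as the system’s ‘goal’.

    # Minimal sketch (illustration only) of equifinality in a discrete dynamical system:
    # different initial states end up in the same attractor, which the intentional
    # stance reads as the agent's 'goal'.

    def step(x):
        # a simple contracting map with fixed point x* = 0.8 (the attractor)
        return 0.5 * x + 0.4

    def trajectory(x0, n=30):
        xs = [x0]
        for _ in range(n):
            xs.append(step(xs[-1]))
        return xs

    for x0 in (0.0, 0.3, 2.0):  # several initial states in the attractor's basin
        print(x0, "->", round(trajectory(x0)[-1], 4))
    # all trajectories converge on 0.8: the 'goal' the system appears to pursue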

Panpsychism and the Theory of Mind

‘The “sensations” we introduced previously can be seen as rudimentary “beliefs” that an agent has about the conditions it is experiencing’ [p 10]. DPB: conversely, beliefs can be seen as sensations in the sense of internalized I-O rules. ‘The prediction (of the intentional stance, DPB) is that the agent will perform those actions that are most likely to realize its desires given its beliefs about the situation it is in’ [p 10]. DPB: and this is applicable to all kinds of systems. Indeed, Dennett defined different classes of physical systems, and I agree with the authors that there is no need for that, given that these systems are all considered to be agents (/ computational processes). Action ontology generalizes the application of the intentional stance to all conceivable systems and processes. To view non-human processes and systems in this way is in a sense ‘animistic’: all phenomena are sentient beings.
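
A minimal sketch of the quoted prediction rule (my own illustration; the actions and probabilities are hypothetical): given an agent’s beliefs about how likely each action is to realize its desire, the intentional stance predicts the action with the highest expected success.

    # Minimal sketch (illustration only) of the intentional-stance prediction:
    # predict the action the agent believes is most likely to realize its desire.
    beliefs = {  # hypothetical belief: action -> believed probability of realizing the desire
        "move towards food": 0.8,
        "stay put": 0.1,
        "move away": 0.05,
    }

    predicted_action = max(beliefs, key=beliefs.get)
    print(predicted_action)  # 'move towards food'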

Organizations

In the action ontology a network of coupled reactions can be modeled: the output of one reaction forms the input of the next, and so on. In this way it can be shown that a new level of coherence emerges. If such a network produces its own components, including the elements required for its own reproduction, it is autopoietic. In spite of ever-changing states, its organization remains invariant. The states are characterized by the current configurations of the system’s elements; the states change as a consequence of perturbations external to the system. Its organization lends the network system its (stable) identity despite the fact that it is in ongoing flux. The organization and its identity render it autonomous, namely independent of the uncertainties in its environment: ‘Still, the autopoietic network A interacts with the environment, by producing the actions Y appropriate to deal with the external challenges X. This defines the autopoietic organism as a higher-order agent: A + X → A + Y. At the abstract level of this overall reaction, there is no difference between a complex agent, such as an animal or a human, and an elementary agent, such as a particle. The difference becomes clear when we zoom in and investigate the changing state of the network of reactions inside the agent’ [p 14]. DPB: this is a kind of definition of the emergence of the organization of a multitude of elements into a larger body. It relates to my black-box / transparency narrative.

This line of thought is further elaborated in COT, where closure and self-maintenance are introduced to explain the notion of autopoiesis in networks. Closure means that eventually no new elements are produced; self-maintenance means that eventually all the elements are produced again (nothing is lost); together they imply that all the essential parts are eventually recycled. This leads to states on an attractor. Also see the COT article by Francis. (A toy sketch of these two conditions follows below.) //INTERESTING!! In simple agents the input is directly transformed into an action: there is no internal state and these agents are reactive. In complex networks an input affects the internal state; the agent keeps an internal memory of previous experiences. That memory is determined by the sequence of sensations the agent has undergone. This memory, together with its present sensations (perceptions of the environment), constitutes the agent’s belief system. A state is processed (to the next state) by the system’s network of internal reactions, the design of which depends on its autopoietic organization. A signal may or may not be the result of this processing, and hence this process can be seen as ‘deliberation’ or ‘sense-making’. Given the state of the environment, given the memory of the system resulting from its previous experience, and given its propensity to maintain its autopoiesis, an input is processed (interpreted) to formulate an action to deal with the changed situation. If the action turns out to be appropriate, then the action was justified, the rule leading up to it was true, and the beliefs are knowledge: ‘This is equivalent to the original argument that autopoiesis necessarily entails cognition (Maturana & Varela, 1980), since the autopoietic agent must “know” how to act on a potentially perturbing situation in order to safeguard its autopoiesis’.
This is connected to the notion of “virtue reliabilism”, which asserts that beliefs can be seen as knowledge when their reliability is evidenced by the cognitive capabilities (“virtues”) they grant the agent (Palermos, 2015; Pritchard, 2010) [p 15]. UP TO HERE //.
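
As referred to above, a toy sketch of closure and self-maintenance (my own crude simplification of the COT notions, ignoring stoichiometry and reaction rates; the reaction network is hypothetical): a set of molecules is closed if its reactions produce nothing outside the set, and self-maintaining, in this simplified reading, if everything consumed is also produced again.

    # Minimal sketch (crude simplification, illustration only) of the two COT conditions:
    # 'closed' = the applicable reactions produce no molecules outside the set;
    # 'self-maintaining' (here without stoichiometry) = every consumed molecule is also produced.

    reactions = [  # hypothetical reaction network: (inputs, outputs)
        ({"a", "b"}, {"c"}),
        ({"c"}, {"a", "b"}),
    ]

    def closed(molecules, reactions):
        """No reaction firing inside the set produces something outside it."""
        return all(outs <= molecules
                   for ins, outs in reactions if ins <= molecules)

    def self_maintaining(molecules, reactions):
        """Every molecule consumed by the applicable reactions is also produced by them."""
        applicable = [(ins, outs) for ins, outs in reactions if ins <= molecules]
        consumed = set().union(*(ins for ins, _ in applicable)) if applicable else set()
        produced = set().union(*(outs for _, outs in applicable)) if applicable else set()
        return consumed <= produced

    S = {"a", "b", "c"}
    print(closed(S, reactions), self_maintaining(S, reactions))  # True True: an organization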

Socially distributed cognition

‘In our own approach to social systems, we conceive such processes as a propagation of challenges (Heylighen, 2014a). This can be seen as a generalization of Hutchins’s analysis of socially distributed cognition taking place through the propagation of “state” (Hutchins, 1995, 2000): the state of some agent determines that agent’s action or communication, which in turn affects the state of the next agent receiving that communication or undergoing that action. Since a state is a selection out of a variety of potential states, it carries information. Therefore, the propagation of state from agent to agent is equivalent to the transmission and processing of information. This is an adequate model of distributed cognition if cognition is conceived as merely complex information processing. But if we want to analyze cognition as the functioning of a mind or agency, then we need to also include that agent’s desires, or more broadly its system of values and preferences. .. in how far does a state either help or hinder the agent in realizing its desires? This shifts our view of information from the traditional syntactic perspective of information theory (information as selection among possibilities) (Shannon & Weaver, 1963) to a pragmatic perspective (information as trigger for goal-directed action) (Gernert, 2006)’ (emphasis DPB) [pp. 17-8].

DPB: this is an important connection to my idea that not only people’s minds process information, but that the organization as such processes information too. This can explain how a multitude of people can be autonomous as an entity ‘an sich’. Distributed cognition is the cognition of the whole thing, and in that sense the wording is not good, because the focus is no longer the human individual but the multitude as a single entity; a better term would be ‘integrated cognition’?

It is proposed to replace the terms “information” or “state” with “challenge”: a challenge is defined as a situation (i.e. a conjunction of conditions sensed by some agent) that stimulates the agent to act. DPB: Heylighen suggests that acting on this challenge brings benefit to the agent; I think it is more prosaic than that. I am not sure that I need the concept of a challenge. An illustration of my Shell example: an individual knows that action A leads to result B, but no one knows that U → Y; yet the employees together know this. The knowledge is not in one person but in the whole (the organization): John: U → V, Ann: V → W, Barbara: W → X, Tom: X → Y. Each person recognizes the issue, does not know the (complete) answer, but knows (or finds out) who does; the persons are aware of their position in the organization, of who else is there, and of (more or less) what they do (see the sketch below). ‘Together, the “mental properties” of these human and non-human agents will determine the overall course of action of the organization. This course of action moves towards a certain “attractor”, which defines the collective desire or system of values of the organization’ [p 21]. DPB: if I want to model the organization using COT, then the section above could be a starting point. I’m not sure I want to, because I find it impracticable to identify the mix of ingredients that should enter the concoction that is the initial condition to evolve into the memeplex that is a firm. How much of ‘get a job’ per what amount of ‘the shareholder is king’ should be in it?
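
A minimal sketch of the Shell example above (my own illustration; the propagation loop is a simplification of the ‘propagation of challenges’): each agent knows only one partial rule, yet by passing the intermediate result from agent to agent the organization as a whole realizes U → Y, a mapping that no single member knows.

    # Minimal sketch (illustration only): each agent knows one partial rule; the
    # organization as a whole transforms U into Y by propagating the intermediate state.

    # hypothetical partial knowledge of each employee: condition -> result
    agents = {
        "John":    {"U": "V"},
        "Ann":     {"V": "W"},
        "Barbara": {"W": "X"},
        "Tom":     {"X": "Y"},
    }

    def propagate(challenge):
        """Pass the challenge around until no agent can act on it any further."""
        state = challenge
        handled = True
        while handled:
            handled = False
            for name, rules in agents.items():
                if state in rules:
                    state = rules[state]  # this agent acts and passes on the new state
                    handled = True
                    break
        return state

    print(propagate("U"))  # 'Y': the organization 'knows' U -> Y although no single agent does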

Experiencing non-duality

Using the intentional stance it is possible to conceptualize a variety of processes as mind-like agencies. The mind does not reside in the brain; it sits, distributed, in all kinds of processes.