Chemical Organization Theory as a Modeling Tool

Heylighen, F., Beigi, S. and Veloz, T. . Chemical Organization Theory as a modeling framework for self-organization, autopoiesis and resilience . Paper to be submitted based on working paper 2015-01.

Introduction

Complex systems consist of many interacting elements that self-organize: coherent patterns of organization or form emerge from their interactions. There is a need for a theoretical understanding of self-organization and adaptation: our mathematical and conceptual tools are limited for the description of emergence and interaction. The reductionist approach analyzes a system into its constituent static parts and their variable properties; the state of the system is determined by the values of these variable properties and processes are transitions between states; the different possible states determine an a priori predefined state-space; only after introducing all these static elements and setting up a set of conditions for the state-space can we study the evolution of the system in that state-space. This approach makes it difficult to understand a system property such as emergent behavior. Process metaphysics and action ontology assume that reality is not constituted from things but from processes or actions; the difficulty is to represent these processes in a precise, simple, and concrete way. This paper aims to formalize these processes as the reaction networks of chemical organization theory; here the reactions are the fundamental elements and the processes are primary; states take second place, as the changing mix of ingredients while the processes go on; the molecules are not static objects but raw materials that are produced and consumed by the reactions. COT is a process ontology; it can describe processes in any sphere and hence in any scientific discipline; ‘.. method to define and construct organizations, i.e. self-sustaining networks of interactions within a larger network of potential interactions. .. suited to describe self-organization, autopoiesis, individuation, sustainability, resilience, and the emergence of complex, adaptive systems out of simpler components’ [p 2]. DPB: this reminds me of the landscape of Jobs; all the relevant aspects are there. It is hoped that this approach helps to answer the question: how does a system self-organize; how are complex wholes constructed out of simpler elements?

Reaction Networks

A reaction network consists of resources and reactions. The resources are distinguishable phenomena in some shared space, a reaction vessel, called the medium. The reactions are elementary processes that create or destroy resources. RN = <M,R>, where RN is a reaction network, M is the set of resources, M = {a,b,c,…}, and R is the set of reactions, a subset of P(M) x P(M), where P(M) is the power set (set of all subsets) of M. Each reaction r transforms a subset I(r) of M into a subset O(r) of M; the resources in I are the reactants and the resources in O are the products; I and O are multisets, meaning that resources can occur more than once. r: x1 + x2 + x3 + .. → y1 + y2 + … The + in the left term means a conjunction of necessary resources x: if all are simultaneously present in I(r) then the reaction takes place and produces the products y.
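To make the formalism concrete, here is a minimal sketch (my own illustration, not code from the paper) of a reaction network with multiset inputs and outputs; the class names and example resources are assumptions chosen for readability.

```python
# Minimal illustrative sketch (not from the paper): a reaction network <M, R>
# with multiset inputs/outputs, represented with Python Counters.
from collections import Counter

class Reaction:
    def __init__(self, inputs, outputs):
        # inputs/outputs are multisets of resource names, e.g. ["a", "a", "b"]
        self.inputs = Counter(inputs)    # I(r): reactants
        self.outputs = Counter(outputs)  # O(r): products

class ReactionNetwork:
    def __init__(self, resources, reactions):
        self.resources = set(resources)   # M
        self.reactions = list(reactions)  # R, a subset of P(M) x P(M)

    def applicable(self, present):
        """Reactions whose reactants are all present in the multiset `present`."""
        return [r for r in self.reactions
                if all(present[x] >= n for x, n in r.inputs.items())]

# Example network: a + b -> c,  c -> a + b
rn = ReactionNetwork(
    resources={"a", "b", "c"},
    reactions=[Reaction(["a", "b"], ["c"]), Reaction(["c"], ["a", "b"])],
)
print(len(rn.applicable(Counter({"a": 1, "b": 1}))))  # -> 1 (only a + b -> c fires)
```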

Reaction Networks vs. Traditional Networks

The system <M,R> forms a network because the resources in M are linked by the reactions in R transforming one resource into another. What is specific for COT is that a reaction represents the transformation of one multiplicity of resources into another: a set I transforms into a set O. DPB: this reminds me of category theory. My principal question at this point is whether the problem of where organization is produced is not simply relocated: first the question was how to tweak static objects into self-organization, now it is which molecules, in which quantities and combinations, to bring together to get them to produce other resources and to show patterns in doing so. In RN theory the transformation of resources can occur through a disjunction or a conjunction: the disjunction is represented by juxtaposed reaction formulae, the conjunction by the + within a reaction formula.

Reaction Networks and Propositional Logic

Conjunction: AND: &; Disjunction: OR: a new reaction line; Implication: FOLLOWS: →; Negation: NOT: -. For instance: a&b&c&.. → x. But the resources at the I side are not destroyed by the inference process, so formally a&b&.. → a&b&x&… Logic is static because no propositions are destroyed: new implications can be found, but nothing genuinely new is created. Negation can be thought of as the production of the absence of a resource: a + b → c + d is the same as a → c + d – b. I and O can be empty, so a resource can be created from nothing (affirmation, → a) or a resource can produce nothing (elimination, a →, or equivalently → -a). Another example is → a + (-a): the idea is that a particle and its anti-particle annihilate one another (a + (-a) → ), but they can also be created together from nothing.
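The correspondence can also be written out as data; the snippet below is my own encoding (not the paper's notation), using simple (inputs, outputs) tuples for reactions.

```python
# Sketch (my own encoding, not the paper's): logical operators written as reactions,
# each reaction being an (inputs, outputs) tuple of resource names.
implication = (("a", "b"), ("a", "b", "x"))   # a & b -> x, with the premises preserved
affirmation = ((), ("a",))                    # -> a : creation from nothing
elimination = (("a",), ())                    # a -> : a resource producing nothing
annihilation = (("a", "-a"), ())              # a + (-a) -> : particle/anti-particle pair
print(implication, affirmation, elimination, annihilation)
```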

Competition and cooperation

The concept of negative resources allows the expression of conflict, contradiction or inhibition: a → -b, which is the same as a + b → 0 (the empty set): the more a is produced, the less b is present: the causal relation is negative. The relation “a inhibits b” holds if a is required to consume but not to produce b. The opposite, “a promotes b”, means that a is required to produce but not to consume b. When the inhibiting and promoting relations are symmetrical, a and b inhibit (a and b are competitors) or promote (a and b are cooperators) each other, but they do not need to be symmetrical. Inhibition is a negative causal influence and promotion a positive one. If a cycle contains only positive influences or an even number of negative influences, then positive feedback occurs; when the number of negative influences is odd, negative feedback occurs. Negative feedback leads to stabilization or oscillation, positive feedback leads to exponential growth. In a social network a particular message can be promoted, suppressed or inhibited by another. Interactions in the network occur through their shared resources.
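A small sketch of how these two relations could be read off a reaction set (my own illustration; the tuple encoding and the example resources are assumptions, not the paper's code):

```python
# Sketch (my own illustration): reading "inhibits"/"promotes" relations off a set of
# reactions, each encoded as an (inputs, outputs) tuple of resource names.
def inhibits(a, b, reactions):
    """a inhibits b: some reaction requires a and consumes b without producing it."""
    return any(a in ins and b in ins and b not in outs for ins, outs in reactions)

def promotes(a, b, reactions):
    """a promotes b: some reaction requires a and produces b without consuming it."""
    return any(a in ins and b in outs and b not in ins for ins, outs in reactions)

reactions = [(("a", "b"), ("a",)),   # a + b -> a : a consumes b (a inhibits b)
             (("a",), ("a", "c"))]   # a -> a + c : a produces c (a promotes c)
print(inhibits("a", "b", reactions), promotes("a", "c", reactions))  # True True
```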

Organizations

In COT an organization is defined as a self-sustaining reaction system: the produced and consumed resources are the same: ‘This means that although the system is intrinsically dynamic or process-based, constantly creating or destroying its own components, the complete set of its components (resources) remains invariant, because what disappears in one reaction is recreated by another one, while no qualitatively new components are added’ [p 8]. DPB: I find this an appealing idea. But I also find it hard to think of the basic components that would make up a particular memeplex, even using the connotations. What, in other words, would the resources have to be, and what the reactions, to construct a memeplex from them? If the resource is an idea then one idea leads to another, which matches my theory. But this method would have to cater for reinforcement: the idea itself does not change much, yet it does get reinforced as it is repeated. And in addition, how would the connotation be attached to them: or must it be seen as an ‘envelope’ that contains the address &c, and that ‘arms’ the connoted idea (meme) to react (compare) with others such that the ranking order in the mind of the person is established? And such that a stable network of memes is established, such that they form a memeplex. The property of organization above is central to the theory of autopoiesis, but, as stated in the text, without the boundary of a living system. But I don’t agree with this: the RC church has a very strong boundary that separates it from everything that is not the RC church. And so the RN model should cater for more complexity than only the forming of molecules (‘prior to the first cell’). The organization of a subRN <M’,R> of a larger RN <M,R> is defined by these characteristics: 1. closure: for every reaction r, when I(r) is a part of M’ then O(r) is also a part of M’; 2. semi-self-maintenance: no existing resource is removed, i.e. each resource consumed by some reaction is produced again by some other reaction working on the same starting set; and 3. self-maintenance: each consumed resource x element of M’ is produced by some reaction in <M’,R> in at least the same amount as the amount consumed (this is a difficult one, because a ledger is required over the existence of the system to account for the quantities of each resource). ‘We are now able to define the crucial concept of organization: a subset of resources and reactions <M’,R> is an organization when it is closed and self-maintaining. This basically means that while the reactions in R are processing the resources in set M’, they leave the set M’ invariant: no resources are added (closure) and no resources are removed (self-maintenance)’ (emphasis of the author) [p 9]. The difference with other models is that the basic assumption is that everything changes, but this concept of organization means that stability can arise while everything changes continually; in fact this is the definition of autopoiesis.
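As a rough illustration of these definitions, here is a sketch (my own, not the paper's) that checks closure and semi-self-maintenance for a candidate sub-set of resources; quantities are ignored, so full stoichiometric self-maintenance is not covered.

```python
# Sketch (my own illustration, not the paper's code): checking closure and
# semi-self-maintenance of a candidate sub-set M' within <M, R>.
# Reactions are (inputs, outputs) tuples of resource names; amounts are ignored.
def applicable(reactions, m_prime):
    """Reactions whose reactants all lie inside M'."""
    return [(ins, outs) for ins, outs in reactions if set(ins) <= m_prime]

def is_closed(reactions, m_prime):
    """Closure: no applicable reaction produces anything outside M'."""
    return all(set(outs) <= m_prime for _, outs in applicable(reactions, m_prime))

def is_semi_self_maintaining(reactions, m_prime):
    """Every resource consumed by some applicable reaction is also produced by one."""
    app = applicable(reactions, m_prime)
    consumed = {x for ins, _ in app for x in ins}
    produced = {y for _, outs in app for y in outs}
    return consumed <= produced

reactions = [(("a", "b"), ("c",)), (("c",), ("a", "b"))]
print(is_closed(reactions, {"a", "b", "c"}),                 # True
      is_semi_self_maintaining(reactions, {"a", "b", "c"}))  # True
```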

Some examples

If a resource appears in both the I and the O of a reaction then it is a catalyst; for instance c in a + c → b + c.

Extending the model

A quantitative shortcoming, and a possible extension, is the absence of relative proportions of the resources and of the relative speeds of the reactions. To extend it quantitatively, the model can be detailed to encompass all the processes that make up some particular ecology of reactions.
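One common way to add such proportions and speeds (my own sketch, using standard mass-action kinetics and Euler integration rather than anything prescribed by the paper) is to give each reaction a rate constant and track resource concentrations:

```python
# Sketch (standard mass-action kinetics, not taken from the paper): a minimal
# quantitative extension where each reaction gets a rate constant and resource
# concentrations evolve by simple Euler integration.
def step(conc, reactions, dt=0.01):
    """One Euler step; reactions are (inputs, outputs, rate_constant) tuples."""
    new = dict(conc)
    for ins, outs, k in reactions:
        rate = k
        for x in ins:          # mass-action: rate ~ product of reactant concentrations
            rate *= conc[x]
        for x in ins:
            new[x] -= rate * dt
        for y in outs:
            new[y] += rate * dt
    return new

conc = {"a": 1.0, "b": 1.0, "c": 0.0}
reactions = [(("a", "b"), ("c",), 1.0), (("c",), ("a", "b"), 0.5)]
for _ in range(1000):
    conc = step(conc, reactions)
print({k: round(v, 3) for k, v in conc.items()})  # approaches a steady state
```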

Self-organization

If we apply the rules for closure and self-maintenance we can understand how organization emerges. If a reaction is added, it may act as a source for some new resource, which breaks closure, or as a sink for an existing one, which breaks self-maintenance. In general a starting set of resources will not be closed; their reactions will lead to new resources and so on; but the production of new resources will stop when no further new resources can be produced from those already in the system; at that point closure is reached: ‘Thus, closure can be seen as an attractor of the dynamics defined by resource addition: it is the end point of the evolution, where further evolution stops’ [p 12]. As regards self-maintenance, starting from the closed set, some of the resources will be consumed but not produced in sufficient amounts to replace the used amounts; these will disappear from the set; this does not affect closure because the loss of resources cannot add new resources; resources now start to disappear one by one from the set; this process stops when the remaining resources depend only on the remaining ones (and not on the disappeared ones): ‘Thus, self-maintenance too can be seen as an attractor of the dynamics defined by resource removal. The combination of resource addition ending in closure followed by resource removal ending in self-maintenance produces an invariant set of resources and reactions. This unchanging reaction network is by definition an organization’ [p 12]. Every dynamic system will end up in an attractor, namely a stationary regime that the system cannot leave: ‘In the attractor regime the different components of the system have mutually adapted, in the sense that the one no longer threatens to extinguish the other: they have co-evolved to a “symbiotic” state, where they either peacefully live next to each other, or actively help one another to be produced, thus sustaining their overall interaction’ [p 12]. DPB: from the push and pull of these different attractors emerges (or is selected) an attractor that manages the behavior of the system.
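The two attractor dynamics described here can be sketched directly in code (my own illustration; the reaction set is a made-up example): phase one keeps adding producible resources until the set is closed, phase two keeps removing consumed-but-not-produced resources until it is (semi-)self-maintaining.

```python
# Sketch (my own illustration) of the two attractor dynamics described above.
def close(start, reactions):
    """Phase 1: add products until nothing new can appear (closure)."""
    m = set(start)
    changed = True
    while changed:
        changed = False
        for ins, outs in reactions:
            if set(ins) <= m and not set(outs) <= m:
                m |= set(outs)
                changed = True
    return m

def prune(m, reactions):
    """Phase 2: remove consumed-but-not-produced resources (semi-self-maintenance)."""
    m = set(m)
    while True:
        app = [(i, o) for i, o in reactions if set(i) <= m]
        produced = {y for _, o in app for y in o}
        losers = {x for i, _ in app for x in i} - produced
        if not losers:
            return m
        m -= losers

reactions = [(("a",), ("b",)), (("b",), ("c", "b"))]   # a -> b, b -> b + c
organization = prune(close({"a"}, reactions), reactions)
print(organization)   # the organization {b, c}: a is consumed but never produced, so it drops out
```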

Sustainability and resilience

An organization in the above sense is by definition self-maintaining and therefore sustainable. Many organizations grow because they produce more resources than they consume (positive feedback: some resources are overproduced). Sustainability means the ability of an organization to maintain itself and grow without outside interference. Resilience means the ability to maintain the essential organization in the face of outside disturbances; a disturbance can be represented by the injection or the removal of a resource that reacts with others in the system. Processes of control are: buffering, negative feedback, and feedforward (neutralizing the disturbance before it has taken effect). The larger the variety of controls the system sports, the more disturbances it can handle, an implementation of Ashby’s law of requisite variety. Arbitrary networks of reactions will self-organize to produce sustainable organizations, because an organization is an attractor of their dynamics. DPB: this attractor issue, bearing in mind the difficulties of change management, reminds me of the text about the limited room an attracted system takes up in state-space (containment): it explains why a system, once it is ‘attracted’, will not change to another state without an effort of galactic proportions. ‘However, evolutionary reasoning shows that resilient outcomes are more likely in the long run than fragile ones. First, any evolutionary process starts from some arbitrary point in the state space of the system, while eventually reaching some attractor region within that space. Attractors are surrounded by basins, from which all states lead into the attractor (Heylighen, 2001). The larger the basin of an attractor, the larger the probability that the starting point is in that basin. Therefore, the system is more likely to end up in an attractor with a large basin than in one with a small basin. The larger the basin, the smaller the probability that a disturbance pushing the system out of its attractor would also push it out of the basin, and therefore the more resilient the organization corresponding to the attractor. Large basins normally represent stable systems characterized by negative feedback, since the deviation from the attractor is automatically counteracted by the descent back into the attractor. .. However, these unstable attractors will normally not survive long, as nearly any perturbation will push the system out of that attractor’s basin into the basin of a different attractor. .. This very general, abstract reasoning makes it plausible that systems that are regularly perturbed will eventually settle down in a stable, resilient organization’ [p 15].

Metasystem transitions and topological structures

A metasystem transition = a major evolutionary transition = the emergence of a higher-order organization from lower-order organizations. COT can describe this if an organization S (itself a system of elements, albeit organized) behaves like a resource of the catalyst type: invariant under the reactions, but with an input of resources it consumes, I(S), and an output of resources it produces, O(S), resulting in the higher-order reaction I(S) + S → S + O(S). Assume that I(S) = {a,b} and O(S) = {c,d,e}; then this can be rewritten as a + b + S → S + c + d + e. S itself consists of organized elements and it behaves like a black box processing some input into an output. If S is resilient it can even respond to changes in its input with a changed output. Now the design space of metasystems can be widened to include catalyst resources of the type S: organizations that are self-maintaining and closed.
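In the tuple encoding used in the sketches above (my own illustration, not the paper's), the abstracted organization S is just another resource that reappears on both sides of its higher-order reaction:

```python
# Sketch (my own illustration): an organization S abstracted as a single
# catalyst-like resource in the higher-order reaction a + b + S -> S + c + d + e.
higher_order = (("a", "b", "S"), ("S", "c", "d", "e"))

def is_catalyst(resource, reaction):
    """A resource appearing on both sides of a reaction is preserved, like a catalyst."""
    ins, outs = reaction
    return resource in ins and resource in outs

print(is_catalyst("S", higher_order))   # True: at this level of description,
                                        # the whole organization behaves as one resource
```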

Concrete applications

It is possible to mix different kinds of resources; this enables the modeling of complex environments; this is likely to make the ensuing systems’ organizations more stable. ‘Like all living systems, the goal or intention of an organization is to maintain and grow. To achieve this, it needs to produce the right actions for the right conditions (e.g. produce the right resource to neutralize a particular disturbance). This means that it implicitly contains a series of “condition-action rules” that play the role of the organization’s “knowledge” on how to act in its environment. The capability of selecting the right (sequence of) action(s) to solve a given problem constitutes the organization’s “intelligence”. To do this, it needs to perceive what is going on in its environment, i.e. to sense particular conditions (the presence or absence of certain resources) that are relevant to its goals. Thus, an organization can be seen as a rudimentary “intelligence” or “mind”’ [p 20]. DPB: I find this interesting because of the explanation of how such a model would work: the resources are the rules that the organization needs to sort out and to put in place on the right occasion.

Stigmergy as a universal Coordination Mechanism (II)

Heylighen, F. . Stigmergy as a universal coordination mechanism II: Varieties and Evolution . Cognitive Systems Research (Elsevier) 38 . pp. 50-59. 2016

Abstract

‘One application is cognition, which can be viewed as an interiorization of the individual stigmergy that helps an agent to carry out a complex project by registering the state of the work in the trace, thus providing an external memory’ [p 50]. DPB: I understand this as: according to this hypothesis, stigmergy exists prior to cognition; this means that natural but non-living processes use stigmergy on an external medium; once they are alive they are (in addition) capable of internalizing stigmergy, namely by internalizing the medium. The process of internalization of individual stigmergy is the same as (the development of?) cognition. This is another way of saying that the scope of a system changes so as to encompass the (previously external) medium on which the stigmergy takes place. The self-organization is now internalized. Cognition is now internalized. How does this view on the concept of cognition relate to the concept of individuation as a view on cognition?

1. Introduction

‘To bring some order to these phenomena, the present paper will develop a classification scheme for the different varieties of stigmergy. We will do this by defining fundamental dimensions or aspects, i.e. independent parameters along which stigmergic systems can vary. The fact that these aspects are continuous (“more or less”) rather than dichotomous (“present or absent”) may serve to remind us that the domain of stigmergic mechanisms is essentially connected: however different its instances may appear, it is not a collection of distinct classes, but a space of continuous variations on a single theme – the stimulation of actions by their prior results’ [p 50]. DPB: this reminds me of the landscape of Jobs: at the connection of the memes and the minds, there is a trace of the meme left on the brain and a trace of the brain is added to the meme, leaving the meme and the brain damaged. This means that from the viewpoint of the brain the memeplex is the medium and from the viewpoint of the meme the brain is the medium. The latter is the more obvious to see: traces can be left in individuals’ brains. The former implies that changes are imposed on the memeplex; but the memeplex is represented by the expression of ideas in the real and in the mind; the real is an external medium, accessible through first-order observations; the expression of the memeplex existing in the mind is an external medium, because it exists in other persons’ minds and in versions of the Self, both accessible through second-order observations. Back to the landscape: it is there anyhow; the difference between states is how the Jobs are connected and, as a consequence, how they are bounded and how they individuate.

2. Individual vs. collective stigmergy

Ants do not require a memory, because the present stage of the work is directly discernible by the same ant, and also by a different ant. Because the state of the work is carried by the trace rather than by an ant’s memory, the work can be continued by the same ant, or by another just as well.

3. Sematectonic vs. marker-based stigmergy

Sematectonic means that the results of the work itself are the traces that provide the input for the next ant and the next state (Wilson, Sociobiology, 1975). Marker-based means that the stigmergic stimulation occurs through traces in the shape of markers, such as pheromones, left by other individuals (ants, termites!) before them, and not through traces of the work itself indicating a particular stage (Parunak, H.V.D., A survey of environments and mechanisms for human-human stigmergy, In Environments for multi-agent systems II (Weyns, Parunak, Michel (Eds.)), 2006). Marker signals represent symbols, while sematectonic signals represent the concrete thing itself. But this is not straightforward: a territory boundary indicated with urine markers is an indication of the fact that there is an animal claiming this territory, while the urine contains additional information specific to that animal. To spread urine evenly around the claimed area and to interpret the information contained in it is useful for both the defender and the visitor in order to manage a potential conflict, and hence to reduce the uncertainties from the environment for both. The point is that the urine represents both information about the object and about the context.

4. Transient vs. persistent traces

‘We have conceptualized the medium as the passive component of the stigmergic system, which undergoes shaping by the actions, but does not participate in the activity itself’ (emphasis of the author) [p 52]. But a medium is bound to dissipate and decay, unless the information is actively maintained and reconstructed; without ongoing updates it will become obsolete, especially as the situation changes. No sharp distinction can be made between transient and persistent traces; they are the extremes of a continuum. A persistent trace does not require the simultaneous presence of the agents, while a purely transient trace does require their simultaneous presence. Synchronous stigmergy means broadcasting some signal, releasing information not directed at anyone in particular. ‘A human example would be the self-organization of traffic, where drivers continuously react to the traffic conditions they perceive’ [p 53]. DPB: the gist of this example is that the behavior of the drivers is the signal: it is synchronous, not directed at anyone in particular, and it is sematectonic, because it represents the state of the system.

5. Quantitative vs. qualitative stigmergy

Quantitative stigmergy means that a stronger condition implies a more forceful follow-up action or, in practical terms, a higher probability of action. Qualitative stigmergy refers to conditions and actions that differ in kind rather than in degree: thís trace leads to thát action. There is no sharp boundary between these two categories.

6. Extending the mind

‘Traditionally, cognition has been viewed as the processing of information inside the brain. More recent approaches, however, note that both the information and the processing often reside in the outside world (Clark, 1998; Dror & Harnad, 2008; Hollan, Hutchins & Kirsh 2000) – or what we have called the medium. .. Thus the human mind extends into the environment (Clark & Chalmers, 1998), “outsourcing” some of its functions to external support systems. .. In fact, our mental capabilities can be seen as an interiorization of what were initially stigmergic interactions with the environment’ (emphasis of the author) [p 54]. DPB: a somewhat garbled quote. This reminds me of the idea that a brain would not have been required if the environment were purely random. Just because it is not, and hence patterns can be cognized, it is relevant to avail of an instrument for just that: a brain embodying a set of condition-action rules to generate an action from the state of the environment sensed by it. Stigmergic activity needs no separate memory: the state of the medium serves as its memory, as it reflects its every experience. But then the system is dependent on the contingencies of the part of the environment that is the medium: in order to detach itself from the uncertainties of the environment it internalizes memory and information processing.

7. The evolution of cooperation

In a stigmergic situation the defector does not weaken the cooperator: the cost of a trace is sunk.

Social Systems as Parasites

Seminar 1 December 2017, Francis Heylighen


The power of a social system

1. In an experiment concerning punishment (Milgram’s obedience experiment), people obey an instruction to administer electric shocks to others. People tend to be obedient / “God rewards obedience” / “Whom should I obey first?” 2. When asked to point out which symbol is equal to another (Asch’s conformity experiment), people select the one they believe is equal, but when they are confronted with the choices of the other contestants, they tend to change their selection to what the others have chosen. Social systems in this way determine our worldview, namely the social construction of reality, by specifying what is real.

Social systems suppress self-actualization

Social systems don’t ‘want’ you to think for yourself, but to replicate their information instead; social systems suppress non-conformist thought, namely they suppress differences in thought, and thereby they do not allow the development of unique (human) personalities: they suppress self-actualization. Examples of rules: 1. A Woman Should Be A Housewife >> If someone is a woman then, given that she shows conformist behavior, she will become a housewife and not a mathematician &c. Suppose Anna has a knack for math: if she complies then she becomes a housewife and she is likely to become frustrated; if she does not comply then she will become a mathematician (or engineer &c) and she is likely to become rebellious and suffer from doubts &c. 2. To Be Gay Is Unacceptable >> If someone is gay then, given that she shows conformist behavior, she will suppress gay behavior and show behavior considered normal instead. Suppose Anna is gay: if she complies she will be with a man and become frustrated; if she does not comply then she is likely to become rebellious, she will exhibit gay behavior, be with a woman, and suffer from doubts &c.

Social Systems Programming

People obey social rules unthinkingly and hence their self-actualization is limited (by themselves). This is the same as saying that social systems have control over people. The emphasis on the lack of thinking is by the authors. The social system consists of rules that assist the thinking. And only thinking outside of those rules (thinking while not using those rules) would allow a workaround, or even a replacement of the rules, temporary or ongoing. This requires thinking without using pre-existing patterns, or even thinking sans-image (new to the world).

Reinforcement Learning

1. Behaviorist: action >> reward (rat and shock) 2. socialization: good behavior and bad behavior (child and smile). This was a sparse remark: I guess the development of decision-action rules in children by socialization (smiling) is the same as the development of behavioral rules in rats by a behaviorist approach (shock).

Social systems as addictions

Dopamine is a neurotransmitter producing pleasure. A reward releases dopamine; Dopamine is addictive; Rewards are addictive. Social systems provide (ample) sources for rewards; Participating in social systems is a source of dopamine and hence it is addictive (generates addiction) and it maintains the addiction.

Narratives

Reinforcement need NOT be immediate NOR material (e.g. heaven / hell). Narratives can describe virtual penalties and rewards: myth, movies, stories, scriptures.

Conformist transmission

The more people transmit a particular rule, the more likely it is that others will adopt and transmit it too. DPB: this reminds me of the changes in complex systems as a result of small injected changes: many small changes and fewer large ones: the relation between the size of the shifts and their frequency is a power law.

Cognitive Dissonance

Entertaining mutually inconsistent beliefs is painful: the person believes it is bad to kill other people. As a soldier he now kills other people. This conflict can be resolved by replacing the picture of a person to be killed by the picture of vermin. The person thinks it is ok to kill vermin.

Co-opting emotions

Emotions are immediate strong motivators that bypass rational thought. Social systems use emotions to reinforce the motivation to obey their rules. 1. Fear: the anticipation of a particular outcome and the desire to avoid it. 2. Guilt: fear of retribution and the desire to redeem (make amends); this can be exploited by the social system because there can be a deviation from its rules without a victim and it works on imaginary misdeeds: now people want to redeem themselves vis-a-vis the social system. 3. Shame: a perceived deficiency of the self because one is not fulfilling the norms of the social system: one feels weak, vulnerable and small and wishes to hide; the (perceived) negative judgments of others (their norms) are internalized. PS: Guilt refers to a wrong action and implies a change of action; Shame refers to a wrong self and implies the wish for a change of (the perception of) self. 4. Disgust: revulsion at (sources of) pollution such as microbes, parasites &c. The Law of Contagion implies that anything associated with contagion is itself contagious.

Social System and disgust

The picture of a social system is that it is clean and pure and that it should not be breached. Ideas that do not conform to the rules of the social system (up to and including dogma and taboo) are like sources of pollution; these contagious ideas lead to reactions of violent repulsion by those included in the social system.

Vulnerability to these emotions

According to Maslow people who self-actualize are more resistant to these emotions of fear, shame, guilt and disgust.

DPB: 1. how do variations in the sensitivity to neurotransmitters affect the sensitivity to reinforcement? I would speculate that a higher sensitivity to dopamine leads to a more eager reaction to a positive experience, hence leading to a stronger reinforcement of the rule in the brain. 2. how does a higher or lower sensitivity to risk (the chance that some particular event occurs and the impact when it does) affect people’s abiding by the rules? I would speculate that sensitivity to risk depends on the power to cognize it and to act in accordance with it. A higher sensitivity to risk leads to attempting to follow (conformist) rules more precisely and more vigorously; conversely, a lower sensitivity to risk leaves space for interpretation of the rule, its condition or its enactment.

Distributed Intelligence

Heylighen, F. and Beigi, S. . mind outside brain: a radically non-dualist foundation for distributed cognition . Socially Extended Epistemology (Eds. Carter, Clark, Kallestrup, Palermos, Pritchard) . Oxford University Press . 2016

Abstract

We approach the problem of the extended mind from a radically non-dualist perspective. The separation between mind and matter is an artefact of the outdated mechanistic worldview, which leaves no room for mental phenomena such as agency, intentionality, or experience. [DPB: the rationale behind this is the determinism argument: if everything is determined by the rules of physics (nature) then nothing can be avoided and the future is determined. There can be no agency because there is nothing to choose, there can be no intentionality because people’s choices are determined by the rules of physics (it appears to be their intention but it is physics talking), and there can be no personal experience because which events a person encounters is independent of the existence of the (physical) person]. We propose to replace it by an action ontology, which conceives mind and matter as aspects of the same network of processes. By adopting the intentional stance, we interpret the catalysts of elementary reactions as agents exhibiting desires, intentions, and sensations. [DPB: I agree with the idea that mind and body are ‘functions of the same processes’. The intentional stance implies the question: what would I desire, want, feel in his place in this circumstance, and hence what can I be expected to do?] Autopoietic networks of reactions constitute more complex superagents, which moreover exhibit memory, deliberation and sense-making. In the specific case of social networks, individual agents coordinate their actions via the propagation of challenges. [DPB: for the challenges model: see the article Evo mailed]. The distributed cognition that emerges from this interaction cannot be situated in any individual brain. [DPB: this is important and I have discussed this in the section about the Shell operator, who cannot physically be aware of the processes outside his own scope of professional activities]. This non-dualist, holistic view extends and operationalizes process metaphysics and Eastern philosophies. It is supported by both mindfulness experiences and mathematical models of action, self-organization, and cognition. [DPB: I must decide how to apply the concepts of individuation, virtual/real/present, process ontology and/or action ontology, distributed cognition and distributed intelligence (do I need that?), and computation/thinking/information processing in my arguments].

Introduction

Socially extended knowledge is a part of the philosophical theory of the extended mind (Clark & Chalmers, 1998; Palermos & Pritchard, 2013; Pritchard, 2010): mental phenomena such as memory, knowledge and sensation extend outside the individual human brain, and into the material and social environment. DPB: this reminds me of the Shell narrative. The idea is that human cognition is not confined to information processing within the brain, but depends on phenomena external to the brain: ‘These include the body, cognitive tools such as notebooks and computers, the situation, the interactions between agent and environment, communications with other agents, and social systems. We will summarize this broad scale of “extensions” under the header of distributed cognition (Hutchins, 2000), as they all imply that cognitive content and processes are distributed across a variety of agents, objects and actions. Only some of those are located inside the human brain; yet all of them contribute to human decisions by providing part of the information necessary to make these decisions’ [pp. 1-2]. The aim of this paper is to propose a radical resolution to this controversy (between processes such as belief, desire and intention, which are considered mental, and others such as information transmission, processing and storage, which are considered mechanical): ‘we assume that mind is a ubiquitous property of all minimally active matter (Heylighen, 2011)’ (emphasis DPB: this statement is similar to (analogous to?) the statement that all processes in nature are computational processes, or that all processes are cognitive and individuating processes) [p 2].

From dualism to action ontology

Descartes argued that people are free to choose: therefore the human mind does not follow physical laws. But since all matter follows such laws, the mind cannot be material. Therefore the mind must be independent, belonging to a separate, non-material realm. This is illustrated by the narrative that the mind leaves the body when a person dies. But a paradox arises: if mind and matter are separate, then how can one affect the other? Most scientists agree that the mind ‘supervenes’ on the matter of the brain and cannot exist without it. But many still reserve some quality that is specific to the mind, thereby leaving their thinking dualist. An evolutionary worldview explains the increasing complexity: elements and systems are interconnected and the mind does not need to be explained as a separate entity; rather, ‘.. mind appears .. as a natural emanation of the way processes and networks self-organize into goal-directed, adaptive agents’ [p 5], a conception known as process metaphysics. The thesis here is that the theory of the mind can be both non-dual AND analytic. To that end the vagueness of process metaphysics is replaced with action ontology: ‘That will allow us to “extend” the mind not just across notebooks and social systems, but across the whole of nature and society’ [p 5].

Agents and the intentional stance

Action ontology is based on reactions as per COT. Probability is a factor and so determinism does not apply. Reactions or processes are the pivot in action ontology and states are secondary: ‘States can be defined in terms of the reactions that are possible in that state (Heylighen, 2011; Turchin, 1993)’ [p 7]. DPB: this reminds me of the restrictions of Oudemans, the attractors and repellers that raise the probability that some states, and lower the probability that other states, can follow from this particular one. In that sense it also reminds me of the impression systems can give to an observer of being intentional. The list of actions that an agent can perform defines a dynamical system (Beer, 1995, 2000). The states that lead into an attractor define the attractor’s basin, and the process of attaining that position in phase-space is called equifinality: different initial states produce the same final state (Bertalanffy, 1973). The attractor, the place the system tends to move towards, is its ‘goal’, and the trajectory towards it, as chosen by the agent at each consecutive state, is its ‘course of action’ in order to reach that ‘goal’. The disturbances that might bring the agent off its course can be seen as challenges, which the agent does not control, but which the agent might be able to tackle by changing its course of action appropriately. To interpret the dynamics of a system as a goal-directed agent in an environment is the intentional stance (Dennett, 1989).

Panpsychism and the Theory of Mind

‘The “sensations” we introduced previously can be seen as rudimentary “beliefs” that an agent has about the conditions it is experiencing’ [p 10]. DPB: conversely, beliefs can be seen as sensations in the sense of internalized I-O rules. ‘The prediction (of the intentional stance, DPB) is that the agent will perform those actions that are most likely to realize its desires given its beliefs about the situation it is in’ [p 10]. DPB: and this is applicable to all kinds of systems. Indeed Dennett has defined different classes of physical systems, and I agree with the authors that there is no need for that, given that these systems are all considered to be agents (/ computational processes). Action ontology generalizes the application of the intentional stance to all conceivable systems and processes. To view non-human processes and systems in this way is in a sense ‘animistic’: all phenomena are sentient beings.

Organizations

In the action ontology a network of coupled reactions can be modeled: the output of one reaction forms the input for the next, and so on. In this way it can be shown that a new level of coherence emerges. If such a network produces its own components, including the elements required for its own reproduction, it is autopoietic. In spite of ever changing states, its organization remains invariant. The states are characterized by the current configurations of the system’s elements; the states change as a consequence of perturbations external to the system. Its organization lends the network system its (stable) identity despite the fact that it is in ongoing flux. The organization and its identity render it autonomous, namely independent of the uncertainties in its environment: ‘Still, the autopoietic network A interacts with the environment, by producing the actions Y appropriate to deal with the external challenges X. This defines the autopoietic organism as a higher-order agent: A + X → A + Y. At the abstract level of this overall reaction, there is no difference between a complex agent, such as an animal or a human, and an elementary agent, such as a particle. The difference becomes clear when we zoom in and investigate the changing state of the network of reactions inside the agent’ [p 14]. DPB: this is a kind of definition of the emergence of organization of a multitude of elements into a larger body. This relates to my black-box / transparency narrative. This line of thought is further elaborated in COT, where closure and self-maintenance are introduced to explain the notion of autopoiesis in networks. Closure means that eventually no new elements are produced, self-maintenance means that eventually all the elements are produced again (nothing is lost), and together they imply that all the essential parts are eventually recycled. This leads to states on an attractor. Also see the COT article by Francis. //INTERESTING!! In simple agents the input is directly transformed into an action: there is no internal state and these agents are reactive. In complex networks an input affects the internal state; the agent keeps an internal memory of previous experiences. That memory is determined by the sequence of sensations the agent has undergone. This memory together with its present sensations (perceptions of the environment) constitutes the agent’s belief system. A state is processed (to the next state) by the system’s network of internal reactions, the design of which depends on its autopoietic organization. A signal may or may not be the result of this processing and hence this process can be seen as a ‘deliberation’ or ‘sense-making’. Given the state of the environment, given the memory of the system resulting from its previous experience, and given its propensity to maintain its autopoiesis, an input is processed (interpreted) to formulate an action to deal with the changed situation. If the action turns out to be appropriate, then the action was justified, the rule leading up to it was true, and the beliefs are knowledge: ‘This is equivalent to the original argument that autopoiesis necessarily entails cognition (Maturana & Varela, 1980), since the autopoietic agent must “know” how to act on a potentially perturbing situation in order to safeguard its autopoiesis’.
This is connected to the notion of “virtue reliabilism”, that asserts that beliefs can be seen as knowledge when their reliability is evidenced by the cognitive capabilities (“virtues”) they grant the agent (Palermos, 2015; Pritchard, 2010) [p 15]. UP TO HERE //.

Socially distributed cognition

‘In our own approach to social systems, we conceive such processes as a propagation of challenges (Heylighen, 2014a). This can be seen as a generalization of Hutchins’s analysis of socially distributed cognition taking place through the propagation of “state” (Hutchins, 1995, 2000): the state of some agent determines that agent’s action or communication, which in turn affects the state of the next agent receiving that communication or undergoing that action. Since a state is a selection out of a variety of potential states, it carries information. Therefore, the propagation of state from agent to agent is equivalent to the transmission and processing of information. This is an adequate model of distributed cognition if cognition is conceived as merely complex information processing. But if we want to analyze cognition as the functioning of a mind or agency, then we need to also include that agent’s desires, or more broadly its system of values and preferences. .. in how far does a state either help or hinder the agent in realizing its desires? This shifts our view of information from the traditional syntactic perspective of information theory (information as selection among possibilities (Shannon & Weaver, 1963)) to a pragmatic perspective (information as trigger for goal-directed action (Gernert, 2006))’ (emphasis of DPB) [pp. 17-8]. DPB: this is an important connection to my idea that not only people’s minds process information, but the organization as such processes information also. This can explain how a multitude of people can be autonomous as an entity ‘an sich’. Distributed cognition is the cognition of the whole thing and in that sense the wording is not good, because the focus is no longer the human individual but the multitude as a single entity; a better word would be ‘integrated cognition’? It is proposed to replace the terms “information” or “state” with “challenge”: a challenge is defined as a situation (i.e. a conjunction of conditions sensed by some agent) that stimulates the agent to act. DPB: Heylighen suggests that acting on this challenge brings benefit to the agent; I think it is more prosaic than that. I am not sure that I need the concept of a challenge. Below is an illustration of my Shell example: an individual knows that action A leads to result B, but no one knows that U → Y; the employees together, however, know this: the knowledge is not in one person, but in the whole (the organization): John: U → V, Ann: V → W, Barbara: W → X, Tom: X → Y. Each person recognizes the issue, does not know the (partial) answer, but knows (or finds out) who does; the persons are aware of their position in the organization and who else is there and (more or less) doing what. ‘Together, the “mental properties” of these human and non-human agents will determine the overall course of action of the organization. This course of action moves towards a certain “attractor”, which defines the collective desire or system of values of the organization’ [p 21]. DPB: if I want to model the organization using COT then the above section can be a starting point. I’m not sure I do want to, because I find it impracticable to identify the mix of the ingredients that should enter the concoction that is the initial condition to evolve into the memeplex that is a firm. How many of ‘get a job’ per what amount of ‘the shareholder is king’ should be in it?
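A toy sketch of this chain (my own illustration; the agents, rules and the propagate helper are hypothetical, not an implementation from the paper): each agent holds one condition-action rule, and a "challenge" propagates from agent to agent until it is resolved.

```python
# Toy sketch (my own illustration) of the distributed-knowledge chain above:
# each agent holds one condition-action rule, and the current state of the
# challenge is passed along until no rule applies any more.
rules = {"John": ("U", "V"), "Ann": ("V", "W"), "Barbara": ("W", "X"), "Tom": ("X", "Y")}

def propagate(challenge, rules):
    """Pass the challenge along the agents until no one's rule applies."""
    trace = [challenge]
    changed = True
    while changed:
        changed = False
        for agent, (condition, action) in rules.items():
            if trace[-1] == condition:
                trace.append(action)       # this agent transforms the state
                changed = True
    return trace

print(propagate("U", rules))   # ['U', 'V', 'W', 'X', 'Y']: no single agent knows U -> Y
```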

Experiencing non-duality

Using the intentional stance it is possible to conceptualize a variety of processes as mind-like agencies. The mind does not reside in the brain, it sits in all kinds of processes in a distributed way.

Individuation of Social Systems

Lenartowicz, M., Weinbaum, D., Braathen, P. . The Individuation of Social Systems: A Cognitive Framework . Procedia Computer Science (Elsevier), vol. 88 (pp 15-20) . Doi: 10.1016/j.procs.2016.07.400 . 2016

Abstract

The starting point is formed by the Theory of Individuation (Simondon 1992), the Enactive Theory of Cognition (Di Paolo et al. 2010) and the Theory of Social Systems (Luhmann 1996). The objective is to identify how AI integrates into human society.

1. Introduction

Social systems influence cognitive activities. It is argued that social systems operate as cognitive systems: ‘.. autonomous, self-organizing loci of agency and cognition, which are distinct from human minds and manifesting behaviors that are irreducible to their aggregations’ [p 15]. DPB: I like this (in bold, to end all others) way to formulate the behavior specific to the whole, as opposed to the behavior specific to the individuals therein. It is argued here that these systems individuate in the same way as, and their mode of operation is analogous to, other processes of life. This paper does not follow some others that take a narrow approach to cognition starting from the architecture of the individual human mind; instead it presents a perspective on cognition that originates from a systemic sociological view, leading to a socio-human cognitive architecture; the role of the individual human being in the establishing of networks and their operation thereafter is reduced. The theory is based on the view of Heraklitus that ontologically reality is a sequence of processes instead of objects, and on Simondon’s theory of individuation: ‘This results in an understanding of social systems as complex sequences of occurrences of communication (emphasis of the authors), which are capable of becoming consolidated to the degree in which they start to display an emergent adaptive dynamics characteristic to cognitive systems – and to exert influence over their own mind-constituted environment’ [p 16]. DPB: this reminds me of my understanding of the landscape of Jobs, where Situations and Interactions take place as sequences of signals uttered and perceived.

2. Individuation of Cognitive Agents

The basis is a shift from an Aristotelian object-oriented ontology to a Heraklitian process-oriented ontology (or rather an ontogenesis); not individuals but individuation is the center-piece; no individual is assumed to precede these processes; all transformations are secondary to individuation: ‘Individuation is a primary formative activity whereas individuals are regarded as merely intermediate and metastable entities, undergoing a continuous process of change’ [p 16]. In this view the individual is always changing, and ‘always pregnant with not yet actualized and not yet known potentialities of change’ [p 16]. DPB: this reminds me of the monadic character of systems: they are very near completion, yet never quite finished and always ready to fight the previous war. Local and contingent interactions achieve ever higher levels of coordination between their constitutive elements; the resulting entities become ever more complex and can have agency. Cognition can be seen as a process of sense-making; cognition can facilitate the formation of boundaries (distinctions). This is explained by the theory of enactive cognition, which treats sense-making as a primary activity of cognition (Varela, Thompson & Rosch 1992; Stewart, Gapenne & Di Paolo 2010; De Jaegher & Di Paolo 2007). This idea is radicalized in this paper: sense-making is assumed to be the bringing forth of distinctions, objects and relations; sense-making precedes subjects and objects and it is necessary for their emergence; sense-making precedes the existence of consolidated cognitive agents to whom the activity itself would conventionally be attributed. DPB: this firstly reminds me of the phrase ‘love is in the air, even if there is nobody there yet’; ‘processes of individuation constitute a distributed and progressively more coherent (as boundaries and distinctions are formed) loci of autonomous cognitive activity’ [pp. 16-7]; also, the process of individuation precedes the process of autopoiesis: the latter cannot exist as a work in progress, but individuation occurs also without autopoiesis; and so autopoiesis can only be a design condition of a process that has already individuated. In this way individuation is taken from its narrow psychological context and projected onto a general systems application: ‘Sense-making entails crossing the boundary between the unknown and the known through the formation of tentative perceptions and actions consolidating them together into more or less stable conceptions (emphasis by the author)’ [p 17]. DPB: this is a useful working definition of sense-making; these processes are relevant not just for psychic and social processes; I believe they have their root (and started in some form once) as chemical and physical processes, for which the above terminology does not seem fully suitable; from that point on, these multitudes of elements ‘grew up’ together and became ever more complex. ‘Individuation as an on-going formative process, manifests in the co-determining interactions taking place within the heterogeneous populations of interacting agents. These populations are the ‘raw materials’ from which new individuals emerge. The sense-making activities are distributed over the population and have no center of regulated activity or synchrony. Coordination – the recurrent mutual regulation of behaviors – is achieved via interactions that are initially contingent. These interactions are necessary for the consolidation of any organized entity or system’ [p 17].

3 Social Systems as Cognitive Individuals

By a social system is meant any meta-stable form of social activity. DPB: but what is meant by meta-stable? This is the Luhmann understanding of a social system. This paper 1. demonstrates the individuation of social systems and 2. identifies social systems as metastable individuals. The events that are the building blocks of social reality happen as single occurrences of communication, each consisting of: 1. a selection of information, 2. a selection of the utterance, and 3. a selection of the understanding. DPB: this is as per my Logistical Model. If and only if the three selections are combined do they form a unity of a communicative event, ‘a temporary individual’. ‘This means that it distinguishes itself from its environment (i.e. any other processes or events) by the means of three provisional boundaries, which the event sets forth: (a) an ‘information-making boundary’ between the marked and the unmarked side of the distinction being made (Spencer Brown, 1994), i.e. delineating the selected information (marked – M) and the non-selected one (unmarked – Un-M), (b) a ‘semiotic boundary’ (Lotman, 2001) between the thus created signified (SD) and a particular signifier selected to carry the information (SR), and (c) a ‘sense-making’ boundary between the thus created sign (SGN) and the context (CX), i.e. delineating the understanding of information within its situation (Lenartowicz, Weinbaum & Braathen, 2016)’ [p 17]. DPB: I am not sure what to do with those three selections; I have not used them and instead I am working with the selection of some piece of information, while it is uttered, and while it is also perceived (made sense of). I must figure out whether (and how) to use this. Maybe ask ML to clarify how they connect to my logistical model, and especially the E and the B operators. It is important because it is a chain-link in a chain of events: ‘The three selections and corresponding boundaries of an event make the communication available to interact with or to be referred to by another communicative event constituted by another triple selection’ [p 17]. DPB: all this sounds a bit artificial, procedural and mechanical: how can this process come about in a natural way? Once recorded and remembered, these elements become available for endless re-use independent of space, time and context (frame). In closed networks of communication, however, they have a tendency to converge into recurrent self-reinforcing patterns, such that they become established and difficult not to be associated with, even if in a negative form or critique. From the associations of these selected simple forms complex individuated sequences, social systems, can arise. Through their interactions these systems gain and maintain coherence; as they recur, the probability that the same pattern is repeated is higher than the probability that a completely new pattern is selected. Initially contingent boundaries become self-reinforced and stable. ‘On account of their repetition, a social system can be said to develop perceptions (i.e. reappearing selections of information and understanding), actions (i.e. reappearing selections of utterance) and conceptions (percept – action associations) that dynamically bind them. Each such assemblage thus becomes a locus of identifiable cognitive activity, temporarily stabilized within a flux of communication’ [p 18].

4. The Role of Human Cognition

The (three) selections individuating social systems are performed by other cognitive individuated systems. In a social system that has individuated to a level of stability and coherence, the emerging patterns in that system further orient the selections made by people. And reciprocally, the psychic environment of the people facilitates the individuation of the social system by selecting new instances of communication that somehow fit the existing parts. Human beings are indispensable for the continuation of communication and hence for the maintenance of a social system, but they are incapable of influencing the social system, in the sense that one seedling is incapable of influencing the amount of water in a lake. Only when a social system is at the early stages of its individuation and taking shape can it be influenced by individual people: a pattern of a large social system is confirmed by many other communications, and a single deviating communication, one that does not follow the pattern, does not hold sufficient weight to change its course. ‘Taking into account a variety of powerful factors that guide all the linguistic activities of humans: (a) the relative simplicity, associative coherence, frequent recurrence of the cognitive operations once they become consolidated in a social system, (b) the rarity of context-free (e.g. completely exploratory and poetic) communications that is reinforced by the density and entanglement of all “language games” in which contemporary humans are all immersed in, and (c) the high level of predictability of human selection-making inputs observable from the sociological standpoint; it will be reasonable to set the boundaries of our modeling of the general phenomena of human cognition in such a way, which delineates the dynamics of two different kinds of individuating cognitive agencies operating at different scales: the human individual and the social system. Instead of reducing all cognitive activities to the human individual we can clearly distinguish cognitive agencies operating at different scales’ [p 19]. DPB: I like the three arguments above for the likelihood of patterns appearing in communication, and also that human cognition is to some extent built with the (extensive) help of social systems, such that human cognition cannot be fully reduced to the individual itself, but must also be attributed to the social systems in the environment of the individual.

Social Systems and Autopoiesis

Lenartowicz, M. . Linking Social Communication to Individual Cognition: Communication Science between Social Constructionism and Radical Constructivism . Constructivist Foundations vol. 12 No 1 . 2016

‘I wish to differentiate between a social species in the organic, animalistic sense and the interconnectivity of social personas in social science’s sense. While the former expresses its sense structures, co-opting language and other available symbolic tools towards its own autopoietic self-perpetuation and survival, the latter (personas) self-organize out of the usages of these tools – and aggregate up into larger self-organizing social constructs’ [p 50]. DPB: I find this important because it adds a category of behavior to the existing ones: the biological (love of kin &c.) and the social (altruism), of the category that improves the probability that the organism survives; added is now externally directed behavior that produces self-organization in the aggregate. ‘If we agree to approach social systems as cognitive agents per se, we must assume that there will be instances, or aspects, of human expression that are rather pulled by the “creatures of the semiosphere”, as I call the autopoietic constructs of the social (Lenartowicz 2016), for the sake of their own self-perpetuation, than pushed by the sense-structures of the human self’ [p 50]. DPB: I like this idea of the human mind being attracted by some aspects of social systems (and/or repelled by others); a term that is much used in ECCO is whether ‘something resonates with someone’. The argument above is that a push and a pull exist and that in the case of the social, the semiotic creatures have the upper hand over the proffered biological motivations. ‘The RC (radical constructivist) approach to human consciousness must, then, be balanced by the RC view of the social as an individuated, survival-seeking locus of cognition. The difference between the two kinds of organic and symbolic expressions of sociality, which are here suggested as perpetuating the two distinct autopoietic systems, .. has finally settled the long-standing controversy about whether social systems are autopoietic (..), demonstrating that both sides were right. They were simply addressing two angles of the social. Maturana’s objections originated from his understanding of social relatedness as a biological phenomenon (the organic social), whereas the position summarized by Cadenas and Arnold-Cathalifaud was addressing the social as it is conceived by the social sciences (the symbolic social). The difference here is not in the different disciplinary lenses being applied to the same phenomenon. Rather, it is between two kinds of phenomena, stemming from the cognitive operation of two kinds of autopoietic embodiments. For one, the social is an extension, or an expression, of the organic, physical embodiment of a social species. It does not form an operational closure itself. For the other, the social has happened to self-organize and evolve in a manner that has led it to spawn autonomous, autopoietic and individuating cognitive agents – the “social systems” about which Luhmann wrote’ [p 50]. DPB: this is a long quote with some important elements. First, the dichotomy between the social aspects of humans is explained. Second, it explains why Maturana, of all people, was opposed to the applicability of autopoiesis to social systems: now it seems clear why. Third, embodiment is introduced: for the organic social, the social is an extension of the physical embodiment of the individual, but without the autonomy; for the other, the social ís the embodiment, namely it self-organizes and evolves into autonomous systems.
I like that: the organization at the scale of the human and the organization at the level of the aggregate of the humans.