Distributed Intelligence

Heylighen, F. and Beigi, S. (2016). Mind outside brain: a radically non-dualist foundation for distributed cognition. In: Socially Extended Epistemology (Eds. Carter, Clark, Kallestrup, Palermos, Pritchard). Oxford University Press.

Abstract

We approach the problem of the extended mind from a radically non-dualist perspective. The separation between mind and matter is an artefact of the outdated mechanistic worldview, which leaves no room for mental phenomena such as agency, intentionality, or experience. [DPB: the rationale behind this is the determinism argument: if everything is determined by the rules of physics (nature), then nothing can be avoided and the future is determined. There can be no agency because there is nothing to choose; there can be no intentionality because people’s choices are determined by the rules of physics (it appears to be their intention, but it is physics talking); and there can be no personal experience because which events a person encounters is independent of the existence of the (physical) person]. We propose to replace it by an action ontology, which conceives mind and matter as aspects of the same network of processes. By adopting the intentional stance, we interpret the catalysts of elementary reactions as agents exhibiting desires, intentions, and sensations. [DPB: I agree with the idea that mind and body are ‘functions of the same processes’. The intentional stance implies the question: what would I desire, want, feel in his place in this circumstance, and hence what can I be expected to do?] Autopoietic networks of reactions constitute more complex superagents, which moreover exhibit memory, deliberation and sense-making. In the specific case of social networks, individual agents coordinate their actions via the propagation of challenges. [DPB: for the challenges model: see the article Evo mailed]. The distributed cognition that emerges from this interaction cannot be situated in any individual brain. [DPB: this is important and I have discussed this in the section about the Shell operator, who cannot physically be aware of the processes outside his own scope of professional activities].
This non-dualist, holistic view extends and operationalizes process metaphysics and Eastern philosophies. It is supported by both mindfulness experiences and mathematical models of action, self-organization, and cognition. [DPB: I must decide how to apply the concepts of individuation, virtual/real/present, process ontology and/or action ontology, distributed cognition and distributed intelligence (do I need that?), and computation/thinking/information processing in my arguments].

Introduction

Socially extended knowledge is a part of the philosophical theory of the extended mind (Clark & Chalmers, 1998; Palermos & Pritchard, 2013; Pritchard, 2010): mental phenomena such as memory, knowledge and sensation extend outside the individual human brain, into the material and social environment. DPB: this reminds me of the Shell narrative. The idea is that human cognition is not confined to information processing within the brain, but depends on phenomena external to it: ‘These include the body, cognitive tools such as notebooks and computers, the situation, the interactions between agent and environment, communications with other agents, and social systems. We will summarize this broad scale of “extensions” under the header of distributed cognition (Hutchins, 2000), as they all imply that cognitive content and processes are distributed across a variety of agents, objects and actions. Only some of those are located inside the human brain; yet all of them contribute to human decisions by providing part of the information necessary to make these decisions’ [pp. 1-2]. The aim of the paper is to propose a radical resolution to the controversy between processes such as belief, desire and intention, which are considered mental, and others such as information transmission, processing and storage, which are considered mechanical: ‘we assume that mind is a ubiquitous property of all minimally active matter (Heylighen, 2011)’ (emphasis DPB: this statement is similar (analogous?) to the statement that all processes in nature are computational processes, or that all processes are cognitive and individuating processes) [p 2].

From dualism to action ontology

Descartes argued that people are free to choose: the human mind therefore does not follow physical laws. But since all matter follows such laws, the mind cannot be material, and must belong to a separate, non-material realm. This is illustrated by the narrative that the mind leaves the body when a person dies. But a paradox arises: if mind and matter are separate, then how can one affect the other? Most scientists agree that the mind ‘supervenes’ on the matter of the brain and cannot exist without it. But many still reserve some quality that is specific to the mind, which leaves their thinking dualist. An evolutionary worldview explains increasing complexity without positing the mind as a separate entity: elements and systems are interconnected, and ‘.. mind appears .. as a natural emanation of the way processes and networks self-organize into goal-directed, adaptive agents’ [p 5], a conception known as process metaphysics. The thesis here is that a theory of the mind can be both non-dual AND analytic. To that end the vagueness of process metaphysics is replaced with an action ontology: ‘That will allow us to “extend” the mind not just across notebooks and social systems, but across the whole of nature and society’ [p 5].

Agents and the intentional stance

Action ontology is based on reactions, as per COT (Chemical Organization Theory). Probability is a factor, and so determinism does not apply. Reactions or processes are the pivot of action ontology and states are secondary: ‘States can be defined in terms of the reactions that are possible in that state (Heylighen, 2011; Turchin, 1993)’ [p 7]. DPB: this reminds me of the restrictions of Oudemans, the attractors and repellers that increase the probability that some states, and decrease the probability that other states, can follow from this particular one. In that sense it also reminds me of the perception that systems can give the observer that they are intentional. The list of actions that an agent can perform defines a dynamical system (Beer, 1995, 2000). The states that lead into an attractor define the attractor’s basin, and the property that different initial states produce the same final state is called equifinality (Bertalanffy, 1973). The attractor, the place the system tends to move towards, is its ‘goal’, and the trajectory towards it, as chosen by the agent at each consecutive state, is its ‘course of action’ in order to reach that ‘goal’. The disturbances that might bring the agent off its course can be seen as challenges, which the agent does not control, but which it may be able to tackle by appropriately changing its course of action. To interpret the dynamics of a system as a goal-directed agent in an environment is to take the intentional stance (Dennett, 1989).
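The notions of attractor, basin and equifinality can be made concrete with a minimal sketch (my own illustration, not from the paper): a one-dimensional dynamical system whose only available ‘action’ is the map x → x/2 + 1, with a single attractor at x = 2.

```python
# Minimal sketch (not from the paper): equifinality in a dynamical system.
# The map x -> x/2 + 1 has a unique fixed point at x = 2; every initial
# state in its basin converges to the same final "goal" state.

def step(x):
    """One elementary 'action': the system's transition rule in state x."""
    return x / 2 + 1

def run(x0, n=60):
    """Follow the course of action for n steps from initial state x0."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

# Different initial states, same final state: equifinality.
finals = [run(x0) for x0 in (-10.0, 0.0, 7.5, 100.0)]
print([round(f, 6) for f in finals])  # all converge to 2.0
```

A ‘challenge’ in this picture would be a perturbation that displaces x mid-run; as long as the perturbed state stays in the basin, the system still reaches the same attractor.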

Panpsychism and the Theory of Mind

‘The “sensations” we introduced previously can be seen as rudimentary “beliefs” that an agent has about the conditions it is experiencing’ [p 10]. DPB: conversely, beliefs can be seen as sensations in the sense of internalized I-O rules. ‘The prediction (of the intentional stance, DPB) is that the agent will perform those actions that are most likely to realize its desires given its beliefs about the situation it is in’ [p 10]. DPB: and this is applicable to all kinds of systems. Dennett indeed designed different classes for physical systems, and I agree with the authors that there is no need for that, given that these systems are all considered to be agents (/ computational processes). Action ontology generalizes the application of the intentional stance to all conceivable systems and processes. To view non-human processes and systems in this way is in a sense ‘animistic’: all phenomena are sentient beings.

Organizations

In the action ontology a network of coupled reactions can be modeled: the output of one reaction forms the input of the next, and so on. In this way it can be shown that a new level of coherence emerges. If such a network produces its own components, including the elements required for its own reproduction, it is autopoietic. In spite of ever-changing states, its organization remains invariant. The states are characterized by the current configurations of the system’s elements; the states change as a consequence of perturbations external to the system. Its organization lends the network system its (stable) identity despite the fact that it is in ongoing flux. The organization and its identity render it autonomous, namely independent of the uncertainties in its environment: ‘Still, the autopoietic network A interacts with the environment, by producing the actions Y appropriate to deal with the external challenges X. This defines the autopoietic organism as a higher-order agent: A + X → A + Y. At the abstract level of this overall reaction, there is no difference between a complex agent, such as an animal or a human, and an elementary agent, such as a particle. The difference becomes clear when we zoom in and investigate the changing state of the network of reactions inside the agent’ [p 14]. DPB: this is a kind of definition of the emergence of organization of a multitude of elements into a larger body. This relates to my black-box / transparency narrative. This line of thought is further elaborated in COT, where closure and self-maintenance are introduced to explain the notion of autopoiesis in networks. Closure means that eventually no new elements are produced; self-maintenance means that eventually all the elements are produced again (nothing is lost); together they imply that all the essential parts are eventually recycled. This leads to states on an attractor. Also see COT article Francis. //INTERESTING!!
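The closure and self-maintenance conditions can be sketched in code. This is my own simplified illustration, not the paper’s formalism: the molecule names and the two-reaction network are invented, and full COT self-maintenance involves stoichiometry and flux vectors, which I reduce here to ‘everything consumed is also produced’.

```python
# Simplified sketch of COT-style closure and self-maintenance.
# A reaction is an (inputs, outputs) pair of sets of element names.
# The toy network below is invented for illustration.

reactions = [
    ({"a", "b"}, {"c"}),      # a + b -> c
    ({"c"}, {"a", "b"}),      # c -> a + b  (the parts are recycled)
]

def applicable(S, reactions):
    """Reactions whose inputs are all available within the set S."""
    return [(ins, outs) for ins, outs in reactions if ins <= S]

def closed(S, reactions):
    """Closure: reactions firing within S produce nothing outside S."""
    return all(outs <= S for _, outs in applicable(S, reactions))

def self_maintaining(S, reactions):
    """Simplified self-maintenance: every element of S that is consumed
    by an applicable reaction is also produced by one."""
    app = applicable(S, reactions)
    consumed = set().union(*(ins for ins, _ in app)) if app else set()
    produced = set().union(*(outs for _, outs in app)) if app else set()
    return (consumed & S) <= produced

S = {"a", "b", "c"}
print(closed(S, reactions), self_maintaining(S, reactions))  # True True
```

Note that the subset {a, b} alone would not be closed: its reaction produces c, a new element outside the set. The full set {a, b, c} is both closed and self-maintaining, i.e. an ‘organization’ in the COT sense.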
In simple agents the input is directly transformed into an action: there is no internal state and these agents are reactive. In complex networks an input affects the internal state: the agent keeps an internal memory of previous experiences. That memory is determined by the sequence of sensations the agent has undergone. This memory, together with its present sensations (perceptions of the environment), constitutes the agent’s belief system. A state is processed (into the next state) by the system’s network of internal reactions, the design of which depends on its autopoietic organization. A signal may or may not result from this processing, and hence the process can be seen as ‘deliberation’ or ‘sense-making’. Given the state of the environment, the memory of the system resulting from its previous experience, and its propensity to maintain its autopoiesis, an input is processed (interpreted) to formulate an action that deals with the changed situation. If the action turns out to be appropriate, then it was justified, the rule leading up to it was true, and the beliefs count as knowledge: ‘This is equivalent to the original argument that autopoiesis necessarily entails cognition (Maturana & Varela, 1980), since the autopoietic agent must “know” how to act on a potentially perturbing situation in order to safeguard its autopoiesis’. This is connected to the notion of “virtue reliabilism”, which asserts that beliefs can be seen as knowledge when their reliability is evidenced by the cognitive capabilities (“virtues”) they grant the agent (Palermos, 2015; Pritchard, 2010) [p 15]. UP TO HERE //.
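The contrast between a reactive agent and one with memory can be sketched as follows (my own illustration; the stimuli and the ‘repeated threat’ rule are invented, not from the paper):

```python
# Illustration (not from the paper): a reactive agent maps input straight
# to action; a stateful agent folds inputs into a memory that shapes
# later actions, a rudimentary form of "deliberation".

def reactive(stimulus):
    """No internal state: same stimulus always yields the same action."""
    return "flee" if stimulus == "threat" else "rest"

class StatefulAgent:
    def __init__(self):
        self.memory = []  # sequence of past sensations = rudimentary "beliefs"

    def act(self, stimulus):
        self.memory.append(stimulus)
        # Sense-making: a repeated threat changes the chosen action.
        if self.memory.count("threat") >= 2:
            return "relocate"
        return reactive(stimulus)

agent = StatefulAgent()
print([agent.act(s) for s in ["food", "threat", "threat"]])
# -> ['rest', 'flee', 'relocate']
```

The same stimulus ("threat") produces different actions depending on the agent’s history, which is exactly what distinguishes the memory-bearing agent from the reactive one.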

Socially distributed cognition

In our own approach to social systems, we conceive such processes as a propagation of challenges (Heylighen, 2014a). This can be seen as a generalization of Hutchins’s analysis of socially distributed cognition taking place through the propagation of “state” (Hutchins, 1995, 2000): the state of some agent determines that agent’s action or communication, which in turn affects the state of the next agent receiving that communication or undergoing that action. Since a state is a selection out of a variety of potential states, it carries information. Therefore, the propagation of state from agent to agent is equivalent to the transmission and processing of information. This is an adequate model of distributed cognition if cognition is conceived as merely complex information processing. But if we want to analyze cognition as the functioning of a mind or agency, then we need to also include that agent’s desires, or more broadly its system of values and preferences. .. in how far does a state either help or hinder the agent in realizing its desires? This shifts our view of information from the traditional syntactic perspective of information theory (information as selection among possibilities) (Shannon & Weaver, 1963) to a pragmatic perspective (information as trigger for goal-directed action) (Gernert, 2006) (emphasis DPB) [pp. 17-8]. DPB: this is an important connection to my idea that not only people’s minds process information: the organization as such processes information too. This can explain how a multitude of people can be autonomous as an entity ‘an sich’. Distributed cognition is the cognition of the whole thing, and in that sense the wording is not good, because the focus is no longer the human individual but the multitude as a single entity; a better word would be ‘integrated cognition’? It is proposed to replace the terms “information” or “state” with “challenge”: a challenge is defined as a situation (i.e. a conjunction of conditions sensed by some agent) that stimulates the agent to act. DPB: Heylighen suggests that acting on this challenge brings benefit to the agent; I think it is more prosaic than that. I am not sure that I need the concept of a challenge. Below is an illustration of my Shell example: an individual knows that action A leads to result B, but no one knows that U → Y; yet the employees together know this. The knowledge is not in one person, but in the whole (the organization): John: U → V, Ann: V → W, Barbara: W → X, Tom: X → Y. Each person recognizes the issue, does not know the (partial) answer, but knows (or finds out) who does; the persons are aware of their position in the organization, of who else is there, and of (more or less) what they do. ‘Together, the “mental properties” of these human and non-human agents will determine the overall course of action of the organization. This course of action moves towards a certain “attractor”, which defines the collective desire or system of values of the organization’ [p 21]. DPB: if I want to model the organization using COT then the section above can be a starting point. I’m not sure I do want to, because I find it impracticable to identify the mix of ingredients that should enter the concoction that is the initial condition to evolve into the memeplex that is a firm. How many parts of ‘get a job’ per what amount of ‘the shareholder is king’ should be in it?
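The Shell example above can be sketched in code: each agent holds one partial rule, no single agent knows U → Y, but passing the challenge along the chain recovers the answer. A minimal sketch, using the names and rules from my notes; the propagation loop itself is my own simplification of challenge propagation.

```python
# Distributed knowledge: each agent knows only one rule, yet the
# organization as a whole "knows" U -> Y by propagating the challenge.

rules = {
    "John":    ("U", "V"),
    "Ann":     ("V", "W"),
    "Barbara": ("W", "X"),
    "Tom":     ("X", "Y"),
}

def propagate(challenge, rules):
    """Pass the challenge from agent to agent until no rule applies."""
    trace = [challenge]
    changed = True
    while changed:
        changed = False
        for agent, (src, dst) in rules.items():
            if trace[-1] == src:
                trace.append(dst)  # this agent resolves its partial step
                changed = True
    return trace

print(propagate("U", rules))  # ['U', 'V', 'W', 'X', 'Y']
```

The resulting trace exists in no single agent’s rule set: it is a property of the network, which is the point of calling the cognition distributed (or ‘integrated’).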

Experiencing non-duality

Using the intentional stance it is possible to conceptualize a variety of processes as mind-like agencies. The mind does not reside in the brain; it is distributed across all kinds of processes.

Published by

DP

Complexity Scientist