Chemical Organization Theory and Autopoiesis

E-mail communication from Francis Heylighen, 29 May 2018:

Inspired by the notion of autopoiesis (“self-production”) that Maturana and Varela developed as a definition of life, I wanted to generalize the underlying idea of cyclic processes to other ill-understood phenomena, such as mind, consciousness, social systems and ecosystems. The difference between these phenomena and the living organisms analysed by Maturana and Varela is that the former don’t have a clear boundary or closure that gives them a stable identity. Yet, they still exhibit this mechanism of “self-production” in which the components of the system are transformed into other components in such a way that the main components are eventually reconstituted.

This mechanism is neatly formalized in COT’s notion of “self-maintenance” of a network of reactions. I am not going to repeat this here but refer to my paper cited below. Instead, I’ll give a very simple example of such a circular, self-reproducing process:

A -> B,

B -> C,

C -> A

The components A, B, C are here continuously broken down but then reconstituted, so that the system rebuilds itself, and thus maintains an invariant identity within a flux of endless change.
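A minimal numerical sketch (my own illustration, not part of the e-mail; the rate constant and time step are arbitrary assumptions) makes this concrete: integrating the three reactions with mass-action kinetics shows that the individual components keep being converted into one another while their total amount stays invariant.

import numpy as np

rate, dt = 1.0, 0.01                      # assumed rate constant and time step
x = np.array([1.0, 0.0, 0.0])             # concentrations of A, B, C
for _ in range(10000):
    dA = rate * (x[2] - x[0])             # A is produced from C, consumed into B
    dB = rate * (x[0] - x[1])             # B is produced from A, consumed into C
    dC = rate * (x[1] - x[2])             # C is produced from B, consumed into A
    x = x + dt * np.array([dA, dB, dC])
print(x, x.sum())                         # concentrations equalize; the total is conserved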

A slightly more complex example:

A + X -> B + U

B + Y -> C + V

C + Z -> A + W

Here A, B, and C need the resources (inputs, or “food”) X, Y and Z to be reconstituted, while producing the waste products U, V, and W. This is more typical of an actual organism that needs inputs and outputs while still being “operationally” closed in its network of processes.
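In COT terms, the self-maintenance of this second network can be checked mechanically: one asks whether a strictly positive flux vector exists for which no internal component has a negative net production. The sketch below is my own reading of that criterion (not code from any of the cited papers); X, Y, Z, U, V and W are treated as external food and waste, so only A, B and C appear as rows of the stoichiometric matrix.

import numpy as np
from scipy.optimize import linprog

# Rows: net change of A, B, C; columns: the reactions
# A+X -> B+U,  B+Y -> C+V,  C+Z -> A+W  (X, Y, Z, U, V, W are external).
S = np.array([[-1,  0,  1],
              [ 1, -1,  0],
              [ 0,  1, -1]])

# Look for a flux vector v with v_j >= 1 (positive up to scaling) and S v >= 0.
res = linprog(c=np.zeros(3), A_ub=-S, b_ub=np.zeros(3), bounds=[(1, None)] * 3)
print("self-maintaining:", res.success, "example flux:", res.x)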

In more complex processes, several components are simultaneously consumed and produced, but in such a way that the overall mixture of components remains relatively invariant. In this case, the concentrations of the components can vary relative to one another, so that the system never really returns to the same state, only to a state that is qualitatively equivalent (having the same components but in different amounts).

One more generalization is to allow the state of the system to also vary qualitatively: some components may (temporarily) disappear, while others are newly added. In this case, we no longer have strict autopoiesis or [closure + self-maintenance], i.e. the criterion for being an “organization” in COT. However, we still have a form of continuity of the organization based on the circulation or recycling of the components.

An illustration would be the circulation of traffic in a city. Most vehicles move to different destinations within the city, but eventually come back to destinations they have visited before. However, occasionally vehicles leave the city and may or may not come back, while new vehicles enter the city and may or may not stay. Thus, the distribution of individual vehicles in the city changes quantitatively and qualitatively while remaining relatively continuous, as most vehicle-position pairs are “recycled” or reconstituted eventually. This is what I call circulation.

Most generally, what circulates are not physical things but what I have earlier called challenges. Challenges are phenomena or situations that incite some action. This action transforms the situation into a different situation. Alternative names for such phenomena could be stimuli (phenomena that stimulate an action or process), activations (phenomena that are active, i.e. ready to incite action) or selections (phenomena singled out as being important, valuable or meaningful enough to deserve further processing). The term “selections” is the one used by Luhmann in his autopoietic model of social systems as circulating communications.

I have previously analysed distributed intelligence (and more generally any process of self-organization or evolution) as the propagation of challenges: one challenge produces one or more other challenges, which in turn produce further challenges, and so on. Circulation is a special form of propagation in which the initial challenges are recurrently reactivated, i.e. where the propagation path is circular, coming back to its origins.

This seems to me a better model of society than Luhmann’s autopoietic social systems. The reason is that proper autopoiesis does not really allow the system to evolve, as it needs to exactly rebuild all its components, without producing any new ones. With circulating challenges, the main structure of society is continuously rebuilt, thus ensuring the continuity of its organization, while still allowing gradual changes in which old challenges (distinctions, norms, values…) dissipate and new ones are introduced.

Another application of circulating challenges is ecosystems. Different species and their products (such as CO2, water, organic material, minerals, etc.) are constantly recycled, as one is consumed in order to produce another, but most are eventually reconstituted. Yet, not everything is reproduced: some species may become extinct, while new species invade the ecosystem. Thus the ecosystem undergoes constant evolution, while being relatively stable and resilient against perturbations.

Perhaps the most interesting application of this concept of circulation is consciousness. The “hard problem” of consciousness asks why information processing in the brain does not just function automatically or unconsciously, the way we automatically pull back our hand from a hot surface before we have even become conscious of the pain of burning. The “global workspace” theory of consciousness says that various subconscious stimuli enter the global workspace in the brain (a crossroads of neural connections in the prefrontal cortex), but that only a few are sufficiently amplified to win the competition for workspace domination. The winners are characterized by much stronger activation and their ability to be “broadcast” to all brain modules (instead of remaining restricted to specialized modules functioning subconsciously). These brain modules can then each add their own specific interpretation to the “conscious” thought.

In my interpretation, reaching the level of activation necessary to “flood” the global workspace means that activation does not just propagate from neuron to neuron, but starts to circulate so that a large array of neurons in the workspace is constantly reactivated. This circulation keeps the signal alive long enough for the different specialized brain modules to process it and add their own inferences to it. Normally, activation cannot stay in place, because of neuronal fatigue: an excited neuron must pass on its “action potential” to connected neurons; it cannot maintain its own activation. To maintain an activation pattern (representing a challenge) long enough that it can be examined and processed by disparate modules, that pattern must be stabilized by circulation.
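A toy sketch (my own construction, with made-up decay and network sizes, not a claim about actual neural circuitry) illustrates the point: activation that must always be passed on dies out on a linear chain of units, but persists when the propagation path closes into a loop.

import numpy as np

def run(connections, n=10, steps=30):
    act = np.zeros(n)
    act[0] = 1.0                              # an initial stimulus
    for _ in range(steps):
        new = np.zeros(n)
        for src, dst in connections:
            new[dst] += 0.95 * act[src]       # slight decay at every hop
        act = new                             # a unit cannot hold its own activation
    return act.sum()

chain = [(i, i + 1) for i in range(9)]        # linear path: 0 -> 1 -> ... -> 9
loop = chain + [(9, 0)]                       # the same path closed into a circle
print(run(chain))                             # 0.0: the signal has left the chain
print(run(loop))                              # still > 0: circulation keeps it alive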

But circulation, as noted, does not imply invariance or permanence, merely a relative stability or continuity that undergoes transformations by incoming stimuli or on-going processing. This seems to be the essence of consciousness: on the one hand, the content of our consciousness is constantly changing (the “stream of consciousness”), on the other hand that content must endure sufficiently long for specialized brain processes to consider and process it, putting part of it in episodic memory, evaluating part of it in terms of its importance, deciding to turn part of it into action, or dismissing or vetoing part of it as inappropriate.

This relative stability enables reflection, i.e. considering different options implied by the conscious content, and deciding which ones to follow up, and which ones to ignore. This ability to choose is the essence of “free will”. Subconscious processes, on the other hand, just flow automatically and linearly from beginning to end, so that there is no occasion to interrupt the flow and decide to go somewhere else. It is because the flow circulates and returns that the occasion is created to interrupt it after some aspects of that flow have been processed and found to be misdirected.

To make this idea of repetition with changes more concrete, I wish to present a kind of “delayed echo” technique used in music. One of the best implementations is Frippertronics, invented by avant-garde rock guitarist Robert Fripp (of King Crimson): https://en.wikipedia.org/wiki/Frippertronics

The basic implementation consists of an analogue magnetic tape on which the sounds produced by a musician are recorded. However, after having passed the recording head of the tape recorder, the tape continues moving until it reaches another head that reads and plays the recorded sound. Thus, the sound recorded at time t is played back at time t + T, where the interval T depends on the distance between the recording and playback heads. But while the recorded sound is played back, the recording head continues recording all the sound, played by either the musician(s) or the playback head, on the same tape. Thus, the sound propagates from musician to recording head, from where it is transported by tape to the playback head, from where it is propagated in the form of a sound wave back to the recording head, thus forming a feedback loop.

If T is short, the effect is like an echo, where the initial sound is repeated a number of times until it fades away (under the assumption that the playback is slightly less loud than the original sound). For a longer T, the repeated sound may not be immediately recognized as a copy of what was recorded before given that many other sounds have been produced in the meantime. What makes the technique interesting is that while the recorded sounds are repeated, the musician each time adds another layer of sound to the layers already on the recording. This allows the musician to build up a complex, multilayered, “symphonic” sound, where s/he is being accompanied by her/his previous performance. The resulting music is repetitive, but not strictly so, since each newly added sound creates a new element, and these elements accumulate so that they can steer the composition in a wholly different direction.

This “tape loop” can be seen as a simplified (linear or one-dimensional) version of what I called circulation, where the looping or recycling maintains a continuity, while the gradual fading of earlier recordings and the addition of new sounds create an endlessly evolving “stream” of sound. My hypothesis is that consciousness corresponds to a similar circulation of neural activation, with the different brain modules playing the role of the musicians that add new input to the circulating signal. A difference is probably that the removal of outdated input does not just happen by slow “fading” but by active inhibition, given that the workspace can only sustain a certain amount of circulating activation, so that strong new input tends to suppress weaker existing signals. This, and the complexity of circulating in several directions of a network, may explain why conscious content appears much more dynamic than repetitive music.
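For the digitally inclined, the tape loop is essentially a feedback delay line. The sketch below is my own approximation (sample rate, delay time and feedback gain are arbitrary assumptions): whatever is “played” at time t reappears at t + T, slightly attenuated, mixed with the new input, and is recorded again.

import numpy as np

sr = 8000                          # assumed sample rate (Hz)
T = 0.5                            # assumed head distance, expressed as seconds of delay
delay = int(sr * T)
feedback = 0.7                     # playback slightly quieter than the original

duration = 5 * sr
dry = np.zeros(duration)           # the musician's input: a short burst, then silence
t = np.arange(sr // 4)
dry[:len(t)] = np.sin(2 * np.pi * 440 * t / sr)

tape = np.zeros(duration)          # what the recording head has put on tape
out = np.zeros(duration)
for n in range(duration):
    playback = feedback * tape[n - delay] if n >= delay else 0.0
    out[n] = dry[n] + playback     # what is heard: new playing plus the delayed layer
    tape[n] = out[n]               # the recording head captures musician and playback
# out now contains the burst repeated every T seconds, fading with each pass.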

How Social Systems Program Human Behavior

Heylighen, F., Lenartowicz, M., Kingsbury, K., Beigi, S., Harmsen, T. . Social Systems Programming I: neural and behavioral control mechanisms

Abstract

'Social systems can be defined as autopoietic networks of distinctions and rules that specify which actions should be performed under which conditions. Social systems have an enormous power over human individuals, as they can “program” them, ..' [draft p 1]. DPB: I like the summary 'distinctions and rules', but I'm not sure why (maybe it is the definitiveness of this very small list). I also like the phrase 'which actions .. under which conditions': this is interesting because social systems are 'made of' communication, which in turn is 'made of' signals, which in turn are built up from selections of utterances &c., understandings and information. The meaning is that information depends on its frame, namely its environment. And so this phrase makes the link between communication, rule-based systems and the assigning of meaning by (in) a system. Lastly, these social mechanisms hold a strong influence over humans, even to the point of making them damage themselves. This paper is about the basic neural and behavioral mechanisms that social systems use for this programming. This should be important for my landscape of the mind, and for familiarization.

Introduction

Humans experience a large influence from many different social systems on a daily basis: 'Our beliefs, thoughts and emotions are to an important extent determined by the norms, culture and morals that we acquired via processes of education, socialization and communication' [p 1]. DPB: this resonates with me, because of the choice of the words 'beliefs' and 'thoughts': these nicely match the same words in my text, where I explain how these mechanisms operate. In addition I like this phrase because of the concept of acquisition, although I doubt that the word 'communication' above is used in the sense of Luhmann. It is not easy to critique these processes, or even to realize that they are 'social construction' and to understand them as such (the one making a distinction cannot talk about it). Also, what is reality in this sense: is it what would have been without the behavior based on these socialized rules, or the behavior as-is (the latter, I guess)? 'Social systems can be defined as autopoietic networks of distinctions and rules that govern the interactions between individuals' (I preferred the one from the abstract: which actions should be performed under which conditions, DPB). 'The distinctions structure reality into a number of socially sanctioned categories of conditions, while ignoring phenomena that fall outside these categories. The rules specify how the individuals should act under the thus specified conditions. Thus, a social system can be modeled as a network of condition-action rules that directs the behavior of individual agents. These rules have evolved through the repeated reinforcement of certain types of social actions' [p 2]. DPB: this is a nice summary of how I also believe things work: rule-based systems – distinctions (social categories) – conditions per distinction – behavior as per the condition-action rules – rules evolving through repeated reinforcement of social actions. 'Such a system of rules tends to self-organize towards a self-perpetuating configuration. This means that the actions or communications abiding by these rules engender other actions that abide by these same general rules. In other words, the network of social actions or communications perpetually reproduces itself. It is closed in the sense that it does not generate actions of a type that are not already part of the system; it is self-maintaining in the sense that all the actions that define parts of the system are eventually produced again (Dittrich & Winter, 2008). This autopoiesis turns the social system into an autonomous, organism-like agent, with its own identity that separates it from its environment. This identity or “self” is preserved by the processes taking place inside the system, and therefore actively defended against outside or “non-self” influences that may endanger it' [p 2]. DPB: this almost literally explains how cultural evolution takes place. This might be a good quote to include and cut a lot of grass in one go! Social systems wield a powerful influence over people, even making them act against their own health. The workings of social systems are likened to parasites such as the rabies virus, which 'motivates' its host to become aggressive and bite others so as to spread the virus. 'We examine the simple neural reinforcement mechanism that is the basis for the process of conditioning whilst also ensuring self-organization of social systems' (emphasis by the author) [p 3].
DPB: very important: this is the pivot where the human mind is conditioned such that it is incited (motivated) to act in a specific way, and where the self-organization of the social system occurs. This is how my bubbles / situations / jobs work! An element of this process is familiarization: the neural reinforcement mechanism.
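As a toy reading of 'closure' and 'self-maintenance' for such a rule network (entirely my own construction, not the paper's formalism; the action types are invented), one can let a small set of condition-action rules run and check that no new action types appear and that every type keeps being regenerated:

# condition (an observed type of action) -> prescribed follow-up action
rules = {"greeting": "reply", "reply": "thanks", "thanks": "greeting"}

produced = set()
current = {"greeting"}                     # an initial social action
for _ in range(10):                        # let the network run a few rounds
    current = {rules[a] for a in current if a in rules}
    produced |= current

system = set(rules) | set(rules.values())
print("closed:", produced <= system)             # no action types outside the system appear
print("self-maintaining:", system <= produced)   # every type is eventually produced again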

The Power of Social Systems

In the hunter-gatherer period, humans lived in small groups and individuals could come and go as they wanted, joining or forming a new group [p 3]. DPB: I question whether free choice was involved in those decisions to stay or leave – or whether they were rather kicked out – and whether it was a smooth transfer to other bands – or whether they lost standing and had to settle for a lower rank in a new group. 'These first human groupings were “social” in the sense of forming a cooperative, caring community, but they were not yet consolidated into autopoietic systems governed by formal rules, and defined by clear boundaries' [p 4]. DPB: I have some doubts because it sounds too idealistic / normal; however, if taken at face value then this is a great argument to illustrate the developing positions of Kev and Gav against. In sharp contrast are the agricultural communities: they set themselves apart from nature and other social systems, considering everything outside of their domain fair game for exploitation, hierarchically organized, upheld with a symbolic order: authorities and divinities paid homage to with offerings, rituals, prescriptions and taboos. In the latter society it is dangerous not to live by the rules: 'Thus, social systems acquired a physical power over life and death. As they evolved and refined their network of rules, this physical power engendered a more indirect moral or symbolic power that could make people obey the norms with increasingly less need for physical coercion' [p 4]. DPB: I always miss the concept of 'autopolicing' in the ECCO texts. Individuation of a social system: 1. a contour forms from first utterances in a context (mbwa!) 2. these are mutually understood and get repeated 3. when something falls outside the distinction (norm) there will be a remark 4. autopolicing. Our capacity to cognize depends on the words our society offers to describe what we perceive: 'More fundamentally, what we think and understand is largely dependent on the concepts and categories provided by the social systems, and by the rules that say which category is associated with which other category of expectations or actions' [p 5]. DPB: this adds to my theory the idea that not only the rules for decision making and for action depend on the belief systems, namely the memeplexes, but also people's 'powers of perception'.

How Social Systems Impede Self-actualization

Social rules govern the whole of our worldview, namely our picture of reality and our role within it (emphasis DPB re definition of worldview): 'They tell us which are the major categories of existence (e.g. mind vs. body, duty vs. desire), what properties these categories have (e.g. mind is insubstantial, the body is inert and solid, duty is real and desire is phantasmagoric), and what our attitudes and behaviors towards each of these categories should be (e.g. the body is to be ignored and despised, desire is to be suppressed)' [p 5]. DPB: I like this because it gives some background to motivations; however, I believe they are more varied than this and that they do not only reflect the major categories but everything one can know (or rather believe). They are just-so in the sense that they can be (seen or perceived as) useful for something like human well-being, or as limiting it. They are generally tacit and believed to be universal, and so it is difficult to know which of the above they are. '.. these rules have self-organized out of distributed social interactions. Therefore, there is no individual or authority that has the power to change them or announce them obsolete. This means that in practice we are enslaved by the autopoietic social system: we are programmed to obey its rules without questioning' [p 6]. DPB: I agree, there is no other valid option than that from a variety of just-so stories a few are selected that fit better with the existing ones. To people it may then appear that these are the more useful ones, but the arguments used serve a mere narrative that explains why people do stuff, lest they appear to do stuff without knowing why. And as a consequence the motivation to do things only if they serve a purpose is itself a meme that tells us to act in this way, especially vis-à-vis others, namely to construct a narrative such that this behavior is explained. The rules driving behavior can be interpreted more or less strictly: 'Moreover, some rules (like covering the feet) tend to be enforced much less strictly than others (like covering the genitals)' [p 6]. DPB: hahaa: Fokke & Sukke. Some of the rules that govern a society are allowed some margin of interpretation, and so a variety of them exist; others are assumed to be generally valid, and hence they are more strictly interpreted, exhibiting less variety, leaving people unaware that they are in fact obeying a rule at all. As a consequence of a particular rule being part of a much larger system, rules cannot be easily changed, especially because the behavior of the person herself is – perhaps unknowingly – steered by that rule or system of rules. In this sense the system can be said to hinder or impede people's self-actualization. 'The obstruction of societal change and self-actualization is not a mere side effect of the rigidity of social systems; it is an essential part of their identity. An autopoietic system aims at self-maintenance. Therefore, it will counteract any processes that threaten to perturb its organization (Maturana & Varela, 1980; Mingers, 1994). In particular, it will suppress anything that would put into question the rules that define it. This includes self-actualization, which is a condition generally characterized by openness to new ideas, autonomy, and enduring exploration (Heylighen, 1992; Maslow, 1970). Therefore, if we wish to promote self-actualization, we will need to better understand how these mechanisms of suppression used by social systems function' [p 7].
DPB: I fully agree with the mechanism and I honestly wonder if it is at all possible to know one’s state of mind (what one has been familiarized with in one’s life experience so far, framed in the current environment), and hence if it is possible to self-actualize in a different way from what the actual state of mind (known or not) rules.

Reinforcement: reward and punishment

Conditioning, or reinforcement learning, is a way to induce a particular behavior. Behavior rewarded with a pleasant stimulus tends to be repeated, while behavior punished by an unpleasant stimulus tends to be suppressed. The more often such a pairing occurs, the more the relation will be internalized, such that it can take the shape of a condition-action (stimulus-response) rule. This differential or selective reinforcement occurs in a process of socialization; the affirmation need not be a material reward, a simple acknowledgement and confirmation suffices (smile, thumbs up, like!); these signals suffice for the release of dopamine in the brain. 'Social interaction is a nearly ubiquitous source of such reinforcing stimuli. Therefore, it has a wide-ranging power in shaping our categorizations, associations and behavior. Maintaining this dopamine-releasing and therefore rewarding stimulation requires continuing participation in the social system. That means acting according to the system's rules. Thus, social systems program individuals in part through the same neural mechanisms that create conditioning and addiction. This ensures not only that these individuals automatically and uncritically follow the rules, but that they would feel unhappy if somehow prevented from participating in this on-going social reinforcement game. Immediate reward and punishment are only the simplest mechanisms of reinforcement and conditioning. Reinforcement can also be achieved through rewards or penalties that are anticipated, but that may never occur in reality' (emphasis by the author) [p 8].
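A minimal sketch of this differential reinforcement (my own toy, with invented actions and an invented reward scheme; it is not the paper's model): behaviour that attracts social approval is chosen more and more often, until it effectively behaves like a condition-action rule.

import random

weights = {"conform": 1.0, "deviate": 1.0}            # initial propensities
social_feedback = {"conform": +1.0, "deviate": -1.0}  # smiles and likes vs. frowns

def choose():
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return "deviate"

for _ in range(200):                                  # repeated social interactions
    action = choose()
    weights[action] = max(0.1, weights[action] + 0.1 * social_feedback[action])

print(weights)   # the socially rewarded behaviour ends up dominating the choices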

The power of narratives

People are capable of symbolic cognition and can conceive of situations that have never occurred (to them): 'These imagined situations can function as “virtual” (but therefore not less effective) rewards that reinforce behavior' [p 8]. Narratives (for instance fairy tales) feature characters who are punished or rewarded for their specific behavior. Social systems exploit people's capacity for symbolic cognition using narratives, and hence build on the anticipatory powers of people to maintain and spread themselves. 'Such narratives have the advantage that they are easy to grasp, remember and communicate, because they embed abstract norms, rules and values into sequences of concrete events experienced by concrete individuals with whom the audience can easily empathize (Bruner, 1991; Heylighen, 2009; Oatley, 2002). In this way, virtual rewards that in practice are unreachably remote (like becoming a superstar, president of the USA, or billionaire) become easy to imagine as realities' (emphasis by the author) [p 9]. Narratives can become more believable when communicated via media, celebrities, scripture deemed holy, &c.

Conformist transmission

Reinforcement is more effective when it is repeated more often. Given that social systems are self-reproducing networks of communications (Luhmann, 1995), the information they contain will be heard time and again. Conformist transmission means that you are more liable to adopt an idea, behavior or narrative if it is communicated to you by more individuals; once you have adopted it, you are more likely to convert others to it and to confirm it when others express it. DPB: I agree, and I never thought of it this way: once familiarized with an idea, not only can one become more convinced of it, but one can also become more evangelical about it. In that way an idea spreads quicker if it is more familiar to more people, who then talk about it simultaneously. Now it can become a common opinion; and at that point it becomes more difficult to retain other ideas, up to the point that direct observation can be overruled. Sinterklaas and Zwarte Piet exist!
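A standard way to make 'conformist transmission' quantitative (a sketch under assumed parameters, not taken from the paper) is to let the probability of adopting a variant grow disproportionately with the fraction of people already expressing it; with any conformity exponent above 1, a small majority is amplified until one variant takes over.

import random

N, generations, conformity = 100, 200, 3.0     # conformity > 1 gives a conformist bias
population = ["A"] * 55 + ["B"] * 45           # a slight initial majority for A

for _ in range(generations):
    p = population.count("A") / N
    adopt_a = p ** conformity / (p ** conformity + (1 - p) ** conformity)
    population = ["A" if random.random() < adopt_a else "B" for _ in range(N)]

print(population.count("A"), "of", N, "individuals now express variant A")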

Cognitive dissonance and institutionalized action

People have a preference for coherence in thought and action: 'When an individual has mutually inconsistent beliefs, this creates an unpleasant tension, known as cognitive dissonance; this can be remedied by rejecting or ignoring some of these thoughts, so that the remaining ones are consistent.' This can be used by social systems to suppress non-conformist ideas by having a person act in accordance with the rules of the social system but in conflict with the person's own rules: the conformist actions cannot be denied, and now the person must cull the non-conformist ideas to release the tension [p 10]. 'This mechanism becomes more effective when the actions that confirm the social norms are formalized, ritualized or institutionalized, so that they are repeatedly and unambiguously reinforced' [p 10]. DPB: an illustration is given from [Zizek 2010]: by performing the rituals one becomes religious, because the rituals are the religion. This is an example of a meme: an expression of the core idea; conversely, by repeating the expression one repeats the core idea also, and thereby familiarizes oneself with that idea as it becomes reinforced in one's mind. But that reminds me of the idea of the pencil between the lips making a person happier (held sideways) or unhappier (sticking forward). And to top it off: 'Indeed, the undeniable act of praying to God can only be safeguarded from cognitive dissonance by denying any doubts you may have about the existence of God. This creates a coherence between inner beliefs and socially sanctioned actions, which now come to mutually reinforce each other in an autopoietic closure' [p 10]. DPB: this is the role of dogma in any belief system: the questions that cannot be asked, the no-go areas, &c.

Individuation of Social Systems

Lenartowicz, M., Weinbaum, D., Braathen, P. . The Individuation of Social Systems: A Cognitive Framework . Procedia Computer Science (Elsevier), vol. 88 (pp 15-20) . DOI: 10.1016/j.procs.2016.07.400 . 2016

Abstract

The starting point is formed by the Theory of Individuation (Simondon 1992), the Enactive Theory of Cognition (Di Paolo et al. 2010) and the Theory of Social Systems (Luhmann 1996). The objective is to identify how AI integrates into human society.

1. Introduction

Social systems influence cognitive activities. It is argued that social systems operate as cognitive systems: '.. autonomous, self-organizing loci of agency and cognition, which are distinct from human minds and manifesting behaviors that are irreducible to their aggregations' [p 15]. DPB: I like this (in bold, to end all others) way to formulate the behavior specific to the whole, as opposed to the behavior specific to the individuals therein. It is argued here that these systems individuate in the same way as, and that their mode of operation is analogous to, other processes of life. This paper does not follow some others that take a narrow approach to cognition starting from the architecture of the individual human mind; instead it presents a perspective on cognition that originates from a systemic sociological view, leading to a socio-human cognitive architecture; the role of the individual human being in the establishing of networks and their operation thereafter is reduced. The theory is based on the view of Heraklitus that reality is ontologically a sequence of processes instead of objects, and on Simondon's theory of individuation: 'This results in an understanding of social systems as complex sequences of occurrences of communication (emphasis of the authors), which are capable of becoming consolidated to the degree in which they start to display an emergent adaptive dynamics characteristic to cognitive systems – and to exert influence over their own mind-constituted environment' [p 16]. DPB: this reminds me of my understanding of the landscape of Jobs, where Situations and Interactions take place as sequences of signals uttered and perceived.

2. Individuation of Cognitive Agents

The basis is a shift from an Aristotelian object-oriented ontology to a Heraklitian process-oriented ontology (or rather an ontogenesis); not individuals but individuation is the centerpiece; no individual is assumed to precede these processes; all transformations are secondary to individuation: 'Individuation is a primary formative activity whereas individuals are regarded as merely intermediate and metastable entities, undergoing a continuous process of change' [p 16]. In this view the individual is always changing, and 'always pregnant with not yet actualized and not yet known potentialities of change' [p 16]. DPB: This reminds me of the monadic character of systems: they are very near completion, yet never quite finished and always ready to fight the previous war. Local and contingent interactions achieve ever higher levels of coordination between their constitutive elements; the resulting entities become ever more complex and can have agency. Cognition can be seen as a process of sense-making; cognition can facilitate the formation of boundaries (distinctions). This is explained by the theory of enactive cognition that treats sense-making as a primary activity of cognition (Varela, Thompson & Rosch 1992; Stewart, Gapenne & Di Paolo 2010; De Jaegher & Di Paolo 2007). This idea is radicalized in this paper: sense-making is assumed to be the bringing forth of distinctions, objects and relations; sense-making precedes subjects and objects and is necessary for their emergence; sense-making precedes the existence of consolidated cognitive agents to whom the activity itself would conventionally be attributed. DPB: this firstly reminds me of the phrase 'love is in the air, even if there is nobody there yet'; 'processes of individuation constitute a distributed and progressively more coherent (as boundaries and distinctions are formed) loci of autonomous cognitive activity' [pp. 16-7]; also the process of individuation precedes the process of autopoiesis: the latter cannot exist as a work in progress, but individuation occurs also without autopoiesis; and so autopoiesis can only be a design condition of a process that has already individuated. In this way individuation is taken from its narrow psychological context and projected to a general systems application: 'Sense-making entails crossing the boundary between the unknown and the known through the formation of tentative perceptions and actions consolidating them together into more or less stable conceptions (emphasis by the author)' [p 17]. DPB: this is a useful working definition of sense-making; these processes are relevant not just for psychic and social processes, I believe they have their root (and started in some form once) as chemical and physical processes, for which the above terminology does not seem fully suitable; from that point on, these multitudes of elements 'grew up' together and became ever more complex. 'Individuation as an on-going formative process, manifests in the co-determining interactions taking place within the heterogeneous populations of interacting agents. These populations are the 'raw materials' from which new individuals emerge. The sense-making activities are distributed over the population and have no center of regulated activity or synchrony. Coordination – the recurrent mutual regulation of behaviors – is achieved via interactions that are initially contingent. These interactions are necessary for the consolidation of any organized entity or system' [p 17].

3. Social Systems as Cognitive Individuals

By a social system is meant any metastable form of social activity. DPB: but what is meant by metastable? This is the Luhmann understanding of a social system. This paper 1. demonstrates the individuation of social systems and 2. identifies social systems as the metastable individuals. Events that are the building blocks for social reality happen as single occurrences of communication, each consisting of: 1. a selection of information, 2. a selection of the utterance, and 3. a selection of the understanding. DPB: this is as per my Logistical Model. If and only if the three selections are combined do they form a unity of a communicative event, 'a temporary individual'. 'This means that it distinguishes itself from its environment (i.e. any other processes or events) by the means of three provisional boundaries, which the event sets forth: (a) an 'information-making boundary' between the marked and the unmarked side of the distinction being made (Spencer Brown, 1994), i.e. delineating the selected information (marked – M) and the non-selected one (unmarked – Un-M), (b) a 'semiotic boundary' (Lotman, 2001) between the thus created signified (SD) and a particular signifier selected to carry the information (SR), and (c) a 'sense-making' boundary between the thus created sign (SGN) and the context (CX), i.e. delineating the understanding of information within its situation (Lenartowicz, Weinbaum & Braathen, 2016)' [p 17]. DPB: I am not sure what to do with those three selections; I have not used them, and instead I am working with the selection of some piece of information, while it is uttered, and while it is also perceived (made sense of). I must figure out whether (and how) to use this. Maybe ask ML to clarify how they connect to my logistical model, and especially to the E and the B operators. It is important because it is a chain-link in a chain of events: 'The three selections and corresponding boundaries of an event make the communication available to interact with or to be referred to by another communicative event constituted by another triple selection' [p 17]. DPB: all this sounds a bit artificial, procedural and mechanical: how can this process come about in a natural way? Once recorded and remembered, these elements become available for endless re-use independent of space, time and context (frame). In closed networks of communication, however, they have a tendency to converge into recurrent self-reinforcing patterns, such that they become established and difficult not to be associated with, even if in a negative form or as critique. From the associations of these selected simple forms, complex individuated sequences (social systems) can arise. Through their interactions these systems gain and maintain coherence; as they recur, the probability that the same pattern is repeated is higher than the probability that a completely new pattern is selected. Initially contingent boundaries become self-reinforced and stable. 'On account of their repetition, a social system can be said to develop perceptions (i.e. reappearing selections of information and understanding), actions (i.e. reappearing selections of utterance) and conceptions (percept-action associations) that dynamically bind them. Each such assemblage thus becomes a locus of identifiable cognitive activity, temporarily stabilized within a flux of communication' [p 18].
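To see what the 'triple selection' might look like as a data structure (purely my own illustrative reading, with invented field names and examples, not the authors' formalism), a communicative event can be modelled as a record of the three selections, and an individuating social system as a chain of such events in which each event refers back to an earlier one:

from dataclasses import dataclass
from typing import Optional

@dataclass
class CommunicativeEvent:
    information: str                 # selection 1: what is marked as information
    utterance: str                   # selection 2: the signifier chosen to carry it
    understanding: str               # selection 3: how it is taken up in its context
    refers_to: Optional["CommunicativeEvent"] = None   # link to an earlier event

e1 = CommunicativeEvent("danger nearby", "mbwa!", "warning")
e2 = CommunicativeEvent("warning acknowledged", "nod", "agreement", refers_to=e1)
e3 = CommunicativeEvent("danger has passed", "mbwa-mbwa", "all clear", refers_to=e2)

# A chain of events that keeps referring back to earlier ones is the kind of
# recurrent, self-reinforcing pattern the paper describes as an individuating system.
chain = [e3, e2, e1]
print(len(chain), "linked communicative events")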

4. The Role of Human Cognition

The (three) selections individuating social systems are performed by other cognitive, individuated systems. In a social system that has individuated to a level of stability and coherence, emerging patterns in that system further orient the selections made by people. And reciprocally, the psychic environment of the people facilitates the individuation of the social system by selecting new instances of communication that somehow fit the existing parts. Human beings are indispensable for the continuation of communication and hence for the maintenance of a social system, but they are incapable of influencing the social system, in the sense that one seedling is incapable of influencing the amount of water in a lake. Only when a social system is at the early stages of its individuation and taking shape can it be influenced by individual people: a pattern of a large social system is confirmed by many other communications, and a single deviating communication that does not follow the pattern does not hold sufficient weight to change its course. 'Taking into account a variety of powerful factors that guide all the linguistic activities of humans: (a) the relative simplicity, associative coherence, frequent recurrence of the cognitive operations once they become consolidated in a social system, (b) the rarity of context-free (e.g. completely exploratory and poetic) communications that is reinforced by the density and entanglement of all “language games” in which contemporary humans are all immersed in, and (c) the high level of predictability of human selection-making inputs observable from the sociological standpoint; it will be reasonable to set the boundaries of our modeling of the general phenomena of human cognition in such a way, which delineates the dynamics of two different kinds of individuating cognitive agencies operating at different scales: the human individual and the social system. Instead of reducing all cognitive activities to the human individual we can clearly distinguish cognitive agencies operating at different scales' [p 19]. DPB: I like the three arguments above for the likelihood of patterns to appear in communication, and also the point that human cognition is to some extent built with the (extensive) help of social systems, such that human cognition cannot be fully reduced to the individual itself, but also owes to the social systems in the environment of the individual.

Semiosphere and Cognition

Lenartowicz, M. . Creatures of the Semiosphere – A problematic third party in the ‘humans plus technology’ cognitive architecture of the future global superintelligence . Technological Forecasting and Social Change . January 2017

Abstract

Human beings can exert selective pressure on emerging new life-forms. The theory of the Global Brain argues that the foreseen collective and distributed super-intelligence will include humans as its key beneficiaries. The collective architecture will include both humans and such new technologies. DPB: the selective pressure is on signals, the basic unit of communication: namely on the 'utterances' &c., information and understanding. According to Luhmann a social system is autonomous, and this includes AGI development and the GB. Humans can attempt to nudge and irritate these systems to change course, but the outcome of the evolutionary process cannot be known in advance and is therefore uncertain. This article offers a new combination of existing theories: the theory of the adjacent possible (Kauffman), the semiosphere (Lotman), social systems (Luhmann), and the Theory of Intelligence (Heylighen). The history of the human species can be re-interpreted such that it is not the individual human being but the social systems that are the most advanced intelligence currently operating on Earth.

Locating the Crown of Creation

To assume that the human being is the final feat of evolution is, given evolution's other accomplishments, indefensible. Only our feeling of self-importance makes us believe that we should (and will) remain around forever. Exposing that, and theorizing about what comes next, is therefore justified. 'It seems now that we are starting to abandon yet another undue anthropocentric belief that the Artificial (DPB: including AGI), which is passing through our hands, is in simple opposition to the Natural and, as such, is excluded from the workings of evolution' [p 2]. Because why would passing through human hands be fundamentally different from passing through a chemical or a physical process? There is no design condition with regard to size: 'While the idea does appear fantastic when applied to human beings, for nature such shifts between scales – called meta-system transitions (Turchin 1977, Heylighen 1995) – are nothing new' [p 3]. This is extensively formulated in the theory of the global brain. The crux is an ever thickening and complicating network of communication that humans contribute to and process. According to the global brain theory, the next stage in the evolution of intelligence 'belongs to a complex, adaptive, cognizing network of interconnected agents: humans and technological systems (Heylighen 2015). A thinking, computing, analysing and strategizing, problem-spotting and problem-solving organ of the planet Earth herself' [p 3]. DPB: it appears that there is no environment for an evolutionary stage where the entire (surface of the) Earth is occupied by the same system; who performs the three selection processes? An additional question is whether the passed-on crown will still be in our hands. Anthropomorphism is a constraint when thinking about these long-term questions. Hence an alternative hypothesis: the social systems are the most intelligent systems on Earth at this point.

An Empty Niche in Hunter-Gatherer’s Eden

'Genetically we belong to Eden' [p 4]. Heylighen assumes that the Environment of Evolutionary Adaptedness (a kind of reference for the direction and level of adaptedness of human beings, the environment for which we are fit) is based on the hunter-gatherer era. Their fitness was supported by the development of language and other symbolic means of communication. These came about as a variation of the means for 'exchanging useful information with others' (Heylighen): 'Thus, language has become a functional adaptation of the species and, by proving remarkably useful, it got selected to stay' [p 5]. DPB: In this way language is a feat of biological evolution, adding to the fitness of people, namely through its usefulness. Luhmann's view on language is that it serves a specific role in between the mind and the communication; that surely being one of his more foggy moments: language, from the moment the first 'mbwa' was repeated, came to be autonomous, and hence it was initially selected to stay because it added to the hunter-gatherer's fitness, or at least it was of some use and did not harm her enough to be selected away. But that provides sufficient space for language to develop itself in its own evolutionary process (and not as per Luhmann's special trajectory). The evolution of the swim bladder had advantages for the fish, and in addition it created an 'adjacent possible', namely a new niche for particular bacteria. In the same vein, the development of symbolic means of communication has provided humans with a new feature, and has created an adjacent possible, 'within which new designs of evolution could appear. And, what is most spectacular: this niche was created outside the biosphere, giving rise to what Yuri Lotman (2001, 2005) called the semiosphere' (emphasis by the author) [p 6]. First, this proved to be a pragmatic form of signaling and coordinating actions. Second, it provided an increase of representational capacity. Third, language enabled the building of relations between occurrences of communication, the semiosphere: 'They could refer to, describe, interpret, and evaluate other occurrences of symbolic communication, which have happened before' [p 6]. In that environment of components of communication, new evolutionary forms could assemble (DeLanda), individuate (Simondon 1992; Weinbaum & Veitas 2016), self-organize (Heylighen 1989, 2002) and evolve. And their evolution again created additional adjacent possibles to be occupied by yet other symbolic forms.

Individuation of the Semiospecies

'Therefore, if we consider the development of language as giving rise to the (as yet) empty niche of the semiosphere, it would be the Luhmannian social systems what should be considered the newcomers – the novel forms of life, enabled to emerge and evolve by the adjacent possible' [p 8]. DPB: I annotated here 'sunfall': sounds great but I forgot why. Otherwise it is a good quote to sum up what is explained in the previous paragraph. When it was empty, the semiosphere contained only individual instances of communication for single use, unentangled with others. '.. the 'already not-empty' semiosphere included also complex, lifelike entanglements of such instances, capable of the prolonged perpetuation of their own patterns and of exerting influence onto their own respective environments (Lenartowicz, Weinbaum & Braathen 2016)' [p 8]. These entanglements take place as per the three selections of information, utterance and understanding (Luhmann 2002). When these selections are made, three distinctions are added to the semiosphere: the information-making boundary between the marked space (the information that was selected to be included in the signal) and the unmarked space (what could have been chosen but wasn't, and remains available as an 'adjacent possible' for a next state), the semiotic boundary between the signified and the signifier (the carrier of the information, the form or utterance), and the sense-making boundary between the created sign and the context (the situation against which the understanding was selected, and harnessed because it was selected at the expense of other ways to understand it). DPB: whatever the signal is made of, once it is a sign (information uttered and understood) the next state of the communication is different from its previous state, but not so different that the communication stops. And hence it is individuating, ever more crystallizing the communication monadically! The point Marta makes (and told me she introduced in the NASA article, where I can't find it back) is that the concept of memes connects with this model: they are what hooks the sequences of signals together to become a communication. I am trying to find a suitable example to illustrate this. '.. each of such couplings between two occurrences of communication may be seen as one occurrence 'passing judgment' – or projecting its own constitution – upon another. The combinatorial possibilities of how any single occurrence may be related to by a following one are multiple' [p 10]. DPB: this reminds me of the idea that intention consists in fact of processes of attraction and repulsion. At every state the configuration of properties of the elements / parts is such that its relations seem to favor some and shy away from other possible future states, namely by causing an attraction to some and a repulsion from others. 'In time, the interacting occurrences of communication form ever-complicating streams, in which each occurrence adheres to many others in multiple ways. Gaining in length, 'mass', and coherence, these strings form 'metastable entities in the course of individuation whose defining characteristics change over time but without losing their long term intrinsic coherence and distinctiveness from their milieu' (Lenartowicz, Weinbaum & Braathen 2016)' (emphasis by the authors) [p 11]. DPB: the remark about coherence reminds me firstly of the idea of connotations: loose, associative relations between signs. The semiosphere is the universe of all the occurrences of all the symbolic communication.
It emerged with the first intentionally issued and understood symbol. DPB: can it be that this occurred at the first instance of 2nd-order observation? The issuer of the signal observed and understood that her production of (what was turning out to be) a signal brought about something in another person in the shape of a kind of behavior (or the lack of it: use your knife and fork!), remembered how to produce the signal, and hence deemed it useful to do it again whenever that effect, namely the reaction in the other, was desired by the issuer. Conversely, the perceiver now understands that the issuer has a particular kind of behavioral reaction in mind whenever she issues that signal, and so she remembers it also, and when it is perceived and understood in the future that kind of behavior can be produced (eat with knife and fork, but now very noisily). 'But a semiosphere understood as a simple aggregate of all communicative occurrences happening in the world was bound to be 'empty', as a niche, as long as these communicative occurrences did not relate to one another. If they did not relate, they could not be conserved, and thus had to dissolve momentarily' [p 11]. DPB: I interpret this as meaning that the semiosphere could be filled only after it was possible to repeat the use of the signals, and I assume it also means that it is then required to start using them in each other's context, such that they can be constructed by framing/deframing/reframing them (Luhmann 2002). The repetition allowed the individuation of language and communication to take place; stigmergy provided a memory for the objects and places of interest for the hunter-gatherer communities. 'As a result, the boundaries of social systems were practically equal to the topological boundaries delineating the groups of people who were trained in their processing: if anyone was going to reinforce a certain communication by referring to it within the close circle of its eye and ear witnesses' [p 13]. DPB: this is how we do things around here, and if you act like this you surely can only be one of them. When the use of symbols first occurred is uncertain, but it was at least prior to the earliest cave paintings, 40,000 years ago.

A superintelligence which goes unnoticed

The above can be summarized in the statement that assemblages of symbols can self-organize and individuate into creatures of the semiosphere. The next step is the statement that these creatures behave intelligently, given that: 'The thought experiment proposed here is different (to considering the preponderance of the intelligence of a group of people over that of a number of individuals, DPB). It is to consider the intelligence of the self-organizing streams of communication delineated in such a way, which treats the human species as their environment' [p 14]. DPB: I have referred to this condition of people with regard to their relation to communication or memeplexes as a substrate. Should I replace the more unfriendly 'substrate' with 'environment'? The definition of intelligence of Heylighen is used: '.. not abstract reasoning (agree, DPB), thinking (this is Weaver's approach, DPB), or computing (this is my approach, but meant in the sense of information processing). It is rather directing and coordinating the actions of an organism within its environment' [p 14]. DPB: I am not sure of the relevance of the concept of intelligence for my research subject. As it is defined here it is similar to the capacity to anticipate, namely to reduce the uncertainties coming from the environment. In the same vein it can be stated that intelligence is the processing of information from outside so as to steer the operations of a system and keep its autopoiesis intact. The article refers to Heylighen 2014, who points at fitness, but I am not so sure about that concept: it is a constant: a level of performance of the internal operations which is required to have the smallest possible advantage in the real over the entities in the environment. I don't know. The concept of environmental fitness might be explained by this model of three layers: 1. the environment which is referred to by the communication, 2. other occurrences of symbolic communication, and 3. the substrate needed for the operating of the system, namely through uttering, memory, selection making, &c. 'Once a communication is immortalized through writing, print, digitalization, or another form of recording, it may as well wait decades or centuries for its follower' [p 16]. DPB: my annotation says stigmergy, but I don't think that is what is intended with that concept. It reminds me of the way people can interact in my Logistical Model: there is no reason this should be 'live', or at the same location, or even at the same time. In other words: to read a book is logically a way to interact with the author of the book. This admittedly feels asymmetrical, because it is a one-way thing: you cannot talk back at the author to let her know your response. It is a signal that damages the reader but not the other way around. And there is also no 2nd-order observation in place. But: it is a signal, a meme changes state, and so at least it is a bubble. 'Symbols, narratives, context, and operational consequences can be always restored. This suggests that while, in the most general sense, the environmental fitness of any 'semiocreature' hinges on the ability to attract and tie successive occurrences of communication, this process does not have to be continuous nor instant' [p 16]. DPB: I am curious about the 'tying': that is represented by my connotations. 'What is less frequently realised is that the (re-)presentations are potentially stoppable at any time through a simple withdrawal of all reinforcing communication-making activity on the human side.
But this seems to be about the only possible way of dismantling them, as occurrences of communication do reinforce the (re-)presentations of social systems even if they aim to criticize, challenge, or modify them. 'Semiocreatures' which are being spoken of are never dead' [p 17]. DPB: this reminds me of the saying that any publicity is good publicity. Also, this is why some politicians remain popular for an unimaginably long time. Lastly, this refers to the idea of familiarization: when referred to more often, an idea stays top of mind, but if referred to less often it becomes less and less 'readily available' (paraat). Perhaps the idea is not realised so often (as per the above) because, according to Spinoza, people can't help themselves and they must talk. (With a reference to the ability to deal out stuff to people that is to the advantage of the dealer and not of the person:) 'If intelligence is measured by the ability to safeguard and increase one's own environmental fitness, when confronted with a 'semiocreature', we are quite fast to give it up' [p 18].

The Way we are Free

‘The Way we are Free’ . David R. Weinbaum (Weaver) . ECCO . VUB . 2017

Abstract: ‘It traces the experience of choice to an epistemic gap inherent in mental processes due to them being based on physically realized computational processes. This gap weakens the grasp of determinism and allows for an effective kind of freedom. A new meaning of freedom is explored and shown to resolve the fundamental riddles of free will, ..’. The supposed train of thought from this summary:

  1. (Physically realized) computational processes underpin mental processes
  2. These computational processes are deterministic
  3. These computational processes are not part of people’s cognitive domain: there is an epistemic gap between them
  4. The epistemic gap between the deterministic computational processes and the cognitive processes weakens the ‘grasp of determinism’ (this must logically imply that the resulting cognitive processes are to some extent based on stochastic processes)
  5. The weakened grasp leads to an ‘effective kind of freedom’ (but what is an effective kind of freedom? Maybe it is not really freedom but something that has the effect of it: a de facto freedom, or the feeling of freedom).
  6. We can be free in a particular way (and hence the title).

First off: the concept of an epistemic gap resembles the concept of a moral gap. Is it the same concept?

p 3: ‘This gap, it will be argued, allows for a sense of freedom which is not epiphenomenal,..’ (a kind of by-product). The issue is of course ‘a sense of freedom’: it must be something that can be perceived by the beholder. The question is whether this is real freedom or a mere sense of freedom, and whether there is a difference between the two.

‘The thesis of determinism about actions is that every action is determined by antecedently sufficient causal conditions. For every action the causal conditions of the action in that context are sufficient to produce that action. Thus, where actions are concerned, nothing could happen differently from the way it does in fact happen. The thesis of free will, sometimes called “libertarianism”, states that some actions, at least, are such that antecedent causal conditions of the action are not causally sufficient to produce the action. Granted that the action did occur, and it did occur for a reason, all the same, the agent could have done something else, given the same antecedents of the action’ [Searle 2001]. In other (my, DPB) words: for all deterministic processes the direction of the causality is dictated by the cause-and-effect relation, but for choices produced from a state of free will other actions (decisions) are possible, because the causes are not sufficient to produce the action. Causes are typically difficult to deal with in a practical sense, because some outcome must be related to its causes, and this can only be done after the outcome has occurred. Usually the causes for that outcome are very difficult to identify, because what is required is an ‘if and only if’ relation. In addition, a cause is usually a kind of scatter of processes within some given contour or pattern, one of which must then ‘take the blame’ as the cause.

‘There is no question that we have experiences of the sort that I have been calling experiences of the gap; that is, we experience our own normal voluntary actions in such a way that we sense alternative possibilities of actions open to us, and we sense that the psychological antecedents of the action are not sufficient to fix the action. Notice that on this account the problem of free will arises only for consciousness, and it arises only for volitional or active consciousness; it does not arise for perceptual consciousness’ [Searle 2001]. This means that a choice is made even though the psychological conditions to make ‘the perfect choice’ are not satisfied: information is incomplete, or a frivolous choice is made: ‘should I order a pop-soda or chocolate milk?’. ‘The gap is a real psychological phenomenon, but if it is a real phenomenon that makes a difference in the world, it must have a neurobiological correlate’ [Searle 2001]. Our options seem equal to us and we can make a choice between them on a just-so basis (the Dutch idiom ‘god zegene de greep’, roughly ‘take pot luck’). Is it therefore not also possible that when people are aware of these limitations, they have a greater sense of freedom to make a choice within the parameters known and available to them?

‘It says that psychological processes of rational decision making do not really matter. The entire system is deterministic at the bottom level, and the idea that the top level has an element of freedom is simply a systematic illusion… If hypothesis 1 is true, then every muscle movement as well as every conscious thought, including the conscious experience of the gap, the experience of “free” decision making, is entirely fixed in advance; and the only thing we can say about psychological indeterminism at the higher level is that it gives us a systematic illusion of free will. The thesis is epiphenomenalistic in this respect: there is a feature of our conscious life, rational decision making and trying to carry out the decision, where we experience the gap and we experience the processes as making a causal difference to our behavior, but they do not in fact make any difference. The bodily movements were going to be exactly the same regardless of how these processes occurred’ [Searle 2001]. The argument above presupposes a connection between determinism and inevitability, although the environment is not mentioned in the quote. This appears to be flawed, because there is no such connection; I have discussed this (ad nauseam) in the essay Free Will Ltd, borrowing amply from Dennett (i.a. Freedom Evolves). The above quote can be summarized as: if the local rules are determined, then the whole system is determined; its future must be knowable, its behavior unavoidable and its states and effects inevitable. In that scenario our will is not free, our choices are not serious, and the mental processes (computation) are a mere byproduct of deterministic processes. However, consider this relevant argument developed by Dennett (a minimal simulation sketch follows the list):

  • In some deterministic worlds avoiders exist that avoid damage
  • And so in some deterministic worlds some things are avoided
  • What is avoided is avoidable or ‘evitable’ (the opposite of inevitable)
  • And so in some deterministic worlds not everything is inevitable
  • And so determinism does not imply inevitability
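
As a side note (mine, not from the paper or from Dennett’s text): the avoider argument can be made concrete with a toy simulation. Everything below is an illustrative assumption: a one-dimensional world, a hazard that advances by a fixed rule, and an avoider that deterministically steps away when the hazard gets close. Every state follows necessarily from the previous one, yet the collision never happens; it is avoided and therefore ‘evitable’.

```python
# Minimal sketch (illustrative, not from the paper): a fully deterministic
# toy world in which an 'avoider' nevertheless avoids harm, showing that
# determinism does not imply inevitability. All rules and numbers are
# assumptions made up for this example.

def step(avoider_pos: int, hazard_pos: int) -> tuple[int, int]:
    """One deterministic update: the hazard advances, the avoider flees."""
    hazard_pos += 1                      # the hazard always moves right
    if hazard_pos >= avoider_pos - 1:    # the avoider reacts to proximity
        avoider_pos += 2                 # deterministic avoidance move
    return avoider_pos, hazard_pos

def harm_avoided(steps: int = 100) -> bool:
    avoider, hazard = 5, 0
    for _ in range(steps):
        avoider, hazard = step(avoider, hazard)
        if avoider == hazard:
            return False                 # harm occurred after all
    return True                          # the collision was avoided

print("harm avoided:", harm_avoided())   # -> True: an 'evitable' outcome
```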

‘Maybe this is how it will turn out, but if so, the hypothesis seems to me to run against everything we know about evolution. It would have the consequence that the incredibly elaborate, complex, sensitive, and – above all – biologically expensive system of human and animal conscious rational decision making would actually make no difference whatever to the life and survival of the organisms’ [Searle 2001]. But the underlying argument cannot logically be true, and as a consequence nothing is wasted so far.

‘In the case that t2>t1, it can be said that a time interval T=t2-t1 is necessary for the causal circumstance C to develop (possibly through a chain of intermediate effects) into E. .. The time interval T needed for the process of producing E is therefore an integral part of the causal circumstance that necessitates the eventual effect E. .. We would like to think about C as an event or a compound set of events and conditions. The time interval T is neither an event nor a condition’ [p 9-10]. This argument turns out to be a bit of a sideline, but I defend the position that time is not an autonomous parameter but a derivative of ‘clicks’, of changes in the relations with neighboring systems. This quote covers it perfectly: ‘Time intervals are measured by counting events’ [p 9]. And this argues exactly the opposite: ‘Only if interval T is somehow filled by other events such as the displacement of the hands of a clock, or the cyclic motions of heavenly bodies, it can be said to exist’ [p 9], because here time is the leading parameter and events such as the moving of the hands of a clock are the product. This appears to be the world explained upside down (the intentions seem right): ‘If these events are also regularly occurring and countable, T can even be measured by counting these regular events. If no event whatsoever can be observed to occur between t1 and t2, how can one possibly tell that there is a temporal difference between them, that any time has passed at all? T becoming part of C should mean therefore that a nonzero number N of events must occur in the course of E being produced from C’ [p 9]. My argument is that if a number of events leads from C to the irreversible state E, then apparently a time period T has passed. Conversely, if nothing irreversible takes place, then no time passes, because time is defined by ‘clicks’ occurring, not the other way around. Note that footnote 2 on page 9 explains the concept of a ‘click’ between systems in different words.
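
To make the ‘time as counted clicks’ reading concrete, here is a minimal sketch of my own (not from the paper), in the spirit of a logical clock: the interval T between two states is nothing more than the number of irreversible events registered between them, so with zero events no time has passed. All names are illustrative.

```python
# Minimal sketch (mine, not from the paper): time measured purely by
# counting 'clicks', i.e. irreversible changes in the relations with
# neighbouring systems. No events, no time. Names are illustrative.

class ClickClock:
    def __init__(self) -> None:
        self.clicks = 0                  # irreversible events counted so far

    def click(self, event: str) -> None:
        """Register one irreversible change ('click')."""
        self.clicks += 1

clock = ClickClock()
t1 = clock.clicks                        # state C
for event in ["signal received", "state changed", "response uttered"]:
    clock.click(event)
t2 = clock.clicks                        # state E
print("T =", t2 - t1)                    # -> T = 3: time 'passed' only as a count of clicks
```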

The concepts of Effective T and Neutral T distinguish a system developing from C to E while conditions from outside the system are injected during T, from a system developing to E from its own initial conditions alone. Note that this formulation is different from Weaver’s argument, because t is not a term in it. So Weaver arrives at the right conclusion, namely that the chain of events of Effective T leads to a breakdown of the relation between deterministic rules and predictability [p 10], but apparently for the wrong reasons. Note also that Neutral T is sterile, because in practical terms it never occurs. This is probably an argument against the use of Turing completeness with regard to the modeling of organizations as units of computation: in reality a myriad of signals is injected into (and emitted from) a system; there is not a single algorithm starting from some set of initial conditions, but a rather messy and diffuse environment.

‘Furthermore, though the deterministic relation (of a computational process, DPB) is understood as a general lawful relation, in the case of computational processes, the unique instances are the significant ones. Those particular instances, though being generally determined a priori, cannot be known prior to concluding their particular instance of computation. It follows therefore that in the case of computational processes, determinism is in some deep sense unsatisfactory. The knowledge of (C, P) still leaves us in darkness in regards to E during the time interval T while the computation takes place. This interval represents if so an epistemic gap. A gap during which the fact that E is determined by (C, P) does not imply that E is known or can be known, inferred, implied or predicted in the same manner that fire implies the knowledge of smoke even before smoke appears. It can be said if so that within the epistemic gap, E is determined yet actually it is unknown and cannot be known’ [p 13]. Why is this problematic? The terms are clear, there is no stochastic element; it takes time to compute, but the solution is determined prior to the finalization of the computation. Only if the input or the rules change during the computation does the outcome become incomputable or irrelevant. In other words: if the outcome E can be avoided, then E is avoidable and the future of the system is not determined.
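
My own sketch of the epistemic gap (not Weaver’s code): the outcome E below is completely determined the moment the initial condition C and the program P are fixed, yet there is no way to know E other than running the computation; the loop itself is the interval T. The rule P is an arbitrary illustrative stand-in.

```python
# Minimal sketch (illustrative): E is fixed by (C, P) from the outset, but
# remains unknown until the computation has actually been carried out.

def P(x: int) -> int:
    """An arbitrary deterministic rule, standing in for the program P."""
    return (x * x + 7) % 1000003

def compute_E(C: int, steps: int) -> int:
    """Iterate P a fixed number of times: E = P applied 'steps' times to C."""
    x = C
    for _ in range(steps):               # this loop is the epistemic gap T
        x = P(x)
    return x

C = 42
E = compute_E(C, steps=1_000_000)
print(E)  # determined at the moment (C, P) was fixed, knowable only now
```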

‘.., still it is more than plausible that mental states develop in time in correspondence to the computational processes to which they are correlated. In other words, mental processes can be said to be temporally aligned to the neural processes that realize them’ [p 14]. What does temporally aligned mean? I agree if it means that these processes develop following, or along, the same sequence of events. I do not agree if it means that time (as a driver of change) has the same effect on either of the processes, computational (physical) and mental (psychological): time has no effect.

During gap T the status of E is determined by conditions C and P, but its specifics remain unknown to anyone during T (suppose it is in my brain: then I of all people would be the one to know, and I don’t). And at t2, T having passed, any freedom of choice is in retrospect, E now being known. In the article, t1 and t2 are defined as the begin state and the end state of some computational system. If t1 is defined as the moment when an external signal is perceived by the system, and t2 as the moment at which a response is communicated by the system to Self and to the outside, then the epistemic gap is ‘the moral gap’. This phrase refers to the time that lapses between the perception of an input signal and the communicating of the decision to Self and others. The ‘moral’ comes from the idea that the message is ‘prepared in draft’ and tested against a moral frame of reference before being communicated. The moral gap exists because the human brain needs time to compute and process the input information and formulate an answer. The Self can be seen as the spokesperson, functionally a layer on top of the other functions of the brain, and it takes time to make the computation and formulate its communication to Self and to external entities.

After t1 the situation unfolds as follows: ‘Within the time interval T between t1 and t2, the status of the resulting mental event or action is unknown because, as explained, it is within the epistemic gap. This is true in spite of the fact that the determining setup (C, P) is already set at time t1 (ftn 5), and therefore it can be said that E is already determined at t1. Before time t2, however, there can be no knowledge whether E or its opposite or any other event in <E> would be the actual outcome of the process’ [p 17]. E is determined but not known. But Weaver counter-argues: ‘While in the epistemic gap, the person indeed is going through a change, a computation of a deliberative process is taking place. But as the change unfolds, either E or otherwise can still happen at time t2 and in this sense the outcome is yet to be determined (emphasis by the author). The epistemic gap is a sort of a limbo state where the outcome E of the mental process is both determined (generally) and not determined (particularly)’ [p 17]. The outcome E is determined but unknown to Self and to God; God knows it is determined, but Self is not aware of this. In this sense it can also be treated as a change of perspective, from the local observer to a distant, more objective observer.

During the epistemic gap another signal can be input into the system and set up for computation. The second computation can interrupt the one running during the gap, or the first one is paused, or they run in parallel. Whatever the case may be, it is possible that E never in fact takes place. While E was determined by C at t1, at t2 not E but another outcome takes place, namely the outcome of another computation that replaced the initial one. If C, E and P are specific to one computation and started by it, then origination is an empty phrase, because a little tunnel of information processing is started and nothing interferes with it. If they are not, then new external input is required which specifies a C1, and then (as per the first part of this sentence) a new ‘tunnel’ is opened.

This I find interesting: ‘Moreover, we can claim that the knowledge brought forth by the person at t2, be it a mental state or an action, is unique and original. This uniqueness and originality are enough to lend substance to the authorship of the person and therefore to the origination at the core of her choice. Also, at least in some sense, the author carrying out the process can be credited or held responsible to the mental state or action E, him being the agent without whom E could not be brought forth’ [p 18]. The uniqueness of the computational procedure of an individual makes her the author, and she can be held responsible for the outcome. Does this hold even if it is presupposed that her thoughts, namely computational processes, are guided by memes? Is her interpretation of the embedded ideas and her computation of the rules sufficiently personal to mark them as ‘hers’?

This is the summary of the definition of the freedom argued here: ‘The kind of freedom argued for here is not rooted in .., but rather in the very mundane process of bringing forth the genuine and unique knowledge inherent in E that was not available otherwise. It can be said that in any such act of freedom a person describes and defines herself anew. When making a choice, any choice, a person may become conscious to how the choice defines who he is at the moment it is made. He may become conscious to the fact that the knowledge of the choice irreversibly changed him. Clearly this moment of coming to know one’s choice is indeed a moment of surprise and wonderment, because it could not be known beforehand what this choice might be. If it was, this wouldn’t be a moment of choice at all and one could have looked backward and find when the actual choice had been made. At the very moment of coming to know the choice that was made, reflections such as ‘I could have chosen otherwise’ are not valid anymore. At that very moment the particular instance of freedom within the gap disappears and responsibility begins. This responsibility reflects the manner by which the person was changed by the choice made’ [pp 18-9]. The author claims that it is not a reduced kind of freedom, but a full version, because: ‘First, it is coherent and consistent with the wider understanding we have about the world involving the concept of determinism. Second, it is consistent with our experience of freedom while we are in the process of deliberation. Third, we can now argue that our choices are effective in the world and not epiphenomenal. Furthermore, evolution in general and each person’s unique experience and wisdom are critical factors in shaping the mental processes of deliberation’ [p 19]. Another critique could be that this is a strictly personal experience of freedom, perhaps even in a psychological sense. What about physical and social elements; in other words: how would Zeus think about it?

This is why it is called freedom: ‘Freedom of the will in its classic sense is a confusion arising from our deeply ingrained need for control. The classic problem of free will is the problem of whether or not we are inherently able to control a given life situation. Origination in the classic sense is the ultimate control status. The sense of freedom argued here leaves behind the need for control. The meaning of being free has to do with (consciously observing) the unfolding of who we are while being in the gap, the transition from a state of not knowing into a state of knowing, that is. It can be said that it is not the choice being originated by me but rather it is I, through choice, who is being continuously originated as the person that I am. The meaning of such freedom is not centered around control but rather around the novelty and uniqueness as they arise within each and every choice as one’s truthful expression of being’ [p 20]. But in this sense there is no control over the situation, and given that the need for control is relinquished, this allows one to be free.

‘An interesting result regarding freedom follows: a person’s choice is free if and only if she is the first to produce E. This is why it is not an unfamiliar experience that when we are in contact with persons that are slower than us in reading the situation and computing proper responses, we experience an expansion of our freedom and genuineness, while when we are in contact with persons that are faster than us, we experience that our freedom diminishes.

Freedom can then be understood as a dynamic property closely related to computation means and distribution of information. A person cannot expect to be free in the same manner in different situations. When one’s mental states and actions are often predicted in advance by others who naturally use these predictions while interacting with him, one’s freedom is diminished to the point where no genuine unfolding of his being is possible at all. The person becomes a subject to a priori determined conditions imposed on him. He will probably experience himself being trapped in a situation that does not allow him any genuine expression. He loses the capacity to originate because somebody or something already knows what will happen. In everyday life, what rescues our freedom is that we are all more or less equally competent in predicting each other’s future states and actions. Furthermore, the computational procedures that implement our theories of mind are far from accurate or complete. They are more like an elaborate guess work with some probability of producing accurate predictions. Within such circumstances, freedom is still often viable. But this may soon radically change by the advent of neural and cognitive technologies. In fact it is already in a process of a profound change.

In simple terms, the combination of all these factors will make persons much more predictable to others and will have the effect of overall diminishing the number of instances of operating within an epistemic gap and therefore the conditions favorable to personal freedom. The implications on freedom as described here are that in the future people able to augment their mental processes to enjoy higher computing resources and more access to information will become freer than others who enjoy less computing resources and access to information. Persons who will succeed to keep sensitive information regarding their minute to minute life happenings and their mental states secured and private will be freer than those who are not. A future digital divide will be translated into a divide in freedom’ [pp 23-6].

I too believe that our free will is limited, but for additional and different reasons, namely the doings of memes. I do believe that Weaver has a point with his argument about the experience of freedom in the gap (which I had come to know as the ‘moral gap’) and the consequences it can have for our dealings with AI. There my critique would be that the AI is assumed to be exactly the same as people, but with two exceptions: 1) the explicit argument that it computes much faster than people, and 2) the implicit argument that people experience their unique make-up such that they are confirmed by it in every computation they make; this experience represents their freedom. People would then have a unique experience of freedom that an AI can never attain, providing them a ticket to relevance among AI. I’m not sure that, if argument 2 is true, argument 1 can be valid as well.

I agree with this, also in the sense of the coevalness between individuals and firms. If firms do their homework so as to prepare their interactions with the people associated with them, they will come out better prepared. As a result people will feel small and objectivised: the firm is capable of computing the outcome before you do, hence predicting your future and limiting your perceived possibilities. However, this is still the result of a personal and subjective experience and not an objective fact, namely that the outcome is as they say, not as you say.

Cultural Evolution of the Firm

Weeks, J. and Galunic, Ch. . A Theory of the Cultural Evolution of the Firm: The Intra-Organizational Ecology of Memes . Organization Studies 24(8): 1309-1352 . SAGE Publications, London, Thousand Oaks, CA & New Delhi . 2003

A theory of the cultural evolution of the firm is proposed. Evolutionary and cultural thinking is applied to the questions: What are firms and why do they exist? It is argued that firms are best thought of as cultures, as ‘social distributions of modes of thought and forms of externalization’. This culture encompasses cultural modes of thought (ideas, beliefs, assumptions, values, interpretative schemas, and know-how). Members of a group enact the memes they have acquired as part of the culture. Memes spread from mind to mind as they are enacted; the resulting cultural patterns are observed and interpreted by others. This refers to the meeting of content and process: as memes are enacted, the ‘physical’ topology of the culture changes, and as a consequence the context for the decisions of others changes. Variation in memes occurs through interpretation during communication and through re-interpretation in different contexts. The approach of taking the meme’s-eye view allows a descriptive and non-normative theory of firms.

Introduction

Firm theory: why do we have firms (and to what extent do they have us)? Firms have a cultural influence on people, and that is why it is difficult to answer the question of why firms exist: we believe we need them because we were schooled in believing that. ‘They serve our purposes because they have a hand in defining those purposes and evaluating their achievement’ (p 1309). Assuming this is true, a functionalist approach, treating firms as if they are people’s tools, doesn’t help to understand why firms function as they do. It is not sufficient to start from a normative model and explain away the rest as noise, as is the common practice among firm theorists; as a start they assume that firms should exist (for instance because of a supposed performance advantage over market forms of coordination) and that these theoretical advantages pan out in practice. It is argued that a truly descriptive theory of the firm takes seriously the idea that firms are fundamentally cultural in nature and that culture evolves.

Existing theories of the firm

1) Transaction cost economics (Coase, Williamson): individuals will organize in a firm rather than contract in a market because firms are efficient contractual instruments; this organization economizes on transaction costs. A contender is knowledge-based firm theory (Conner and Prahalad, Kogut and Zander, Grant), positing that firms are better than markets at applying and integrating knowledge to business activity. These theories are complementary in the sense that they share the idea that business organizations exist because they offer some economic advantage to members. The proposed theory makes a further attempt at enhancing purely economic theories of the firm: it reaches beyond the idea of a firm as a knowledge-bearing entity to that of a culture-bearing entity, where culture is a much wider set of ideas than mere knowledge. In addition it is required to understand that some elements will enhance the organization’s performance and further the interests of its members, and others will not; the theory must explain both. The theory must also explain how a firm evolves functionally if it is not towards an optimum in the best of possible worlds with aberrations minimized.

Defining Characteristics of the Firm

In transaction cost economics, the difference between a market and a firm is defined by authority (Coase). If B is hired by A to reduce the transaction costs of the market, then A controls the performance of B and hierarchy is introduced, whereas in a market A and B are autonomous: hierarchies and markets differ in how they exert control. The word ‘firm’ denotes the name under which the business of a commercial house is transacted, its symbol of identity (Oxford English Dictionary). It came to refer to a partnership for carrying on a business and then expanded to a broad definition of any sort of business organization. Hierarchy is common in business organizations, but it is not the defining attribute. The defining difference between market and firm is not only control but also identity; this is a key insight of the knowledge-based view (Kogut and Zander 1992). People express this identity in their shared culture (Kogut and Zander 1996); the identity reflects participation in a shared culture. The knowledge-based view claims that it is this shared culture that affords firms their lower transaction costs compared to the market. However, culture is left exogenous in both the knowledge-based theory and the transaction-cost theory; culture is presupposed in both.

Assumptions

Bounded rationality: only if people were fully rational would the neo-classical assumption of rationality be justified; in that case the organizational advantage over markets would be limited and this assumption of transaction cost economics would be invalid. Only if agents are unable to construct complete contracts with one another as autonomous agents is it valid. Similarly, if no threat of opportunism exists and everybody is fully trustworthy (and known to be so), then organizations bring no additional advantage over markets: market operations and firm operations imply the same transaction costs. Because this element is of a weak form (it suffices if some agents are unreliable), it is a realistic assumption. The third assumption is functionalism: not only should transaction costs be economized, but given time and sufficient competitive forces they will be (Williamson and Ouchi 1981: 363-364 suggest 10 years). However, for transaction cost theory to be descriptive, it needs an explanation of how the efficiencies of economizing on transaction costs are identified and realized: how do economic agents know the origins and the effects of these costs, and how do they know how to economize on them? This requires strong assumptions of neo-classical competition and human rationality. The knowledge-based firm theory is also functionalist, and it is assumed that: 1) the interests of the individual and the enterprise are aligned, and 2) individuals can and will always identify the relation between performance and organizational form (firm versus market) when deciding whether to establish a firm, or else will be selected out in time. Firms are theorized to do better than markets at sharing and transferring knowledge between members of the organization, individuals and groups, because of the shared identity. This shared identity is built through culture, and this takes time; not only does it allow the capturing of specific knowledge, it also limits the kind of knowledge that can be captured and exploited in the future.

An evolutionary model is more suitable: firms evolve as cultures, and this need not be functional from the point of view of the organization as a whole. Cultural patterns do not necessarily arise in a social group because they benefit the members of the group equally: power may result in some members benefiting more than others, some elements of organizations, even though carefully managed, do not benefit every member equally, and some elements seem not to benefit or disadvantage anyone. Culture seems to be an emergent phenomenon, and even organizations that were created for specific purposes tend not to dissolve after having met them, but rather tend to adapt their goals to new purposes unforeseen by their founders.

Intra-organizational Perspective

Individuals learn more about organizations the more and the longer they are involved with them, but they are likely not to learn all of it and seldom accept all that is learned. This is called ‘population thinking’ (Ernst Mayr): every member of the organization has an interpretation, resulting in a scatter of cultural elements that they carry and reproduce in a slightly different way. The scatter results in a center of gravity (or a contour) of the prototypical culture of the firm. The interpretation of the culture by each member is a variation on that prototype. None of them might be exactly the same, but they have what Wittgenstein calls a ‘family resemblance’: ‘They share enough of the beliefs and values and meanings and language to be recognized and to recognize themselves as part of the culture’ (p 1316). NB: this prototype resembles the organization of the autopoietic system that keeps it intact as a unity and that gives it its identity such as to allow it to be recognized by an observer. The entire scatter of cultural elements that builds the firm’s culture is the structure; those elements that are dispensable are structure only, those that are not are also part of the organization of the autopoietic system that is the firm’s culture. Complications: 1) How is the social distribution formed and how does it change over time? A theory is needed for the ecology of the cultural elements: how they change as they spread over the organization, and how a flow of new cultural elements enters the firm and has an impact on the existing culture. 2) How do the careers of cultural elements develop over time? Memes refer to cultural modes of thought: values, beliefs, assumptions, know-how, &c. ‘Culture results from the expression of memes, their enactment in patterns of behavior and language and so forth’ (p 1317). When studying the evolution of culture it is important to keep in mind that memes have meaning in the context of other memes.

A firm theory based on knowledge-based firm theory must take into account not only knowledge but culture; it must be evolutionary so as to account for the firms’ changes over time, while a ‘use’ or a ‘purpose’ for some or all of the members of the population is not required for the change to take place.

Memes: The Unit of Cultural Selection

‘What this means is that the overall, intricate patterns of culture that we call firms are not best understood as the result of the conscious and coherent designs of astonishing organizational leaders. Instead, for better or for worse, they emerge step-by-step out of the interactions of intendedly rational people making what sense they can of their various situations, pursuing their various aims, and often acting in ways that they have difficulty explaining, even to themselves’ (p 1318)

The key to evolution in the sense of an algorithm providing selection, variation and retention is that it postulates a population of replicators, but it does not make assumptions about what those replicators can be. Assuming that the environment stays the same, every next generation will be slightly better adapted to that environment than the previous one. Competition is assumed for some scarce resource, be it food, air or human attention. Retention assumes the ability of a replicator to be copied accurately. ‘Firms and markets are cultural entities. They have evolved in the same way any part of culture evolves: through selection, variation and retention of memes. Memes are the replicators in cultural evolution. They are the modes of thought (ideas, assumptions, values, beliefs and know-how) that when they are enacted (as language and other forms of expression) create the macro-level patterns of culture. Memes are units of information stored in the brain that replicate from brain to brain as people observe and interpret their cultural expression. .. Memes are the genes of culture. Just as plants and animals and all biological organisms are the phenotypic expression of particular combinations of genes, so cultural patterns such as firms are the phenotypic expression of particular combinations of memes’ (p 1320)
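
A minimal sketch (mine, not from Weeks and Galunic) of the bare selection-variation-retention algorithm applied to memes competing for the scarce resource of attention; the population size, mutation rate and the single ‘catchiness’ trait are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): memes compete for a
# fixed amount of attention (selection), are copied imperfectly (variation),
# and the copies form the next generation (retention).
import random

random.seed(1)
population = [random.uniform(0.1, 1.0) for _ in range(20)]  # each meme's 'catchiness'
ATTENTION = 20      # scarce resource: only this many enactments are observed
MUTATION = 0.05     # copying error: interpretation is never perfectly faithful

for generation in range(50):
    # selection: catchier memes are more likely to be enacted and observed
    enacted = random.choices(population, weights=population, k=ATTENTION)
    # variation + retention: every observation is an imperfect copy
    population = [min(1.0, max(0.0, meme + random.gauss(0, MUTATION)))
                  for meme in enacted]

print(f"mean catchiness after 50 generations: {sum(population) / len(population):.2f}")
```

Run repeatedly, the mean ‘catchiness’ drifts upward; nothing in the loop optimizes for the good of the population as a whole, which is the meme’s-eye point.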

Small Replicators

Genes are the replicators, not the organism; organisms exist because they are a good way for genes to replicate. Memes are the replicators, not people and not culture. But those memes that are part of firms replicate more than those that aren’t. ‘We have the firms that we do, in other words, not because they are necessarily good for society or good for their members (though often they are both), but fundamentally because they are good ways for memes to replicate themselves’ (p 1321). To study a firm in this sense is the equivalent of studying ecology: selection, but neither variation nor retention. Firms do not replicate themselves in toto; selection, however, is theorized as occurring to this object in its entirety. A unit of selection is required that is smaller than the firm as a whole.

Systemic Elements and Social Phenomena

First premise: memes are small and analytically divisible. Second premise: the environment in which the selection of memes takes place principally includes other memes. Memes build on themselves, and they do so according to the ‘bricoleur principle’ (Lévi-Strauss 1966: 17): building by making use of the materials at hand. Memes are recycled and recombined, informing and constraining the creation of new memes. Some are implicated more than others. NB: here the existence of culture is confused with the existence of memes. The latter are the tools for thought, and culture is built of their enactment. And so memes are the experiments (anything that can be uttered) and culture is their expression in the physical world, whether spoken, gestured or written (anything that is in fact uttered). ‘In firms, these fundamental memes are akin to what Schein (1992) calls basic assumptions. They are deeply held assumptions about the nature of reality and truth, about time and space, and about the nature of human nature, human activity, and human relationships (Schein 1992: pp. 95-6). When these are widely shared in a culture, they tend to be taken for granted and therefore pass unnoticed. They structure the way firm members think of the mission and goals of the firm, its core competencies, and the way things are done in the firm. Often borrowed and reinterpreted from some part of the wider context in which the firm is located, they are central to the identity of the firm and the identity the firm affords its members. The concept of meme must be robust enough to include these taken-for-granted assumptions if it is to serve usefully as the unit of selection in a theory of the cultural evolution of the firm’ (p 1323). NB: This does not explain clearly whence memes come. My premise is that the firm is a cultural pattern originating in the memes that stem from the commonly held beliefs in a society: not that they merely structure goals and mission, but that they are the stuff of them. There is indeed a relation between the memes and the identity of the firm. There is no mention of belief systems, and more specifically of the belief in the idea of progress, à la capitalism, &c.

Why Memes

Meme is the umbrella term for the category containing all cultural modes of thought; memes are cultural modes of thought. The concept preserves the distinction between modes of thought and their forms of externalization: the memes in people’s heads versus the ways they talk and act and the artifacts they produce as a product of enacting those memes. ‘The firm is a product of memes in the way that the fruit fly is the product of genes’ (p 1324): a distinction is possible between particular elements of culture and the memes that correspond to them. ‘Memes, the unit of selection, are in the mind. Culture, on the other hand, is social. Culture reflects the enactment of memes. Culture is a social phenomenon that is produced and continuously reproduced through the words and actions of individuals as they selectively enact the memes in their mind. Culture may be embedded in objects or symbols, but it requires an interpreting mind to have meaning and to be enacted’ (p 1324)

With Memes in Mind

Without human minds to enact it and interpret it, there is no culture: ‘Memes spread as they are replicated in the minds of people perceiving and interpreting the words and actions and artifacts (compare Hannerz 1992: 3-4; Sperber 1996: 25). They vary as they are enacted and reinterpreted’ (p 1324). A change in culture can be seen as a change in the social distribution of the memes among the members of the population carrying that culture. NB: the social distribution trick gets rid of the meme-culture difference. A change in memes produces different enactment, which in turn produces different culture, resulting in different cultural products such as utterances and artifacts. From the existence of phenotypic traits, the existence of genes and their relation to that phenotype (that property) can, with some considerable difficulty, be inferred through a reverse-engineering exercise. The analogous statement is that from cultural features the existence of the particular memes that caused those features can be inferred. This statement is of a statistical nature: ‘He is implicitly saying: there is variation in eye color in the population; other things being equal, a fly with this gene is more likely to have red eyes than a fly without the gene. That is all we ever mean by a gene ‘for’ red eyes’ (p 1325, Dawkins 1982: 21). Concerning the substance of memes and the way it is enacted in culture: ‘Studies of psychological biases (Kahneman and Tversky 1973) can help us to understand ways in which the make-up of our brains themselves may shape the selection of memes’ (p 1326).

The Meme’s-Eye View

The essence is not the survival of the organism but the survival of the genes best capable of reproducing themselves. These statements are usually congruent: whatever works for the organism works for the gene, and the genes best suited to reproduce are inside the fittest organisms. The Malthusian element of Darwin’s theory is that evolution is about selection based on competition for a scarce resource; in the case of memes the scarce resource is human attention. Memes compete to be noticed, to be internalized and to be reproduced. Memes can gain competitive advantage through their recognized contribution to the firm’s performance; misunderstanding or mismanagement can lead to reproduction of the wrong memes by management. If firms were subject to competition and the least successful died out at each generation, then the most successful would thrive in time: ‘We hold that a theory of the firm must be able to explain not why we should have firms, but why we do have the firms (good, bad, and ugly alike) that we have’ (p 1327). NB: This is too modest and I do not agree: before anything can be said about their characteristics, an explanation must be in place for the raison d’être of firms: why does something like a firm exist at all? Why this limitation of the scope of the explanation?

Mechanisms of Selection, Variation, and Retention

Selection. A meme is internalized when the cultural expression corresponding to it is observed and interpreted by a member of the firm. NB: Is not a form of memorization required, such that observation and enactment are independent in time and the meme is ready for enactment? A meme is selected when it is enacted. ‘At any point in time, the pattern of selection events acting on a given variation of memes across the firm defines the ecology of memes in the firm’ (p 1327). NB: Firstly it defines the culture in the firm as the expression of actions, the enactments of the memes hosted by individuals; those enactments in turn harbor memes, and those remain for other members to observe, to interpret and to enact on some occasion. Selective pressures on memes are: function, fit and form. 1) Function: members believe that some function is served when a particular meme is enacted. This is not straightforward, because functionality may be wrongly assessed: reality and the reaction to it are complex, especially given that people are boundedly rational. Events will conspire to ensure that ill-functioning memes are selected against: members notice that they do not lead to the aspired goal and stop reproducing them. If not, they may be removed from their positions, or their part of the firm or the entire firm may be closed. For myriad reasons (p 1328), members may not deviate from their belief in the functional underpinning of a particular meme and keep reproducing it; therefore function is not a strong argument for the selection of memes. 2) Fit: the manner in which a meme fits into a population of other memes; memes that fit with other dominant memes stand a better chance of survival: ‘Institutional theory emphasizes that organizations are open systems – strongly influenced by their environments – but that many of the most fateful forces are the result not of rational pressures for more effective performance but of social and cultural pressures to conform to conventional beliefs’ (Scott 1992: 118 in p 1329). NB: this is crucial: the beliefs deliver memes that deliver culture when they are enacted. The feedback loop is belief > memes > culture > memes > culture, and performance is a cultural by-product. How does the produced culture feed back into the memes? ‘Powell and DiMaggio (1991: 27-28) describe this environment as a system of ‘cultural elements, that is, taken-for-granted beliefs and widely promulgated rules that serve as templates for organizing’. In other words, as a system of memes’ (p 1329). NB: this is a complex of just-so stories guiding everyday practice. ‘The memetic view shares a central assumption with institutional theory: choices and preferences cannot be properly understood outside the cultural and historical frameworks in which they are set (Powell and DiMaggio 1991: 10). Our perspective, our identity, is a cumulative construction of the memes we carry (see Cohen and Levinthal 1990; Le Doux 2002). We are a product of our memes’ (p 1329). NB: this is a long and generalized version of the memes originating in a belief in the idea of progress. ‘By focusing analysis on the social distributions of memes within the firm, rather than assuming the firm is a monolith that adapts uniformly to its competitive or institutional environment, the memetic view suggests that its isomorphism is always imperfect, and that there are always sources of variation that may evolve into important organizational traits’ (p 1330). 
NB: this is the equivalent of the monadic view: as perfect as possible given circumstances and time, but never quite perfect. Also, the identity of the firm is a consequence of the autopoietic organization; the structure it develops adds additional traits to that identity, traits that can be selected away without the firm losing its identity as a unity. 3) Form: memes can be selected for their form; the morphology of genetic expressions may influence reproductive success, and the ease with which an idea can be imitated is correlated with its actual reproductive success (urban legends, disgustingness, sound bites, self-promotion in the sense of piggybacking on others so as to be reproduced more often and in the sense of creating more network externalities (Blackmore on altruism), catchiness, stickiness).

Variation

Novel combinations of memes and altogether new memes. NB: if a memeplex is an autopoietic system, then it is closed to external information. It is a linguistic system: signals are received and trigger the system to react to them, but no information is actually transferred. This implies that memes stay inside the memeplex and that other members, carrying other memeplexes, copy based on what they perceive to be the effect of the meme in another member in their own context. A distinction is made between mutation and migration of memes; the latter does not exist in autopoietic systems. Hiring is a limited source of variation because of the tendency to hire those who are culturally close to the firm as it is, and firing severs the availability of the fired members’ views. Different backgrounds of people in a firm are seen as a source of diversity of memes. NB: how does this idea match autopoiesis?

A difference is pointed out between potential variation and realized variation: the number of new memes that become available to the members of the firm versus the number of new memes that are actually realized. ‘If there is ‘information overload’ and ‘information anxiety’, then it is to a great extent because people cannot confidently enough manage the relationship between the entire cultural inventory and their reasonable personal share in it’ (Hannerz 1992: 32 in p 1332). In this way an increase in the potential memetic variety can lead to a decrease in the realized memetic variety. Whether a relation exists between the potential and the realized in evolving systems is unclear. ‘But an evolutionary perspective, and an understanding of the firm as an ecology of memes, should make us a little more humble about predicting unidirectional outcomes between such things as diversity and performance’ (p 1333). Mutation is a source of variation via misunderstandings; these are in practical terms the rule rather than the exception, especially when memes are conveyed other than via the written or even the spoken word. The final source of variation is recombination: memes move around the group and are then actually recombined. NB: this is the preferred version in an autopoietic system.

Retention

Key elements are 1) longevity, 2) fidelity, and 3) fecundity. 1) Longevity is about the firm reproducing itself through the actions of individuals as they conduct recurring social practices and thereby incorporate and reproduce the constituent rules and ideas, the memes, of the firm. ‘In other words, firm activity is not a fixed object, but a constant pattern of routine activity that reproduces the memes that express these routines’ (p 1335). NB: routine activity in this phrase resembles the organization of an autopoietic system. 2) Fidelity means how accurately memes are copied. This is an advantage over markets. ‘The defining elements of the firm (its characteristic patterns of control and identity) provide for meme retention. Control in firms means that employees accept to a relatively greater degree than in markets that they may be told how to behave and even how to think. They accept, in other words, reproducing certain memes and not others’ (p 1335). NB: this is a key notion: based on this definition of control in firms, this is the effect that firms have as the context (ambience) for their employees: they get to copy certain desired memes and not others. I have a difficulty with the word ACCEPT in this context: how does it relate to the concept of free will and the presumed lack of it? ‘Those memes that become part of the firm’s identity become less susceptible to change (Whetten and Godfrey 1998). Being consistent with dominant memes in the firm becomes a selection factor for other memes, which further reinforces fidelity’ (p 1336). NB: copy-the-product versus copy-the-instruction. 3) Fecundity refers to the extent to which a meme is diffused in the firm. This depends on the mind that the meme currently occupies: the more senior the member, the higher the chance that the meme gets replicated. ‘The cultural apparatus includes all those specializations within the division of labor which somehow aim at affecting minds, temporarily or in an enduring fashion; the people and institutions whose main purpose it is to meddle with our consciousness’ (Hannerz 1992: 83). This was meant to apply to societies (media &c.), but it can be used for firms just the same, especially because it is assumed to be part of the standard outfit of firms that some groups of people meddle with the minds of other groups.
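
A back-of-the-envelope sketch (my own numbers, not from the article) of how the retention factors interact: if each copying generation preserves a meme faithfully with probability ‘fidelity’ and yields ‘fecundity’ copies, the expected number of faithful copies after n generations is roughly (fidelity * fecundity)^n, so the higher fidelity that firms provide through control and shared identity tips a meme from dying out to persisting.

```python
# Minimal sketch (illustrative figures): retention as the product of
# fidelity and fecundity, compounded over copying generations.

def expected_copies(fidelity: float, fecundity: float, generations: int) -> float:
    """Expected number of faithful copies after n copying generations."""
    return (fidelity * fecundity) ** generations

market_like = expected_copies(fidelity=0.55, fecundity=1.5, generations=10)  # ~0.15: dies out
firm_like   = expected_copies(fidelity=0.80, fecundity=1.5, generations=10)  # ~6.2: persists

print(f"market-like copying: {market_like:.2f} expected faithful copies")
print(f"firm-like copying:   {firm_like:.2f} expected faithful copies")
```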

Why Do Firms Exist?

‘Why has the cultural evolution process led to a situation where the memes bundle together as firms?’ (p 1337). The scope of the answer is in the bundling of the memes (into patterns of control and identity) such that they have a competitive advantage over others: why do memes that are part of firms replicate more often than memes that are not part of a firm? NB: Weeks and Galunic are mistakenly assuming that memes in firms benefit their host by offering them an advantage (p 1338). ‘A cultural and evolutionary theory also forces us to recognize that the reasons firms came into existence are not necessarily the reasons this form persists now’ (p 1338). Two questions arise: 1) what are the historical origins of the evolution of the firm, and 2) why does the concept of the firm persist until today? Ad 1 (origins): the idea is that large (US) firms have existed for around 50 years. The concept started with family-run firms and grew from that form to a larger corporate form. As the scale of the business grew it was no longer possible for one man to oversee it, and so management emerged, including the functional areas of production, procurement, &c. ‘From a meme’s-eye view, we would say that these memes produced cultural effects with a tremendous functional selection advantage, but they did so only when bundled with each other. This bundling was made possible by the enacted identity and control memes of the firm. Thus, together, both sets of memes flourished’ (p 1339). ‘In evolutionary terms, this pattern is to be expected. Through bundling, replicators can combine in ways that produce more complex expressions that are better to compete for resources (such as human attention in the case of memes), but this bundling requires some apparatus to be possible. In our case, this apparatus consists of the memes that enact the firm’ (p 1340). NB: Because of their complexity they are better suited to compete, because they better manage to retain bundles of memes for business functions such as production, procurement and distribution. Firms enhanced the faithful reproduction and enactment of those memes; they have reduced variation.

Persistence

‘Once the bundle of memes we call the firm had emerged, the logic of its evolution changed somewhat and the possibility of group selection emerged’ (p 1340). NB: I don’t believe that the concept of the firm has changed since it was initially conceived: it must be mirrored. Also, as an autopoietic system it has to have existed as a unity and an organization from the outset, in whatever slim shape; it cannot ‘emerge’ from nothingness and evolve into something. ‘There is always a balance in any evolving system between the longevity offered by retention at the level of the individual meme and for adaptation at the level of the bundle of memes. The firm emerged because of the reproductive advantages it gave memes, but it persisted because it was also able to provide more effective variation and selection processes’ (p 1340). NB: this is about the diffusion of administrative and managerial processes.

Retention

Firms offer memes advantages of retention as a result of: 1) control: people can be told what to do and what to think, and 2) the identity that employees develop towards their firms, which brings them to hold certain memes close and protect them against different ideas. ‘Control and identity come together in firms by virtue of the legitimacy granted generally by society and specifically by employees to managers of firms to impose and manipulate corporate culture and thus the assumptions, beliefs, values, and roles internalized by employees and enacted by them not only in the organization (when management may be looking to ensure displays of compliance) but outside as well’ (p 1341). NB: I find this still not entirely satisfactory, because I am convinced that the memes carried by management may be somewhat more specialized than those of the people outside the firm, but the general ideas are widely known and carried by members of society. A firm could not exist in a society where some of the memes that compose a firm do not exist or are not believed to be true. ‘Without very much exaggeration we might say that firms are systems of contractual docility. They are structures that ensure, for the most part, that members find it in their self-interest to be tractable, manageable and, above all, teachable’ (p 1341). The economics of a new meme being taken into the memeplex is described as follows: ‘When you can give ideas away and retain them at the same time, you can afford to be generous. In contrast, it is less easy to maintain allegiance to any number of contradictory ideas, and especially to act in line with all of them. Thus, if somebody accepts your ideas and therefore has to discard or reject competing ideas, in belief or in action, he may really be more generous than you are as a donor’ (Hannerz 1992: 104 in p 1341). NB: members protect memes because they are a product of them. Firms, through the efforts of dedicated management to replicate memes with high fidelity and through their firm-specific language, facilitate the retention of memes in the minds of their members.

Apart from control and authority, firms provide identity for their members. At the core of institutional thinking two elements are held: 1) human actors are susceptible to merging their identity with that of the firm, and 2) being an institution presupposes some stable core memes as attractors of social union. Ad 1 (identity): people are inclined to collective enterprise from a need to cooperate (Axelrod 1997) and from a natural tendency to seek and adopt moral order (Durkheim 1984; Weber 1978): ‘This is the sense in which the firms have us as much as we have them: they socialize us, fill our heads with their memes, which shape our sense of identity and which we carry, reproduce, and defend outside the organization as well as inside’ (p 1342). NB: this is where process and content meet: members reproduce the memes provided by the firm, and the enacted memes produce the culture which is the environment on which members base their beliefs about ‘how things are done around here’. The culture is now also the basis for the development of memes; the content has become process. ‘.. the presence of managerially assigned monetary incentives and career progression that motivate the display of adherence to corporate memes; and, not least, the power of leaders to sanction and select out actors who do not abide by corporate values’ (p 1342).

Selection and Variation

Firms offer two sorts of selection and variation advantages to memes: 1) they offer a context that places memes that are potentially beneficial to the firm in closer proximity to one another than is typical in markets (complementary ideas, socially evolving group norms), and 2) they offer the presence of professional management that is motivated and responsible for the creation and enforcement of memes considered beneficial. ‘.. firms have an advantage over markets as superior explorers of design space and thus are better able to create variation through novel recombinations of memes’ (p 1344).

Survey of Schools in Economics

Ecological economics/eco-economics refers to both a transdisciplinary and interdisciplinary field of academic research that aims to address the interdependence and coevolution of human economies and natural ecosystems over time and space.[1] It is distinguished from environmental economics, which is the mainstream economic analysis of the environment, by its treatment of the economy as a subsystem of the ecosystem and its emphasis upon preserving natural capital.[2]

Heterodox economics refers to methodologies or schools of economic thought that are considered outside of “mainstream economics”, often represented by expositors as contrasting with or going beyond neoclassical economics.[1][2] “Heterodox economics” is an umbrella term used to cover various approaches, schools, or traditions. These include socialist, Marxian, institutional, evolutionary, Georgist, Austrian, feminist,[3] social, post-Keynesian (not to be confused with New Keynesian),[2] and ecological economics among others.

Institutional economics focuses on understanding the role of the evolutionary process and the role of institutions in shaping economic behaviour. Its original focus lay in Thorstein Veblen’s instinct-oriented dichotomy between technology on the one side and the “ceremonial” sphere of society on the other. Its name and core elements trace back to a 1919 American Economic Review article by Walton H. Hamilton. Institutional economics emphasizes a broader study of institutions and views markets as a result of the complex interaction of these various institutions (e.g. individuals, firms, states, social norms). The earlier tradition continues today as a leading heterodox approach to economics. Institutional economics focuses on learning, bounded rationality, and evolution (rather than assume stable preferences, rationality and equilibrium). Tastes, along with expectations of the future, habits, and motivations, not only determine the nature of institutions but are limited and shaped by them. If people live and work in institutions on a regular basis, it shapes their world-views. Fundamentally, this traditional institutionalism (and its modern counterpart institutionalist political economy) emphasizes the legal foundations of an economy (see John R. Commons) and the evolutionary, habituated, and volitional processes by which institutions are erected and then changed (see John Dewey, Thorstein Veblen, and Daniel Bromley.)

The vacillations of institutions are necessarily a result of the very incentives created by such institutions, and are thus endogenous. Emphatically, traditional institutionalism is in many ways a response to the current economic orthodoxy; its reintroduction in the form of institutionalist political economy is thus an explicit challenge to neoclassical economics, since it is based on the fundamental premise that neoclassicists oppose: that economics cannot be separated from the political and social system within which it is embedded.

Behavioral economics, along with the related sub-field, behavioral finance, studies the effects of psychological, social, cognitive, and emotional factors on the economic decisions of individuals and institutions and the consequences for market prices, returns, and the resource allocation.[1] Behavioral economics is primarily concerned with the bounds of rationality of economic agents. Behavioral models typically integrate insights from psychology, neuroscience and microeconomic theory; in so doing, these behavioral models cover a range of concepts, methods, and fields.[2][3] Behavioral economics is sometimes discussed as an alternative to neoclassical economics.

Prospect theory

In 1979, Kahneman and Tversky wrote Prospect Theory: An Analysis of Decision Under Risk, an important paper that used cognitive psychology to explain various divergences of economic decision making from neo-classical theory.[12] Prospect theory has two stages, an editing stage and an evaluation stage.

In the editing stage, risky situations are simplified using various heuristics of choice. In the evaluation phase, risky alternatives are evaluated using various psychological principles that include the following:

(1) Reference dependence: When evaluating outcomes, the decision maker has in mind a “reference level”. Outcomes are then compared to the reference point and classified as “gains” if greater than the reference point and “losses” if less than the reference point.

(2) Loss aversion: Losses bite more than equivalent gains. In their 1979 paper in Econometrica, Kahneman and Tversky found the median coefficient of loss aversion to be about 2.25, i.e., losses bite about 2.25 times more than equivalent gains.

(3) Non-linear probability weighting: Evidence indicates that decision makers overweight small probabilities and underweight large probabilities – this gives rise to the inverse-S shaped “probability weighting function”.

(4) Diminishing sensitivity to gains and losses: As the size of the gains and losses relative to the reference point increase in absolute value, the marginal effect on the decision maker’s utility or satisfaction falls.
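To make the evaluation stage concrete, below is a minimal sketch (not code from the paper) of the standard prospect-theory value and probability-weighting functions, using the parameter estimates commonly cited from Tversky and Kahneman’s later (1992) work: exponents of 0.88, a loss-aversion coefficient of 2.25 and a weighting parameter of 0.61. The function names and the example gamble are illustrative assumptions.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Reference-dependent value of an outcome x (gain if x >= 0, loss if x < 0).
    The exponents give diminishing sensitivity; lam gives loss aversion (assumed 2.25)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_value(prospect):
    """Evaluate a simple prospect given as a list of (outcome, probability) pairs."""
    return sum(weight(p) * value(x) for x, p in prospect)

# Illustrative 50/50 gamble: win 100 or lose 100; loss aversion makes it unattractive.
print(round(prospect_value([(100, 0.5), (-100, 0.5)]), 1))  # negative overall value
print(round(weight(0.01), 3))  # a small probability is overweighted (> 0.01)
print(round(weight(0.99), 3))  # a large probability is underweighted (< 0.99)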

Research Plan, Version 17 May 2016

Below are some research ideas and a structure for the development of a new theory of the firm.

A theory that explains the existence, the behavior and the death of firms is relevant and useful, and widely applicable, because of the changing relation between individual people and firms. It is relevant for the extended audience associated with firms, such as policy makers and academics, even when the latter differ only in their academic school of thought. Such a theory must necessarily be independent of situational variables such as the sector of the firm’s business, its size, the people associated with it, its financing, its assets and all kinds of temporal issues. Bearing in mind the above, the research question can be posed:

‘What is a firm?’

A hypothesis answering this question is:

A firm is a pattern in space and time produced by the global behavior of some system. Said global behavior is produced by the behavior of individual people. Material and energy flow through the pattern – the system bringing forth a firm is not in equilibrium. The pattern that is the firm computes its relation to its environment, thus acquiring and maintaining its identity. This identity ceases to exist if the firm dies, usually because of its associating with another firm.

Meta. The current shape that firms have taken is a result of the set of beliefs that are fashionable in western society. They are of the same stuff that our ‘other’ beliefs are made of: it harks back to what ‘we’ believe there to be, what we believe to be good, to be useful. We know these things because they have been taught to us from an early age on. They are our beliefs, sufficiently corroborated by reality to represent reality to us: they work to some sufficient measure, we consider them to be ‘true’, and to us they are knowledge, more than just any belief. To enable a peek at this belief system from outside, it is required to ‘unbelieve’ these things and not take them as a given and not defend them as beyond doubt. Doing that, however, implies rejecting many certainties as such: the role of humans in the universe, the existence of God, human consciousness, human freedom of will and agency, moral and ethical certainties such as ‘to work is a good thing’. It is required to look beyond a number of dogmas that for practical reasons people consider truths. In doing so it is also required to release any divinity involved in the capabilities and the faculties of the human brain or human behavior. As a consequence it is required that human beings exist in the same space of possibilities as every other thing in the universe. They are not fast-tracked nor do they otherwise receive special treatment. And the same goes for human products: they are not sprinkled with ‘human stardust’: they too must make do with whatever hand nature deals them. Firms also have no special deal with the laws of nature; they must allow the general rules to rule over them also.

Ontology. The hypothesis above generalizes the behavior of firms to a pattern to which the people associated with the firm contribute with their individual behavior in their contexts. The pattern can autonomously develop behavior particular to it and in its own context, independent of the people associated with the firm. In this frame of thought the relation between the behavior of people and the behavior of the firm is the subject of study. The people needn’t per se be the masters of the firm, actively controlling it, nor does the converse hold true: that firms develop behavior without the involvement of the people associated with them. The subject of this study is the behavior of the individual, the behavior of the firm that is the result, and the process that leads from the individual to the collective behavior. This process can be seen as an operation on, or a transposition of, the individuals’ behavior to the firm’s behavior. However the case may be, the global behavior of the firm can be different from, even contrary to, that of the individuals contributing to it, to the extent that it can be damaging for the individuals bringing it forth. Looking at the question in this generalized way, and not restricted to the perspective of people associated with firms – or other mechanics traditionally deemed relevant for firm behavior – allows an unbiased observation of the relation between firms and the people associated with them. Somewhat new is the view that firms can exhibit autonomous behavior, which introduces a new sovereign category of being, or perhaps adds new characteristics to an existing category of being, and attempts to add scope to what is at this point knowable.

Epist. People’s behavior is to some extent motivated by their beliefs. A belief in turn is information believed to be true after some level of confirmation against reality, however shallow and indirect. It is therefore not fact, but how reality is modeled by the believer. The extent to which it is corroborated by scientific proof in an appropriate frame is decisive for whether it is not mere belief but factual knowledge. Individual people’s behavior driving the overall behavior of the firm is therefore not necessarily motivated by factual reality but by what people believe to be true and have accepted as fact. To them there is no knowing the alternatives in practical terms at a reasonable cost or in a reasonable time-frame, if at all. The behavior of firms and the relation between firms and individual people is driven by what people believe to be true, including what concerns the actual relationship itself. Phrasing the hypothesis in this generalized way allows observation of said relation in an unbiased way, so as to assess the beliefs at its foundations for what they are. This view affects this study in the sense that what the firm is in reality is a result of the beliefs of individual people collectively: in a sense the firm is what it is said to be. The opposite – at this point fashionable – hypothesis is that firms are designed, developed or built and executed in conformance with a preconceived plan, or that they are at least being oriented towards some definable level of utility for all involved. In that view the firm itself is the subject of people’s efforts ‘in the field’ and the subject of the studies of firm theories. This is contrary to the study at hand: that view considers the firm itself to be the object of study, while this study considers it a result of the forces internal and external to the firm that motivate it (sic!) to behave in certain ways. It also implies individual people can improve a given state the firm is in, or its perceived utility for the respective stakeholders. The assumption of this study is that this is not automatic.

Meth. A model of reality is suggested that sets out to explain the behavior of firms and their relation with people. The final objective of the model is to predict some aspects of the behavior of firms. In so doing this book loosely follows the train of logic leading to the proof of the hypothesis above. Using the developed model, firms are observed in an unbiased way, that is, not merely based on the current system of beliefs of the western world.

The scope of the concept of a firm used here is restricted so that it is assumed:

  • to have more than one person associated with it

  • to encompass more than a strictly legal body, namely also an informational one

  • to be detachable from the physical objects a firm can encompass and employ

  • to differ from other kinds of human organisations only in that its activities are owned by someone or something

  • that it can be studied as a concept and as a real object in the period from its birth to its death

The cultural elements pivotal to this study are restricted so that they are assumed to be part of a culture and traditions considered to be of western origin, but increasingly widespread geographically.

The objective is not to design a normative model: with other belief systems, other characteristics of firms, or of organisations in a wider sense, might be possible. At best the model can show how this belief might lead to that relation between people and their firms and to that relation with the world around them. And so in no way is the model intended to qualify people’s beliefs regarding this, or to issue advice regarding the actions people would need to take for that. Otherwise the approach is pragmatic in the sense that whatever works to predict the current situation is used.

As the study is to a large extent philosophical in nature, the approach is to describe the state of the art in the respective fields that cover the chain of logic of the study – universal darwinism, psychology of free will, belief and thinking, neuro-psychological processes of decision making, theoretical ecology, cognitive science, computational sciences, complexity sciences, thermodynamics, memetics – and to argue and debate relevant viewpoints in each field and their connections. The linking pin is the way that the firm computes its anticipated future. To prove that the individual people’s collectively held belief systems can produce behavioral patterns such as a firm, computer simulation is used.

The stance is constructivist in the sense that a pivotal assumption is that the behavior of individuals propels the behavior of the collective, namely the firm, which in turn is to a large extent the environment of the individuals associated with the firm, in that way motivating their behavior. And in that sense the knowledge of reality of the associated individual depends on the knowledge structures of the system – the firm in this case – that the individual amasses by interacting with the system.

The individual acts in the context created by her own actions and those of other entities in the environment of the firm as a system: the agency of the individual is less than complete while structure is an important influence but dependent on her own actions. To bridge this gap between agency and structure, the construct of Jobs is proposed1 as a locus for thoughts. A subset of the class of thoughts is the class of knowledge objects, a concept describing social relations within cultures, unfolding structures that are non-identical with themselves.

Social constructionism examines the development of jointly constructed understandings of the world that form the basis for shared assumptions about reality. The theory centers on the notion that human beings rationalize their experience by creating models of the social world and sharing these via language. A social construct concerns the meaning placed on an object or an event by a society, and adopted by the individual members of that society with respect to how they view or deal with it. A social construct can be widely accepted as natural by the members of the society, but not necessarily by those outside it, and the construct would be an “invention or artifice of that society.”

Social constructionism uncovers ways in which members participate in the construction of their perceived social reality. It involves looking at the ways social phenomena are created, institutionalized, known, and made into tradition by humans. “Social construction” may mean many things to many people. Ian Hacking argues that when something is said to be “socially constructed”, this is shorthand for at least the following two claims: 0) In the present state of affairs, X is taken for granted; X appears to be inevitable, 1) X need not have existed, or need not be as it is. X, or X as it is at present, is not determined by the nature of things; it is not inevitable.

Hacking adds that the following claims are also often, though not always, implied by the use of the phrase “social construction”: 2) X is quite bad as it is, 3) We would be much better off if X were done away with, or at least radically transformed.

Social constructionism is cultural in nature and critics argue that it ignores biological influences on behavior or culture. Many scientists suggest that behavior is a complex outcome of both biological and cultural influences or a nature–nurture interactionism approach is taken to understand behavior or cultural phenomena.

Phenom. From a logical perspective the suggested theory is a construct of a number of partial theories. They loosely start from the philosophies pertaining to the various disciplines listed in the paragraph above. Some of them, such as the theory of free will, the theory of memetics, the theory of universal darwinism and the theory of universal computation, are for various reasons and to various extents in flux at this time. Some parts of the developed model are therefore falsifiable per se, and in its entirety the hypothesis is a generalisation and therefore scientifically falsifiable as well. However, an advantage of a hypothesis at this level over one at a lower level of abstraction is that discussion about the foundations of the concept of firms and their role in society is possible, unbiased by the supposed role of people in its establishment or maintenance.

It is hoped that this overarching theory of firms becomes an item of discussion and in that way ‘firms itself up’ in various directions as a viable and robust theory. In this way it is hopefully a contribution to the ongoing discussion about the role of the firm in the development of society.

@move up to the Ontology section, or to the intentional stance: believe – act

2) From behavioral explanation to action explanation: Popper tries to overcome dualism, namely one truth for nature and something else for man. The essence of that bridge is that the behavior that, for example, an amoeba displays is something different from the acting that a human displays: the difference is deliberation. The latter cannot be explained with laws of nature, because deliberation and rationality (precisely the difference between the two scientific approaches) are not included in them.

1 The construct of ‘situation’ in methodological situationalism [Knorr-Cetina, K. and Cicourel, A.V. (1981). The micro-sociological challenge of macro-sociology: towards a reconstruction of social theory and methodology. In: Advances in Social Theory and Methodology. Boston, pp. 1-47].

Micro-Economics

This post contains notes from different sources about micro-economics. The backdrop is that a connection is needed between the economic models that are taught in schools and any new theory under development. Even if it were only to be able to translate from one language to the other and to distinguish the conditions from the main issues, whatever the case may be.

If the bold hypotheses … , that complex systems achieve the edge of chaos internally and collectively, were to generalize to economic systems, our study of the proper marriage of self-organization and selection would enlist Charles Darwin and Adam Smith to tell us who and how we are in the nonequilibrium world we mutually create and transform.‘ [Kauffman, 1993 p. 401]

How does this theory relate to economic subjects? In economic theory technology is an important factor in the development of an economy. Kauffman suggests it is the key pillar of economic development: the existence of goods and services leads to the emergence of new goods and services. And conversely: new goods and services force existing goods and services out. In this way, the economy renews itself [Kauffman, 1993, pp. 395-402]. The question is how an economic structure controls its means of transformation, namely the entry and exit of goods and services. A theory is required that describes how goods and services ‘match’ or ‘fit’ from a technological perspective.

With this model an economy can be simulated as a population of ‘as-if’ goods and services, sourcing from external sources (basic materials), that supply one another when goods are complementary, substitute for one another when they are substitutes, and that each represent a utility. The equilibrium for this simulated economy can be the production ratio in that economy at a maximum utility for the whole of all present goods and services. That ratio can also be the basis for a measurement of the unit of price in the simulated economy. How does this simulated economy develop?

Introduce variations to existing goods and services through random mutations or permutations to generate new goods. Generate a new economy by introducing this new technology into it. Determine the new equilibrium: at this equilibrium some of the newly introduced goods and services will turn out to be profitable: they will stay. Some will not be profitable and they will disappear. This is of interest for these reasons:

  • Economic growth is modelled with new niches emerging as a consequence of the introduction of new goods and services
  • This kind of system leads to new models for economic take-off. The behavior of an economy depends on the complexity of the grammar, the diversity of the renewable sources, the discount factor as a part of the utility function of the consumer and the prediction horizon of the model. An insufficient level of complexity or of renewable resources leads to stagnation and the system remains subcritical. If these are too high, the economy can reach a supracritical level. (A minimal simulation sketch of this kind of loop follows this list.)
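As an illustration of the loop just described, here is a minimal toy sketch in Python. It is not Kauffman’s actual model: the bit-string representation of goods, the ‘profitability’ rule and the threshold are invented stand-ins for the grammar, complementarity and equilibrium computation mentioned above.

import random

random.seed(0)
GOOD_LEN = 8

def new_good():
    # A good is represented as a random bit-string (an invented simplification).
    return tuple(random.randint(0, 1) for _ in range(GOOD_LEN))

def mutate(good):
    # A new good arises as a random mutation of an existing one.
    i = random.randrange(GOOD_LEN)
    return good[:i] + (1 - good[i],) + good[i + 1:]

def profitability(good, economy):
    # Invented rule: a good is more profitable the more it differs from the goods
    # already present, standing in for filling a complementary niche.
    others = [g for g in economy if g != good]
    if not others:
        return 0.0
    overlap = sum(sum(a == b for a, b in zip(good, g)) for g in others) / len(others)
    return GOOD_LEN - overlap

def step(economy, n_entrants=3, threshold=4.0):
    entrants = [mutate(random.choice(economy)) for _ in range(n_entrants)]
    candidates = economy + entrants
    # "Equilibrium": keep only goods whose score clears the profitability threshold.
    survivors = [g for g in candidates if profitability(g, candidates) >= threshold]
    return survivors or economy      # never let the economy die out entirely

economy = [new_good() for _ in range(5)]
for t in range(10):
    economy = step(economy)
    print("generation", t, "->", len(economy), "goods in the economy")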

This class of models depends on past states and on dynamical laws. The process of testing the newly introduced goods and services in a given generation is the basis on which future generations can build, and so it guides the evolution and growth of the system. Because it will usually not be clear a priori how new goods and services are developed from the existing ones, the concepts of complete markets and rational agents cannot be maintained as such: classical theory needs to be adapted.

An important behavioral feature of large complex adaptive systems is that no equilibrium is reached. The economy (or the markets) is a complex system and so it will not reach an equilibrium. However, it is possible that boundedly rational agents are capable of finding the edge of chaos where markets are near equilibrium. On that edge avalanches of change happen, which in the biological sphere lead to extinction and in the economy may lead to disruption.

xxx

Whenever we try to explain the behavior of human beings we need to have a framework on which our analysis can be based. In much of economics we use a framework built on the following two simple principles.

The optimization principle: People try to choose the best patterns of consumption that they can afford.

The equilibrium principle: Prices adjust until the amount that people demand of something is equal to the amount that is supplied.

Let us consider these two principles. The first is almost tautological. If people are free to choose their actions, it is reasonable to assume that they try to choose things they want rather than things they don’t want. Of course there are exceptions to this general principle, but they typically lie outside the domain of economic behavior. The second notion is a bit more problematic. It is at least conceivable that at any given time peoples’ demands and supplies are not compatible, and hence something must be changing. These changes may take a long time to work themselves out, and, even worse, they may induce other changes that might “destabilize” the whole system.

This kind of thing can happen . . . but it usually doesn’t. In the case of apartments, we typically see a fairly stable rental price from month to month. It is this equilibrium price that we are interested in, not in how the market gets to this equilibrium or how it might change over long periods of time. It is worth observing that the definition used for equilibrium may be different in different models. In the case of the simple market we will examine in this chapter, the demand and supply equilibrium idea will be adequate for our needs. But in more general models we will need more general definitions of equilibrium. Typically, equilibrium will require that the economic agents’ actions must be consistent with each other.
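As a small illustration of the equilibrium principle, the sketch below finds the price at which quantity demanded equals quantity supplied for invented linear demand and supply curves; the functional forms and numbers are assumptions for illustration only.

def demand(p):
    return 100 - 2 * p       # quantity demanded at price p (illustrative)

def supply(p):
    return 10 + 4 * p        # quantity supplied at price p (illustrative)

def equilibrium(lo=0.0, hi=100.0, tol=1e-9):
    """Bisection on excess demand D(p) - S(p), which falls as p rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) - supply(mid) > 0:
            lo = mid          # excess demand: price must rise
        else:
            hi = mid          # excess supply: price must fall
    return (lo + hi) / 2

p_star = equilibrium()
print(round(p_star, 2), round(demand(p_star), 2))   # p* = 15.0, quantity = 70.0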

One useful criterion for comparing the outcomes of different economic institutions is a concept known as Pareto efficiency or economic efficiency. We start with the following definition: if we can find a way to make some people better off without making anybody else worse off, we have a Pareto improvement. If an allocation allows for a Pareto improvement, it is called Pareto inefficient; if an allocation is such that no Pareto improvements are possible, it is called Pareto efficient.

A Pareto inefficient allocation has the undesirable feature that there is some way to make somebody better off without hurting anyone else. There may be other positive things about the allocation, but the fact that it is Pareto inefficient is certainly one strike against it. If there is a way to make someone better off without hurting anyone else, why not do it?

Let us try to apply this criterion of Pareto efficiency to the outcomes of the various resource allocation devices mentioned above. Let’s start with the market mechanism. It is easy to see that the market mechanism assigns the people with the S highest reservation prices to the inner ring, namely those people who are willing to pay more than the equilibrium price, p∗, for their apartments. Thus there are no further gains from trade to be had once the apartments have been rented in a competitive market. The outcome of the competitive market is Pareto efficient. What about the discriminating monopolist? Is that arrangement Pareto efficient? To answer this question, simply observe that the discriminating monopolist assigns apartments to exactly the same people who receive apartments in the competitive market. Under each system everyone who is willing to pay more than p∗ for an apartment gets an apartment. Thus the discriminating monopolist generates a Pareto efficient outcome as well.

Although both the competitive market and the discriminating monopolist generate Pareto efficient outcomes in the sense that there will be no further trades desired, they can result in quite different distributions of income. Certainly the consumers are much worse off under the discriminating monopolist than under the competitive market, and the landlord(s) are much better off. In general, Pareto efficiency doesn’t have much to say about distribution of the gains from trade. It is only concerned with the efficiency of the trade: whether all of the possible trades have been made.
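The Pareto criterion itself is easy to operationalise. Below is a minimal sketch that checks, for invented utility profiles, whether an allocation admits a Pareto improvement; the profiles and the feasible set are illustrative assumptions.

def pareto_improves(new, old):
    """True if utility profile `new` is a Pareto improvement over profile `old`."""
    return all(n >= o for n, o in zip(new, old)) and any(n > o for n, o in zip(new, old))

def pareto_efficient(allocation, feasible):
    # An allocation is Pareto efficient if no feasible alternative improves on it.
    return not any(pareto_improves(alt, allocation) for alt in feasible)

# Utility profiles (person A, person B) for three invented feasible allocations.
feasible = [(3, 4), (5, 4), (4, 6)]
print(pareto_efficient((3, 4), feasible))   # False: (5, 4) and (4, 6) improve on it
print(pareto_efficient((5, 4), feasible))   # True: no alternative helps one without hurting the other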

We will indicate the consumer’s consumption bundle by (x1, x2). This is simply a list of two numbers that tells us how much the consumer is choosing to consume of good 1, x1, and how much the consumer is choosing to consume of good 2, x2. Sometimes it is convenient to denote the consumer’s bundle by a single symbol like X, where X is simply an abbreviation for the list of two numbers (x1, x2).

We suppose that we can observe the prices of the two goods, (p1, p2), and the amount of money the consumer has to spend, m. Then the budget constraint of the consumer can be written as

p1·x1 + p2·x2 ≤ m.    (2.1)

Here p1·x1 is the amount of money the consumer is spending on good 1, and p2·x2 is the amount of money the consumer is spending on good 2.

p1·x1 + x2 ≤ m.    (2.2)

This expression simply says that the amount of money spent on good 1, p1·x1, plus the amount of money spent on all other goods, x2, must be no more than the total amount of money the consumer has to spend, m. Equation (2.2) is just a special case of the formula given in equation (2.1), with

p2 = 1.

To derive the slope of the budget line, suppose the consumer starts at a bundle (x1, x2) on the budget line and changes her consumption by (Δx1, Δx2) while staying on it. Then

p1·x1 + p2·x2 = m

and

p1·(x1 + Δx1) + p2·(x2 + Δx2) = m.

Subtracting the first equation from the second gives

p1·Δx1 + p2·Δx2 = 0.

This says that the total value of the change in her consumption must be zero. Solving for Δx2/Δx1, the rate at which good 2 can be substituted for good 1 while still satisfying the budget constraint, gives

Δx2/Δx1 = −p1/p2.

This is just the slope of the budget line. The negative sign is there since Δx1 and Δx2 must always have opposite signs. If you consume more of good 1, you have to consume less of good 2 and vice versa if you continue to satisfy the budget constraint. Economists sometimes say that the slope of the budget line measures the opportunity cost of consuming good 1.
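A quick numeric check of this algebra, with invented prices and income: starting from a bundle that exhausts the budget, a change with Δx2/Δx1 = −p1/p2 keeps the bundle exactly on the budget line.

p1, p2, m = 2.0, 4.0, 40.0             # illustrative prices and income

def on_budget_line(x1, x2, eps=1e-9):
    return abs(p1 * x1 + p2 * x2 - m) < eps

x1, x2 = 10.0, 5.0                     # spends exactly m: 2*10 + 4*5 = 40
dx1 = 1.0
dx2 = -(p1 / p2) * dx1                 # slope of the budget line: -p1/p2 = -0.5
print(on_budget_line(x1, x2))                # True
print(on_budget_line(x1 + dx1, x2 + dx2))    # True: the change stays on the line
print(dx2 / dx1)                             # -0.5, the opportunity cost of one more unit of good 1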

Consumer Preferences

We will suppose that given any two consumption bundles, (x1, x2) and (y1, y2), the consumer can rank them as to their desirability. That is, the consumer can determine that one of the consumption bundles is strictly better than the other, or decide that she is indifferent between the two bundles. We will use the symbol ≻ to mean that one bundle is strictly preferred to another, so that (x1, x2) ≻ (y1, y2) should be interpreted as saying that the consumer strictly prefers (x1, x2) to (y1, y2), in the sense that she definitely wants the x-bundle rather than the y-bundle. This preference relation is meant to be an operational notion. If the consumer prefers one bundle to another, it means that he or she would choose one over the other, given the opportunity. Thus the idea of preference is based on the consumer’s behavior. In order to tell whether one bundle is preferred to another, we see how the consumer behaves in choice situations involving the two bundles. If she always chooses (x1, x2) when (y1, y2) is available, then it is natural to say that this consumer prefers (x1, x2) to (y1, y2).

If the consumer is indifferent between two bundles of goods, we use the symbol ∼ and write

(x1, x2) ∼ (y1, y2). Indifference means that the consumer would be just as satisfied, according to her own preferences, consuming the bundle (x1, x2) as she would be consuming the other bundle, (y1, y2).

If the consumer prefers one of the two bundles or is indifferent between them, we say that she weakly prefers (x1, x2) to (y1, y2) and write (x1, x2) ⪰ (y1, y2). These relations of strict preference, weak preference, and indifference are not independent concepts; the relations are themselves related! Indifference curves are a way to describe preferences. Nearly any “reasonable” preferences that you can think of can be depicted by indifference curves. The trick is to learn what kinds of preferences give rise to what shapes of indifference curves.

well-behaved indifference curves

First we will typically assume that more is better, that is, that we are talking about goods, not bads. More precisely, if (x1, x2) is a bundle of goods and (y1, y2) is a bundle of goods with at least as much of both goods and more of one, then (y1, y2) ≻ (x1, x2). This assumption is sometimes called monotonicity of preferences. As we suggested in our discussion of satiation, more is better would probably only hold up to a point. Thus the assumption of monotonicity is saying only that we are going to examine situations before that point is reached—before any satiation sets in—while more still is better. Economics would not be a very interesting subject in a world where everyone was satiated in their consumption of every good.

What does monotonicity imply about the shape of indifference curves? It implies that they have a negative slope. The slope can also be read as a rate of exchange: if the consumer gives up Δx1 units of good 1, he can get E·Δx1 units of good 2 in exchange. Or, conversely, if he gives up Δx2 units of good 2, he can get Δx2/E units of good 1. Geometrically, we are offering the consumer an opportunity to move to any point along a line with slope −E that passes through (x1, x2), as depicted in Figure 3.12. Moving up and to the left from (x1, x2) involves exchanging good 1 for good 2, and moving down and to the right involves exchanging good 2 for good 1. In either movement, the exchange rate is E. Since exchange always involves giving up one good in exchange for another, the exchange rate E corresponds to a slope of −E.

If good 2 represents the consumption of “all other goods,” and it is measured in dollars that you can spend on other goods, then the marginal-willingness-to-pay interpretation is very natural. The marginal rate of substitution of good 2 for good 1 is how many dollars you would just be willing to give up spending on other goods in order to consume a little bit more of good 1. Thus the MRS measures the marginal willingness to give up dollars in order to consume a small amount more of good 1. But giving up those dollars is just like paying dollars in order to consume a little more of good 1.

Originally, preferences were defined in terms of utility: to say a bundle (x1, x2) was preferred to a bundle (y1, y2) meant that the x-bundle had a higher utility than the y-bundle. But now we tend to think of things the other way around. The preferences of the consumer are the fundamental description useful for analyzing choice, and utility is simply a way of describing preferences. A utility function is a way of assigning a number to every possible consumption bundle such that more-preferred bundles get assigned larger numbers than less-preferred bundles. That is, a bundle (x1, x2) is preferred to a bundle (y1, y2) if and only if the utility of (x1, x2) is larger than the utility of (y1, y2): in symbols, (x1, x2) ≻ (y1, y2) if and only if u(x1, x2) > u(y1, y2). The only property of a utility assignment that is important is how it orders the bundles of goods. This is ordinal utility.

We summarize this discussion by stating the following principle: a monotonic transformation of a utility function is a utility function that represents the same preferences as the original utility function. Geometrically, a utility function is a way to label indifference curves. Since every bundle on an indifference curve must have the same utility, a utility function is a way of assigning numbers to the different indifference curves in a way that higher indifference curves get assigned larger numbers. Seen from this point of view a monotonic transformation is just a relabeling of indifference curves. As long as indifference curves containing more-preferred bundles get a larger label than indifference curves containing less-preferred bundles, the labeling will represent the same preferences.
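A minimal sketch of this point: applying an arbitrary increasing transformation to an illustrative utility function changes the utility labels but not the ordering of bundles, so it represents the same preferences. The utility function and transformation below are assumptions chosen for illustration.

def u(x1, x2):
    return x1 * x2            # an illustrative utility function

def f(v):
    return 3 * v + 7          # any strictly increasing (monotonic) transformation will do

bundles = [(1, 6), (2, 2), (3, 4), (5, 1)]
by_u = sorted(bundles, key=lambda b: u(*b))
by_fu = sorted(bundles, key=lambda b: f(u(*b)))
print(by_u == by_fu)              # True: the ordering of bundles is unchanged
print([u(*b) for b in by_u])      # utility labels before the transformation
print([f(u(*b)) for b in by_u])   # new labels, same indifference-curve ordering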

Consider a consumer who is consuming some bundle of goods, (x1, x2). How does this consumer’s utility change as we give him or her a little more of good 1? This rate of change is called the marginal utility with respect to good 1. We write it as MU1 and think of it as being a ratio,

MU1 = ΔU/Δx1 = (u(x1 + Δx1, x2) − u(x1, x2))/Δx1

that measures the rate of change in utility (ΔU) associated with a small change in the amount of good 1 (Δx1). Note that the amount of good 2 is held fixed in this calculation. This definition implies that to calculate the change in utility associated with a small change in consumption of good 1, we can just multiply the change in consumption by the marginal utility of the good:

ΔU = MU1·Δx1

The marginal utility with respect to good 2 is defined in a similar manner:

MU2 = ΔU/Δx2 = (u(x1, x2 + Δx2) − u(x1, x2))/Δx2

Note that when we compute the marginal utility with respect to good 2 we keep the amount of good 1 constant. We can calculate the change in utility associated with a change in the consumption of good 2 by the formula ΔU = MU2·Δx2.

It is important to realize that the magnitude of marginal utility depends on the magnitude of utility. Thus it depends on the particular way that we choose to measure utility. If we multiplied utility by 2, then marginal utility would also be multiplied by 2. We would still have a perfectly valid utility function in that it would represent the same preferences, but it would just be scaled differently.

Solving for the slope of the indifference curve we have

MRS = Δx2/Δx1 = −MU1/MU2    (4.1)

(Note that we have 2 over 1 on the left-hand side of the equation and 1 over 2 on the right-hand side. Don’t get confused!).

The algebraic sign of the MRS is negative: if you get more of good 1 you have to get less of good 2 in order to keep the same level of utility. However, it gets very tedious to keep track of that pesky minus sign, so economists often refer to the MRS by its absolute value—that is, as a positive number. We’ll follow this convention as long as no confusion will result. Now here is the interesting thing about the MRS calculation: the MRS can be measured by observing a person’s actual behavior: we find that rate of exchange E where he or she is just willing to stay put, as described in Chapter 3. The condition that the MRS must equal the slope of the budget line at an interior optimum is obvious graphically, but what does it mean economically? Recall that one of our interpretations of the MRS is that it is that rate of exchange at which the consumer is just willing to stay put. Well, the market is offering a rate of exchange to the consumer of −p1/p2—if you give up one unit of good 1, you can buy p1/p2 units of good 2. If the consumer is at a consumption bundle where he or she is willing to stay put, it must be one where the MRS is equal to this rate of exchange:

MRS = − p1 / p2

Another way to think about this is to imagine what would happen if the MRS were different from the price ratio. Suppose, for example, that the MRS is Δx2 / Δx1 = −1/2 and the price ratio is 1/1. Then this means the consumer is just willing to give up 2 units of good 1 in order to get 1 unit of good 2—but the market is willing to exchange them on a one-to-one basis. Thus the consumer would certainly be willing to give up some of good 1 in order to purchase a little more of good 2. Whenever the MRS is different from the price ratio, the consumer cannot be at his or her optimal choice.
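A minimal numeric sketch of this optimality condition, for an assumed Cobb-Douglas utility u(x1, x2) = x1·x2 and invented prices and income: searching along the budget line, the utility-maximising bundle is the one where |MRS| = MU1/MU2 equals the price ratio p1/p2.

p1, p2, m = 1.0, 2.0, 100.0        # illustrative prices and income

def u(x1, x2):
    return x1 * x2                 # assumed Cobb-Douglas-style utility

best_x1, best_u = None, float("-inf")
steps = 10000
for i in range(1, steps):
    x1 = m / p1 * i / steps        # walk along the budget line
    x2 = (m - p1 * x1) / p2
    if u(x1, x2) > best_u:
        best_x1, best_u = x1, u(x1, x2)

best_x2 = (m - p1 * best_x1) / p2
mrs = best_x2 / best_x1            # |MRS| = MU1/MU2 = x2/x1 for u = x1*x2
print(round(best_x1, 1), round(best_x2, 1))   # about 50.0 and 25.0
print(round(mrs, 3), p1 / p2)                 # |MRS| ~= 0.5 = p1/p2 at the optimum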

Revealed preferences

In Chapter 6 we saw how we can use information about the consumer’s preferences and budget constraint to determine his or her demand. In this chapter we reverse this process and show how we can use information about the consumer’s demand to discover information about his or her preferences. Up until now, we were thinking about what preferences could tell us about people’s behavior. But in real life, preferences are not directly observable: we have to discover people’s preferences from observing their behavior. In this chapter we’ll develop some tools to do this. When we talk of determining people’s preferences from observing their behavior, we have to assume that the preferences will remain unchanged while we observe the behavior. Over very long time spans, this is not very reasonable. But for the monthly or quarterly time spans that economists usually deal with, it seems unlikely that a particular consumer’s tastes would change radically. Thus we will adopt a maintained hypothesis that the consumer’s preferences are stable over the time period for which we observe his or her choice behavior.

 

I have had several occasions to ask founders and participants in innovative start-ups a question: “To what extent will the outcome of your effort depend on what you do in your firm?” This is evidently an easy question; the answer comes quickly and in my small sample it has never been less than 80%. Even when they are not sure they will succeed, these bold people think their fate is almost entirely in their own hands. They are surely wrong: the outcome of a start-up depends as much on the achievements of its competitors and on changes in the market as on their own efforts‘ [Kahneman, 2011, p. 261]

Competition neglect – excess entry – optimistic martyrs / micro economics modeling

WYSIATI – what you see is all there is. The inclination of people to react to what is immediately at hand, observable, while neglecting any other information available requiring slightly more effort. Inward looking. Basis for micro-economic model?

Utility theory as per Bernoulli (wealth → utility) is flawed because 1) it lacks a reference point for initial wealth and changes in wealth. Utility theory is also the basis for most of economic theory, pp. 274-76. Harry Markowitz suggests using changes of wealth instead, p. 278.

Coordination of Economic Decisions

Douma, S. and Schreuder, H. (2013). Economic Approaches to Organizations. United Kingdom: Pearson. ISBN 978-0-273-73529-8

The subject of this book is the coordination of economic decisions. The (categories of) mechanisms for that job are markets and organizations. A special class of organizations is of course the firm. And so this summary of the above book is included to connect a new theory of the firm under construction with existing economic theories.

Chapter 1: Markets and Organizations

Economic systems can be segmented by their property rights regime for the means of production and by their dominant resource allocation mechanism1. The coordination problem is the question how information is obtained and used in economic decision-making, namely decisions where demand and supply meet. The book contributes to the answering of the coordination problem in economics: why are economic decisions coordinated by markets and by organizations and why do these systems for that job co-exist?

An economic problem is any situation where needs are not met as a result of scarcity of resources. Knowing this, then what is the optimal allocation of the available resources over the alternative uses? If resources are allocated optimally, they are used efficiently (with efficiency).

Economic approaches to organisations can be fruitful if the allocation of scarce resources is taken into account. To this end consider this conceptual framework (figure 1.1): division of labour (1) >> specialization (2) >> coordination (3) >> markets (4) AND organization (5) << information (6) << pressure from environment and selection (7)

1) division of labour as per Adam Smith: splitting of composite tasks into their components leads to increased productivity (this is taken as a fact of life in our kind of (western) society), because:

2) specialisation (Adam Smith: greater dexterity, saving of the time needed to switch between jobs, tools) makes it possible to do the same work with less labour: economies of specialisation. This higher performance comes at the cost of getting acquainted with a new task. Higher performance but less choice: a trade-off between the satisfaction of higher performance and lower satisfaction because of limited choice and boredom

3) coordination: hardly anyone is self-reliant and exchange must take place between specialists to get the products needed and not self-made. The right to use them is transferred: a transaction takes place. This needs to be reciprocal. Specialisation leads to a need for coordination, namely the allocation of scarce resources. There are 2 types of coordination: transactions across markets or within organizations.

4) and 5) markets and organizations: for example the stock market: no individual finds another to discuss allocation, but the price system is the coordinating device taking care of allocation. The price is a sufficient statistic (Hayek 1945) for the transaction. Optimal allocation occurs when prices meet at their equilibrium without parties needing to meet or to exchange more information than the price alone. Why is not all exchange via markets? Because if a workperson goes from dept x to dept y then the reason is not a change of relative prices but because he is told to do it (Coase 1937). A firm is essentially a device for creating long-term contracts when short-term contracts are too bothersome. Firms do not continue to grow forever, because as they grow, they tend to accumulate transaction costs as well; and so over time the transaction costs of the firm will offset those of the market. Transactions will shift between markets and organizations as a function of the transaction costs involved in either alternative. Williamson (1975) has expounded this element, to be addressed in Ch8, to include the marginal cost of either alternative. The balance between markets and hierarchies is constantly ‘sought after’ and when it is struck the entrepreneur may decide to change its transaction costs by forming firms or increasing their size up to the point that its transaction costs become too high. Ideal markets are characterized by ‘their’ prices being sufficient statistics for individual economic decision making. Ideal organizations are characterized by transactions that are not based on prices to communicate information between parties. Many transactions in reality are governed by hybrid forms of coordination.

6) Information: the form of coordination that emerges is a result of the information requirements in that specific situation. And so information is the crucial element in the model, producing the coordination mechanism. There are many situations where the price alone cannot provide sufficient information to effect a transaction – up to the point where the price alone is entirely incapable of effecting the transaction. Organization thus arises as a solution to information problems.

7) the environment and institutions: these form the environment in which the trade-offs between market and organization take place, and they are economic, political, social, cultural, institutional, etc. in nature. The environment provides the conditions for the creation of both, shapes both and selects both. Institutions are the rules that shape human interaction in a society (a subset of MEMES with a regulatory character, or just the entirety of the memes, or the memes that are motivators); they are an important element in the environment of organizations and markets. Douglass North (1990, 2005b?). ‘In the absence of the essential safeguards, impersonal exchange does not exist, except in cases where strong ethnic or religious ties make reputation a viable underpinning’ [Douglass North 2005b p. 27 in Schreuder and Douma p. 18]. Not agreed: evolution of morals.

If the institutions are the rules of the game imposed by the environment, ‘the way the game is played’ is shaped by the countries’ institutional frameworks – all institutions composing the environment of organizations and markets. These factors determine which organizations and markets are allowed and, if they are, they shape the way they function. These factors are dynamic.

This approach is fairly new because economists traditionally viewed coordination by the market between organizations, while organizational scientists viewed coordination inside organizations.

Chapter 2 Markets

Standard micro-economic theory focuses on how economic decisions are coordinated by the market mechanism. Consumers decide on how much to consume, producers decide on how much to produce, they meet on the market and there quantity and price are coordinated.

Law of demand: the lower the price the higher the demand. Law of supply: the lower the price the lower the supply. Market equilibrium occurs where demand and supply intersect.

Theory of demand: goods are combined in baskets, each person can rank the goods in a basket by preference, the preferences are assumed to be transitive, and each person prefers to have more of a certain good rather than less of it. Indifference curves represent the preferences of the person. If two baskets are on different locations on the same indifference curve (he is indifferent), then the utility of the two baskets is said to be the same (because the person’s satisfaction is the same for either). It is assumed that the consumer knows which basket she prefers, but not by how much. The budget line indicates the person’s budget: if this line is combined with the indifference curve, the maximum utility is located at the point where the indifference curve is tangent to the budget line (there can be only one).

Theory of supply: how a supplier decides on how much to produce. The firm is represented by an objective function describing its goals (profit, share value). The objective function must be maximized given the constraints of the firm’s production function. The production function describes the relation between the inputs of a firm and the maximum output given those inputs. Q = Q(K, L, M) is the maximum output at some given input. If K and M are given at some time then the output increases if L increases. L cannot be increased indefinitely and either K or M will constrain a further increase of L and thus of Q. Increasing K takes the most time and can only be executed in the long run: in the short run (and so at any time) M can be changed, and in the medium and the long term L can be changed. L = variable in the short and long run, K = variable in the long run only. The production function represents, as isoquants, all the combinations of K (long term; capital) and L (short term; labour) that the firm can choose from if it wants to produce quantity Qx.

Profit maximization in competitive markets: assume that a firm wants to maximise profits. Then Profit = p·Q − c·K − w·L, subject to the constraint of the production function Q = Q(K, L). Deciding how much to produce means choosing Q; deciding how to produce means choosing K and L. K and L are free, Q is their function. Short run: K is fixed, so only L is free to choose. Profit = p·Q(K, L) − c·K − w·L; its maximum is at dProfit/dL = p·dQ/dL − w = 0, or dQ/dL = w/p. dQ/dL is the marginal productivity of labour; it decreases with increasing use of labour (yet another unit of labour will decrease the marginal productivity of labour: dQ/dL is a decreasing function). In the short run the firm can choose how much to produce, not how to produce. Long run: dProfit/dK = p·dQ/dK − c = 0, from which it follows that dQ/dK = c/p, while (see above) dQ/dL = w/p. Solving both gives optimal values for L and K, and from that follows Q. The firm chooses K so that the marginal productivity of K is c/p while choosing L so that the marginal productivity of L is w/p. The firm can choose how to produce and how much.
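A small numeric sketch of the short-run condition dQ/dL = w/p, using an assumed Cobb-Douglas production function and invented prices; it compares a grid search over L with the analytic optimum.

p, w, c = 10.0, 5.0, 2.0          # illustrative output price, wage, cost of capital
K = 16.0                          # capital is fixed in the short run

def Q(K, L):
    return K ** 0.5 * L ** 0.5    # assumed Cobb-Douglas production function

def profit(L):
    return p * Q(K, L) - c * K - w * L

# Grid search over L for the profit-maximising labour input.
best_L = max((i / 100 for i in range(1, 10001)), key=profit)

# Analytic optimum from dQ/dL = w/p:  0.5 * K**0.5 * L**-0.5 = w/p
L_star = (p * 0.5 * K ** 0.5 / w) ** 2
print(round(best_L, 2), round(L_star, 2))               # both about 16.0
print(round(0.5 * K ** 0.5 / L_star ** 0.5, 2), w / p)  # marginal product of labour = w/p = 0.5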

Market coordination. Producers maximize profit by choosing L in the short term and K and L in the long term. This results in a supply curve for each firm and an industry supply curve. Consumers maximize utility, and for any given price each decides the amount he is going to buy, resulting in a demand curve for all consumers. Supply and demand meet at one point only, the intersection of their curves, and the resulting price is a given for consumers and producers. Now every consumer knows how much he will buy and every producer how much she will produce.

The paradox of profits. Normal profit equals the opportunity cost of the equity capital. Economic profit is any profit in excess of normal profit. If profit falls below the normal profit, then the shareholders will invest their capital elsewhere. In a competitive market a firm cannot make an economic profit in the long run, because profit attracts new entrants, supply increases, prices go down and economic profits vanish. Hence the paradox: each firm tries to make a profit, but no firm can in the long run.

Comments: 1) if competition were perfect then resource allocation would be efficient and the world would be Pareto optimal. This does not imply that everyone’s wants are satisfied, however; it just means that, given some configuration, an initial distribution of wealth and talents, nobody can be made better off without someone else being made worse off. 2) assumptions underpinning the assumption of perfect markets are: 2a) a large number of small firms, 2b) free entry and exit of firms, 2c) standardization of products. 3) it is assumed that firms are holistic entities in the sense that their decisions are homogeneous, taken as if by one person with profit maximization in mind, given their utility function. 4) firms are assumed to have only one objective, such as profit or shareholder value. If there are others, they must be combined into one as a trade-off. 5) it is assumed that there is perfect information: everyone knows everything relevant to their decisions. In reality information is biased: the insured knows more about his risks than the insurance company, the sales person knows more about his activities when travelling than his boss. This is not a sustainable market. 6) consumers and producers are assumed to maximize their profit and utility, and so it is assumed that they must be rational decision makers. The decisions may be less solid and more costly the longer the prediction horizon is. 7) markets are assumed to function in isolation, but it is clear that the environment influences the market.

Chapter 3 Organizations

Organizations are ubiquitous. It is impossible for markets alone to coordinate people’s actions. A paradox at the heart of modern economies: it is possible to an increasing extent to work individually doing specialized work, but this is only possible because of some form of organization and interdependency. While people appear to have more agency, they are more dependent on others’ performance. The central question then is how organizational coordination – as opposed to market coordination – is achieved.

‘.. the operation of a market costs something and by forming an organization and allowing some authority (‘an entrepreneur’) to direct the resources, certain marketing costs are saved’ [Coase 1937 in Schreuder Douma 2013 p 48].

.. the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy – or of designing an efficient economic system. The answer to this question is closely connected with that other question which arises here, that of who is to do the planning. .. whether planning is to be done centrally, by one authority for the whole economic system, or is to be divided among many individuals‘ [Hayek 1945 in Schreuder and Douma 2013 pp. 48-49].

The best use of dispersed information is indeed one of the main problems in economic coordination.

Mintzberg identified these ways in which work is coordinated in organizations: mutual adjustment, direct supervision, standardization of work processes, standardization of output, standardization of skills, standardization of norms. ‘These are thus also the ways in which people in organizations can communicate knowledge and expectations. Conversely, they are the ways in which people in the organization may learn from others what they need to know to carry out their tasks as well as what is expected from them‘ [Schreuder and Douma 2013 p. 51]. In large organizations it is no longer possible to coordinate via the authority mechanism alone, and so combinations of the other mechanisms are used.

Real organizations are hybrids of the above coordinating mechanisms. Some prototypical organizations are dominated by a specific coordinating mechanism: 1) Entrepreneurial Organization – Direct Supervision, 2) Machine O – Stand. of Work Processes, 3) Professional O – Stand. of Skills, 4) Diversified O – Stand. of Outputs, 5) Innovative O – Mutual Adjustment, 6) Missionary O – Stand. of Norms. When markets are replaced by organizations, the market (price) mechanism is replaced by other coordinating mechanisms. Organizations can take many forms depending on the circumstances: they can handle different types of transactions [p 58].

‘Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself included, are in a state of shocked disbelief‘ [Alan Greenspan, former chairman of the Federal Reserve, about the lack of regulation in the financial markets, to the House Committee on Oversight and Government Reform during a congressional hearing in 2008].

Chapter 4: Information

The information requirements in any situation determine the kind of coordination mechanism, or mix of mechanisms, that is used. If agents cannot influence the price then the market is perfect and the agents are price-takers; in that case prices are sufficient statistics conveying all the necessary information to the market parties. Under conditions of perfect markets (namely perfect competition) agents can only decide on the quantity at some price for some homogeneous good (namely no difficulties with the specifications, no quality differences). The price mechanism is a sufficient coordination mechanism where the economic entities have a limited need for information. Only if all the required information can be absorbed in the price can we rely on the market (price) mechanism as the sole coordinating device.

If the specifications vary then more information than the price alone is necessary. Sugar: commodity product, the price suffices. Fruit: some change with the season, so some more information is needed, obtained by selecting the individual pieces. Soup: more information is needed, tasting is not practical, so a brand name serves as a label informing the client of the specifications to expect. A brand name is a solution to an information problem. Uncertainties exist, for instance about the quality of next year’s fruit: retailers and suppliers may agree on a contingent claims contract (the price depends on the actual quality at that time). In practical terms it is difficult to cover all contingencies.

If client and supplier have different information then information asymmetry exists. Disclosing to a client all the information needed to fully understand some solution or product enables the client to construct the object himself and destroys its value. This situation can invite opportunistic (or strategic) behavior in agents.

Hidden information means a pre-existing skewness in the availability of information between the parties, leading the one to take advantage of the other. Hidden action means a skewness of information between the parties that is introduced by unobservable behavior after they have entered into the relation. Hidden information and hidden action both stem from unobservability, they both imply a skewness of information, and both occur in market as well as organizational environments. Hidden information is an ex-ante problem, while hidden action is an ex-post problem.

If everybody knew everything then all information would be of equal value.

Chapter 5: Game Theory

Coordination game: two or more players coordinate their decisions so as to reach an outcome that is best for all. Example: a new technology. If both choose the same platform then the customer is not forced to choose between technologies and brands, but between brands only. This is an advantage for both. If the choice is to be made simultaneously then the outcome is unpredictable; if the decisions are sequential then one player will follow the other player’s choice of technology. As soon as a first player chooses, the choice must be communicated to the other so as to reap the benefits and not allow the other to deviate.
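
A minimal sketch of the platform coordination game (my own illustration, not from the book; the player labels and payoff numbers are assumptions): both players choosing the same technology is better for both, and both (TechA, TechA) and (TechB, TechB) are pure-strategy Nash equilibria, which is exactly why the simultaneous game is unpredictable.

```python
# Pure-strategy Nash equilibria of a 2x2 platform coordination game.
# Payoffs are illustrative assumptions: coordinating on either technology
# beats mis-coordination for both players.
payoffs = {  # (row_choice, col_choice): (row_payoff, col_payoff)
    ("TechA", "TechA"): (3, 3),
    ("TechA", "TechB"): (0, 0),
    ("TechB", "TechA"): (0, 0),
    ("TechB", "TechB"): (2, 2),
}
choices = ["TechA", "TechB"]

def is_nash(row, col):
    """Neither player can gain by unilaterally deviating from (row, col)."""
    r_pay, c_pay = payoffs[(row, col)]
    best_row = all(payoffs[(r, col)][0] <= r_pay for r in choices)
    best_col = all(payoffs[(row, c)][1] <= c_pay for c in choices)
    return best_row and best_col

equilibria = [(r, c) for r in choices for c in choices if is_nash(r, c)]
print(equilibria)  # [('TechA', 'TechA'), ('TechB', 'TechB')]: two equilibria,
                   # hence the unpredictability when moves are simultaneous
```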

The entry game (incumbent monopolist versus potential entrant): moving from one stage to two stages. This can be solved by looking ahead and reasoning backwards in a decision tree. Commitment in this sense means that a participant alters the pay-offs irreversibly by committing to some course of action, so that it becomes in its own interest to execute a threat. Example: investing in extending a mobile network prior to a new entrant entering the market allows the monopolist to execute its threat to lower prices – thereby increasing its number of customers.
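
A sketch of looking ahead and reasoning backwards in this entry game (the payoff numbers and the effect of the network investment are my own assumptions, only meant to reproduce the logic): without the prior investment the threat to fight with lower prices is not credible; after the investment, fighting becomes the incumbent's best response and entry is deterred.

```python
# Backward induction in a stylized two-stage entry game.
# All payoff numbers are illustrative assumptions.

def solve_entry_game(incumbent_payoffs):
    """incumbent_payoffs: the incumbent's payoff from 'fight' and 'accommodate' if entry occurs."""
    # Stage 2: the incumbent's best response once entry has happened.
    response = "fight" if incumbent_payoffs["fight"] >= incumbent_payoffs["accommodate"] else "accommodate"
    entrant_payoff = -1 if response == "fight" else 1  # the entrant loses money in a price war
    # Stage 1: the entrant looks ahead and reasons backwards.
    entry = entrant_payoff > 0
    return entry, response

# Without the network investment: fighting is costly, so the threat is empty.
print(solve_entry_game({"fight": 1, "accommodate": 3}))  # (True, 'accommodate')

# With the investment, the incumbent's payoff from fighting rises (more customers
# at low prices), so the threat becomes credible and entry is deterred.
print(solve_entry_game({"fight": 4, "accommodate": 3}))  # (False, 'fight')
```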

Situations involving more than two players in a single-stage game: auctions. In both open auctions and closed-bid auctions, the observability of information plays a crucial role. In an increasing-bid auction the price may not be optimal for the seller, as the one-but-last potential buyer may drop out at a price far below the cut-off of the last potential buyer. To prevent that, the Dutch auction can be used: a decreasing-price auction. In this way the seller reclaims some of the difference between the highest and the one-but-highest bid. A problem for the seller is that there is no minimum price. To establish a minimum a seller can revert to a two-stage auction: first the increasing-price competition, where the winner obtains some premium, followed by a Dutch auction. If the second stage does not result in a price then the winner of the first stage buys the lot. In this game, only the winner’s private information remains private; the others’ valuations are known after the initial round. During the second round the bidder with the highest private valuation is induced to reveal it, and the seller is willing to pay a premium to get this information. The premium is hopefully lower than the difference between the highest and the one-but-highest bid.
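
A small simulation of why the ascending-bid auction leaves money on the table for the seller (the uniform distribution of private valuations, the number of bidders and the bid increment are my own assumptions): bidding stops roughly at the one-but-highest valuation, so the gap up to the highest valuation stays with the winner; that gap is what the Dutch or two-stage variants try to reclaim.

```python
import random

def english_auction(valuations, increment=1.0):
    """Ascending-bid auction: the price rises until only the highest-value bidder is left,
    so it ends roughly at the second-highest valuation plus one increment."""
    ordered = sorted(valuations, reverse=True)
    return min(ordered[1] + increment, ordered[0])

random.seed(0)
gaps = []
for _ in range(10_000):
    vals = [random.uniform(0, 100) for _ in range(4)]  # assumed private valuations
    price = english_auction(vals)
    gaps.append(max(vals) - price)                     # surplus left with the winner

print(f"average gap between highest valuation and realised price: {sum(gaps) / len(gaps):.1f}")
```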

Sealed-bid auction: bid on the basis of one’s own expected performance plus synergies, while taking into account the competitors’ likely first bids (prices).

The observability of auctions pertains to the differences in availability of the private information of each of the participants in the auction. The winner’s curse is the question whether the winner was lucky to win or overly optimistic in her predictions. Competitors can collude to keep the price low.

Single-stage PD, iterated PD for many players. IPD with players’ mistakes: show generosity by retaliating to a lesser extent than the defection, and show contrition by not re-retaliating if the other retaliates after a mistaken defection. However, too much forgiveness invites exploitation.
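
A sketch of the effect of generosity in a noisy iterated PD (the payoff values, the error rate and the forgiveness probability are assumed for illustration): plain tit-for-tat gets dragged into long echo-retaliation after a single mistaken defection, while a generous variant that forgives a defection with some probability recovers cooperation and scores closer to the mutual-cooperation payoff.

```python
import random

T, R, P, S = 5, 3, 1, 0  # standard PD ordering T > R > P > S (assumed values)
NOISE = 0.05             # probability that a chosen move is flipped by mistake

def tit_for_tat(opp_last):
    return "C" if opp_last in (None, "C") else "D"

def generous_tft(opp_last, g=0.3):  # forgive a defection with probability g
    return "C" if opp_last in (None, "C") or random.random() < g else "D"

def average_score(strategy_a, strategy_b, rounds=2000):
    last_a = last_b = None
    score_a = 0
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        if random.random() < NOISE: a = "D" if a == "C" else "C"  # mistakes happen
        if random.random() < NOISE: b = "D" if b == "C" else "C"
        score_a += {"CC": R, "CD": S, "DC": T, "DD": P}[a + b]
        last_a, last_b = a, b
    return score_a / rounds

random.seed(1)
print("noisy TFT vs TFT:          ", round(average_score(tit_for_tat, tit_for_tat), 2))
print("noisy generous vs generous:", round(average_score(generous_tft, generous_tft), 2))
```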

In evolutionary game theory strategies evolve over time through variation, selection and retention. In a fixed environment (fixed proportions of strategies) it pays to learn which competitors are exploitable: maximize cooperation with the cooperating strategies and exploit the exploitable ones. In a dynamic environment the fitter strategies increase their proportions in the population. If more than one strategy can evolve, they co-evolve.
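
One standard way to formalize "the fitter strategies increase their proportions" is the replicator dynamic; the sketch below is my own (not the book's), and the payoff matrix, a one-shot prisoner's dilemma, is an illustrative assumption under which defectors take over the population.

```python
# Discrete-time replicator dynamics: a strategy's population share grows in
# proportion to how much its payoff exceeds the population average.
# The payoff matrix (rows = own strategy, columns = opponent) is assumed.
PAYOFF = [
    [3.0, 0.0],  # cooperate vs (cooperate, defect)
    [5.0, 1.0],  # defect    vs (cooperate, defect)
]

def step(shares, dt=0.1):
    fitness = [sum(PAYOFF[i][j] * shares[j] for j in range(len(shares)))
               for i in range(len(shares))]
    avg = sum(s * f for s, f in zip(shares, fitness))
    new = [s * (1 + dt * (f - avg)) for s, f in zip(shares, fitness)]
    total = sum(new)
    return [s / total for s in new]

shares = [0.9, 0.1]  # start with mostly cooperators
for _ in range(200):
    shares = step(shares)
print([round(s, 3) for s in shares])  # defection dominates this one-shot game, so defectors take over
```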

Chapter 6: Behavioural theory of the firm

In micro-economics the firm is viewed holistically (as a dot with agency); in behavioral theory it is seen as the locus of the coalition of the (groups of) participants of the firm. The starting point is not full but bounded rationality: cognitive and informational limits to rationality exist. Decision processes in the firm are described in steps: 1) defining the goals of it2, 2) how it forms expectations on which the decision processes are based, 3) describing the process of organizational choice.

Each participant receives inducements from and makes contributions to the organization. These can have a wide definition: they form a vector of inducements and contributions. What sets behavioral economics apart from standard micro-economics is that participants are not fully capable of knowing every alternative; they are limited by the information they have. For some elements of the vector (for employees, for instance, elements other than pay) these are even harder to know; and so on for all participants of the coalition. In standard micro-economics the management is hired by the shareholders and works for them alone. In behavioral economics, management represents the interests of all stakeholders. The competitive environment as per micro-economics is a given; behavioral economics focuses on the decision-making processes in the firm.

Step 1, organizational goals: in standard micro-economics (SME) one goal is assumed, profit maximization. In behavioral economics it is assumed that every participant has her own goals, which do not necessarily coincide. The composition and the overall goals of the coalition (the firm) are arrived at via bargaining: the more unique a participant’s expected contribution, the better her bargaining position. Each participant demands that the goals reach some individual level of aspiration; if that hurdle is not reached she will leave the coalition. Theoretically, in the long term there would be no difference between the levels of achievement in the firm, the levels of achievement of other firms and the levels of aspiration of the participants in these respects. The difference between total resources and the total payments required to preserve the coalition is the ‘organizational slack’. So in the long run there would be no organizational slack. However, the markets for the various contributions are not perfect, because information about them is difficult to obtain and levels of aspiration change only slowly. In behavioral theory it is assumed that operational subgoals are specified per managerial area; it is however often impossible to define operational goals per area. And so aspiration levels are identified taking into account the effects of the conflicts between areas, and the conflict is thereby quasi-resolved rather than resolved completely.

Step 2, organizational expectations: SME assumes information symmetry; in behavioral firm theory this is not the case. The production manager needs the sales manager to make a forecast. Forming expectations means inferring a prediction from available information. Members have different information and different inference rules.

Step 3, organizational choice: SME assumes that the behavior of firms is adequately described as maximizing behavior: all alternatives are known and they can be compared so as to maximize the objective. Behavioral theory rejects these assumptions: decisions have to be made under limitations. Firms make decisions on a proposal without knowing what alternatives will turn up the next day. SME assumes that firms search until the marginal cost of additional searching equals the marginal revenue of additional searching. In reality this is impractical, and other firms would take advantage of it because they decide quicker. In behavioral theory, alternatives are roughly evaluated one at a time, based on available information, and weighed against some aspired level, instead of maximizing an (assumed) objective function. This process is better described (than as maximizing) as satisficing: searching for an alternative that satisfies the levels of aspiration and is therefore acceptable. This process is closer to reality because alternatives often present themselves one at a time (is that so?). Also it is quite implausible that the consequences of each alternative can be calculated, because people cannot handle all the relevant information: their rationality is bounded. They intend to be rational but only manage to a limited extent. The final argument why firms satisfice rather than maximize is that each stakeholder has her own objectives, and if a firm has no single objective function, how can it maximize? Alternatives are evaluated against an aspiration level of each stakeholder and, if they meet those levels, they are accepted.
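
A sketch of satisficing as opposed to maximizing (the alternatives and aspiration levels below are made up for illustration): alternatives arrive one at a time and the first one that meets every aspiration level is accepted, without ever enumerating and ranking the full set.

```python
# Satisficing: accept the first alternative that meets every aspiration level.
# Aspiration levels and alternatives are illustrative assumptions.
aspirations = {"profit": 10, "employment": 100, "quality": 7}

def satisfices(alternative):
    return all(alternative[k] >= aspirations[k] for k in aspirations)

def satisficing_search(stream_of_alternatives):
    """Alternatives present themselves one at a time; stop at the first acceptable one."""
    for alt in stream_of_alternatives:
        if satisfices(alt):
            return alt
    return None  # keep searching, or lower the aspiration levels

alternatives = [
    {"profit": 15, "employment": 80,  "quality": 8},  # fails the employment aspiration
    {"profit": 11, "employment": 120, "quality": 7},  # first satisfactory one: accepted
    {"profit": 25, "employment": 150, "quality": 9},  # never even evaluated
]
print(satisficing_search(iter(alternatives)))
```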

Even intended rationality is rather generous when it concerns people. Kahneman and Tversky concluded that people are biased and use simple rules of thumb to decide.

Chapter 7: Agency Theory

This theory stems from the separation of ownership and control and discusses the relation between two entities, the principal and the agent, who makes decisions on behalf of (or that affect) the principal (e.g. manager – shareholder). Branches of the theory are: positive agency theory (the firm is a nexus of contracts), which attempts to explain why organizations are as they are, and principal-agent theory (how does the principal design the agent’s reward structure).

There is a stock market for corporate shares and a market for corporate control, i.e. for entire companies. Here competition between management teams increases the pressure on management performance. There is also a market for managerial labour: managing a large firm is typically more prestigious than managing a smaller one. There is also a market for the firm’s products: the more competition in those markets, the less opportunity for the manager to wing it. Lastly, the pay package of the manager usually includes a profit- or stock-related bonus that brings the manager’s interests more in line with the shareholders’.

Managerial behavior and ownership structure.

Monitoring and bonding.

Entrepreneurial firms (owned and managed by the same person) and team production. The entrepreneur monitors and controls the work of others and gets paid after all the contracts have been fulfilled. If a freelancer working alone puts in n extra effort, she enjoys m extra utility; if she puts in the same extra effort n in a team, she enjoys only 1/m additional utility. This results in shirking: when in a team, people tend to put in much less effort than when they work alone. Everyone is willing to put in more effort if the others do so as well. If this can be monitored by the other members of the team, then a solution can be for all to agree not to shirk and to punish someone who does. Otherwise it is unobservable: an informational problem [Minkler 2004]. If shirking can be detected by an independent monitor (and not, or only with difficulty, by the other team members), then a monitor who is paid a fixed wage is incentivized to shirk as well. If the monitor has a right to the residual after the contracted costs are fulfilled, then she has no incentive to shirk. If the monitor is to be effective then she must be able to make changes to the team (revise contracts, hire and fire, change individual payments) without the consent of all the other members, and be able to sell her right to be the monitor (to justify actions whose effect is delayed in time). The monitor in this sense is the entrepreneur, and the firm is an entrepreneurial firm. This theory assumes the existence of team production and that monitoring reduces the amount of shirking. The latter implies that this is useful only if it is more cumbersome for the members to monitor themselves and each other than for an outsider to do it; only in that case is this model viable.
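
A stylized numerical sketch of the shirking incentive (my own numbers and functional forms, not the book's): the worker bears the full cost of her extra effort but, when output is shared equally in a team of k members, she receives only 1/k of the extra output, so her chosen effort falls as the team grows.

```python
# Illustrative assumption: extra effort e costs the worker e**2 / 2 and produces
# output worth 10 * e; working alone she keeps all of it, in a team of k she keeps 1/k.
def chosen_effort(team_size, output_per_effort=10.0):
    # maximize (output_per_effort / team_size) * e - e**2 / 2  =>  e* = output_per_effort / team_size
    return output_per_effort / team_size

for k in (1, 2, 5, 10):
    print(f"team of {k:2d}: chosen extra effort = {chosen_effort(k):.1f}")
# Effort falls from 10.0 when working alone to 1.0 in a team of ten: shirking,
# unless the team can monitor and sanction, or a residual-claimant monitor is installed.
```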

In these two ways, 1) on-the-job consumption and 2) shirking by managers are restricted.

The firm as a nexus of contracts: if 1) and 2) are restricted in this way, how then to explain the existence of large corporations that are not (or only to a limited extent) owned by their managers? Shareholders in this sense have merely contracted to receive the residual funds: they are security owners. Shareholders are just one party bound by a contract to the firm, like many others with their specific individual contracts.

[Fama and Jensen 1983a, b] explain both entrepreneurial firms and corporations with this ‘nexus of contracts’ model. ‘They see the organization as a nexus of contracts, written and unwritten, between owners of factors of production and customers‘ [Schreuder and Douma 2013 p. 151].

The residual payment is the difference between the stochastic cash inflow and the contracted cash outflow, which usually consists of fixed amounts. The residual risk is the risk of this difference, borne by the residual claimants or risk bearers. The most important contracts determine the nature of the residual payments and the structuring of the steps in the decision process of the agents: initiation (decision management), ratification (decision control), implementation (decision management) and monitoring (decision control) of proposals. Fama and Jensen distinguish between non-complex and complex organizations: non-complex organizations are those where decisions are concentrated in one or a few agents, complex organizations those where they are spread over more than a few (small and large organizations respectively). If a small firm is acquired by a larger one, then decision control transfers from the management of the smaller firm to the larger, while decision management stays with the management of the smaller firm. As the management of the smaller firm is no longer the ultimate risk bearer nor the receiver of the residual payments, this confirms the theory.

Theory of principal and agent

In this theory, risk and private information are introduced into the relation between agent and principal. Conditions concerning these issues in the previous versions of agency theory are relaxed here. If the performance of the firm depends on the weather (random) and on the effort of the agent, then three situations can be distinguished: 1) the principal has information about the agent’s effort, 2) the principal has no information about the agent’s effort, 3) the principal has no direct information about the agent’s effort but observes other signals of it.

These models are single-period and single-relation and therefore not realistic, because agents are usually employed for more than one period. Also, if more than one agent is employed, this often happens in circumstances that are not exactly the same, and therefore the relations differ. Monitoring is costly, so the question remains how, and how much, to monitor. The model is based on monetary criteria only, and that is not realistic.

Chapter 8: Transaction Cost Economics

The fundamental unit of analysis is a transaction. Whether a transaction is allocated to a market or a firm is a cost-minimization issue. Schreuder and Douma argue that to assume that costs in a firm are lower than costs outside of it is a tautology, because: ‘If there is a firm then, apparently, the costs of internal coordination are lower than the cost of market transactions‘ [Douma and Schreuder 2013 p. 167]. But boundaries can emerge for other reasons than costs alone and, contrary to what they claim, this can be empirically tested in a ‘make or buy’ comparison. Transaction cost economics as per Williamson is based on bounded rationality and on opportunism. Bounded rationality means that the capacity of humans to formulate and solve problems is limited: behavior is ‘intendedly rational, but only limitedly so‘ [Simon, H.A. . Administrative Behavior (2nd edition) . New York . MacMillan . 1961 and Organizations and Markets . Journal of Economic Perspectives / vol. 5 (2) pp 25-44 . 1991]. Bounded rationality poses problems when the environment is uncertain or complex. Opportunism is defined as ‘self-interest seeking with guile’ and as making ‘self-disbelieved statements’. Opportunistic behavior means trying to exploit a situation to your own advantage, something done in some cases by some people; it is difficult and costly to find out ex ante who will do this and in which cases. Opportunistic behavior can occur ex ante (not telling the buyer of a defect prior to the transaction) and ex post (backing out of a purchase). This problem can occur when the number of trading parties is small, or when the numbers are large but reputations are unimportant or information about reputations is unavailable.

Whether a transaction is governed by the market or by an organization (the mode) is determined by the sum of the production costs and the transaction costs, and by the atmosphere. The atmosphere is the local environment in which the transaction takes place, which itself provides satisfaction (for example working as a freelancer versus being an employee of some organization). This acknowledges the fact that economic exchange is embedded in an environmental and institutional context with formal and informal ‘rules of the game’ (as per chapter 1); ‘the informal rules of the game are norms of behaviour, conventions and internally imposed rules of conduct, such as those of a company culture. This can be related to the informal organization. .. he acknowledges the importance of such informal rules, but admits that both the concepts of informal organization and the economics of atmosphere remain relatively underdeveloped’ [Williamson 1998, 2007 in Douma and Schreuder 2013 p. 174].

The fundamental transformation means that lock-in occurs after a supplier has fulfilled a contract for some time and has learned how to manufacture efficiently. This lock-in is effectively a monopoly, even in a situation that started out with many suppliers.

Critical dimensions of a transaction: 1) asset specificity (an asset required for one transaction only), resulting in the availability of a quasi-rent (everything above the variable cost) that the buyer will want to appropriate; solutions are a merger or a long-term contract that includes inspection of the buyer’s business by the seller; 2) uncertainty / complexity; 3) frequency. If 1), 2) and 3) are high then the transaction is likely to be executed within an organization in the long run. If the costs of transacting under the different modes differ, then the more efficient mode will prevail. This leads to competition between organizational forms, and the one that turns out to be most efficient prevails in the long term.

A peer group is a group of people working together without hierarchy. The coordinating mechanism is mutual adjustment. Advantages are: 1) economies of scale regarding specific assets, 2) risk-bearing advantages, 3) associational gains (atmospheric elements like higher effort, inspiration, quality). The disadvantage is shirking, and so even in peer groups some form of hierarchy emerges (senior partners).

A simple hierarchy is a group of workers with a boss. The advantages are: 1) team production (monitoring according to Alchian and Demsetz (1972), separation of technical areas according to Williamson (1975); this is rare); 2) economies of communication and of decision making (in a simple hierarchy the number of connections is n-1, in a peer group it is n(n-1)/2: the cost of communicating is much higher in a peer group, and as a consequence decision making takes less effort and less cost in a hierarchy); 3) monitoring (to prevent shirking in a peer group).
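
A quick check of the communication arithmetic (the formulas are simply the ones mentioned above): with one boss, a hierarchy of n members needs n-1 communication links, while a peer group in which everyone talks to everyone needs n(n-1)/2.

```python
def links_hierarchy(n):
    return n - 1             # every worker communicates only with the boss

def links_peer_group(n):
    return n * (n - 1) // 2  # every member communicates with every other member

for n in (5, 10, 20):
    print(n, links_hierarchy(n), links_peer_group(n))
# 5   4  10
# 10  9  45
# 20 19 190  -> communication (and hence decision) costs grow much faster in a peer group
```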

Multistage hierarchies: U-form enterprises are functional hierarchies. They suffer from cumulative control loss and corruption of the strategic decision-making process. M-form enterprises are a solution to those problems: divided at the top level into several semi-autonomous operating divisions along product lines. Top management is assisted by a general office (corporate staff). Advantages: 1) responsibility is assigned to division management cum staff, 2) the corporate staff have an auditing and advisory role so as to increase control, 3) the general office is concerned with strategic decisions including staffing, 4) the separation of the general office from operations allows its executives not to absorb themselves in operational detail. A third form is the H-form, a holding with divisions, in which the general office is reduced to a shareholder representative.

Concerning coordination mechanisms other than markets and organisations: markets coordinate via price mechanisms, organizations via the 6 mechanisms defined earlier. Namely: mutual adjustment, direct supervision, standardization of work process, standardization of output, standardization of skills, standardization of norms. Often the organizational form is a hybrid of some of the ‘pure’ configurations. In addition the markets are usually to some extent organized and organizations can have markets of all kinds inside of them.

Williamson’s transaction cost economics is also called the markets and hierarchies paradigm: markets are replaced with organizations when price coordination breaks down3. Comments on the paradigm are that: 1) people are not that opportunistic: they can and do trust each other, 2) markets and organizations are not mutually exclusive coordination mechanisms but should be viewed as a continuum.

Ouchi introduced clans as an intermediate form between markets and organizations: markets, bureaucracies (later hierarchies) and clans [Ouchi 1980, Ouchi and Williamson 1981]. Clans are a third way of coordinating economic transactions. The term bureaucracies (rather than hierarchies) was the standard in organizational sociology [Max Weber 1925, translation by A.M. Henderson and T. Parsons . The Theory of Social and Economic Organization . New York: Free Press . 1947]: personal authority is replaced with organizational authority. Modern organizations now had the legitimacy to substitute organizational rules for personal rules, described by Weber as bureaucracies. Ouchi argues that in those bureaucracies prices are replaced with rules, and that the rules contain the information required for coordination. The essence of this type of coordination is therefore not its hierarchic but its bureaucratic nature.

The third way of coordinating transactions is a clan. The clan relies on the socialization of individuals, ensuring they have common values and beliefs: individuals who have been socialized in the same way have common norms for behavior. These norms can also contain the information necessary for transactions. This is clarified by the example of Japanese firms, where workers are socialized so as to adopt the company goals as their own and are compensated on non-performance criteria such as length of service. Their natural inclination, as a result of socialization, is to do what is best for the firm. Douma and Schreuder argue that Ouchi’s emphasis on rules does not cover the entire richness of observed organizations and is subsumed by Mintzberg’s six-part typology.

The role of trust: Williamson’s position is that you cannot know ex ante whom to trust, because some people cheat some of the time. If you like your business partner and you know that she trusts you, you are less likely to cheat on her, even if that would result in some gain: trust is an important issue. If the trust is mutual you can develop a long-term business relationship. Trust is important between and within organizations. If, in general, people are treated in good faith then they are more likely to act in good faith also. But, as Williamson argues, you cannot always be sure ex ante about a stranger, and you may need to prepare for an interaction accordingly.

Chapter 9: Economic Contributions to Business/Competitive Strategy

Economic contributions to strategy planning and management are mainly related to content, not process: the focus is on the information that firms need to make their choices.

Move and counter-move: in 5.3 commitment was introduced as a way to change the pay-offs in a game setting. The example concerned the investment in a network by National, the existing cellphone provider. ‘Commitments are essential to management. They are the means by which a company secures the resources necessary for its survival. Investors, customers and employees would likely shun any company the management of which refused to commit publicly to a strategy and back its intentions with investment. Commitments are more than just necessities, however. Used wisely (?), they can be powerful tools that help a company to beat the competition. Pre-emptive investments in production capacity or brand recognition can deter potential rivals from entering a market, while heavy investments in durable, specialized and illiquid resources can be difficult for other companies to replicate quickly. Sometimes, just the signal sent by a major commitment can freeze competitors in their tracks. When Microsoft announces a coming product launch, for instance, would-be rivals rethink their plans‘ [Sull, D.N. . Managing by Commitments . Harvard Business Review, June 2003 pp. 82-91 in Douma and Schreuder 2013 pp. 223-4].

Memeplex > Belief + Environment > Predicting* / Planning* > Committing* > Execution = Acting as Planned, * means anticipating the future. Compare to: ‘Each single business firm and each business unit in a multibusiness firm needs to have a competitive strategy that specifies how that business intends to compete in its given industry‘ [Douma and Schreuder 2013 p. 228].

Chapter 10: Economic Contributions to Corporate Strategy

In a multibusiness firm some transactions are taken out of the market and internalized within the firm: the capital market, the management market, the market for advice. Also some transactions between the individual businesses are taken out of the market and internalized, such as components and know-how. The question is whether this approach is more efficient than the pure market approach, namely whether value is created or destroyed. The parenting advantage poses two questions: 1) does corporate HQ add value? Yes, if it is cheaper than the market. 2) Could another HQ add more value to one of the business units? The parenting advantage holds only if no other parent could add more value to the BU. This is related to the market for corporate control discussed earlier.

Value-adding activities of HQ are: 1) attract capital and allocate it to business units, 2) appoint, evaluate and reward business unit managers, 3) offer advice, 4) provide functions and services, 5) portfolio management by making adjustments to the business units.

In a mature market economy it is harder for an organization to surpass the coordinating capacity of the market. In a less developed economy this threshold is easier to meet, and organizational coordination is more favourable relative to market coordination. Organizational relatedness of business units A and B sharing the same HQ can take different shapes: 1) vertical integration (A supplying B), 2) horizontal relatedness (A and B are in the same industry), 3) related diversification (A and B share the same technology or the same type of customer), 4) unrelated diversification (A and B share nothing). Portfolio management means management of the set of business units.

Chapter 11: Evolutionary Approaches to Organizations

The perspective is on the development of organizational forms over time: from static to dynamic. The analysis is about populations of organizational forms, not the individual organization but the ‘species’. Organizations are human constructs: ‘.. organizations can lead a life of their own, to continue the biological analogy – but the element of purposive human behaviour and rational construction is always there‘ [Scott, W.R. . Organizations: Rational, Natural and Open Systems (5th edition) . Englewood Cliffs . NJ: Prentice Hall . 2003]. Thus the creationist view is likely to have more implications for the organizational view than for the biological view. The meaning of the term construct goes beyond the design of something, and includes a product of human mental activity. It might be said that organizations are more constructionist / constructional than giraffes. ‘Organizations are much less ‘out there’: we have first to construct them in our minds before we find them. This delicate philosophical point has important consequences. One of those consequences is that it is harder to agree on the delineation of organizations than of biological species. Another consequence is that it is much less clear what exactly is being ‘selected’, ‘reproduced’ in the next generations and so on‘ [Schreuder and Douma 2013 p. 261].

Similarities between the organizational and the biological view follow from the assumptions that 1) organizations have environments and 2) environments play a role in the explanation of the development of organizational forms. As a result, the development of organizational forms rather than of individual organizations can be studied, and additionally the concept of environment is broadened to anything that allows for selective processes. As a reminder: selection of certain forms of organization now replaces adaptation of individual firms to their environment. ‘So, there is no question that selection, birth and death, replacement and other such phenomena are important objects of organizational study as well‘ [Douma and Schreuder 2013 p. 262].

Ecologists study the behavior of populations of beings: what is the definition of a population in organizational science, and what is the procedure for distinguishing one population of organizational forms from another? Organizational ecology distinguishes three levels of complexity: 1) demography of organizations (changes in populations of organizations, such as mortality), 2) population ecology (the links between the vital rates of different populations of organizations), 3) community ecology of organizations (how the links within and between populations affect the chances of persistence of the community (= population of firms or society?) as a whole). 1) has received the most attention, 2) and 3) not so much.

A biological species is defined by interbreeding: its genotype, the gene pool. According to Douma and Schreuder there is no equivalent for organizations. This can be solved using the concept of memes, identifying the general rules that are adopted by participants in this kind of organization, DPB.

An organizational form is defined as the core properties that make a set of organizations ecologically similar. An organizational population is a set of organizations with some specific organizational form [Carroll and Hannan 1995 in Douma and Schreuder 2013 p. 264]. An assumption is the relative inertia of organizations: they are slow to respond to changes in their environment and they are hard-pressed to implement radical change should this be required. As a consequence organizations are inert relative to their environments. This sets the ecological view apart from many others, as the latter focus on adaptability. In other approaches efficiency selects the most efficient organizations. The Carroll and Hannan approach of organizational ecology is that organizations have other competences: 1) reliability (compared to ad-hoc groups), 2) routines can be maintained in organizations but not in ad-hoc groups, 3) organizations can be held accountable more easily, 4) organizational structures are reproducible (procedures must stay in place). Selection pressures will favor those criteria in organizations and so they will remain relatively inert: inertia is a result of selection, not a precondition.

What determines the size of a population, namely how many organizations of some type do we expect to find in a population? 1) What is its niche? 2) What is the carrying capacity? Whether an actual organization survives is determined by 1) competition with other organizations in their niche, 2) legitimation, defined as the extent to which an organizational form is accepted socially (D & S are confusing the organizational form and the actual organization here). If they perform consistently and satisfactorily, they survive.
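
The interplay of legitimation and competition is usually formalized in organizational ecology as density dependence; the sketch below is my own and the functional forms are illustrative assumptions only: legitimation rises with density at a decreasing rate, competition rises at an increasing rate, so the net founding rate first rises and then falls as the population approaches its carrying capacity.

```python
import math

# Illustrative density-dependence sketch: relative founding rate = legitimation / competition.
def legitimation(n):
    return math.sqrt(n + 1)           # concave: each additional organization adds less legitimacy

def competition(n):
    return math.exp(0.0005 * n ** 2)  # convex: crowding intensifies quickly at high density

def founding_rate(n):
    return legitimation(n) / competition(n)

for n in (0, 10, 25, 50, 75, 100):
    print(f"density {n:3d}: relative founding rate = {founding_rate(n):.3f}")
# The rate peaks at an intermediate density and collapses at high density:
# the qualitative pattern behind niche, carrying capacity and selection pressure.
```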

[Nelson, R. and Winter, S. . An Evolutionary Theory of Economic Change . 1982] Their view concerns the routine behavior of firms and the development of economic systems. Firms are better at self-maintenance than at change if the environment is constant, and if change is required they are better at ‘more of the same’ than at other kinds of change. They characterize the functioning of organizations by: 1) routines that are learned by doing, 2) routines that are largely tacit knowledge (viz. Polanyi 1962). Organizational routines are the equivalent of personal skills: they are automatic behavior programmes. ‘In executing those automatic behavior programmes, choice is suppressed‘ [Douma and Schreuder 2013 p. 272]. Routines are 1) ubiquitous in organizations, 2) the organizational memories, and 3) they serve as an organizational truce, meaning that satisficing takes the place of maximizing in the classical sense. ‘The result may be that the routines of the organization as a whole are confined to extremely narrow channels by the dikes of vested interest … fear of breaking the truce is, in general, a powerful force tending to hold organizations on the path of relatively inflexible routine‘ [Nelson and Winter 1982 pp. 111-2 in Douma and Schreuder p. 272].

Three classes of routines: 1) operating characteristics, given the firm’s short-term production factors, 2) patterns in the period-by-period changes in production factors, 3) routines that modify the firm’s operating characteristics over time. And so routine-changing processes are themselves guided by routines. And so, just as in the biological sphere, the routine make-up of firms determines the outcomes of their organizational search. (The pivot of this categorization is the presence of production factors in the firm and how that changes over time; my starting point, via Rodin, was the presence of ideas that might or might not lead to the buying or making of production factors or any other method, contract, agreement, innovation or mores, DPB). Whatever change happens, it is expected to remain as close as possible to the existing situation, minimizing damage to the organizational truce.

‘He (Nelson) went on to point out that there are three different if strongly related features of a firm..: its strategy, its structure, and its core capabilities. .. Some of the strategy may be formalized and written down, but some may also reside in the organizational culture and the management repertoire. .. Structure involves the way a firm is organized and governed and the way decisions are actually made and carried out. Thus, the organization’s structure largely determines what it does, given the broad strategy. Strategy and structure call forth and mould organizational capabilities, but what an organization can do well also has something of a life of its own‘ (its core capabilities, DPB).

Nelson and Winter classify themselves as Lamarckian, while Hannan and Freeman classify themselves as Darwinian [Douma and Schreuder 2013 p. 275]. In my opinion this classification is trivial, as memetic information can recombine so as to introduce new ‘designs’ in a Darwinian sense, or, starting from the environment, new requirements can be introduced that the organization must deal with and in the end internalize in its rules, DPB.

Hannan and Freeman conclude that organizational change is random, because 1) organizations cannot predict the future very well, 2) the effects of organizational change are uncertain. Nelson and Winter conclude that some elbow room exists (namely learning, imitation and conscious adaptation), but that changes are constrained by the routines that exist at some point. From a practical point of view organizations are less adaptable than might be expected.

Differences between the ecological and the evolutionary approach: 1) in the ecological approach the organizational form is selected, in the evolutionary approach the routines are selected, 2) the ecological approach observes the organization as an empty box in an environment, whereas the evolutionary approach introduces behavioral elements and so the inside of the firm is addressed as well.

Chapter 12: All in the Family

The model encompasses a family of economic approaches. The chapter is about their similarities and differences.

Information is pivotal in the model, determining which coordination mechanism prevails. Environmental and selection pressures act on both markets and organizations. In this context the pressure on organizations results in the population power law, and the pressure on the stock exchange results in the power law (or exponential?) for the distribution of the listed firms on the grid.

Commonalities in the family of models: 1) comparison between markets and organizations, 2) efficiency guides towards an optimal allocation of scarce resources and therefore towards the selection of either markets or organizations as the coordinating mechanism, 3) information is stored in routines, rules and arrangements.

Differences in the family of models along the traditional process/content dichotomy: content theories deal with the content of strategies, process theories with how strategies come into being. Similarly, approaches to organizations can be distinguished as process (what are the processes, regardless of the outcomes) and content (what is the outcome, regardless of the process leading up to it). Ordered from process to increasingly content-oriented: behavioral theory – organizational ecology – evolutionary theory – dynamic capabilities – RBV – strategy – transaction cost economics – positive agency theory – principal-agent theory.

Evolutionary theory is classified as a process based theory with increasingly more capabilities to generate outcomes.

Static and dynamic approaches: it turns out that on a content-process and static-dynamic grid, the middle sections are empty: there is no theory that addresses both dynamism and content generation simultaneously. See figure 12.3, p. 302.

Level of analysis ascending from micro to macro: dyad of individual persons – small group with a common interest or purpose – intergroup of groups with different interests or purposes – organization as a nexus of contracts, a coalition, an administrative unit – organizational dyad as a pair of interacting organizations – population of organizations as all organizations of a specific type – system as the entire set of all organizational populations. See figure 12.4, p. 304.

The extension of evolutionary theory with dynamic capabilities has provided a bridge to Resource Based View strategy theories, and it implies that evolutionary theories can now allow for more purposeful adaptation than before. In addition, the managerial task is recognized in the sense of building, maintaining and modifying the resource and capability base of organizations.

Lastly: 1) at all levels of analysis (dyads to systems) economic aspects are involved, 2) the approaches address different problems because they view a different level and use different time frames, 3) even at the same level of analysis different theories see different problems (different lenses, etc.).

Paragraph about complex adaptive systems.

Chapter 13: Mergers and Acquisitions

The significance of M&A: 1) globalization, 2) strong cash flow after the 2001-2003 slump, 3) cheap financing facilitating PE, 4) shareholder activism and hedge funds. Success and failure: target firms’ shareholders gain 20+% while bidding firms’ shareholders break even. If this is due to more efficient management by the bidder then the market for corporate control is indeed efficient; alternatively, the market may be elated when the deal is announced but disappointed after the deal is closed. Using event analysis (change in stock price around the takeover), the net overall gain seems to be positive: M&A in that view is apparently a worthwhile activity, as it creates value for the shareholder. Outcome studies (comparison of the performance of merged or taken-over firms against competitors) show that, compared to a non-merging control group, the associated firms come out stronger after the event in 11% of the transactions and weaker in 58%. This is consistent with event studies in the long term. Details: 1) combined sales equal or lower, in spite of the tendency of consumer prices to rise, 2) investments equal, 3) combined R&D lowered, 4) assets restructured, 5) lay-offs unclear, 6) management turnover in about half the cases. Serial acquirers seem to be more successful than occasional acquirers.

Focus-increasing acquisitions tend to show the best results, diversifying acquisitions the worst. The best approximation of the success and failure rate of any acquisition in general is about 50/50. Target shareholders do best, buyers’ shareholders break even. Management encounters changes.

Strategy, acquisitions and hidden information: buyers and sellers suffer from hidden information (risk of buying a lemon).

Auctions: the vast majority of M&A take place via an auction. Description of the process.

The winner’s curse and hubris: a majority of the M&A’s destroy shareholder wealth.

Adverse selection. Moral hazard.

Chapter 14: Hybrid forms

This is a form of coordination in between market and organization. Examples: franchise, joint venture, purchase organization, long-term buyer-supplier relation, business groups (some tie of ownership, management, financing etc), informal networks.

The basic thought is that if asset specificity rises, transaction costs rise more rapidly in a market configuration than in an organization, and in a hybrid form they are in between. As an illustration: if asset specificity is very low then the market can coordinate the transaction, if it is medium-specific then a hybrid can coordinate it, otherwise an organization has to coordinate it.
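
A sketch of that comparative-cost logic with assumed linear cost curves (intercepts and slopes are my own, chosen only to reproduce the ordering described above): organizations carry higher fixed bureaucratic costs but their transaction costs rise more slowly with asset specificity, so the cheapest mode switches from market to hybrid to organization as specificity increases.

```python
# Illustrative transaction-cost curves as a function of asset specificity k.
def cost_market(k):       return 0.0 + 3.0 * k
def cost_hybrid(k):       return 2.0 + 1.5 * k
def cost_organization(k): return 4.0 + 0.5 * k

def cheapest_mode(k):
    modes = {"market": cost_market(k), "hybrid": cost_hybrid(k),
             "organization": cost_organization(k)}
    return min(modes, key=modes.get)

for k in (0.0, 1.0, 1.7, 3.0):
    print(f"asset specificity {k:3.1f} -> {cheapest_mode(k)}")
# low specificity -> market, medium -> hybrid, high -> organization
```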

Tunnelling is the transfer of value out of a firm to its controlling owners, for instance through artificial invoicing. Propping is propping up underperforming or struggling firms to the benefit of the controlling owners.

Chapter 15: Corporate Governance

This is the system by which business firms are directed and controlled via rules, responsibilities for decisions and their procedures. It also involves the way the company objectives are set, the means of attaining them and the monitoring of them. The focus here is on the relation between the shareholders and the management. Problems can arise from a lack of alignment and because of information asymmetry between them. This may arise because shareholders expect the management to maximize shareholder value, while the management seeks to maximize its own utility function. Problems: 1) the free cash flow issue in mature markets, and hubris, 2) differences in attitude towards risk: shareholders invest some portion in each firm to spread risk, while a CEO invests all her time in the firm, so the shareholder expects risk to be taken while the CEO tends to be more risk averse, 3) different time horizons: shareholders are entitled forever, CEOs are contracted for a limited period only, 4) the issue of on-the-job consumption by management. Any program in this area should focus on reducing the information gap and aligning the diverging interests: the size of the agency problem can be reduced by organizational solutions and market solutions.

[Paul Frentrop 2003] shows that the main reasons for improvement of corporate governance regulations were stock market crashes and scandals, such as the South Sea Bubble in the UK in 1720 and the 1873 Panic in the USA.

The evolution of different corporate governance systems in the world: 1) social and cultural values: in Anglo-Saxon countries individual interests prevail over collective interests in the social and political realm, and this may explain why markets play a relatively large role, 2) whether the concept of a corporation is viewed from a shareholder perspective or from a stakeholder perspective, 3) the existence of large blockholdings in companies by institutional investors (yes in Germany and Japan, no in the US) implies a difference in corporate governance, 4) the institutional arrangements have been developed over time and incorporate the lessons of the past; in that sense the countries’ policies are path-dependent. Do these differences between countries’ corporate governance regulations increase over time or do they converge? Convergence may occur because of: 1) cross-border mergers, 2) international standardization of disclosure requirements, 3) harmonization of securities regulations and mergers of stock exchanges, 4) development of corporate governance codes (best practices) incorporating those of other countries.

1If private ownership is combined with market allocation the system is called “market capitalism”, and economies that combine private ownership with economic planning are labelled “command capitalism” or dirigisme. Systems that mix public or cooperative ownership of the means of production with economic planning are called “socialist planned economies”, and systems that combine public or cooperative ownership with markets are called “market socialism”.

2In Schreuder and Douma ‘it’ is replaced with the organization.

3In this sense Williamson’s ideas are descended from Coase’s, who argued that organizations are primarily characterized by authority (here: direct supervision).