Coordination of Economic Decisions

Douma, S. and Schreuder, H. (2013). Economic Approaches to Organizations. United Kingdom: Pearson. ISBN 978-0-273-73529-8

The subject of this book is the coordination of economic decisions. The (categories of) mechanisms for that job are markets and organizations. A special class of organizations is of course the firm. This summary of the above book is therefore included to connect a new theory of the firm under construction with existing economic theories.

Chapter 1: Markets and Organizations

Economic systems can be segmented by their property rights regime for the means of production and by their dominant resource allocation mechanism. The coordination problem is the question of how information is obtained and used in economic decision-making, namely in decisions where demand and supply meet. The book contributes to answering the coordination problem in economics: why are economic decisions coordinated by markets and by organizations, and why do these systems co-exist for that job?

An economic problem is any situation where needs are not met as a result of scarcity of resources. Knowing this, what is the optimal allocation of the available resources over their alternative uses? If resources are allocated optimally, they are used efficiently.

Economic approaches to organizations can be fruitful if the allocation of scarce resources is taken into account. To this end, consider this conceptual framework (figure 1.1): division of labour (1) >> specialization (2) >> coordination (3) >> markets (4) AND organizations (5) << information (6) << pressure from environment and selection (7)

1) division of labour as per Adam Smith: splitting of composite tasks into their components leads to increased productivity (this is taken as a fact of life in our kind of (western) society), because:

2) specialisation (Adam Smith: greater dexterity, saving of the time needed to switch between jobs, tools) makes it possible to do the same work with less labour: economies of specialisation. This higher performance comes at the cost of getting acquainted with a new task. Higher performance but less choice: a trade-off between the satisfaction of higher performance and lower satisfaction because of limited choice and boredom

3) coordination: hardly anyone is self-reliant, and exchange must take place between specialists to obtain the products they need but do not make themselves. The right to use them is transferred: a transaction takes place. This needs to be reciprocal. Specialisation leads to a need for coordination, namely the allocation of scarce resources. There are 2 types of coordination: transactions across markets or within organizations.

4) and 5) markets and organizations: take for example the stock market: no individual seeks out another to discuss allocation; instead the price system is the coordinating device taking care of allocation. The price is a sufficient statistic (Hayek 1945) for the transaction. Optimal allocation occurs when prices meet at their equilibrium without the parties needing to meet or to exchange more information than the price alone. Why is not all exchange via markets? Because if a worker goes from dept x to dept y, the reason is not a change of relative prices but that he is told to do it (Coase 1937). A firm is essentially a device for creating long-term contracts when short-term contracts are too bothersome. Firms do not continue to grow forever, because as they grow they tend to accumulate transaction costs of their own; at some point the transaction costs of the firm will offset those of the market. Transactions will shift between markets and organizations as a function of the transaction costs involved in either alternative. Williamson (1975) has expounded this element, to be addressed in Ch. 8, to include the marginal costs of either alternative. The balance between markets and hierarchies is constantly ‘sought after’, and when it is struck the entrepreneur may decide to change her transaction costs by forming firms or increasing their size up to the point where those transaction costs become too high. Ideal markets are characterized by ‘their’ prices being sufficient statistics for individual economic decision-making. Ideal organizations are characterized by transactions that are not based on prices to communicate information between parties. Many transactions in reality are governed by hybrid forms of coordination.

6) Information: which form of coordination emerges is a result of the information requirements in that specific situation. And so information is the crucial element in the model, producing the coordination mechanism. There are many situations where the price alone cannot provide sufficient information to effect a transaction – up to the point where the price alone is entirely incapable of effecting it. Organization thus arises as a solution to information problems.

7) the environment and institutions: the setting in which the trade-offs between market and organization take place; it is economic, political, social, cultural, institutional, etc. in nature. The environment provides the conditions for the creation of both, shapes both and selects both. Institutions are the rules that shape human interaction in a society (a subset of MEMES with a regulatory character, or just the entirety of the memes, or the memes that are motivators); they are an important element in the environment of organizations and markets. Douglass North (1990, 2005b?). ‘In the absence of the essential safeguards, impersonal exchange does not exist, except in cases where strong ethnic or religious ties make reputation a viable underpinning‘ [Douglass North 2005b p. 27 in Schreuder and Douma p. 18]. Not agreed: evolution of morals.

If the institutions are the rules of the game imposed by the environment, ‘the way the game is played’ is shaped by a country’s institutional framework – all institutions composing the environment of organizations and markets. These factors determine which organizations and markets are allowed and, if they are allowed, shape the way they function. These factors are dynamic.

This approach is fairly new because economists traditionally studied coordination by the market between organizations, while organizational scientists studied coordination inside organizations.

Chapter 2 Markets

Standard micro-economic theory focuses on how economic decisions are coordinated by the market mechanism. Consumers decide on how much to consume, producers decide on how much to produce, they meet on the market and there quantity and price are coordinated.

Law of demand: the lower the price the higher the demand. Law of supply: the lower the price the lower the supply. Market equilibrium occurs where demand and supply intersect.
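The laws of demand and supply can be made concrete with a small sketch. The linear curves and all coefficients below are illustrative assumptions, not figures from the book:

```python
# Sketch: market equilibrium with assumed linear demand and supply curves.
# Demand Qd = a - b*p falls with price; supply Qs = c + d*p rises with price.

def equilibrium(a, b, c, d):
    """Solve a - b*p = c + d*p for the equilibrium price and quantity."""
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

p_star, q_star = equilibrium(a=100, b=2, c=10, d=3)
print(p_star, q_star)  # → 18.0 64.0
```

At any other price either excess demand or excess supply would exist, so price and quantity are coordinated at the intersection alone.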

Theory of demand: goods are combined in baskets; each person can rank the baskets of goods by preference; the preferences are assumed to be transitive; each person prefers to have more of a certain good rather than less of it. Indifference curves represent the preferences of the person. If two baskets are at different locations on the same indifference curve (he is indifferent), then the utility of the two baskets is said to be the same (because the person’s satisfaction is the same for either). It is assumed that the consumer knows which basket she prefers, but not by how much. The budget line indicates the person’s budget: if this line is combined with the indifference curves, the maximum utility is located at the point where an indifference curve is tangent to the budget line (there can be only one such point).
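The tangency condition can be illustrated numerically. The Cobb-Douglas utility function U(x, y) = x**a * y**(1-a) and all numbers below are my own illustration, not taken from the book:

```python
# Sketch of utility maximization on a budget line, with an assumed
# Cobb-Douglas utility U(x, y) = x**a * y**(1 - a).

def optimum(a, m, px, py):
    """Utility-maximising basket on the budget line m = px*x + py*y
    (the known closed-form Cobb-Douglas demands)."""
    x = a * m / px
    y = (1 - a) * m / py
    return x, y

def mrs(a, x, y):
    """Marginal rate of substitution MUx/MUy = (a/(1-a)) * (y/x)."""
    return (a / (1 - a)) * (y / x)

x, y = optimum(a=0.5, m=100, px=2, py=5)
# At the optimum the indifference curve is tangent to the budget line,
# so the MRS equals the price ratio px/py.
print(x, y, mrs(0.5, x, y), 2 / 5)  # → 25.0 10.0 0.4 0.4
```

The slope of the indifference curve (the MRS) equalling the slope of the budget line is exactly the tangency described above.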

Theory of supply: how a supplier decides on how much to produce. The firm has an objective function describing its goals (profit, share value). The objective function must be maximized given the constraints of the firm’s production function. The production function describes the relation between the inputs of a firm and the maximum output given those inputs: Q = Q(K, L, M) is the maximum output at some given input. If K and M are given at some time, then the output increases if L increases. L cannot be increased indefinitely; either K or M will constrain a further increase of L and thus of Q. Increasing K takes the most time and can be executed in the long run only: in the short run (and so at any time) M can be changed, and in the medium and the long term L can be changed. L = variable in the short and long run, K = variable in the long run only. The production function represents all the combinations of K (long term; capital) and L (short term; labour) – isoquants – that the firm can choose from if it wants to produce quantity Qx.

Profit maximization in competitive markets: assume that a firm wants to maximise profits. Then Profit = p·Q − c·K − w·L, under the constraint of the production function Q = Q(K, L). Deciding how much to produce means choosing Q; deciding how to produce means choosing K and L. K and L are free, Q is their function. Short run: K is fixed, so only L is free to choose. Profit = p·Q(K, L) − c·K − w·L; its maximum is at dProfit/dL = p·dQ/dL − w = 0, or dQ/dL = w/p. dQ/dL is the marginal productivity of labour; it decreases with increasing use of labour (yet another unit of labour will decrease the marginal productivity of labour: dQ/dL is a decreasing function). The firm can choose how much to produce, not how to produce. Long run: dProfit/dK = p·dQ/dK − c = 0, from which follows that dQ/dK = c/p, while (see above) dQ/dL = w/p. Solving both gives optimal values for L and K, and from those follows Q. The firm chooses K so that the marginal productivity of K is c/p, while choosing L so that the marginal productivity of L is w/p. The firm can choose both how to produce and how much.
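The two first-order conditions can be checked numerically. The production function Q = K**0.3 * L**0.5 and the prices below are assumptions chosen only so that the marginal products are decreasing and an interior optimum exists:

```python
# Numeric check of the long-run conditions dQ/dL = w/p and dQ/dK = c/p,
# using an assumed production function Q = K**0.3 * L**0.5.

p, w, c = 2.0, 1.0, 1.0  # illustrative output price and factor prices

def Q(K, L):
    return K**0.3 * L**0.5

def profit(K, L):
    return p * Q(K, L) - c * K - w * L

# Coarse grid search for the profit-maximising (K, L).
best = max(((profit(K / 100, L / 100), K / 100, L / 100)
            for K in range(1, 201) for L in range(1, 201)),
           key=lambda t: t[0])
_, K_star, L_star = best

# At the optimum the marginal products equal the real factor prices.
h = 1e-5
dQ_dL = (Q(K_star, L_star + h) - Q(K_star, L_star - h)) / (2 * h)
dQ_dK = (Q(K_star + h, L_star) - Q(K_star - h, L_star)) / (2 * h)
print(dQ_dL, w / p)  # marginal productivity of labour, close to w/p = 0.5
print(dQ_dK, c / p)  # marginal productivity of capital, close to c/p = 0.5
```

The grid search stands in for the firm's choice problem; an analytic solution of the two conditions would give the same (K, L) up to the grid precision.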

Market coordination. Producers maximize profit: each calculates the optimal L in the short term and the optimal K and L in the long term. This results in a supply curve per firm and an industry supply curve. Consumers maximize utility: for any given price each decides the amount he is going to buy, resulting in a demand curve for all consumers. Supply and demand meet at one point only, the intersection of their curves, and the resulting price is a given for consumers and producers. Now every consumer knows how much he will buy and every producer how much she will produce.

The paradox of profits. Normal profit equals the opportunity cost of the equity capital. Economic profit is any profit in excess of normal profit. If profit falls below the normal profit, then the shareholders will invest their capital elsewhere. In a competitive market a firm cannot make an economic profit in the long run, because profit attracts new entrants, supply increases, prices go down and economic profits vanish. Hence the paradox: each firm tries to make a profit, but no firm can in the long run.
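The entry mechanism behind the paradox can be sketched as a small simulation. The linear demand curve, the fixed firm size and the unit cost (taken to include normal profit) are all invented for illustration:

```python
# Illustrative simulation of the paradox of profits: entrants keep arriving
# as long as economic profit is positive, which pushes the price down.

def simulate_entry(a=100.0, b=1.0, q_firm=5.0, unit_cost=20.0):
    """Linear demand p = a - b*Q_total; each identical firm supplies q_firm
    at unit_cost (assumed to already include normal profit)."""
    firms = 1
    while True:
        price = a - b * (firms * q_firm)
        economic_profit = (price - unit_cost) * q_firm
        if economic_profit <= 0:  # no further entry once economic profit is gone
            return firms, price, economic_profit
        firms += 1

n, p, pi = simulate_entry()
print(n, p, pi)  # → 16 20.0 0.0: price falls to unit cost, profit vanishes
```

Entry stops exactly when price has been driven down to unit cost, i.e. when only normal profit remains.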

Comments: 1) if competition were perfect, then resource allocation would be efficient and the world would be Pareto optimal. This does not imply that everyone’s wants are satisfied, however; it just means that, given some configuration, an initial distribution of wealth and talents, nobody can be made better off without someone else being made worse off. 2) assumptions underpinning perfect markets are: 2a) a large number of small firms, 2b) free entry and exit of firms, 2c) standardization of products. 3) it is assumed that firms are holistic entities in the sense that their decisions are homogeneous, taken as if by one person with profit maximization in mind, given their utility function. 4) firms are assumed to have only one objective, such as profit or shareholder value. If there are others, they must be combined into one as a trade-off. 5) it is assumed that there is perfect information: everyone knows everything relevant to their decisions. In reality information is biased: the insured knows more about his risks than the insurance company, the salesperson knows more about his activities when travelling than his boss. A market with such information asymmetry is not sustainable. 6) consumers and producers are assumed to maximize their profit and utility, and so it is assumed that they are rational decision makers. The decisions may be less solid and more costly the longer the prediction horizon is. 7) markets are assumed to function in isolation, but it is clear that the environment influences the market.

Chapter 3 Organizations

Organizations are ubiquitous. It is impossible for markets alone to coordinate people’s actions. A paradox lies at the heart of modern economies: it is possible to an increasing extent to work individually doing specialized work, but this is only possible because of some form of organization and interdependency. While people appear to have more agency, they are more dependent on others’ performance. The central question then is how organizational coordination – as opposed to market coordination – is achieved.

.. the operation of a market costs something and by forming an organization and allowing some authority (‘an entrepreneur’) to direct the resources, certain marketing costs are saved‘ [Coase 1937 in Schreuder and Douma 2013 p 48].

.. the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy – or of designing an efficient economic system. The answer to this question is closely connected with that other question which arises here, that of who is to do the planning. .. whether planning is to be done centrally, by one authority for the whole economic system, or is to be divided among many individuals‘ [Hayek 1945 in Schreuder and Douma 2013 pp. 48-49].

The best use of dispersed information is indeed one of the main problems in economic coordination.

Mintzberg identified these ways in which work is coordinated in organizations: mutual adjustment, direct supervision, standardization of work processes, standardization of output, standardization of skills, standardization of norms. ‘These are thus also the ways in which people in organizations can communicate knowledge and expectations. Conversely, they are the ways in which people in the organization may learn from others what they need to know to carry out their tasks as well as what is expected from them‘ [Schreuder and Douma 2013 p. 51]. In large organizations it is no longer possible to coordinate via the authority mechanism alone, and so combinations of the other mechanisms are used.

Real organizations are hybrids of the above coordinating mechanisms. Some prototypical organizations are dominated by a specific coordinating mechanism: 1) Entrepreneurial Organization – Direct Supervision, 2) Machine O – Stand. of Work Processes, 3) Professional O – Stand. of Skills, 4) Diversified O – Stand. of Outputs, 5) Innovative O – Mutual Adjustment, 6) Missionary O – Stand. of Norms. When markets are replaced by organizations, the market (price) mechanism is replaced by other coordinating mechanisms. Organizations can take many forms depending on the circumstances: they can handle different types of transactions [p 58].

Those of us who have looked to the self-interest of lending institutions to protect shareholders’ equity, myself included, are in a state of shocked disbelief‘ [Alan Greenspan, former chairman of the Federal Reserve, about the lack of regulation in the financial markets, to the House Committee on Oversight and Government Reform during a congressional hearing in 2008].

Chapter 4: Information

The information requirements in any situation determine the kind of coordination mechanism, or mix of mechanisms, used. If agents cannot influence the price, then the market is perfect and the agents are price-takers; in that case the prices are sufficient statistics, conveying all the necessary information to the market parties. Under conditions of perfect markets (namely perfect competition), agents can only decide on the quantity at some price for some homogeneous good (namely, no difficulties with the specifications, no quality differences). The price mechanism is a sufficient coordination mechanism where the economic entities have a limited need for information. Only if all the required information can be absorbed in the price can we rely on the market (price) mechanism as the sole coordinating device.

If the specifications vary, then more information than the price alone is necessary. Sugar: a commodity product, the price suffices. Fruit: some qualities change with the season, so some more info is needed, obtained by selecting the individual pieces. Soup: still more info is needed; tasting is not practical, so a brand name serves as a label to inform the client of the specifications to expect. A brand name is a solution to an information problem. Uncertainties exist, for instance about the quality of next year’s fruit: retailers and suppliers may agree on a contingent claims contract (the price depends on the actual quality at that time). In practical terms it is difficult to cover all contingencies.

If client and supplier have different information, then information asymmetry exists. Disclosing to a client all the information needed to fully understand some solution or product enables the client to construct the object himself and destroys its value. This situation can invite opportunistic (or strategic) behavior in agents.

Hidden information means a pre-existing asymmetry in the availability of information between the parties, leading one to take advantage of the other. Hidden action means an asymmetry in the availability of information that is introduced between the parties. Hidden information and hidden action both stem from unobservability; both imply an information asymmetry, and both occur in market as well as organizational environments. Hidden information is an ex-ante problem, while hidden action is an ex-post problem.

If everybody knew everything then all information would be of equal value.

Chapter 5: Game Theory

Coordination game: two or more players coordinate their decisions so as to reach an outcome that is best for all. Example: a new technology. If both choose the same platform, then the customer is not forced to choose between technologies and brands, but between brands only. This is an advantage for both. If the choice is to be made simultaneously, then the outcome is unpredictable; if the decisions are sequential, then one player will follow the other player’s choice of technology. As soon as a first player chooses, the choice must be communicated to the other so as to reap the benefits and not allow the other to deviate.
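The platform game above can be written out as a small payoff matrix; the payoff numbers are illustrative, chosen only so that matching platforms beats mismatching:

```python
# Sketch of the two-player technology-coordination game.
# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("A", "A"): (2, 2),  # both on platform A: customers compare brands only
    ("B", "B"): (2, 2),  # both on platform B: same advantage
    ("A", "B"): (0, 0),  # mismatch: customers must choose between techs
    ("B", "A"): (0, 0),
}

def is_nash(row, col):
    """Nash equilibrium: neither player gains by deviating unilaterally."""
    r, c = payoffs[(row, col)]
    other = {"A": "B", "B": "A"}
    return (payoffs[(other[row], col)][0] <= r
            and payoffs[(row, other[col])][1] <= c)

equilibria = sorted(cell for cell in payoffs if is_nash(*cell))
print(equilibria)  # both matching cells are equilibria, neither mismatch is
```

The two matching outcomes are both equilibria, which is exactly why the simultaneous game is unpredictable and sequential play (with communication) resolves it.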

The entry (monopolist versus entrant) game: moving from one stage to two stages. This can be solved by looking ahead and reasoning backwards in a decision tree. Commitment in this sense means that a participant alters the pay-offs irreversibly by committing to some course of action, so that it becomes in its own interest to execute a threat. Example: investing in extending a mobile network prior to the entry of a newcomer allows the monopolist to execute its threat to lower prices – thereby increasing its number of customers.
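Looking ahead and reasoning backwards can be sketched as code. The payoff numbers are my own illustrative choices, picked only so that the commitment story works out:

```python
# Backward-induction sketch of the two-stage entry game.
# All payoff tuples are (entrant_payoff, monopolist_payoff).

def solve(fight_payoff_monopolist):
    """Look ahead (stage 2) and reason backwards (stage 1)."""
    # Stage 2: the monopolist's best reply if entry occurs.
    accommodate = (3, 5)
    fight = (-1, fight_payoff_monopolist)
    response = fight if fight[1] > accommodate[1] else accommodate
    # Stage 1: the entrant anticipates that reply.
    stay_out = (0, 10)
    return ("enter" if response[0] > stay_out[0] else "stay out"), response

# Without commitment, fighting pays the monopolist only 2 < 5,
# so the threat of a price war is not credible and entry occurs:
print(solve(fight_payoff_monopolist=2))
# After irreversibly investing in network capacity, fighting pays 6 > 5,
# the threat becomes credible, and the entrant stays out:
print(solve(fight_payoff_monopolist=6))
```

The commitment changes nothing in the entrant's own payoffs; it only alters the monopolist's stage-2 incentives, which is enough to change the stage-1 outcome.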

Situations involving more than two players in a single-stage game: auctions. In both open auctions and closed-bid auctions, the observability of information plays a crucial role. At an increasing-bid auction the price may not be perfect for the seller, as the one-but-last potential buyer may drop out at a price far below the cut-off of the last potential buyer. To prevent that, the Dutch auction can be used: a decreasing-price auction. In this way the seller reclaims some of the difference between the highest and the one-but-highest bid. A problem for the seller is that there is no minimum price. To establish a minimum, a seller can revert to a two-stage auction: first the increasing-price competition, where the winner takes some premium, followed by a Dutch auction. If the second stage does not result in a price, then the winner of the first stage buys the lot. In this game only the winner’s private information remains private; the others’ valuations are known after the initial round. During the second round the bidder with the highest private valuation is induced to reveal it, and the seller is willing to pay a premium to get this information. The premium is hopefully lower than the difference between the highest and the one-but-highest bid.
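The seller's problem in the increasing-bid auction can be shown with three made-up private valuations:

```python
# Illustration of why an increasing-bid (English) auction can leave money
# on the table: bidding stops once the second-keenest buyer drops out, so
# the winner pays roughly the second-highest private valuation.

valuations = [40, 55, 90]  # assumed private values of three bidders

def english_auction(values):
    """Return (winner's valuation, price paid ≈ runner-up's valuation)."""
    ranked = sorted(values, reverse=True)
    return ranked[0], ranked[1]

winner_value, price = english_auction(valuations)
print(price)                 # → 55: far below the winner's valuation of 90
print(winner_value - price)  # → 35: surplus the seller failed to capture
```

That gap of 35 is exactly what the Dutch or two-stage designs described above try to reclaim for the seller.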

Sealed-bid auction: bid on one’s own best performance + synergies, considering the first-bid competitor’s price.

The observability of auctions pertains to the differences in availability of the private information of each of the participants in the auction. The winner’s curse is the question whether the winner was lucky to win or overly optimistic in her predictions. Competitors can collude to keep the price low.

Single stage PD, Iterated PD for many players. IPD with players’ mistakes: show generosity by retaliating to a lesser extent than the defection and show contrition by not re-retaliating if the other retaliates after a mistaken defection. However, too much forgiveness invites exploitation.
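The value of generosity under mistakes can be shown in a small deterministic sketch. The payoff matrix is the usual illustrative PD numbering, and the "generous" rule (tolerate a single defection, retaliate only after two in a row) is one possible way to retaliate to a lesser extent than the defection:

```python
# Iterated PD with one mistaken defection: plain tit-for-tat versus a
# generous variant. Payoffs: mutual cooperation 3, mutual defection 1,
# sucker 0, temptation 5 (illustrative standard values).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp):
    return opp[-1] if opp else "C"

def generous_tft(opp):
    # Retaliate less than the defection: only after two defections in a row.
    if len(opp) >= 2 and opp[-1] == "D" and opp[-2] == "D":
        return "D"
    return "C"

def play(strategy, rounds=8, mistake_round=3):
    """Both sides use `strategy`; player 1 defects once by mistake."""
    h1, h2, score = [], [], [0, 0]
    for t in range(rounds):
        m1 = "D" if t == mistake_round else strategy(h2)
        m2 = strategy(h1)
        s1, s2 = PAYOFF[(m1, m2)]
        score[0] += s1; score[1] += s2
        h1.append(m1); h2.append(m2)
    return score

print(play(tit_for_tat))   # → [24, 19]: the mistake echoes back and forth
print(play(generous_tft))  # → [26, 21]: generosity restores cooperation
```

Plain TFT locks into an alternating retaliation cycle after the mistake; the generous rule absorbs it and both players end up better off, while (as noted above) even more forgiveness would invite exploitation.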

In evolutionary game theory strategies evolve over time: variation, selection and retention. In a fixed environment (fixed proportions of strategies) it pays to learn which competitors are exploitable: maximize cooperation with the cooperating strategies and exploit the exploitables. In a dynamic environment the fitter strategies increase their proportions in the population. If multiple strategies can evolve, then they co-evolve.

Chapter 6: Behavioural theory of the firm

In micro-economics the firm is viewed holistically (as a dot with agency); in behavioral theory it is seen as the locus of a coalition of the (groups of) participants of the firm. The starting point is not full but bounded rationality: cognitive and informational limits to rationality exist. Decision processes in the firm are described in steps: 1) defining the goals of the firm, 2) describing how it forms the expectations on which the decision processes are based, 3) describing the process of organizational choice.

Each participant receives inducements from and makes contributions to the organization. These can have a wide definition: they form a vector of inducements and contributions. What sets behavioral economics apart from standard micro-economics is that participants are not fully capable of knowing every alternative; they are limited by the information they have. Some of the elements of the vector – for instance for employees – are even harder to know than the pay; and so on for all participants of the coalition. In standard micro-economics the management is hired by the shareholders and works for them alone. In behavioral economics, management represents the interests of all stakeholders. The competitive environment as per micro-economics is a given; behavioral economics focuses on the decision-making processes in the firm.

Step 1, organizational goals: in standard micro-economics (SME) one goal is assumed, profit maximization. In behavioral economics it is assumed that every participant has her own goals, which do not necessarily coincide. The composition and the overall goals of the coalition (the firm) are arrived at via bargaining: the more unique a participant’s expected contribution, the better her bargaining position. Each participant demands that the goals reach some individual level of aspiration; if that hurdle is not reached, she will leave the coalition. Theoretically, in the long term there would be no difference between the levels of achievement in the firm, the levels of achievement of other firms and the levels of aspiration of the participants in these respects. The difference between total resources and the total payments required to preserve the coalition is the ‘organizational slack’. So in the long run there would be no organizational slack. However, the markets for the various contributions are not perfect, because information about them is difficult to obtain and the levels of aspiration change only slowly. In behavioral theory it is assumed that operational subgoals are specified per managerial area; it is, however, often impossible to define operational goals per area. And so aspiration levels are identified taking into account the effects of the conflicts between areas, and the conflict is thus quasi-resolved instead of completely resolved.

Step 2, organisational expectations: SME assumes information symmetry; in behavioral firm theory this is not the case. The production manager needs the sales manager to make a forecast. Forming expectations means inferring a prediction from available information. Members have different information and different inference rules.

Step 3, organizational choice: SME assumes that the behavior of firms is adequately described as maximizing behavior: all alternatives are known and they can be compared so as to maximize the objective. Behavioral theory rejects these assumptions: decisions have to be made under limitations. Firms make decisions on a proposal without knowing what alternatives will turn up the next day. SME assumes that firms search until the marginal cost of additional searching equals the marginal revenue of additional searching. In reality this is impractical: other firms would take advantage because they decide quicker. In behavioral theory, alternatives are roughly evaluated one at a time, based on available information, and weighed against some aspired level, instead of maximizing an (assumed) objective function. This process is better described (than as maximizing) as satisficing: to search for alternatives that satisfy levels of aspiration and are therefore acceptable. This process is closer to reality because alternatives often present themselves one at a time (is that so?). Also, it is quite implausible that the consequences of each alternative can be calculated, because people cannot handle all the relevant information: their rationality is bounded. They intend to be rational but only manage to a limited extent. The final argument why firms are satisficing rather than maximizing is that each stakeholder has her own objectives, and if a firm has no single objective function, how can it maximize? Alternatives are evaluated against an aspiration level of each stakeholder and, if they meet those levels, they are accepted.
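The contrast between maximizing and satisficing as search rules can be sketched directly. The list of alternatives and their scores is invented for illustration; the point is only the difference in decision rule:

```python
# Maximizing (SME) versus satisficing (behavioral theory) as search rules.
# Alternatives arrive one at a time, each with an illustrative score.

alternatives = [("plan_a", 6), ("plan_b", 8), ("plan_c", 9), ("plan_d", 7)]

def maximize(options):
    """SME-style choice: compare every known alternative, pick the best."""
    return max(options, key=lambda o: o[1])

def satisfice(options, aspiration_level):
    """Behavioral choice: evaluate one alternative at a time, accept the
    first one that meets the aspiration level, then stop searching."""
    for option in options:
        if option[1] >= aspiration_level:
            return option
    return None  # nothing acceptable yet: search continues

print(maximize(alternatives))      # → ('plan_c', 9): requires knowing all options
print(satisfice(alternatives, 8))  # → ('plan_b', 8): good enough, found earlier
```

The satisficer never even evaluates plan_c; it stops as soon as an alternative clears the aspiration level, which is what makes the rule feasible under bounded rationality.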

Even intended rationality is a rather generous assumption where people are concerned. Kahneman and Tversky concluded that people are biased and use simple rules of thumb to decide.

Chapter 7: Agency Theory

This theory stems from the separation of ownership and control and discusses the relation between two entities, the principal and the agent, where the agent makes decisions on behalf of (or that affect) the principal (e.g. manager – shareholder). Branches of the theory are: positive agency theory (the firm as a nexus of contracts), which attempts to explain why organizations are as they are, and principal–agent theory (how does the principal design the agent’s reward structure?).

There is a stock market for corporate shares and a market for corporate control, i.e. entire companies. Here competition between management teams increases the pressure on management performance. There is also a market for managerial labour: managing a large firm is typically more prestigious than managing a smaller one. There is also a market for the firm’s products: the more competition in those markets, the less opportunity for the manager to wing it. Lastly, the pay package of the manager usually includes a profit- or stock-related bonus that brings the manager’s interests more into line with the shareholders’.

Managerial behavior and ownership structure.

Monitoring and bonding.

Entrepreneurial firms (owned and managed by the same person) and team production. The entrepreneur monitors and controls the work of others and gets paid after all the contracts have been fulfilled. If a freelancer puts in n extra effort, she enjoys m extra utility working alone. If, in a team, she puts in an extra n, she enjoys only 1/m additional utility. This results in shirking: in a team, people tend to put in much less effort than when they work alone. Everyone is willing to put in more effort if the others do so as well. If this can be monitored by the other members of the team, then a solution can be for all to agree not to shirk and to punish anyone who does. Otherwise it is unobservable: an informational problem [Minkler 2004]. If shirking can be detected by an independent monitor (and not, or only with difficulty, by the other team members), then a monitor on fixed pay is incentivized to shirk as well. If the monitor has a right to the residuals after the contracted costs are fulfilled, then she has no incentive to shirk. If the monitor is to be effective, then she must be able to make changes to the team (revise contracts, hire and fire, change individual payments) without the consent of all the other members, and to sell her right to be the monitor (to justify actions whose effect is delayed in time). The monitor in this sense is the entrepreneur, and the firm is an entrepreneurial firm. This theory assumes the existence of team production and that monitoring reduces the amount of shirking. The latter implies that this is useful only if it is more cumbersome for the members to monitor themselves and each other than for an outsider to do it; only in that case is this model viable.

In these two ways, 1) on-the-job consumption and 2) shirking by managers are restricted.

The firm as a nexus of contracts: given 1) and 2), how to explain the existence of large corporations not (or only to a limited extent) owned by their managers? Shareholders in this sense have merely contracted to receive the residual funds: they are security owners. Shareholders are just one party bound by a contract to the firm, like many others with their specific individual contracts.

[Fama and Jensen 1983a, b] explain both entrepreneurial firms and corporations with this ‘nexus of contracts’ model. ‘They see the organization as a nexus of contracts, written and unwritten, between owners of factors of production and customers‘ [Schreuder and Douma 2013 p151].

The residual payment is the difference between the stochastic cash inflow and the contracted cash outflow, usually fixed amounts. The residual risk is the risk of this difference, borne by the residual claimants or risk bearers. The most important contracts determine the nature of the residual payments and the structuring of the steps in the decision process of the agents: initiation (decision management), ratification (decision control), implementation (decision management) and monitoring (decision control) of proposals. Fama and Jensen distinguish between non-complex and complex organizations: non-complex are the organizations where decisions are concentrated in one or a few agents, complex ones in more than a few (small and large organizations respectively). If a small firm is acquired by a larger one, then decision control transfers from the management of the smaller firm to the larger one, while decision management stays with the management of the smaller firm. As the management of the smaller firm is no longer the ultimate risk bearer nor the receiver of the residual payments, this confirms the theory.

Theory of principal and agent

In this theory, risk and private information are introduced into the relation between agent and principal. Conditions concerning these issues in the previous versions of agency theory are relaxed here. If the performance of the firm depends on the weather (random) and on the effort of the agent, then three situations can be distinguished: 1) the principal has information about the agent’s effort, 2) the principal has no information about the agent’s effort, 3) the principal has no direct information about the agent’s effort but does have other signals.

These models are single-period and single-relation and therefore not realistic, because agents are usually employed for more than one period. Also, more than one agent is often employed, in circumstances that are not exactly the same; therefore each relation is different. Monitoring is costly, and so the question remains how, and how much, to monitor. The model is based on monetary criteria only, and that is not reality.

Chapter 8: Transaction Cost Economics

The fundamental unit of analysis is a transaction. Whether a transaction is allocated to a market or a firm is a cost minimization issue. Douma and Schreuder argue that assuming that costs in a firm are lower than costs outside of it is a tautology, because: ‘If there is a firm then, apparently, the costs of internal coordination are lower than the cost of market transactions‘ [Douma and Schreuder 2013 p167]. But boundaries can emerge for other reasons than costs alone and, contrary to what they claim, this can be empirically tested in a ‘make or buy comparison’. Transaction cost economics as per Williamson is based on bounded rationality and on opportunism. Bounded rationality means that the capacity of humans to formulate and solve problems is limited: it is ‘intendedly rational but only limitedly so‘ [Simon, H.A. . Administrative Behavior (2nd edition) . New York . MacMillan . 1961 and Organizations and Markets . Journal of Economic Perspectives / vol. 5 (2) pp 25-44 . 1991]. Bounded rationality poses problems when the environment is uncertain or complex. Opportunism is defined as ‘self-interest seeking with guile’ and as making ‘self-disbelieved statements’. To be opportunistic means to try to exploit a situation to one’s own advantage, which some people will do in some cases. It is difficult and costly to find out ex ante who will do this and in which cases. Opportunistic behavior can occur ex ante (not telling the buyer of a defect prior to the transaction) and ex post (backing out of a purchase). This problem can occur when trading numbers are small, and also when the numbers are large but reputations are unimportant or information about reputations is unavailable.

Whether a transaction is governed by the market or by an organization (the mode) is determined by the sum of the production cost and the transaction cost, and by the atmosphere. The atmosphere is the local environment where the transaction takes place, which itself gives satisfaction (for example, working as a freelancer versus being an employee of some organization). This acknowledges the fact that economic exchange is embedded in an environmental and institutional context with formal and informal ‘rules of the game’ (as per chapter 1): ‘the informal rules of the game are norms of behaviour, conventions and internally imposed rules of conduct, such as those of a company culture. This can be related to the informal organization. … he (Williamson) acknowledges the importance of such informal rules, but admits that both the concepts of informal organization and the economics of atmosphere remain relatively underdeveloped’ [Williamson 1998, 2007 in Douma and Schreuder 2013 p. 174].

The fundamental transformation means that lock-in occurs after a supplier has fulfilled a contract for some time and has learned how to manufacture efficiently. This lock-in effectively turns what began as a many-supplier situation into a bilateral monopoly.

Critical dimensions of a transaction: 1) asset specificity (an asset required for one transaction only), resulting in the availability of a quasi-rent (everything above the variable cost) that the buyer will want to appropriate. Solutions: a merger, or a long-term contract that includes inspection of the buyer’s business by the seller. 2) Uncertainty / complexity. 3) Frequency. If 1), 2) and 3) are high, then the transaction is likely to be executed within an organization in the long run. If the costs of transacting under the different modes differ, then the more efficient mode will prevail. This leads to competition between organizational forms, and the one that turns out to be most efficient prevails in the long term.

A peer group is a group of people working together without hierarchy. The coordinating mechanism is mutual adjustment. Advantages are: 1) economies of scale regarding specific assets, 2) risk-bearing advantages, 3) associational gains (atmospheric elements like higher effort, inspiration, quality). The main disadvantage is shirking, and so even in peer groups some form of hierarchy emerges (senior partners).

A simple hierarchy is a group of workers with a boss. The advantages are: 1) team production (monitoring according to Alchian and Demsetz (1972), separation of technical areas according to Williamson (1975); this is rare). 2) Economies of communication and of decision making: in a simple hierarchy the number of connections is n-1, in a peer group it is n(n-1)/2, so the cost of communicating is much higher in a peer group, and decision making consequently takes less effort and less cost in a hierarchy. 3) Monitoring (to prevent the shirking that occurs in a peer group).
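The connection-count arithmetic above can be checked with a short Python sketch (illustrative only; the formulas n-1 and n(n-1)/2 are the ones from the text):

```python
# Number of communication links needed, as a function of group size n.
# Simple hierarchy: every worker communicates with the boss -> n - 1 links.
# Peer group: everyone communicates with everyone -> n(n-1)/2 links.

def hierarchy_links(n: int) -> int:
    return n - 1

def peer_group_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 50):
    print(n, hierarchy_links(n), peer_group_links(n))
```

For n = 50 this gives 49 links in a hierarchy versus 1225 in a peer group, which is why communication cost grows so much faster in peer groups.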

Multistage hierarchies: U-form enterprises are functional hierarchies. They suffer from cumulative control loss and corruption of the strategic decision-making process. M-form enterprises are a solution for those problems: divided at top level into several semi-autonomous operating divisions along product lines. Top management is assisted by a general office (corporate staff). Advantages: 1) responsibility is assigned to division management cum staff, 2) the corporate staff have an auditing and advisory role so as to increase control, 3) the general office is concerned with strategic decisions including staffing, 4) the separation of the general office from operations allows its executives not to absorb themselves in operational detail. A third form is the H-form, a holding with divisions, where the general office is reduced to the shareholder representative.

Concerning coordination mechanisms other than markets and organisations: markets coordinate via the price mechanism, organizations via the six mechanisms defined earlier, namely: mutual adjustment, direct supervision, standardization of work processes, standardization of output, standardization of skills, standardization of norms. Often the organizational form is a hybrid of some of the ‘pure’ configurations. In addition, markets are usually to some extent organized, and organizations can have markets of all kinds inside of them.

Williamson’s transaction cost economics is also called the markets and hierarchies paradigm: markets are replaced with organizations when price coordination breaks down3. Comments on the paradigm are that: 1) people are not that opportunistic: they can and do trust each other, 2) markets and organizations are not mutually exclusive coordination mechanisms but should be viewed as a continuum.

Ouchi introduced clans as an intermediate form between markets and organizations: markets, bureaucracies (later called hierarchies) and clans [Ouchi 1980, Ouchi and Williamson 1981]. Clans are a third way of coordinating economic transactions. The use of ‘bureaucracies’ rather than ‘hierarchies’ followed standard usage in organizational sociology [Max Weber 1925, translation by A.M. Henderson and T. Parsons . The Theory of Social and Economic Organization . New York: Free Press . 1947]: personal authority is replaced with organizational authority. Modern organizations now had the legitimacy to substitute organizational rules for personal rules, a form described by Weber as bureaucracy. Ouchi argues that in those bureaucracies prices are replaced with rules, and the rules contain the information required for coordination. The essence of this type of coordination is therefore not its hierarchic but its bureaucratic nature.

The third way of coordinating transactions is the clan. The clan relies on the socialization of individuals, ensuring they have common values and beliefs: individuals who have been socialized in the same way have common norms for behavior. These norms can also contain the information necessary for transactions. This is clarified by an example of Japanese firms, where workers are socialized so as to adopt the company goals as their own and are compensated on non-performance criteria such as length of service. Their natural inclination, as a result of socialization, is to do what is best for the firm. Douma and Schreuder argue that Ouchi’s emphasis on rules does not cover the entire richness of observed organizations and that it is subsumed by Mintzberg’s typology of six coordination mechanisms.

The role of trust: the position of Williamson is that you cannot know ex ante whom to trust, because some people cheat some of the time. If you like your business partner and you know that she trusts you, you are less likely to cheat on her, even if that would result in some gain: trust is an important issue. If the trust is mutual you can develop a long-term business relationship. Trust is important between and within organizations. If, in general, people are treated in good faith, then they are more likely to act in good faith also. But, as Williamson argues, you cannot always be sure ex ante about a stranger and you may need to prepare for an interaction.

Chapter 9: Economic Contributions to Business/Competitive Strategy

Economic contributions to strategy planning and management are mainly related to content, not process: the focus is on the information that firms need to make their choices.

Move and counter-move: In 5.3 commitment was introduced as a way to change the pay-off in a game setting. The example concerned the investment in a network by National, the existing cellphone provider. ‘Commitments are essential to management. They are the means by which a company secures the resources necessary for its survival. Investors, customers and employees would likely shun any company the management of which refused to commit publicly to a strategy and back its intentions with investment. Commitments are more than just necessities, however. Used wisely (?), they can be powerful tools that help a company to beat the competition. Pre-emptive investments in production capacity or brand recognition can deter potential rivals from entering a market, while heavy investments in durable, specialized and illiquid resources can be difficult for other companies to replicate quickly. Sometimes, just the signal sent by a major commitment can freeze competitors in their tracks. When Microsoft announces a coming product launch, for instance, would-be rivals rethink their plans‘ [Sull, D.N. . Managing by Commitments . Harvard Business Review, June 2003 pp. 82-91 in Douma and Schreuder 2013 pp. 223-4].

Memeplex > Belief + Environment > Predicting* / Planning* > Committing* > Execution = Acting as Planned, * means anticipating the future. Compare to: ‘Each single business firm and each business unit in a multibusiness firm needs to have a competitive strategy that specifies how that business intends to compete in its given industry‘ [Douma and Schreuder 2013 p. 228].

Chapter 10: Economic Contributions to Corporate Strategy

In a multibusiness firm some transactions are taken out of the market and internalized within the firm: the capital market, the management market, the market for advice. Also some transactions between the individual businesses are taken out of the market and internalized, such as components and know-how. The question is whether this approach is more efficient than the pure market approach, namely whether value is created or destroyed. The parenting advantage poses two questions: 1) does corporate HQ add value? Yes, if it is cheaper than the market. 2) Could another HQ add more value to one of the business units? The current parent has a parenting advantage only if no other parent can add more value to the BU. This is related to the market for corporate control discussed earlier.

Value adding activities of HQ are: 1) attract capital and allocate it to business units, 2) appoint, evaluate and reward business unit managers, 3) offer advice, 4) provide functions and services, 5) portfolio management by making adjustments to the set of business units.

In a mature market economy it is harder for an organization to surpass the coordinating capacity of the market. In a less developed economy this threshold is easier to meet and organizational coordination is more favourable than market coordination. Organizational relatedness of business units A and B sharing the same HQ can take different shapes: 1) vertical integration (A supplying B), 2) horizontal relatedness (A and B are in the same industry), 3) related diversification (A and B share the same technology or the same type of customer), 4) unrelated diversification (A and B share nothing). Portfolio management means management of the set of business units.

Chapter 11: Evolutionary Approaches to Organizations

The perspective is on the development of organizational forms over time: from static to dynamic. The analysis is about populations of organizational forms, not the individual organization but the ‘species’. Organizations are human constructs: ‘.. organizations can lead a life of their own, to continue the biological analogy – but the element of purposive human behaviour and rational construction is always there‘ [Scott, W.R. . Organizations: Rational, Natural and Open Systems (5th edition) . Englewood Cliffs . NJ: Prentice Hall . 2003]. Thus the creationist view is likely to have more implications for the organizational view than for the biological view. The meaning of the term construct goes beyond the design of something and includes a product of human mental activity. It might be said that organizations are more constructional than giraffes. ‘Organizations are much less ‘out there’: we have first to construct them in our minds before we find them. This delicate philosophical point has important consequences. One of those consequences is that it is harder to agree on the delineation of organizations than of biological species. Another consequence is that it is much less clear what exactly is being ‘selected’, ‘reproduced’ in the next generations and so on‘ [Douma and Schreuder 2013 p 261].

Similarities between the organizational and the biological view derive from the assumptions that 1) organizations have environments and 2) environments play a role in the explanation of the development of organizational forms. As a result the development of organizational forms, instead of individual organizations, can be studied, and additionally the concept of environment is broadened to anything that allows for selective processes. As a reminder: selection of certain forms of organization now replaces adaptation of individual firms to their environment. ‘So, there is no question that selection, birth and death, replacement and other such phenomena are important objects of organizational study as well‘ [Douma and Schreuder 2013 p. 262].

Ecologists study the behavior of populations of living beings: what is the definition of a population in organizational science, and what is the procedure for distinguishing one population of organizational forms from another? Organizational ecology distinguishes three levels of complexity: 1) demography of organizations (changes in populations of organizations, such as mortality), 2) population ecology (concerning the links between the vital rates of populations of organizations), 3) community ecology of organizations (how the links within and between populations affect the chances of persistence of the community (= population of firms or society?) as a whole). 1) has received the most attention, 2) and 3) not so much.

A species is defined by interbreeding: its genotype, the gene pool. According to Douma and Schreuder there is no equivalent for organizations. This can be solved using the concept of memes, identifying the general rules that are adopted by participants in this kind of organization, DPB.

An organizational form is defined as the core properties that make a set of organizations ecologically similar. An organizational population is a set of organizations with some specific organizational form [Carroll and Hannan 1995 in Douma and Schreuder 2013 p264]. An assumption is the relative inertia of organizations: they are slow to respond to changes in their environment and they are hard-pressed to implement radical change should this be required. As a consequence organizations are inert relative to their environments. This sets the ecological view apart from many others, as the latter focus on adaptability. In other approaches efficiency selects the most efficient organizations. The Carroll and Hannan ecological approach argues that organizations have other competences: 1) reliability (compared to ad-hoc groups), 2) routines can be maintained in organizations but not in ad-hoc groups, 3) organizations can be held accountable more easily, 4) organizational structures are reproducible (procedures must stay in place). Selection pressures will favor those criteria in organizations and so they will remain relatively inert: inertia is a result of selection, not a precondition.

What is the size of a population, namely how many organizations of some type do we expect to find in a population? 1) What is its niche? 2) What is the carrying capacity? Whether an actual organization survives is determined by 1) competition with other organizations in its niche, 2) legitimation, defined as the extent to which an organizational form is accepted socially (D & S are confusing the organizational form and the actual organization here). As long as they perform consistently and satisfactorily, they survive.
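Density-dependent growth toward a carrying capacity is often sketched with a logistic growth curve. The following is an illustration of that general idea, not a model from the book; the growth rate r and carrying capacity K are made-up values:

```python
# Logistic growth of a population of organizations toward carrying capacity K:
# N(t+1) = N(t) + r * N(t) * (1 - N(t) / K)
# Growth is fast while the niche is empty and slows as density approaches K.

def logistic_path(n0: float, r: float, K: float, steps: int) -> list[float]:
    path = [n0]
    for _ in range(steps):
        n = path[-1]
        path.append(n + r * n * (1 - n / K))
    return path

path = logistic_path(n0=10, r=0.3, K=500, steps=60)
print(round(path[-1]))  # the population settles near the carrying capacity K
```

With these illustrative parameters the population climbs from 10 founders and levels off near K = 500, which is the number of organizations the niche can sustain.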

[Nelson, R. and Winter, S. . An Evolutionary Theory of Economic Change . 1982] Their view concerns the routine behavior of firms and the development of economic systems. Firms are better at self-maintenance than at change if the environment is constant, and if change is required they are better at ‘more of the same’ than at other kinds of change. They characterize the functioning of organizations by: 1) routines that are learned by doing, 2) routines that are largely tacit knowledge (viz. Polanyi 1962). Organizational routines are equivalent to personal skills: they are automatic behavior programmes. ‘In executing those automatic behavior programmes, choice is suppressed‘ [Douma and Schreuder 2013 p272]. Routines are 1) ubiquitous in organizations, 2) the organizational memories, and 3) they serve as an organizational truce, meaning that satisficing takes the place of maximizing in the classical sense. ‘The result may be that the routines of the organization as a whole are confined to extremely narrow channels by the dikes of vested interest … fear of breaking the truce is, in general, a powerful force tending to hold organizations on the path of relatively inflexible routine‘ [Nelson and Winter 1982 pp 111-2 in Douma and Schreuder p. 272].

Three classes of routines: 1) operating characteristics, given the firm’s short-term production factors, 2) patterns in the period-by-period changes in production factors, 3) routines that modify the firm’s operating characteristics over time. And so routine-changing processes are themselves guided by routines. Just as in the biological sphere, the routine make-up of firms determines the outcomes of their organizational search. (The pivot of this categorization is the presence of production factors in the firm and how that changes over time; my starting point, via Rodin, was the presence of ideas that might or might not lead to the buying or making of production factors or any other method, contract, agreement, innovation or mores, DPB). Whatever change happens, it is expected to remain as close as possible to the existing situation, minimizing damage to the organizational truce.

‘He (Nelson) went on to point out that there are three different if strongly related features of a firm..: its strategy, its structure, and its core capabilities. .. Some of the strategy may be formalized and written down, but some may also reside in the organizational culture and the management repertoire. .. Structure involves the way a firm is organized and governed and the way decisions are actually made and carried out. Thus, the organization’s structure largely determines what it does, given the broad strategy. Strategy and structure call forth and mould organizational capabilities, but what an organization can do well also has something of a life of its own’ (its core capabilities, DPB).

Nelson and Winter classify themselves as Lamarckian, while Hannan and Freeman classify themselves as Darwinian [Douma and Schreuder 2013 p 275]. In my opinion this classification is trivial, as memetic information can recombine so as to introduce new ‘designs’ in a Darwinian sense, or, starting from the environment, new requirements can be introduced that the organization must deal with and in the end internalize in its rules, DPB.

Hannan and Freeman conclude that organizational change is random, because 1) organizations cannot predict the future very well, 2) the effects of organizational change are uncertain. Nelson and Winter conclude that some elbow room (namely learning, imitation and conscious adaptation) exists, but that changes are constrained by the routines that exist at some point. From a practical point of view organizations are less adaptable than might be expected.

Differences between the ecological and the evolutionary approach: 1) in the ecological approach the organizational form is selected, in the evolutionary approach the routines are selected; 2) the ecological approach observes the organization as an empty box in an environment, whereas the evolutionary approach introduces behavioral elements and so the inside of the firm is addressed as well.

Chapter 12: All in the Family

The model encompasses a family of economic approaches. The chapter is about their similarities and differences.

Information is pivotal in the model, determining which coordination mechanism prevails. Environmental and selection pressures act on both markets and organizations. In this context the pressure on organizations results in the population power law, and the pressure on the stock exchange results in the power law (or exponential?) for the distribution of the listed firms on the grid.

Commonalities in the family of models: 1) comparison between markets and organizations, 2) efficiency guides towards an optimal allocation of scarce resources and therefore towards the selection of either markets or organizations as coordinating mechanism, 3) information is stored in the routines, the rules and the arrangements.

Process and/or content, the traditional dichotomy: the differences in the family of models are between content theories, dealing with the content of strategies, and process theories, explaining how strategies come into being. Similarly, approaches to organizations can be distinguished as process (what are the processes, regardless of the outcomes) and content (what is the outcome, regardless of the process leading up to it). Ascending from process to content: behavioral theory – organizational ecology – evolutionary theory – dynamic capabilities – RBV – strategy – transaction cost economics – positive agency theory – principal-agent theory.

Evolutionary theory is classified as a process based theory with increasingly more capabilities to generate outcomes.

Static and dynamic approaches: it turns out that on a content-process and static-dynamic grid, the middle sections are empty: there is no theory that addresses both dynamism and content generation simultaneously. View picture 12.3 p. 302.

Level of analysis ascending from micro to macro: dyad of individual persons – small group with a common interest or purpose – intergroup of groups with different interests or purposes – organization as a nexus of contracts, a coalition, an administrative unit – organizational dyad as a pair of interacting organizations – population of organizations as all organizations of a specific type – system as the entire set of all organizational populations. View picture 12.4 on p. 304.

The extension of the evolutionary theory with dynamic capabilities has provided a bridge to Resource Based View strategy theories and it implies that evolutionary theories can now allow for more purposeful adaptation than before. In addition the managerial task is recognized in the sense of build, maintain and modify the resource and capability base of organizations.

Lastly: 1) at all levels of analysis (dyads to systems) economic aspects are involved, 2) the approaches address different problems because they view a different level and because of different time frames, 3) even at the same level of analysis different theories see different problems (different lenses etc.).

Paragraph about complex adaptive systems.

Chapter 13: Mergers and Acquisitions

The significance of M&A: 1) globalization, 2) strong cash flow after the 2001-2003 slump, 3) cheap financing facilitating PE, 4) shareholder activism and hedge funds. Success and failure: target firms’ shareholders gain 20+% while bidding firms’ shareholders break even. If this is due to more efficient management by the bidder, then the market for corporate control is indeed efficient; otherwise the market may be elated when the deal is announced but disappointed after the deal is closed. Using event analysis (change in stock price around the takeover), the net overall gain seems to be positive: in that view M&A is apparently a worthwhile activity, as it creates value for the shareholder. Outcome studies (comparison of the performance of merged or taken-over firms against competitors) show that, compared to a non-merging control group, the associated firms come out stronger after the event in 11% of the transactions and weaker in 58%. This is consistent with event studies in the long term. Details: 1) combined sales equal or lower in spite of the tendency of consumer prices to rise, 2) investments equal, 3) combined R&D lowered, 4) assets restructured, 5) lay-offs unclear, 6) management turnover in about half the cases. Serial acquirers seem to be more successful than occasional acquirers.

Focus-increasing acquisitions tend to show the best results, diversifying acquisitions the worst. The best approximation of the success and failure rate of any acquisition in general is about 50/50. Target shareholders do best, buyers’ shareholders break even. Management encounters changes.

Strategy, acquisitions and hidden information: buyers and sellers suffer from hidden information (risk of buying a lemon).

Auctions: the vast majority of M&A take place via an auction. Description of the process.

The winner’s curse and hubris: a majority of the M&A’s destroy shareholder wealth.

Adverse selection. Moral hazard.

Chapter 14: Hybrid forms

This is a form of coordination in between market and organization. Examples: franchise, joint venture, purchase organization, long-term buyer-supplier relation, business groups (some tie of ownership, management, financing etc), informal networks.

The basic thought is that if asset specificity rises, transaction costs rise more rapidly in a market configuration than in an organization, and in a hybrid form the rise is in between. As an illustration: if asset specificity is very low then the market can coordinate the transaction, if it is of medium specificity then a hybrid can coordinate it, and otherwise an organization has to coordinate it.
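The illustration above can be sketched as three governance-cost lines that rise with asset specificity at different rates; whichever mode is cheapest at a given level of specificity prevails. The intercepts and slopes below are made-up illustrative numbers, not figures from the book:

```python
# Governance cost as a function of asset specificity k (illustrative values):
# the market is cheapest at k = 0 but its cost rises fastest;
# the hierarchy is most expensive at k = 0 but its cost rises slowest.

def market_cost(k: float) -> float:
    return 0.0 + 3.0 * k

def hybrid_cost(k: float) -> float:
    return 2.0 + 1.5 * k

def hierarchy_cost(k: float) -> float:
    return 4.0 + 0.5 * k

def efficient_mode(k: float) -> str:
    """Return the mode with the lowest governance cost at specificity k."""
    costs = {"market": market_cost(k),
             "hybrid": hybrid_cost(k),
             "hierarchy": hierarchy_cost(k)}
    return min(costs, key=costs.get)

for k in (0.5, 1.5, 5.0):
    print(k, efficient_mode(k))
```

With these numbers the market wins at low specificity, the hybrid in the middle range, and the hierarchy at high specificity, which is exactly the ordering the text describes.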

Tunnelling is the transfer of value through artificial invoicing. Propping is to prop up underperforming or struggling firms to the benefit of the controlling owners.

Chapter 15: Corporate Governance

This is the system by which business firms are directed and controlled via rules, responsibilities for decisions and their procedures. It also involves the way the company objectives are set, the means of attaining them and the monitoring of them. The focus here is on the relation between the shareholders and the management. Problems can arise from a lack of alignment and because of information asymmetry between them. This may arise because shareholders expect the management to maximize shareholder value, while the management seeks to maximize her own utility function. Problems: 1) the free cash flow issue in mature markets and hubris, 2) difference in attitude towards risk: shareholders invest some portion in each firm to spread risk, while a CEO invests all her time in the firm: the shareholder expects risk to be taken, the CEO tends to be more risk-averse, 3) different time horizons: shareholders are entitled forever, CEOs are contracted for a limited period only, 4) the issue of on-the-job consumption by management. Any program in this area should focus on reducing the information gap and aligning the existing interests: the size of the agency problem can be reduced by organizational solutions and market solutions.

[Paul Frentrop 2003] shows that the main driver of improvements in corporate governance regulation has been stock market crashes and scandals, such as the South Sea Bubble in the UK in 1720 and the 1873 Panic in the USA.

The evolution of different corporate governance systems in the world: 1) social and cultural values: in Anglo-Saxon countries individual interests prevail over collective interests in the social and political realm, and this may explain why markets play a relatively large role; 2) whether the concept of a corporation is viewed from a shareholder perspective or from a stakeholder perspective; 3) the existence of large blockholdings in companies by institutional investors (yes in Germany and Japan, no in the US) implies differences in corporate governance; 4) the institutional arrangements have been developed over time and incorporate the lessons of the past; in that sense the countries’ policies are path-dependent. Do these differences between countries’ corporate governance regulations increase over time or do they converge? Convergence may occur because of: 1) cross-border mergers, 2) international standardization of disclosure requirements, 3) harmonization of securities regulations and mergers of stock exchanges, 4) the development of corporate governance codes (best practices) incorporating those of other countries.

1If private ownership is combined with market allocation the system is called “market capitalism”, and economies that combine private ownership with economic planning are labelled “command capitalism” or dirigisme. Systems that mix public or cooperative ownership of the means of production with economic planning are called “socialist planned economies”, and systems that combine public or cooperative ownership with markets are called “market socialism”.

2In Schreuder and Douma ‘it’ is replaced with the organization.

3In this sense Williamson’s ideas are descendant of Coase’s, who argued that organizations are primarily characterized by authority (here: direct supervision).

A New Kind of Science

Wolfram concludes that ‘the phenomenon of complexity is quite universal – and quite independent of the details of particular systems’. This complex behaviour does not depend on system features such as the way cellular automata are typically arranged in a rigid array or the fact that they are processed in parallel. Very simple rules of cellular automata generally lead to repetitive behaviour, slightly more complex rules may lead to nested behaviour, and even more complex rules may lead to complex behaviour of the system. Complexity with regard to the underlying rules means that they are intricate or that their assembly or make-up is complicated. Complexity with regard to the behaviour of the overall system means that little or no regularity is observed.

The surprise is that the threshold for the level of complexity of the underlying rules to generate overall system complexity is relatively low. Conversely, above the threshold, there is no requirement for the rules to become more complex for the overall behaviour of the system to become more complex.

And vice versa: even the most complex of rules are capable of producing simple behaviour. Moreover: the kinds of behaviour at a system level are similar for various kinds of underlying rules. They can be categorized as repetitive, nested, random and ‘including localized structures’. This implies that general principles exist that produce the behaviour of a wide range of systems, regardless of the details of the underlying rules. And so, without knowing every detail of the observed system, we can make fundamental statements about its overall behaviour. Another consequence is that in order to study complex behaviour, there is no need to design vastly complicated computer programs in order to generate interesting behaviour: the simple programs will do [Wolfram, 2002, pp. 105 – 113].
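Wolfram's standard example of a very simple rule producing apparently random behaviour is the elementary cellular automaton rule 30, where each new cell depends only on itself and its two immediate neighbours. A minimal sketch (the periodic boundary is an implementation choice, not part of the definition):

```python
# Elementary cellular automaton: the new value of each cell is looked up in
# the rule number's bits, indexed by the 3-cell neighbourhood (left, self,
# right) read as a 3-bit number. Rule 30 = 0b00011110.

def step(cells: list[int], rule: int = 30) -> list[int]:
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single black cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, rule=30)
```

Despite the eight-bit rule table being about as simple as a rule can get, the printed triangle of cells shows no obvious regularity, which is exactly the low-threshold phenomenon described above.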

Numbers
The systems used in textbooks for complete analysis may have a limited capacity to generate complex behaviour: given the difficulty of complete analysis, they are specifically chosen for their amenability to it, and hence are of a simple kind. If we ignore the need for analysis and look only at the results of computer experiments, even simple ‘number programs' can lead to complex results.

One difference is that in traditional mathematics numbers are usually seen as elementary objects, the most important attribute of which is their size. Not so for computers: numbers must be represented explicitly (in their entirety) for a computer to be able to work with them. This means that a computer uses numbers as we do: by reading them or writing them down fully as a sequence of digits. Whereas we humans do this in base 10 (digits 0 to 9), computers typically use base 2 (digits 0 and 1). Operations on these sequences have the effect that the sequences of digits are updated and change shape. In traditional mathematics this effect is disregarded: the change in the digit sequence caused by an operation is considered trivial. Yet this effect by itself is capable of introducing complexity. Even when only the size is represented as a base 2 digit sequence, executing an operator as simple as multiplication by a fraction or even a whole number can produce complex behaviour.
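This point can be made concrete with a small experiment of my own (not from the book): repeatedly multiplying by 3 is about as simple as arithmetic gets, and the size of the result grows perfectly smoothly, yet the base 2 digit sequences quickly lose any obvious regularity.

```python
def base2_digits(n):
    """Binary digit sequence of n as a string, e.g. 6 -> '110'."""
    return bin(n)[2:]

n = 1
rows = []
for _ in range(8):
    rows.append(base2_digits(n))
    n *= 3  # a very simple operation on the number's size...

# ...but the digit sequences show no simple pattern:
# '1', '11', '1001', '11011', '1010001', '11110011', ...
```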

‘Indeed, in the end, despite some confusing suggestions from traditional mathematics, we will discover that the general behavior of systems based on numbers is very similar to the general behavior of simple programs that we have already discussed' [Wolfram, 2002, p. 117].

The underlying rules for systems like cellular automata are usually different from those for systems based on numbers. The main reason for that is that rules for cellular automata are always local: the new colour of any particular cell depends only on the previous colour of that cell and its immediate neighbours. In systems based on numbers there is usually no such locality. But despite the absence of locality in the underlying rules of systems based on numbers, it is possible to find the localized structures also seen in cellular automata.

When using recursive functions of a form such as f(n) = f(n – f(n – 1)), subtraction and addition alone are sufficient to build small number-based programs that generate behaviour of great complexity.
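The formula as printed here appears truncated; the best-known member of this family of recursive functions, which Wolfram also discusses, is the two-term variant f(n) = f(n - f(n - 1)) + f(n - f(n - 2)) (essentially Hofstadter's Q-sequence). A sketch with memoisation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    if n <= 2:
        return 1
    # only subtraction and addition, yet the sequence fluctuates irregularly
    return f(n - f(n - 1)) + f(n - f(n - 2))

seq = [f(n) for n in range(1, 13)]
```

The sequence grows roughly like n/2 on average, but its local fluctuations show no simple pattern.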

‘And almost by definition, numbers that can be obtained by simple mathematical operations will correspond to simple such (symbolic, DPB) expressions. But the problem is that there is no telling how difficult it may be to compute the actual value of a number from the symbolic expression that is used to represent it' [Wolfram, 2002, p. 143].

Adding more dimensions to a cellular automaton or a Turing machine does not necessarily mean that the complexity increases.

‘But the crucial point that I will discuss more in Chapter 7 is that the presence of sensitive dependence on initial conditions in systems like (a) and (b) in no way implies that it is what is responsible for the randomness and complexity we see in these systems. And indeed, what looking at the shift map in terms of digit sequences shows us is that this phenomenon on its own can make no contribution at all to what we can reasonably consider the ultimate production of randomness' [Wolfram, 2002, p. 155].

Multiway Systems
The design of this class of systems is such that a system can have multiple states at any one step. The states at some step generate the states at the next step according to the underlying rules. All states thus generated remain in place after they have been generated. Most multiway systems grow very fast or not at all; slow growth is as rare as randomness. The usual behaviour is that repetition occurs, even if only after a large number of seemingly random states. The threshold seems to lie in the rate of growth: if the system is allowed to grow faster, the chances that it will show complex behaviour increase. In the process, however, it generates so many states that it becomes difficult to handle [Wolfram 2002, pp. 204 – 209].
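A toy multiway system makes this behaviour tangible (the rules below are my own hypothetical example, not from the book): every applicable replacement is applied at every position of every string, and all strings generated so far are kept.

```python
def multiway_step(states, rules):
    """Apply every rule at every matching position of every string; keep all states."""
    new = set(states)
    for s in states:
        for lhs, rhs in rules:
            start = s.find(lhs)
            while start != -1:
                new.add(s[:start] + rhs + s[start + len(lhs):])
                start = s.find(lhs, start + 1)
    return new

rules = [("A", "AB"), ("B", "A")]
states = {"A"}
for _ in range(3):
    states = multiway_step(states, rules)
# each string in `states` is the endpoint of one possible history
```

Even these two rules generate seven distinct strings within three steps; faster-growing rule sets quickly produce more states than can be handled.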

Chapter 6: Starting from Randomness
If systems are started with random initial conditions (up to this point they started with very simple initial conditions such as one black or one white cell), they manage to exhibit repetitive, nested as well as complex behaviour. They are capable of generating a pattern that is partially random and partially locally structured. The point is that the initial conditions may be in part, but not alone, responsible for the existence of complex behaviour of the system [Wolfram 2002, pp. 223 – 230].

Class 1 – the behaviour is very simple and almost all initial conditions lead to exactly the same uniform final state

Class 2 – there are many different possible final states, but all of them consist just of a certain set of simple structures that either remain the same forever or repeat every few steps

Class 3 – the behaviour is more complicated, and seems in many respects random, although triangles and other small-scale structures are essentially always seen at some level

Class 4 – this class of systems involves a mixture of order and randomness: localized structures are produced which on their own are fairly simple, but these structures move around and interact with each other in very complicated ways.

‘There is no way of telling into which class a cellular automaton falls by studying its rules. What is needed is to run them and visually ascertain which class it belongs to' [Wolfram 2002, p. 235].
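In that spirit, classification is empirical: run the rule and look at the picture. A small sketch of my own that renders a run as ASCII art from random (but seeded, hence repeatable) initial conditions; rule 110 is the classic Class 4 example.

```python
import random

def run_and_render(rule, width=40, steps=10, seed=0):
    """Render `steps` rows of an elementary CA as '#' (black) and '.' (white)."""
    random.seed(seed)  # fixed seed: the 'random' start is repeatable
    row = [random.randint(0, 1) for _ in range(width)]
    lines = []
    for _ in range(steps):
        lines.append("".join("#" if c else "." for c in row))
        row = [(rule >> ((row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % width])) & 1
               for i in range(width)]
    return "\n".join(lines)

print(run_and_render(110))
```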

One-dimensional cellular automata of Class 4 often sit on the boundary between Class 2 and Class 3, settling into neither. There seems to be some kind of transition. They do have characteristics of their own, notably localized structures, that belong to neither Class 2 nor Class 3 behaviour. This behaviour, including localized structures, can occur in ordinary discrete cellular automata, in continuous cellular automata and in two-dimensional cellular automata alike.

Sensitivity to Initial Conditions and Handling of Information
Class 1 – changes always die out. Information about a change is always quickly forgotten

Class 2 – changes may persist, but they remain localized, contained in a part of the system. Some information about the change is retained in the final configuration, but it remains local and is therefore not communicated throughout the system

Class 3 – changes spread at a uniform rate throughout the entire system. Change is communicated long-range, as local structures travelling around the system are affected by the change

Class 4 – changes spread sporadically, affecting other cells locally. These systems are capable of communicating long-range, but this happens only when localized structures are affected [Wolfram 2002, p. 252].

In Class 2 systems, the logical connection between their eventually repetitive behaviour and the fact that no long-range communication takes place is that the absence of long-range communication forces the system to behave as if its size were limited. This behaviour follows from the general result that any system of limited size, with discrete steps and definite rules, will eventually repeat itself.
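That general result can be checked directly for a toy case (my own sketch): iterate a small finite CA, record every state, and stop at the first recurrence. With width 8 there are at most 2^8 = 256 states, so a repeat must occur within 256 steps.

```python
def step(cells, rule=90):
    """One update of an elementary CA, returned as a hashable tuple."""
    n = len(cells)
    return tuple((rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
                 for i in range(n))

def steps_until_repeat(initial):
    """Run until a state recurs; return (first repeat time, its first occurrence)."""
    seen = {}
    state, t = tuple(initial), 0
    while state not in seen:
        seen[state] = t
        state, t = step(state), t + 1
    return t, seen[state]

t, first = steps_until_repeat([0, 0, 0, 1, 0, 0, 0, 0])
# the system has entered a cycle of length t - first
```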

In Class 3 systems the possible sources of randomness are the randomness present in the initial conditions (in the case of a cellular automaton the initial cells are chosen at random, versus one single black or white cell for simple initial conditions) and the sensitive dependence on initial conditions of the process. Random behaviour in a Class 3 system can occur even if there is no randomness in its initial conditions. There is no a priori difference between the behaviour of most systems generated on the basis of random initial conditions and that based on simple initial conditions1. The dependence of the overall pattern on the initial conditions is limited: although the produced randomness is evident in many cases, only the exact shape of the pattern differs with the initial conditions. This is a form of stability, for whatever initial conditions the system has to deal with, it always produces similar, recognizably random behaviour as a result.

In Class 4 there must be some structures that can persist forever. If a system is capable of showing sufficiently complicated structures, then eventually, for some initial condition, a moving structure is found as well. Moving structures are inevitable in Class 4 systems. It is a general feature of Class 4 cellular automata that with appropriate initial conditions they can mimic the behaviour of all sorts of other systems. The behaviour of Class 4 cellular automata can be diverse and complex even though their underlying rules are very simple (compared to other cellular automata). The way that different structures existing in Class 4 systems interact is difficult to predict. The behaviour resulting from the interaction is vastly more complex than the behaviour of the individual structures, and the effects of an interaction may take a long time (many steps) after the collision to become clear.

It is common to be able to design special initial conditions so that one cellular automaton behaves like another. The trick is that the special initial conditions must be designed so that the behaviour of the emulated cellular automaton is contained within the overall behaviour of the emulating one.

Attractors
The behaviour of a cellular automaton depends on the specified initial conditions. The behaviour of the system, the sequences shown, gets progressively more restricted as the system develops. The resulting end-state or final configuration can be thought of as an attractor for that cellular automaton. Usually many different but related initial conditions lead to the same end-state: a basin of attraction leads the system to an attractor, visible to the observer as the final configuration of the system.
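For a very small system the basins can be enumerated exhaustively. The sketch below (my own illustration, rule 90 at width 4) follows every one of the 16 possible initial conditions into its cycle and groups them by the attractor reached; at this size they all drain into a single attractor.

```python
from itertools import product

def step(cells, rule=90):
    n = len(cells)
    return tuple((rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
                 for i in range(n))

def attractor(state):
    """Follow the evolution until it cycles; return the cycle as a frozenset."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])

basins = {}
for init in product([0, 1], repeat=4):
    basins.setdefault(attractor(init), []).append(init)
# many different initial conditions, few attractors
```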

Chapter 7 Mechanisms in Programs and Nature
Processes happening in nature are complicated. Simple programs are capable of producing this complicated behaviour. To what extent is the behaviour of simple programs such as cellular automata relevant to phenomena observed in nature? ‘It (the visual similarity of the behaviour of cellular automata and natural processes being, DPB) is not, I believe, any kind of coincidence, or trick of perception. And instead what I suspect is that it reflects a deep correspondence between simple programs and systems in nature' [Wolfram 2002, p. 298].

Striking similarities exist between the behaviours of many different processes in nature. This suggests a kind of universality in the types of behaviour of these processes, regardless of the underlying rules. Wolfram suggests that this universality of behaviour encompasses both natural systems' behaviour and that of cellular automata. If that is the case, studying the behaviour of cellular automata can give insight into the behaviour of processes occurring in nature. ‘For it (the observed similarity in systems behaviour, DPB) suggests that the basic mechanisms responsible for phenomena that we see in nature are somehow the same as those responsible for phenomena that we see in simple programs' [Wolfram 2002, p. 298].

A feature of the behaviour of many processes in nature is randomness. Three sources of randomness in simple programs such as cellular automata exist:
the environment – randomness is injected into the system from outside, through the interactions of the system with its environment.
initial conditions – the initial conditions are a source of randomness from outside. The randomness in the system's behaviour is a transcription of the randomness in the initial conditions. Once the system evolves, no new randomness is introduced from interactions with the environment, so the system's behaviour can be no more random than its initial conditions. In practical terms, isolating a system from any outside interaction is often not realistic, and so the importance of this category is often limited.
intrinsic generation – simple programs often show random behaviour even though no randomness is injected from interactions with outside entities. Assuming that systems in nature behave like these simple programs, it is reasonable to assume that intrinsic generation of randomness occurs in nature also. How random is this internally generated randomness really? Based on tests using existing measures of randomness, it is at least as random as any process seen in nature. It is not random by a much-used definition classifying behaviour as random only if it can never be generated by a simple procedure such as the simple programs at hand, but this is a conceptual and not a practical definition. One limit to the randomness of numbers generated with a simple program is that the system is bound to repeat itself if it exists in a limited space. Another limit is the set of initial conditions: because the system is deterministic, running a rule twice on the same initial conditions will generate the same sequence, and hence the same random number. Lastly, truncating the generated number will limit its randomness. The clearest sign of intrinsic randomness is its repeatability: in the generated graphs, areas will evolve with similar patterns. This is not possible when starting from different initial conditions or with external randomness injected during interaction. The existence of intrinsic randomness allows a discrete system to behave in seemingly continuous ways, because the randomness at a local level averages out the differences in behaviour of individual simple programs or system elements. Continuous systems are capable of showing discrete behaviour and vice versa.
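The repeatability of intrinsically generated randomness is easy to demonstrate (my own sketch): the centre column of rule 30, run from a single black cell, looks random, yet two runs from the same simple initial condition agree bit for bit, with no randomness injected from outside.

```python
def rule30_centre_bits(n_bits, width=257):
    """Centre-column bits of rule 30 started from a single black cell."""
    row = [0] * width
    row[width // 2] = 1
    bits = []
    for _ in range(n_bits):
        bits.append(row[width // 2])
        row = [(30 >> ((row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % width])) & 1
               for i in range(width)]
    return bits

a = rule30_centre_bits(32)
b = rule30_centre_bits(32)
# deterministic, hence exactly repeatable -- yet the bits pass as random
```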

Constraints
‘But despite this (capability of constraints to force complex behaviour, DPB) my strong suspicion is that of all of the examples we see in nature almost none can in the end best be explained in terms of constraints' [Wolfram 2002, p. 342]. Constraints are a way of making a system behave as the observer wants it to behave. To find out which constraints are required to deliver the desired behaviour of a system in nature is, in practical terms, far too difficult. The reason is that the number of configurations in any space soon becomes very large, and it seems impossible for systems in nature to work out which configuration satisfies the constraints at hand, especially if this procedure needs to be performed routinely. Even if possible, the procedure to find the rule that actually satisfies the constraint is so cumbersome and computationally intensive that it seems unlikely that nature uses it to evolve its processes. As a consequence nature seems to work not with constraints but with explicit rules to evolve its processes.

Implications for everyday systems
Intuitively, from the perspective of traditional science, the more complex the system, the more complex its behaviour. It has turned out that this is not the case: simple programs are quite capable of generating complicated behaviour. In general the explicit (mechanistic) models show behaviour that matches the behaviour of the corresponding systems in nature, but often diverges in the details.
The traditional way to use a model to make predictions about an observed system is to input a few numbers from the observed system into the model and then try to predict the system's behaviour from the model's outputs. When the observed behaviour is complex (for example if it exhibits random behaviour), this approach is not feasible.
If the model is represented by a number of abstract equations, then it is unlikely (nor was it intended) that the equations describe the mechanics of the system; they only describe its behaviour in whatever way works to make a prediction about its future behaviour. This usually implies disregarding the details and taking into account only the important factors driving the behaviour of the system.
Using simple programs, there is also no direct relation between the behaviour of the elements of the studied system and the mechanics of the program. ‘.. all any model is supposed to do – whether it is a cellular automaton, a differential equation or anything else – is to provide an abstract representation of effects that are important in determining the behaviour of a system. And below the level of these effects there is no reason that the model should actually operate like the system itself' [Wolfram 2002, p. 366].
The approach in the case of cellular automata is then to visually compare (compare the pictures of) the outcomes of the model with the behaviour of the system, and to try to draw conclusions about similarities in the behaviour of the observed system and the created system.

Biological Systems
Genetic material can be seen as the program of a life form. Its lines contain rules that determine the morphology of a creature via the process of morphogenesis. Traditional Darwinism suggests that the morphology of a creature determines its fitness. Its fitness in turn determines its chances of survival and thus the survival of its genes: the more individuals of the species survive, the bigger its representation in the gene pool. In this evolutionary process the occurrence of mutations adds some randomness, so that the species continuously searches the genetic space of solutions for the combination of genes with the highest fitness.
‘The problem of maximizing fitness is essentially the same as the problem of satisfying constraints..' [Wolfram 2002, p. 386]. Sufficiently simple constraints can be satisfied by iterative random searches, which converge to some solution, but if the constraints are complicated then this is no longer the case.
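An iterative random search of the kind meant here can be sketched in a few lines (my own toy constraint, not the book's): flip one random cell, and keep the change only if it does not increase the number of constraint violations. For a simple constraint, here ‘no two adjacent cells equal', the search converges quickly; for complicated constraints this strategy stalls.

```python
import random

def violations(cells):
    """Count adjacent pairs of equal cells (the constraint to satisfy)."""
    return sum(cells[i] == cells[i + 1] for i in range(len(cells) - 1))

def random_search(n=20, max_steps=20000, seed=1):
    random.seed(seed)
    cells = [random.randint(0, 1) for _ in range(n)]
    for step_count in range(max_steps):
        if violations(cells) == 0:
            return cells, step_count
        trial = cells[:]
        trial[random.randrange(n)] ^= 1   # one random mutation
        if violations(trial) <= violations(cells):
            cells = trial                 # keep the change if it is not worse
    return cells, max_steps

solution, steps_taken = random_search()
```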
Biological systems have some tricks to speed up this process, such as sexual reproduction to mix up the genetic material on a large scale, and genetic differentiation to allow for localized updating of genetic information for separate organs.
Wolfram however considers it ‘implausible that the trillions or so of generations of organisms since the beginning of life on earth would be sufficient to allow optimal solutions to be found to constraints of any significant complexity' [Wolfram 2002, p. 386]. To add insult to injury, the design of many existing organisms is far from optimal and is better described as a make-do, easy and cheap solution that will hopefully not immediately be fatal to its inhabitant.
In that sense not every feature of every creature points at some advantage for the fitness of the creature: many features are hold-overs from elements evolved at some earlier stage. Many features are the way they are because they are fairly straightforward to make based on simple programs, and then they are just good enough for the species to survive, not more and not less. It is not the details filled in afterwards but the relatively coarse features that support the survival of the species.
In a short program there is little room for frills: almost any mutation in the program will tend to have an immediate effect on at least some details of the phenotype. If, as a mental exercise, biological evolution is modeled as a sequence of cellular automata, each using the other's output sequentially as input, then it is easy to see that the final behaviour of the morphogenesis is quite complex.
It is, however, not required that the program be very long or complicated to generate complexity. A short program with some essential mutations suffices. The reason that there isn’t vastly more complexity in biological systems while it is so easy to come by and while the forms and patterns usually seen in biological systems are fairly simple is that: ‘My guess is that in essence it (the propensity to exhibit mainly simple patterns DPB) reflects limitations associated with the process of natural selection .. I suspect that in the end natural selection can only operate in a meaningful way on systems or parts of systems whose behaviour is in some sense quite simple‘ [Wolfram 2002, pp. 391 – 92]. The reasons are:
– when behaviour is complex, the number of actual configurations quickly becomes too large to explore
– when the layouts of different individuals in a species become very different, the details may carry different weights in their survival skills; if the variety of detail becomes large, acting consistently and definitively becomes increasingly difficult
– when the overall behaviour of a system is more complex than that of any of its subsystems, any change entails a large number of changes to all the subsystems, each with a different effect on the behaviour of the individual subsystems, and natural selection has no way to pick the relevant changes
– if changes occur in many directions, it becomes very difficult for changes to cancel out or to find one direction, and thus for natural selection to understand what to act on
– iterative random searches tend to be slow and make very little progress towards a global optimum.

If a feature is to be successfully optimized for different environments, then it must be simple. While it has been claimed that natural selection increases the complexity of organisms, Wolfram suggests that it reduces complexity: ‘..it tends to make biological systems avoid complexity, and be more like systems in engineering' [Wolfram 2002, p. 393]. The difference is that engineering systems are designed and developed in a goal-oriented way, whereas in evolution this is done by an iterative random search process.

There is evidence from the fossil record that evolution brings smooth change and relative simplicity of features in biological systems. If this evolutionary process points at simple features and smooth changes, then where does the diversity come from? It turns out that a change in the rate of growth dramatically changes the shape of the organism as well as its mechanical operation.

Fundamental Physics
‘My approach in investigating issues like the Second Law is in effect to use simple programs as metaphors for physical systems. But can such programs in fact be more than that? And for example is it conceivable that at some level physical systems actually operate directly according to the rules of a simple program?' [Wolfram 2002, p. 434].

Out of the 256 rules for cellular automata based on two colours and nearest-neighbour interaction, only six exhibit reversible behaviour. This means that the overall behaviour can be reversed if the rule is run backwards. Their behaviour, however, is not very interesting. Out of the 7,500 billion rules based on three colours and next-neighbour interaction, around 1,800 exhibit reversible behaviour, of which a handful show interesting behaviour.

The rules can be identified as reversible if their pictured behaviour can be mirrored vertically (the generated graphs usually run from top to bottom, DPB): the future then looks the same as the past. It turns out that the pivotal design feature of reversible rules is that existing rules can be adapted to add a dependence on the states of neighbouring cells two steps back. Note that this reversibility of rules can also be constructed using the preceding step only if, instead of two states, four are allowed. The overall behaviour shown by these rules is reversible, whether the initial conditions be random or simple. It is shown that a small fraction of the reversible rules exhibit complex behaviour for simple and random initial conditions alike.
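The two-steps-back construction can be sketched directly (my own illustration, a ‘second-order' version of rule 30): the new cell is the ordinary rule's output XORed with the cell's value two steps earlier. Swapping the two final rows and running the same rule recovers the initial conditions exactly.

```python
def second_order_step(prev, curr, rule):
    """New row = ordinary elementary-CA output XOR the row two steps back."""
    n = len(curr)
    out = []
    for i in range(n):
        idx = (curr[i - 1] << 2) | (curr[i] << 1) | curr[(i + 1) % n]
        out.append(((rule >> idx) & 1) ^ prev[i])
    return out

def evolve(prev, curr, rule, steps):
    for _ in range(steps):
        prev, curr = curr, second_order_step(prev, curr, rule)
    return prev, curr

width = 21
prev = [0] * width
curr = [0] * width
curr[width // 2] = 1

p, c = evolve(prev, curr, 30, 16)
rp, rc = evolve(c, p, 30, 16)     # run backwards: swap the two final rows
assert (rc, rp) == (prev, curr)   # the past is recovered exactly
```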

Whether this reversibility actually happens in real life depends on the theoretical definition of the initial conditions and on our ability to set them up so as to exhibit the reversible overall behaviour. If the initial conditions are exactly right, then behaviour that grows increasingly complex towards the future can become simpler when reversed. In practical terms this hardly ever happens, because we tend to design and implement initial conditions that are easy for the experimenter to describe and construct. It seems reasonable that in any meaningful experiment the activities to set up the experiment should be simpler than the process that the experiment is intended to observe. If we consider these processes as computations, then the computations required to set up the experiment should be simpler than the computations involved in the evolution of the system under review. So if we start with simple initial conditions, trace back to the more complex ones, and then start the evolution of the system there anew, we will surely find that the system shows increasingly simple behaviour. Finding these complicated, seemingly random initial conditions in any other way than by tracing a reversible process to and from the simple initial conditions seems impossible. This is also the basic argument for the existence of the Second Law of Thermodynamics.

Entropy is defined as the amount of information about a system that is still unknown after measurements on the system. By this definition, if more measurements are performed over time then the entropy will tend to decrease. In other words: should the observer be able to know with absolute certainty properties such as the positions and velocities of each particle in the system, then the entropy would be zero. According to the definition, entropy is the information with which it would be possible to pick out the configuration the system is actually in from every possible configuration of the distribution of particles satisfying the outcomes of the measurements on the system. To increase the number and quality of the measurements involved amounts to the same computational effort as is required for the actual evolution of the system. Once randomness is produced, the actual behaviour of the system becomes independent of the details of the initial conditions of the system. In a reversible system different initial conditions must lead to a different evolution of the system, for else there would be no way of reversing the system behaviour in a unique way. But even though the outcomes from different initial conditions can be very different, the overall patterns produced by the system can still look much the same. But to identify the initial conditions from the state of a system at any time implies a computational effort that is far beyond the effort for a practical and meaningful measurement procedure. If a system generates sufficient randomness, then it evolves towards a unique equilibrium whose properties are for practical purposes independent of its initial conditions. In this way it is possible to characterize many systems by a few typical parameters.

With cellular automata it is possible, using reversible rules and starting from a random set of initial conditions, to generate behaviour that increases order instead of tending towards more random behaviour, e.g. rule 37R [Wolfram 2002, pp. 452 – 57]. Its behaviour neither completely settles down to order nor does it generate randomness only. Although it is reversible, it does not obey the Second Law. To be able to reverse this process, however, the experimenter would have to set up the initial conditions exactly so as to be able to reach the ‘earlier' stages, else the information generated by the system is lost. But how can there be enough information to reconstruct the past? All the intermediate local structures emitted on the way to the ‘turning point' would have to be absorbed by the system on its way back in order, in the end, to reach its original state. No local structure emitted on the way to the turning point can escape.

The evolution in systems is therefore intrinsically? not reversible. All forms of self organisation in cellular automata without reversible rules can potentially occur?

For these reasons it is possible for parts of the universe to become more organised than other parts, even with all laws of nature being reversible. What cellular automata such as 37R show is that it is possible even for closed systems not to follow the Second Law. If the system gets partitioned, then within some partitions order may evolve while elsewhere in the system randomness is generated. Any closed system will repeat itself at some point in time. Until then it must visit every possible configuration, most of which will be, or seem to be, random. Rule 37R does not produce this ergodicity: it visits only a small fraction of all possible states before repeating.

Conserved Quantities and Continuum Phenomena
Examples are quantities of energy and electric charge. Can the amount of information in exchanged messages be a proxy for a quantity to be conserved?

With nearest-neighbour rules, cellular automata do exhibit this principle (shown as the number of cells of equal colour being conserved at each step), but without showing sufficiently complex behaviour. Using next-neighbour rules, they are capable of showing conservation while also exhibiting interesting behaviour. Even more interesting and random behaviour occurs when block cells are used, especially with three colours instead of two. In this setup the total number of black cells must remain equal for the entire system. On a local level, however, the number of black cells does not necessarily remain the same.
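Conservation can be verified mechanically. Elementary rule 184 (often called the ‘traffic' rule) is a nearest-neighbour rule that conserves the total number of black cells at every step, a sketch of the principle described above (my own example, with a seeded random start):

```python
import random

def step(cells, rule=184):
    n = len(cells)
    return [(rule >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(0)
row = [random.randint(0, 1) for _ in range(40)]
totals = []
for _ in range(20):
    totals.append(sum(row))
    row = step(row)
# every entry of `totals` is identical: the black-cell count is conserved
```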

Multiway systems
In a multiway system all possible replacements are always executed at every step, thereby generating many new strings (i.e. combinations of accumulated replacements) at each step. ‘In this way they allow for multiple histories for a system. At every step multiple replacements are possible and so, tracing back the different paths from string to string, different histories of the system are possible. This may appear strange, for our understanding of the universe is that it has only one history, not many. But if the state of the universe is a single string in the multiway system, then we are part of that string and we cannot look into it from the outside. Being on the inside of the string, it is our perception that we follow just one unique history and not many. Had we been able to look at it from without, then the path that the system followed would seem arbitrary' [Wolfram 2002, p. 505]. If the universe is indeed a multiway system, then another source of randomness is the actual path that its evolution has followed. This randomness component is similar to the outside randomness discussed earlier, but different in the sense that it would occur even if this universe were perfectly isolated from the rest of the universe.

There are sufficient other sources of randomness to explain interesting behaviour in the universe, and so that by itself is no sufficient reason to assume a multiway system as the basis for the evolution of the universe. What other reasons can there be to underpin the assumption that the underlying mechanism of the universe is a multiway system? For one, multiway systems are quite capable of generating a vast number of different possible strings, and therefore many possible connections between them, meaning different histories.

However, looking at the sequences of those strings, it becomes obvious that they cannot be arbitrary. Each path is defined by a sequence of ways in which replacements by the multiway system's rules are applied. And each such path in turn defines a causal network. Certain underlying rules have the property that the form of this causal network ends up being the same regardless of the order in which the replacements are applied, and thus regardless of the path that is followed in the multiway system. If the multiway system ends up with the same causal network, then it must be possible to apply a replacement to a string already generated and end up at the same final state. Whenever paths always eventually converge, there will be similarities on a sufficiently large scale in the obtained causal networks. And so the structure of the causal networks may vary a lot at the level of individual events, but on a sufficiently large scale the individual details will be washed out and the structure of the causal network will be essentially the same: at a sufficiently high level the universe will appear to have a unique history, while the histories at local levels differ.

Processes of perception and analysis
The processes that lead to some forms of behaviour in systems are comparable to some of the processes that are involved in their perception and analysis. Perception relates to the immediate reception of data via sensory input; analysis involves conscious and computational effort. Perception and analysis are an effort to reduce the events in our everyday lives to manageable proportions so that we can use them. Reduction of data happens by ignoring whatever is not necessary for our everyday survival and by finding patterns in the remaining data, so that individual elements in the data do not need to be specified. If the data contains regularities then there is some redundancy in the data. The reduction is important for reasons of storage and communication.
This process of perception and analysis is the inverse of the evolution of systems' behaviour from simple programs: the task is to identify whatever it is that produces some kind of behaviour. For observed complex behaviour this is not an easy task, for the complex behaviour generated bears no obvious relation to the simple programs or rules that generate it. An important difference is that there are many more ways to generate complex behaviour than there are to recognize the origins of this kind of behaviour. The task of finding the origins of this behaviour is similar to solving problems satisfying a set of constraints.
Randomness is roughly defined as the apparent inability to find a regularity in what we perceive. Absence of randomness implies that redundancies are present in what we see, hence that a shorter description can be given of what we see that allows us to reproduce it. In the case of randomness, we would have no choice but to repeat the entire picture, pixel by pixel, to reproduce it. The fact that our usual perceptual abilities do not allow such a description does not mean that no such description exists. It is very much possible that the randomness is generated by the repetition of a simple rule a few times over. Does that, then, imply that the picture is not random? From a perceptual point of view it is, because we are incapable of finding the corresponding rule; from a conceptual point of view this definition is not satisfactory. In the latter case the definition would be that randomness exists only if no such rule exists, and not merely if we cannot immediately discern it. However, finding the short description, i.e. the short program that generates this random behaviour, is not possible with a finite amount of computation. Restricting the computational effort allowed for finding out whether something is random also seems unsatisfactory: the restriction is arbitrary, it still requires a vast amount of computational work, and many systems will not be labelled as random, for the wrong reasons. So the definition of randomness needs to make some reference to how the short descriptions are to be found. ‘..something could be considered to be random whenever there is essentially no simple program that can succeed in detecting regularities in it‘ [Wolfram 2002, p 556]. In practical terms this means that if, after comparing the behaviour of a few simple programs with the behaviour of the observed would-be random generator, no regularities are found, then the behaviour of the observed system is random.
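This practical criterion can be mimicked with a general-purpose compressor standing in for the ‘few simple programs’ (an assumption of this sketch, not Wolfram's actual procedure):

```python
# Crude proxy for the criterion above: call data "random" when a simple
# program (here: a general-purpose compressor) finds no regularity,
# i.e. fails to produce a shorter description.

import random
import zlib

def seems_random(data: bytes) -> bool:
    return len(zlib.compress(data, 9)) >= len(data)

random.seed(0)
periodic = b"AB" * 500                                  # obvious regularity
noise = bytes(random.getrandbits(8) for _ in range(1000))

print(seems_random(periodic))   # a much shorter description exists
print(seems_random(noise))      # no regularity found by this simple program
```

Note that pseudo-random bytes are themselves generated by a short program, which illustrates the conceptual point in the text: the compressor fails to find the rule, so perceptually the data counts as random.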

Complexity
If we say that something is complex, we say that we have failed to find a simple description for it, hence that our powers of perception and analysis have failed on it. How the behaviour is described depends on what purpose the description serves, or how we perceive the observed behaviour, and the assessment of the complexity involved may differ depending on the purpose of the observation. Given this purpose, the shorter the description, the less complex the behaviour. The remaining question is whether it is possible to define complexity independently of the details of the methods of perception and analysis. The traditional opinion was that any complex behaviour stems from a complex system, but this is no longer tenable: it takes only a simple program to generate a picture for which our perception can find no simple overall description.
‘So what this means is that, just like every other method of analysis that we have considered, we have little choice but to conclude that traditional mathematics and mathematical formulas cannot in the end realistically be expected to tell us very much about patterns generated by systems like rule 30‘ [Wolfram 2002, p 620].
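For reference, rule 30 itself is a one-line update rule (new cell = left XOR (centre OR right)); this sketch prints the pattern the quote refers to:

```python
# Rule 30: a trivially simple rule whose pattern resists compact
# mathematical description.

def rule30_rows(width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1          # single black cell in the middle
    for _ in range(steps):
        yield cells
        # new cell = left XOR (centre OR right), with wraparound edges
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]

for row in rule30_rows():
    print("".join("#" if c else "." for c in row))
```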

Human Thinking
Human thinking stands out from other methods of perception in its extensive use of memory, the usage of the huge amount of data that we have encountered and interacted with previously. Human memory works by retrieval based on general notions of similarity, rather than on exact specifications of whatever item of memory we are looking for. Ordinary hashing could not work here, because similar experiences summarized by different words might end up being stored in completely different locations, and the relevant piece of information might not be retrieved when the search key involved hits a different hash code. What is needed is to identify whatever really sets one piece of information apart from other pieces, to store that, and to discard the rest. The effect is that pieces of information that are similar enough get the same representation, and can thus be retrieved whenever some remote or seemingly remote association with the situation at hand occurs.
This can be achieved with a number of templates that the information is compared with. Only if the remaining signal, after passing through layers of nerve cells, generates a certain hash code is the information deemed relevant and retrieved. Small variations in the input only rarely result in variations in the output; in other words, quick retrieval (based on the hash code) of similar (not necessarily exactly the same) information is possible. The stored information is pattern based only, and is not stored as meaningful or a priori relevant information.
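A toy illustration of similarity-preserving retrieval (assuming a SimHash-style scheme purely for illustration; the text does not claim this is how the brain works): similar inputs tend to get nearby signatures, whereas an exact hash would scatter them.

```python
# SimHash-style signature: each word votes on each bit; similar word sets
# produce signatures with small Hamming distance, enabling retrieval of
# "similar enough" items rather than exact matches.

import hashlib

def simhash(words, bits=16):
    totals = [0] * bits
    for w in words:
        h = int(hashlib.md5(w.encode()).hexdigest(), 16)
        for b in range(bits):
            totals[b] += 1 if (h >> b) & 1 else -1
    return sum(1 << b for b in range(bits) if totals[b] > 0)

hamming = lambda x, y: bin(x ^ y).count("1")

a = simhash("the cat sat on the mat".split())
b = simhash("the cat sat on a mat".split())
c = simhash("quantum flux capacitor readings".split())

print(hamming(a, b), hamming(a, c))  # similar texts usually end up closer
```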

‘But it is my strong suspicion that in fact logic is very far from fundamental, particularly in human thinking‘ [Wolfram 2002, p 627]. We retrieve connections from memory without much effort, but perform logical reasoning cumbersomely, going one step after the next, and it is possible that in that process we are mainly using elements of logic that we have learned from previous experience.

Chapter 11 The Notion of Computation
All sorts of behaviour can be produced by simple rules such as cellular automata, and a framework is needed for thinking about this behaviour. Traditional science provides such a framework only if the observed behaviour is fairly simple. What can we do if the observed behaviour is more complex? The key idea is the notion of computation. If the various kinds of behaviour are generated by simple rules, or simple programs, then the way to think about them is in terms of the computations they can perform: the input is provided by the initial conditions and the output by the state of the system after a number of steps. What happens in between is the computation, in abstract terms and regardless of the details of how it actually works. Abstraction is useful when discussing systems' behaviour in a unified way, regardless of the different kinds of underlying rules: even though the internal workings of systems may be different, the computations they perform may be similar. At this level it may become possible to formulate principles that apply to a variety of different systems, independent of the detailed structures of their rules.

‘At some level, any cellular automaton – or for that matter, any system whatsoever – can be viewed as performing a computation that determines what its future behaviour will be‘ [Wolfram 2002, p 641]. For some of the cellular automata described, it so happens that the computations they perform can be described to a limited extent in traditional mathematical notions. Answers to the question of the framework come from practical computing.

The Phenomenon of Universality
Based on our experience with mechanical and other devices it might be assumed that different kinds of tasks require different underlying constructions. The existence of computers shows otherwise: a universal system with a fixed underlying construction can be made to execute different tasks by being programmed in different ways. The hardware is the same; different software programs the computer for different tasks.
This idea of universality is also the basis for programming languages, where instructions from a fixed set are strung together in different ways to create programs for different tasks. Conversely, a computer programmed with a program written in any language can perform the same set of tasks: any computer system or language can be set up to emulate any other. An analogue is human language: virtually any topic can be discussed in any language, and given two languages, it is largely possible to translate between them.
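The ‘same hardware, different software’ idea in miniature: one update engine, parameterized by an 8-bit rule number, is ‘programmed’ to perform different computations (a sketch; the rule numbering follows Wolfram's standard convention):

```python
# One fixed update engine ("hardware"), programmed by an 8-bit rule number
# ("software") to behave as any of the 256 elementary cellular automata.

def make_ca(rule: int):
    table = [(rule >> i) & 1 for i in range(8)]   # output for each neighbourhood
    def step(cells):
        n = len(cells)
        return [table[(cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]
    return step

row = [0] * 11
row[5] = 1
for rule in (90, 110, 250):        # same machinery, three different "programs"
    step = make_ca(rule)
    r = row
    for _ in range(3):
        r = step(r)
    print(rule, r)
```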
Are natural systems universal as well? ‘The basic point is that if a system is universal, then it must effectively be capable of emulating any other system, and as a result it must be able to produce behavior that is as complex as the behavior of any other system. So knowing that a particular system is universal thus immediately implies that the system can produce behavior that is in a sense arbitrarily complex‘ [Wolfram 2002, p 643].

Just as the intuition that complex behaviour must be generated by complex rules is wrong, so the idea that simple rules cannot be universal is wrong. It is often assumed that universality is a unique and special quality, but now it becomes clear that it is widespread and occurs in a wide range of systems, including the systems we see in nature.

It is possible to construct a universal cellular automaton and to input initial conditions so that it emulates any other cellular automaton, and thus produces any behaviour that the other cellular automaton can produce. The conclusion is (again) that nothing new is gained by using rules more complex than the rules of the universal cellular automaton: given the universal cellular automaton, more complicated rules can always be emulated by its simple rules and appropriate initial conditions. Universality can occur in simple cellular automata with two colours and nearest-neighbour rules, but their operation is more difficult to follow than that of cellular automata with a more complex set-up.

Emulating other Systems with Cellular Automata
A cellular automaton can emulate mobile automata, Turing machines, substitution systems, sequential substitution systems, tag systems, register machines, number systems and simple operators. A cellular automaton can also emulate a practical computer, as it can emulate registers, numbers, logic expressions and data retrieval; cellular automata can thus perform the computations that a practical computer can perform.
And so a universal cellular automaton is universal beyond being capable of emulating all other cellular automata: it is capable of emulating a vast array of other systems, including practical computers. Reciprocally, all those other systems can be made to emulate cellular automata, including a universal cellular automaton, and they must therefore themselves be universal, because a universal cellular automaton can emulate a wide array of systems, including all possible mobile automata and symbolic systems. ‘By emulating a universal cellular automaton with a Turing machine, it is possible to construct a universal Turing machine‘ [Wolfram 2002, p 665].
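Since Turing machines recur throughout this discussion, a minimal interpreter for one may help (the machine shown, a unary incrementer, is an invented toy example, not one of Wolfram's machines):

```python
# Minimal Turing machine interpreter: a rule table maps (state, read symbol)
# to (new state, symbol to write, head movement).

def run_tm(rules, tape, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))           # sparse tape, blank = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return [tape[i] for i in sorted(tape)]

# Toy machine: scan right over 1s, then append a 1 (unary increment).
rules = {
    ("start", 1): ("start", 1, +1),
    ("start", 0): ("halt", 1, 0),
}
print(run_tm(rules, [1, 1, 1]))   # unary 3 -> unary 4
```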

‘And indeed the fact that it is possible to set up a universal system using essentially just the operations of ordinary arithmetic is closely related to the proof of Gödel’s Theorem‘ [Wolfram 2002, p 673].

Implications of Universality
All of the discussed systems can be made to emulate each other. All of them have certain features in common. And now, thinking in terms of computation, we can begin to see why this might be the case. They have common features just because they can be made to emulate each other. The most important consequence is that from a computational perspective a very wide array of systems with very different underlying structures are at some level fundamentally equivalent. Although the initial thought might have been that the different kinds of systems would have been suitable for different kinds of computations, this is in fact not the case. They are capable of performing exactly the same kinds of computations.
Computation therefore can be discussed in abstract terms, independent of the kind of system that performs the computation: it does not matter what kind of system we use, any kind of system can be programmed to perform any kind of computation. The results of the study of computation at an abstract level are applicable to a wide variety of actual systems.
To be fair: not all cellular automata are capable of all kinds of computations, but some have large computational capabilities. Once past a certain threshold, the set of possible computations is always the same: beyond that threshold of universality, no additional complexity of the underlying rules increases the computational capabilities of the system. Once the threshold is passed, it does not matter what kind of system it is that we are observing.

The rule 110 Cellular Automaton
The threshold for the complexity of the underlying rules required to produce complex behaviour is remarkably low.

Class 4 Behaviour and Universality
Rule 110 with random initial conditions exhibits many localized structures that move around and interact with each other. This is not unique to that rule: this kind of behaviour is produced by all cellular automata of Class 4, and the suspicion is that any Class 4 system will turn out to have universal computational capabilities. Of the 256 nearest-neighbour rules with two colours, only four more or less comply (rules 124, 137 and 193, all requiring some trivial amendments). But for rules involving more colours, more dimensions and/or neighbours further away, Class 4 localized structures often emerge. The crux for the existence of Class 4 behaviour is the control of the transmission of information through the system.
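Rule 110 from random initial conditions can be watched directly; localized structures moving against a periodic background appear within a few dozen steps (a sketch; width, seed and step count are arbitrary choices):

```python
# Rule 110 from random initial conditions: prints a space-time diagram in
# which Class 4 localized structures can be spotted.

import random

def rule110_step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        l, c, r = cells[i - 1], cells[i], cells[(i + 1) % n]
        out.append((110 >> ((l << 2) | (c << 1) | r)) & 1)
    return out

random.seed(1)
row = [random.randint(0, 1) for _ in range(64)]
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)
```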

Universality in Turing Machines and other Systems
The simplest Universal Turing Machine currently known has two states and five possible colours. It might not be the simplest Universal Turing Machine in existence: the simplest lies somewhere between this and the machines with two states and two colours, none of which are universal. There is some evidence that a Turing Machine with two states and three colours is universal, but no proof exists as yet. There is a close connection between the appearance of complexity and universality.

Combinators can emulate rule 110 and have been known to be universal since the 1930s. Other symbolic systems show complex behaviour and may turn out to be universal too.

Chapter 12 The Principle of Computational Equivalence
The Principle of Computational Equivalence applies to any process of any kind, natural or artificial: ‘all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations‘ [Wolfram 2002, p 715]. This means that any process that follows definite rules can be thought of as a computation. For example the process of evolution of a system like a cellular automaton can be viewed as a computation, even though all it does is generate the behaviour of the system. Processes in nature can be thought of as computations, although the rules they follow are defined by the basic laws of nature and all they do is generate their own behaviour.

Outline of the principle
The principle asserts that there is a fundamental equivalence between many kinds of processes in computational terms.

Computation is defined as that which a universal system in the sense described here can do. It is possible to imagine a system capable of computations beyond those of universal cellular automata or other such systems, but such a system can never be constructed in our universe.

Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication. In other words: there is just one level of computational sophistication and it is achieved by almost all processes that do not seem obviously simple. Universality allows the construction of universal systems that can perform any computation and thus they must be capable of exhibiting the highest level of computational sophistication. From a computational view this means that systems with quite different underlying structures will show equivalence in that rules can be found for them that achieve universality and that can thus exhibit the same level of computational sophistication.
The rules need not be more complicated themselves to achieve universality hence a higher level of computational sophistication. On the contrary: many simple though not overly simple rules are capable of achieving universality hence computational sophistication. This property should furthermore be very common and occur in a wide variety of systems, both abstract and natural. ‘And what this suggests is that a fundamental unity exists across a vast range of processes in nature and elsewhere: despite all their detailed differences every process can be viewed as corresponding to a computation that is ultimately equivalent in its sophistication‘ [Wolfram 2002, p 719].

We could identify all of the existing processes, engineered or natural, and observe their behaviour. It will surely become clear that in many instances it will be simple repetitive or nested behaviour. Whenever a system shows vastly more complex behaviour, the Principle of Computational Equivalence then asserts that the rules underlying it are universal. Conversely: given some rule it is usually very complicated to find out if it is universal or not.

If a system is universal then it is possible, by choosing the appropriate initial conditions, to perform computations of any sophistication. No guarantee exists, however, that some large portion of all initial conditions result in behaviour of the system that is more interesting and not merely obviously simple. But even rules that are by themselves not complicated, given simple initial conditions, may produce complex behaviour and may well produce processes of computational sophistication.

A new law is introduced to the effect that no system can carry out explicit computations that are more sophisticated than those that can be carried out by systems such as cellular automata or Turing Machines. Almost all processes, except those that are obviously simple, achieve this limit of computational equivalence, implying that of all possible systems whose behaviour is not obviously simple, an overwhelming fraction are universal. Every process in this way can be thought of as a ‘lump of computation’.

The Validity of the Principle
The principle is counter-intuitive from the perspective of traditional science and there is no proof for it. Cellular automata are fundamentally discrete, and it appears that systems in nature are more sophisticated than these computer systems because, from a traditional perspective, natural systems should be continuous. But the presumed continuity of these systems is itself an idealization required by traditional methods. As an example: fluids are traditionally described by continuous models; however, they consist of discrete particles, and their computational behaviour must be that of a system of discrete particles.
‘It is my strong suspicion that at a fundamental level absolutely every aspect of our universe will in the end turn out to be discrete. And if this is so, then it immediately implies that there cannot ever ultimately be any form of continuity in our universe that violates the Principle of Computational Equivalence’ [Wolfram 2002, p 730]. In a continuous system, the computation is not local and every digit has in principle infinite length. And in the same vein: ‘.. it is my strong belief that the basic mechanisms of human thinking will in the end turn out to correspond to rather simple computational processes’ [Wolfram 2002, p 733].

Once a system passes a relatively low threshold of complexity, it must exhibit the same level of computational sophistication as any other such system. This means that observers will tend to be computationally equivalent to the systems they observe, and as a consequence they will consider the behaviour of such systems complex.

Computational Irreducibility
Almost all scientific triumphs have in common that they are based on finding ways to reduce the amount of computational work needed to predict how a system will behave. Most of the time the idea is to derive a mathematical formula that allows one to determine the outcome of the evolution of the system without having to trace its every step explicitly. There is, however, a great shortage of formulas describing all sorts of known and common systems' behaviour.
Traditional science takes as a starting point that many of the evolutionary steps performed by a system are an unnecessarily large effort, and it attempts to shortcut this process and find the outcome with less effort. However, describing the behaviour of systems exhibiting complex behaviour is a difficult task: in general not only the rules for the system are required, but its initial conditions as well. The difficulty is that, even knowing the rules and the initial conditions, it might still take an irreducible amount of time to predict the system's behaviour. When computational irreducibility exists, there is no other way to find out how a system will behave but to go through its every evolutionary step up to the required state. A predicting system can only outrun, with less effort, the actual system whose future we are trying to predict if its computations are more sophisticated. That, however, would violate the Principle of Computational Equivalence: every system that shows no obviously simple behaviour is computationally exactly equivalent, so predictive models cannot be more sophisticated than the systems they intend to describe. And so for many systems no systematic predictions can be made; their process of evolution cannot be shortcut and they are computationally irreducible. If the behaviour of a system is simple, for example repetitive or nested, then the system is computationally reducible. This limits the potential of traditional science to advance in studying systems whose behaviour is not quite simple.
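The contrast can be illustrated with the additive rule 90, which happens to be computationally reducible: from a single black cell, the cell at step t and offset x equals C(t, (t+x)/2) mod 2, so any step can be reached directly via Lucas' theorem, without evolving the intermediate steps. For rule 30 no such shortcut is known.

```python
# Reducible vs irreducible in miniature: rule 90 has a closed form that
# jumps straight to step t; the alternative is stepping through every row.

def rule90_direct(t, x):
    """Cell at step t, offset x from the initial black cell, via
    C(t, (t+x)/2) mod 2 (Lucas: C(t, k) is odd iff k's bits lie in t's)."""
    if (t + x) % 2 or abs(x) > t:
        return 0
    k = (t + x) // 2
    return 1 if (k & t) == k else 0

def rule90_stepped(t, width=None):
    """The same cell values, obtained the irreducible way: step by step."""
    width = width or (2 * t + 3)
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(t):
        cells = [cells[i - 1] ^ cells[(i + 1) % width] for i in range(width)]
    return cells

t = 8
stepped = rule90_stepped(t)
mid = len(stepped) // 2
direct = [rule90_direct(t, i - mid) for i in range(len(stepped))]
print(stepped == direct)   # the shortcut agrees with the full evolution
```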

Making use of mathematical formulas, for instance, only makes sense if the computation is reducible, hence if the system's behaviour is relatively simple. Traditional science must constrain itself to the study of relatively simple systems, because only these are computationally reducible. This is not the case for the new kind of science, because it uses few formulas, relying on pictures of the evolution of systems instead. The observed systems may very well be computationally irreducible; the pictures are not a preamble to the actual ‘real’ predictions based on formulas, but are the real thing themselves. A universal system can emulate any other system, including a predictive model. Using shortcuts means trying to outrun the observed system with another system that takes less effort; because the latter can be emulated by the former (as it is universal), this means that the predictive model must be able to outrun itself. This is relevant because universality abounds in systems.

As a consequence of computational irreducibility there can be no easy theory for everything; there will be no formula that predicts any and every observable process or behaviour that seems complex to us. Deducing the consequences of the simple rules that generate complex behaviour requires irreducible amounts of computational effort. Any system can be observed, but there can be no guarantee that a model of that system exists that accurately describes or predicts how the observed system will behave.

The Phenomenon of Free Will
Though a system may be governed by definite underlying laws, its behaviour may not be describable by reasonable laws. This involves computational irreducibility: the only way to find out how the system will behave is to actually evolve the system; there is no way to work out this behaviour more directly.
Analogous to this is the human brain: although definite laws underpin its workings, because of computational irreducibility there is no way to derive its outcomes via reasonable laws. Even knowing that definite rules underpin it, the system seems to behave in a way that follows no reasonable law at all. The underpinning rules are definite, without any freedom, yet they allow the system's behaviour some form of apparent freedom. ‘For if a system is computationally irreducible this means that there is in effect a tangible separation between the underlying rules for the system and its overall behaviour associated with the irreducible amount of computational work needed to go from one to the other. And it is this separation, I believe, that the basic origin of the apparent freedom we see in all sorts of systems lies – whether those systems are abstract cellular automata or actual living brains‘ [Wolfram 2002, p 751].
The main point is that it is not possible to make predictions about the behaviour of such a system, for if we could then its behaviour would be determined in a definite way and could not be free. But now we know that definite simple rules can lead to unpredictability: the ensuing behaviour is so complex that it seems free of obvious rules. This occurs as a result of the evolution of the system itself; no external input is required to produce that behaviour.
‘But this is not to say that everything that goes on in our brains has an intrinsic origin. Indeed, as a practical matter what usually seems to happen is that we receive external input that leads to some train of thought which continues for a while, but then dies out until we get more input. And often the actual form of this train of thought is influenced by the memory we have developed from inputs in the past – making it not necessarily repeatable even with exactly the same input‘ [Wolfram 2002, pp 752-753].

Undecidability and Intractability
Undecidability, as in Gödel's theorem and the Entscheidungsproblem, is not a rare case: it can be achieved with very simple rules and it is very common. For every system that exhibits complex behaviour, its evolution is likely to be undecidable. Finite questions about a system can ultimately be answered by finite computation, but the computations may be so difficult as to make them intractable. To assess the difficulty of a computation, one assesses the amount of time it takes, how big the program is that runs it and how much memory it takes. However, it is often not knowable whether the program used for the computation is the most efficient one for the job. Only when working with very small programs does it become possible to assess their efficiency.

Implications for Mathematics and its Foundations
Simple laws govern complex behaviour in mathematics just as they do in nature, even though mathematics has increasingly distanced itself from correspondence with nature. Universality in an axiom system means that any question about the behaviour of any other universal system can be encoded as a statement in the axiom system, and that if the answer can be established in the other system, then it can also be given by a proof in the axiom system. Every axiom system currently in use in mathematics is universal: it can in a sense emulate every other system.

Intelligence in the Universe
Human beings have no specific or particular position in nature: their computational skills do not vary vastly from the skills of other natural processes.

‘But the question then remains why when human intelligence is involved it tends to create artifacts that look much simpler than objects that just appear in nature. And I believe the basic answer to this has to do with the fact that when we as humans set up artifacts we usually need to be able to foresee what they will do – for otherwise we have no way to tell whether they will achieve the purposes we want. Yet nature presumably operates under no such constraint. And in fact I have argued that among systems that appear in nature a great many exhibit computational irreducibility – so that in a sense it becomes irreducibly difficult to foresee what they will do‘ [Wolfram 2002, p 828].

A firm as such is not a complicated thing: it takes one question to find out what it is (answer: a firm) and another to find out what it does (answer: ‘we manufacture coffee cups’). More complicated is the answer to the question ‘how do you make coffee cups?’, for this requires some considerable explanation. And yet more complicated is the answer to ‘what makes your firm stand out from other coffee cup manufacturing firms?’. The answer to that will have to involve statements about ‘how we do things around here’, the intricate details of which might have taken years to learn and practice, and considerable effort to explain.

A system might be suspected to be built for a purpose if it is the minimal configuration for that purpose.

‘It would be most satisfying if science were to prove that we as humans are in some fundamental way special, and above everything else in the universe. But if one looks at the history of science many of its greatest advances have come precisely from identifying ways in which we are not special – for this is what allows science to make ever more general statements about the universe and the things in it‘ [Wolfram 2002, p 844].

‘So this means that there is in the end no difference between the level of computational sophistication that is achieved by humans and by all sorts of other systems in nature and elsewhere’ [Wolfram 2002, p 844].

Information, Entropy, Complexity

Original question

If information is defined as ‘the amount of newness introduced’ or ‘the amount of surprise involved’, then chaotic behaviour implies maximum information and ‘newness’. Systems showing periodic or oscillating behaviour are said to ‘freeze’, and nothing new emerges from them. New structure or patterns emerge from systems showing behaviour just shy of chaos (the edge of chaos), and not from systems showing either chaotic or oscillating behaviour. What, then, is the role, for lack of a better word, of information in this emergent behaviour of complex adaptive systems (cas)?

Characterizing cas

A first characteristic of cas is that they are generally associated with complex behaviour. This behaviour in turn is associated with emergent behaviour, or the forming of patterns that are new to the system, not programmed into its constituent parts, and observable. The mechanics of a cas are also associated with large systems of a complicated make-up, consisting of a large number of hierarchically organised components whose interconnections are non-linear. These ‘architectural’ conditions are not a sine qua non for systems to demonstrate complex behaviour: systems meeting them may very well not show behaviour as per the above, and may for that reason not be categorised as cas. They might become one if their parameter space is adapted via some event at some point in time. Lastly, systems' behaviour is associated with energy usage (or cost), with entropy production and with information. However, confusion exists as to how to perform the measuring and how to interpret the outcomes of measurements; no conclusive definition exists for the meaning of any of the above. In other words: to date, to my knowledge, none of these properties when derived from a cas gives a decisive answer to the question of whether the system at hand is in fact complex.

The above statements are obviously self-referencing, unclear and inconclusive. It would be useful to have an objective principle by which it is possible to know whether a given system shows complex behaviour and is therefore to be classified as a cas. The same goes for clear definitions of the terms energy, entropy (production) and information in this context. It is also useful to have a clear definition of the relationships of such properties between themselves, and between them and the presumed system characteristics. This would enable an observer to identify characteristics such as newness, surprise, reduction of uncertainty, meaning, information content and their change.

Entropy and information

It appears to me (no more than that) that entropy and information are two sides of the same coin: though not separable within the system (they are aspects of the same system at the same time), they stand, so to speak, back-to-back, simultaneously influencing the mechanics (the interrelations of the constituent parts) and the dynamics (the interactions of the parts leading up to overall behavioural change of the system in time). What is the ‘role’ of information when a cas changes, and how does it relate to the properties mentioned above?

The relation between information and entropy might then be: structures, patterns and algorithms distributed in a cas enable it in the long run to increase its relative fitness by reducing the cost of the energy used in its ‘daily activities’. The cost of energy is part of the fitness function of the agent, and stored information allows it to act ‘fit’. Structures and information in a cas are distributed: the patterns are properties of the system, not of its individual parts. Measurements must therefore lead to some system-level characteristic (i.e. overall, not stopping at individual agents) to get a picture of the learning and informational capacity of the entire cas as a ‘hive’. This requires correlation between the interactions of the parts, allowing the system to ‘organize itself’.

CAS as a TM

I suspect (no more than that) that it is in general possible to treat a cas as a Turing Machine (TM) ‘disguised’ in some shape or, conversely, to treat complex adaptive systems as instances of a TM. That approach makes the logic developed for TMs available to the observer. One class of systems for which this is proven is 2-dimensional cellular automata of Wolfram class 4. This limited proof restricts the general applicability, because complex adaptive systems, unlike a TM, are parallel, open and asynchronous.

Illustration

Perhaps illustrative of a possible outcome is to ‘walk the process’ by changing the parameter mu of the logistic map (misusing it, since no complexity lives there). Start at the right, in the chaotic region: newness (or reduction of uncertainty, surprise, information) is large, bits are very many, meaning (as in emerging patterns) is small. Travel left to any oscillating region: newness is small, bits are very few, meaning is small. Now in between, where there is complex behaviour: newness is high, bits are fewer than in the chaotic region, and meaning is very high.
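This walk along mu can be made concrete with a minimal sketch (the function name and parameter values are mine, not from any source): estimating the Lyapunov exponent of the logistic map, which is positive in the chaotic region, negative in the oscillating regions, and near zero at the edge between them.

```python
import math

def lyapunov(mu, x0=0.4, n_transient=500, n_iter=2000):
    """Estimate the Lyapunov exponent of the logistic map x -> mu*x*(1-x).

    Positive values indicate chaos, negative values indicate periodic
    (oscillating) behaviour, and values near zero sit at the edge
    between the two regimes.
    """
    x = x0
    for _ in range(n_transient):   # discard the transient
        x = mu * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = mu * x * (1 - x)
        # log of the absolute derivative |mu * (1 - 2x)| along the orbit
        # (the derivative is essentially never exactly zero on a generic orbit)
        total += math.log(abs(mu * (1 - 2 * x)))
    return total / n_iter
```

For mu = 4.0 the estimate approaches ln 2 ≈ 0.69 (chaos); for mu = 3.2 it is clearly negative (a period-2 oscillation); near mu ≈ 3.57 it hovers around zero.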

The logical underpinning of ‘newness’ or ‘surprise’ is this: if no bit in a sequence can be predicted from a subset of that sequence, the sequence is random. Each bit is then a ‘surprise’, and the amount of information is highest. If one bit can be predicted, there is a pattern: an algorithm can be designed and, given that the algorithm is shorter than the sequence it predicts (this is theoretical), the surprise is less, as is the amount of information. The more pattern a sequence holds, the less surprise, and the more information appears to be stored ‘for later use’, such as for processing a new external signal that the system has to deal with. What we observe in a cas is patterns, and so a limitation of this ‘surprise’.
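This compression argument can be illustrated with a sketch of my own (not a measurement proposed in any source): estimating the Shannon entropy of a bit string from the frequencies of its blocks. A patterned string needs fewer bits per symbol than a random one once the pattern is captured.

```python
import math
from collections import Counter

def block_entropy(bits, block=1):
    """Shannon entropy in bits per block, estimated from the
    frequencies of non-overlapping blocks of the given length."""
    chunks = [bits[i:i + block]
              for i in range(0, len(bits) - block + 1, block)]
    n = len(chunks)
    return -sum(c / n * math.log2(c / n) for c in Counter(chunks).values())

# A purely periodic string holds no surprise once the pattern is seen:
periodic = "01" * 500
block_entropy(periodic, block=1)   # 1.0 bit: 0s and 1s are equally frequent
block_entropy(periodic, block=2)   # 0.0 bits: every block is "01"
```

The theoretical yardstick here is Kolmogorov complexity (length of the shortest algorithm); block entropy is merely a computable stand-in for it.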

A research project

I suggest that the objective of such a project is to design and test meaningful measurements of the entropy production, energy cost and information processing of a complex adaptive system, so as to relate them to each other and to the system properties of a cas, in order to better recognize and understand them.

The suggested approach is to use a 2-dimensional CA structure, parameterized to show complex behaviour as per Wolfram class 4, as described in ‘A New Kind of Science’ by Stephen Wolfram.
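As a concrete stand-in (a sketch; Wolfram's own 2-D class-4 parameterizations are not reproduced here), Conway's Game of Life is a 2-dimensional CA that shows class-4-like behaviour and is proven Turing-complete, so it could serve as such an experimental substrate:

```python
from collections import Counter

def life_step(live):
    """One synchronous update of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a localized pattern that re-creates itself one cell
# diagonally along every 4 steps -- the kind of propagating structure
# class-4 behaviour is known for.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

The glider illustrates why such a CA can store and transport information: a pattern, not any individual cell, carries the signal.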

The actual experiment is then to use this system to solve well-defined problems. As the theoretical characteristics of (the processing and storage capacities of) a TM are known, this provides a reference for the information processing and storage requirements, which can be compared to the actual processing and storage capacities of the system at hand.

Promising measurements are:

- Entropy: the current state related to the possible states; using Gibbs entropy or similar.
- Energy cost: the theoretical energy cost required to solve a particular problem versus the energy the complex adaptive system at hand actually uses; see the slide inserted below (presentation e-mailed earlier): https://www.youtube.com/watch?v=9_TMMKeNxO0#t=649

[Screenshot of the slide, 2015-06-09]

- Information: in an earlier discussion it was put as ‘Using this approach, we could experimentally compute the bits of information that agents have learned resulting from the introduction of new information into the system.’ I suggest adding: ‘…compute the bits of information that agents have learned relating to the system…’, that is, the subset of information stored distributed in the system that represents its collective aspect: distributed collective information. This is the amount of information contained in the co-evolving interfaces of the agents or parts of the system, equivalent to the labels suggested by Holland.

Turing Machines and Beyond

To put to bed the discussion about companies being the computer – and for me to finalize the invention of yet another existing wheel – find attached this document. The author surveys the latest in computational logic, in the process describing Natural Computation. This turns out to be an existing name for the beast I described in the posts categorized under Turing Machines so far!

Networks are capable of processing information in parallel while interacting dynamically with their changing environment, asynchronously if necessary (companies!). TMs as defined here compute solutions to given problems using algorithms, and as such are a special case of the general principle of Natural Computation.

SignificanceOfModelsOfComputation

The Significance of Belousov-Zhabotinsky (BZ) Reactions

Some characteristics of complex adaptive systems, the subject of this blog, are elusive, for instance because they have intricate causes or mechanisms, because we were not taught to think this way, or because they run against intuition. That complicates communicating about them. One example is that such systems are far from equilibrium. This is often associated with chaotic phenomena, and hence with disorder. Many people have little experience with that, partly because it is not part of the school curriculum: there, the subject is usually equilibria.

A BZ reaction is an example of a chemical process that is far from equilibrium. It is important because it shows that chemical reactions do not always take place in equilibrium situations. It is an autocatalytic reaction, in which the reacting substances yield other substances that in turn promote the production of the first, and so on. The first video below shows that (under the influence of light) patterns moreover emerge in the mixture, and that is remarkable: order arises in a system that is not in equilibrium and from which you would expect disorder. It is astonishing that all those molecules, of their own accord, come to ‘behave’ in an orderly way relative to one another.

From Wikipedia, under Belousov–Zhabotinsky reaction: An essential aspect of the BZ reaction is its so-called “excitability”; under the influence of stimuli, patterns develop… Some clock reactions such as Briggs–Rauscher and BZ … can be excited into self-organising activity through the influence of light.

The second video shows another autocatalytic oscillating chemical reaction, the Briggs-Rauscher reaction. This reaction is also far from equilibrium and also produces order, by changing colour in a fixed sequence over a fairly long period of time.

 

Padgett on Self-Organisation

This post is largely based on the article ‘The Emergence of Simple Ecologies of Skill: A Hypercycle Approach to Economic Organisation’ by John F. Padgett, included in the Santa Fe Proceedings volume ‘The Economy as an Evolving Complex System’.

This article is one of the keys to my research, because it answers the question of how coherence can arise in activities where that coherence is not explicit. Furthermore, the model presented proposes a mechanism by which local actions propagate into global behaviour. The model connects to my ‘fields of activities’ (see the post Simplexity and Complicity), Dennett's Concepts, Dawkins's Memes, Holland's Bucket Brigade algorithm (see the post Induction) and proposals by Kauffman. The model is embedded in evolutionary theory and gives the concept of organisation a foundation there. Lastly, there is a hint of a natural morality that stems from the form of the process; I will devote a separate post to that.

The Line of Reasoning Summarised

Turing machines are universal computers: they can compute any well-described algorithm in a chosen domain.

Systems of elementary (1-dimensional) cellular automata of (behavioural) class IV have been proven to be Turing machines.
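The canonical result here is Matthew Cook's proof that Rule 110, an elementary class-IV CA, is Turing-complete. A minimal sketch of its update rule (the fixed zero boundary is my simplification):

```python
def rule110_step(cells):
    """One update of the elementary cellular automaton Rule 110,
    the 1-D class-IV rule proven Turing-complete by Matthew Cook.
    `cells` is a list of 0/1 values with fixed zero boundaries."""
    # The new cell value is indexed by the 3-cell neighbourhood;
    # the outputs below spell 110 in binary (01101110).
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    padded = [0] + cells + [0]
    return [table[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

# A single live cell grows the characteristic left-expanding pattern:
row = [0] * 15 + [1]
for _ in range(5):
    row = rule110_step(row)
```

Iterating this rule produces the interacting localized structures (‘gliders’) that Cook's universality construction uses as signals.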

It is plausible and logical that other systems consisting of subsystems that interact with one another and with other systems in their environment (agent-based network systems) can also be Turing machines. This is currently not provable.

The behaviour of systems that are Turing machines is situated in a phase transition between orderly and chaotic behaviour: so-called complex behaviour.

For NK Boolean agent-based network systems it has been proven that a selection process drives the behaviour of those systems into the complex region and keeps it there. That is where the total fitness of the system is highest. This is currently not provable for all such systems.
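Kauffman's NK model behind this claim can be sketched as follows (a minimal sketch; the function names and random contribution tables are mine): each of N Boolean loci contributes a fitness value that depends on its own state and the states of K neighbours, and the average of those contributions is the fitness that selection acts on.

```python
import random
from itertools import product

def random_contrib(n, k, seed=0):
    """One random contribution table per locus: a fitness value in
    [0, 1) for every possible state of the locus and its K neighbours."""
    rng = random.Random(seed)
    return [{bits: rng.random() for bits in product((0, 1), repeat=k + 1)}
            for _ in range(n)]

def nk_fitness(genome, contrib, k):
    """NK fitness: the average per-locus contribution, where each
    locus's contribution depends on itself and its K right-hand
    neighbours (wrapping around the genome)."""
    n = len(genome)
    return sum(
        contrib[i][tuple(genome[(i + j) % n] for j in range(k + 1))]
        for i in range(n)
    ) / n
```

Raising K increases the epistatic coupling between loci and makes the fitness landscape more rugged, which is what pushes the selection dynamics toward the complex region.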

Turing machines can take any physical form, as long as the physical instantiation of the Turing machine is open to the exchange of information and matter with its environment and is far from equilibrium. The behaviour of the system can be the shortest description of the system itself.

It is logical and plausible, but not provable, that a company as an entity is a living organism.

The evolution of companies is an integral part of evolutionary development and is an extension of biological evolution made possible by the existence of people. This is not provable.

Technological development is the driving force behind the development of the economic production process and thus behind the evolution of companies. The relative fitness of a company in the longer term is determined by the extent to which it is able to distinguish itself from other companies.

An agent lives in interdependence with its environment. That environment consists of a network of other agents with which it interacts, plus the fixed resources present. In the case of a company, those agents are other companies; the resources are, for example, raw materials and information. The interactions consist of the transmission of information and matter. The units of cultural transmission, concepts, are memes.

A company co-evolves, as a result of those interactions with the other entities in that network, in a process of mutation and of selection based on fitness. The nature of the evolutionary process of companies is, unlike biological evolution, cultural: it is not bound to generations, and selection is non-natural.

The behaviour of a complex system such as a company is determined by the position of the system in its parameter space. It can be steered by adjusting the parameters of the system.