Testing Ontological Notions

Metaphysical notions are assumed without proof. They serve as a starting point for thinking, as axioms do for mathematics. They presuppose nothing: the buck stops there. In my PhD I opted for process ontology instead of object ontology because I am attracted to the idea that a firm is never the same twice. This choice also kept me from getting trapped in traditional foundational views and enabled a fresh look. A Christian theologian, for example, is unlikely to be capable of a strong critique of Christianity, because she has not acquired sufficient distance to doubt the foundations of her research topic. We are, however, not accustomed to thinking in terms of causal processes, because the Platonic view that objects and the relations between them have primacy took the upper hand over the processual approaches proposed by Heraclitus and, to an extent, Anaxagoras. Russell (1961) writes that this course of events has held humanity back dramatically.

Deleuze (1968) rejects the primacy of objects and the relations between them. He asserts that the primitive is instead that nothing is identical (to something else). Consider the example that no two snowflakes or grains of sand are identical, nor do they tend to an ideal or to perfection. This is of course unknowable, because we cannot know them all, let alone compare them, and we are not at the end of the universe.

Relativity theory teaches that events are different, because every event has a particular location in space-time: some but not all coordinates can overlap. Each event occurs at a different location and/or a different time. These coordinates are knowable relative to (in terms of) those of another event.

We must take the observer into account and add her to the system, as per rhizomatic theory. Her cognitive capabilities range from primitive sense-making up to sophisticated interacting. It is implausible that observations, simultaneous or sequential, are ever identical. Difference, not sameness, is the invariant, which qualifies it as a metaphysical notion.

Deleuze (1968) goes on to say that [differences between [series of differences]] account for change. This too is a primitive, because it is impossible that all these differences finally generate stasis: difference is the norm, whether between or within systems. When such a series is circular, its behavior repeats to make a pattern.

Thus we assume the metaphysical notions of difference and repetition without question. We assume that the ontological notions of change and pattern derive from them. We know change as [observations of [differences between [series of differences]]]. Take for instance changes in a system's behavior observed by us, by another system, or by the system itself.
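As a loose computational analogy — not a claim about Deleuze's text — the bracketed formulation of change can be read as taking differences of a measured series, and then differences of those differences; the series of states below is invented for illustration:

```python
# Change as [differences between [series of differences]]: second
# differences of a series of observed states. Illustrative values only.

series = [1, 2, 4, 7, 11, 16]  # a system's successive observed states

# First differences: a series of differences between successive states.
first_diffs = [b - a for a, b in zip(series, series[1:])]

# Second differences: differences between the series of differences —
# the level at which, on this reading, change is registered.
second_diffs = [b - a for a, b in zip(first_diffs, first_diffs[1:])]

print(first_diffs)   # [1, 2, 3, 4, 5]
print(second_diffs)  # [1, 1, 1, 1]
```

Here the states never repeat, the differences between them never repeat, yet the differences between those differences are constant — a pattern visible only at the second level.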

We know a pattern when we recognise coherent behavior because it has happened before, here and just now, or elsewhere in the past. It coheres because we have noticed a relation between the series. We do not see a pattern if there is no coherence, or if we are incapable of observing it (e.g. we cannot observe random behavior, or an atom, or a species). As an aside, patterns and change are games for three players, not two (cf. Rovelli 2021).

With regard to coherence, I specifically wish to verify its nature. What test establishes whether a firm is a pattern of coherent behavior that emerges from a causal process and remains self-referencing? The corresponding design conditions of individuation and autopoiesis appear to offer suitable criteria. I believe that existing data will not be a fit source for verification of these premises, because they are usually rubricated and recorded from an object perspective. Interviews will prove more suitable, provided that the interviewees can assume the role of the firm's spokesperson.

About testing

Take the hypothesis that today's weather is the same as tomorrow's. It is a rule for generating a prediction or an explanation of, in this case, a weather phenomenon. The rule works in a mechanical way: it may have stochastic terms, but its execution is unchanging. If today's weather is sunny and dry, the hypothesis predicts that tomorrow's weather will be sunny and dry also. Likewise, it explains that today's weather is rainy and windy because yesterday's was. Predicting and explaining are treated as the same operation applied to different data, namely generating a relation between stages of a phenomenon or between phenomena.
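This persistence rule can be sketched as a pair of trivial functions; the function names and weather descriptions are illustrative, not drawn from any real forecasting library:

```python
# A minimal sketch of the persistence rule: tomorrow's weather is
# predicted to equal today's, and today's is explained by yesterday's.

def persistence_forecast(today: str) -> str:
    """Predict tomorrow's weather as identical to today's."""
    return today

def persistence_explanation(yesterday: str) -> str:
    """Explain today's weather as identical to yesterday's."""
    return yesterday

# Prediction: same rule, forward-looking data.
print(persistence_forecast("sunny and dry"))       # sunny and dry
# Explanation: same rule, backward-looking data.
print(persistence_explanation("rainy and windy"))  # rainy and windy
```

The two functions are mechanically identical, which is the point made above: prediction and explanation are one rule fed with different data.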

Such a rule compresses the expressed behavior generated by real processes: using it, we no longer have to wait until tomorrow to know what the weather will be like. We have designed this hypothesis as a shortcut to the behavior of the weather system between days. This is of course advantageous for many, including farmers and sunbathers.

If the hypothesis repeatedly verifies in tests against reality, it may be elevated to a theory; if it is false at least once, it is a falsified theory. In the first case sunbathers and farmers may rely on it for organising their lives; in the latter they have to look for another. Even after generating correct predictions in many tests, the hypothesis may turn out to have been false all along and be amended or scrapped. It may be superseded by another theory that generates predictions that are better in some way, or become part of an overarching theory.
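Repeated testing can be sketched as a tally of verifying and falsifying cases, here for the persistence rule run against an invented sequence of daily observations:

```python
# Tally verifications and falsifications of the persistence rule over a
# run of days. The observation sequence is invented for illustration.

observations = ["sunny", "sunny", "rainy", "rainy", "rainy", "sunny"]

verified = falsified = 0
for today, tomorrow in zip(observations, observations[1:]):
    prediction = today  # the persistence rule: tomorrow equals today
    if prediction == tomorrow:
        verified += 1
    else:
        falsified += 1

print(f"verified: {verified}, falsified: {falsified}")
# verified: 3, falsified: 2
```

On a strict Popperian reading, a single falsifying case suffices to reject the rule as a universal theory, however many verifications precede or follow it.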

A theory that has frequently predicted correctly can, strictly speaking, not be said to be true or false, because the event that falsifies it may not have presented itself yet. The explanations or predictions generated by the hypothesis, however, are true or false: the outcomes the hypothesis generates match the behavior of the phenomenon it seeks to predict to a sufficient and pre-agreed extent, on previously agreed aspects.

The hypothesis is explicit, because it is not made of the same stuff as the phenomenon it seeks to predict. The Navier-Stokes theory, for example, contains equations, not water. A hypothesis generating predictions of the weather is made not of (the constituent components of) weather but of words. Those words are intended to identify relations between the behavioral patterns of the phenomenon that render it sufficiently recognizable to the human observer to enable comparison with the outcome generated by the hypothesis. Even if represented in a binary system or in software code, the objective is to establish a connection between the phenomenon to be predicted and the observer.

This makes at least two things manifest: words may not be fit to represent the phenomenon, and the hypothesis is man-made and depends on human observation and cognition for assessment of its veracity. An example of the first is that nature does not restrict the number of decimals as it generates the behavior of a given natural process, whereas a practical computer may truncate a number simply because the hardware does not accommodate 3 million decimals. Chaos theory teaches that such differences between the computation of the hypothesis and the behavior of its subject are problematic; chaotic effects can occur even in simple deterministic systems, observed and observing alike.

As for the second, the makeup of the hypothesis reflects the cognitive domain of its designer, which depends on his life experience and world view, not on the topic. The designer runs the risk that his world view is tested rather than, say, weather phenomena. Where testing involves answers given by people, the interviewees' interpretation of the questions depends on their world views and therefore does not necessarily represent reality. Testing the hypothesis may then amount to testing their particular world views, or the common opinion.
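The truncation point can be illustrated with the logistic map, a textbook example of a simple deterministic system with chaotic behavior. The sketch below compares two trajectories whose starting states differ only by rounding to four decimals; the starting value and precision are invented for illustration:

```python
# Sensitivity to truncation in a simple deterministic system: the
# logistic map at r = 4, a standard chaotic example. Rounding the
# initial state to four decimals — as limited hardware might — makes
# the two trajectories diverge until they no longer agree at all.

def logistic(x: float) -> float:
    """One deterministic step of the logistic map with r = 4."""
    return 4.0 * x * (1.0 - x)

full = 0.123456789          # state kept at full float precision
truncated = round(full, 4)  # the same state truncated to 4 decimals

max_gap = 0.0
for step in range(40):
    full = logistic(full)
    truncated = logistic(truncated)
    max_gap = max(max_gap, abs(full - truncated))

print(f"maximum gap over 40 steps: {max_gap:.3f}")  # grows to order 1
```

An initial discrepancy of about 0.00004 is amplified step by step until the two computations of the same rule disagree about the state of the system altogether.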

My research topic is the firm. In a previous post I summarised my hypothesis about the nature of the firm as: ‘.. the topic of my thesis is the firm as an emergent phenomenon. I see the firm as an evolutionary developing self-referencing cultural system. It is constituted of a bunch of ideas in the sense of answers that guide people’s thoughts and their behavior. I hypothesise that those ideas constituting it are widespread and do not mention the firm‘. These statements are founded on ontological, epistemological and phenomenological assumptions. Testing the hypothesis requires a methodology that takes these into account, if it does not address them directly.

Suppose I wish to test this hypothesis in order to progress it to a theory. In my thesis I demonstrate that the hypothesis is internally consistent: its constituent statements are not incoherent according to Thagard's definition, although they lack the bonus points of evidence. Those points are earned if the hypothesis explains a wider range of behavior of the topic than other theories do (widening), or if the statements of which it is made up are explained by other theories or by evidence (deepening).

The hypothesis as such must predict or explain something (in this case the behavior of the firm) at least as well as, and preferably better than, other theories. Evidence that corroborates the underlying assumptions makes it more coherent as a theory. Going by the above categories of knowledge, evidence would first be welcome for the ontological assumption that the firm is a pattern emerging from a cognitive causal process.

Next, the epistemological capability of the hypothesis to ’take the meme’s eye view’ must be tested. The firm is presented as a cognitive entity: it is capable of making its own observations, or in other words of attributing a particular meaning to what it observes, independent of the members of its population, people. How does a firm take decisions that its population would not take, albeit through people? What interactions does it engage in that individual people would not?

Last, the assumption that a firm is a phenomenon is supported by evidence that the firm is knowable to a human observer because it behaves in a certain way. How can evidence be generated that corroborates this? What kind of observed behavior is specific to a firm, how can it be measured in reality, and what kind of observable behavior does the model predict? The pivot in these questions is the nature of the observation and what it means to people.

One source of evidence is past data generated by the business processes of firms, on record with the firms themselves or with public institutions such as Companies House. Another source is the population of the firm: the people interacting with it, or in fact the ideas they hold with regard to the firm and how it develops. More specifically, this concerns the way ideas are selected to become part of the body of ideas that guides people and thereby generates the firm's behavior. How is it that particular ideas are selected into the memeplex, such that people feel compelled to adhere to them, while others are not selected and are eschewed?

The methodology specifies how the hypothesis is verified: what is tested and in what way. It specifies which business data are compared to which input to the hypothesis, to what extent the comparison is quantifiable and where it is limited to qualitative data. It identifies and selects the sources of (business) data, and how they are collected and curated for the task at hand, including data drawn from databases of past decisions and data yielded by interviews. It organises the activities of the testing, from the collection and treatment of data up to the comparison of outcomes with the predictions generated by the hypothesis, their interpretation, and an assessment of the viability of the hypothesis and its constituent parts. Standards are set for categorising the generated data as verifying or falsifying (and probably in between). Finally, it indicates how to ensure that the hypothesis is tested, and not the world view of its designer, the interviewees, or the designers of the structures of the selected data.

The outcome of this procedure answers the question how we can come to be sure of the viability of the hypothesis, or in other words: does it hold water? It might, or it might not, but most likely the outcome is unclear in some respect and additional research is required. I believe that a major task in this project would be for the participants to keep seeing beyond the preconceptions of the current version of liberal capitalism that seems to occupy the minds of many.