Epistemology is about the way we know things. Ontology is about what things fundamentally are. Ontogeny is about the history of changes that preserve the integrity of something. What does this have to do with testing? Everything. But let’s talk about it.
Fair warning: this post is going to be a bit on the philosophical side. But, in being so, I actually feel it’s that much more important than some of my “practical” (if such they can be called) posts.
I’ll spoil the ending here a bit: much of the application development world (i.e., software running on technology distributed to consumers) is built in a context that suffers from a crippled epistemology. When epistemology becomes compromised, our understanding of ontology and ontogeny suffers.
In a general sense, when we don’t know what we know, or how we know it, it becomes impossible to state what actually is, and how what is can change in some respects while still remaining, at least in essence, the same thing.
In a more specific sense, when the specialized disciplines of testing and quality assurance become divorced from an understanding of what it means to operate with a crippled epistemology, those disciplines suffer two problems: (1) they become conflated with each other and (2) they come to be thought of as just “manual labor” that can be automated.
Setting the Context
Previously in my post about testing (in the ontological sense), I talked about describing my role this way:
I help reduce the epistemological opaqueness and ontological confusion that stems from the cognitive biases that all of us have when we try to build anything even remotely complex.
In this post, I might reframe that slightly as such:
I reduce the epistemological opaqueness and ontological confusion due to cognitive biases we have when we perform ontogenetic change across the boundaries where humans and technology intersect.
Okay, wait. Hold on a second here. Am I just sitting here trying (and certainly failing) to sound clever? Well, let me offer you the following thought from R.C. Sproul:
Empirical scientists may disparage philosophy, ontology, and epistemology, but they cannot escape them. Science involves the quest for knowledge. Any such quest, by necessity, involves some commitment to epistemology.
The emphasis there is mine. This comes from his book Not a Chance: God, Science and the Revolt Against Reason. The same sentiment applies to testing, which shouldn’t be a surprise, since testing is based upon applying the scientific method in a wide variety of contexts. In a very real sense, testers are empirical scientists.
Incidentally, an excellent and wide-angle lens view of the scientific method is presented in The Scientific Method: An Evolution of Thinking from Darwin to Dewey by Henry Cowles, which I highly recommend. What we have to consider is that the “scientific method” itself has evolved over time and continues to do so. That’s an important point for anyone working in a discipline that utilizes it.
In fact, in physics right now there is a bit of a crisis among those who are concerned that the scientific method is being stretched to the breaking point. Richard Dawid’s String Theory and the Scientific Method argues that a new paradigm of scientific methodology is required now that experiment and theory are so out of step with each other. The idea of “theory” and “experiment” being out of step is a crucial point to understand regarding why disciplines based on the scientific method suffer from a crippled epistemology.
I urge all testers to think about what that idea means in the context of our testing discipline. How would such a disconnect manifest and what would it say about our epistemology or, perhaps better, our ability to have a sustainable epistemology?
Expanding the Context
I’ve often drawn the correlations between testing and science, particularly on the point of whether or not testers understand testing in that context. Along those lines, let’s consider another thought, this time from Sean Carroll:
As our knowledge grows, we have moved by fits and starts in the direction of a simpler, more unified ontology.
Again, the emphasis is mine. This is from his book The Big Picture: On the Origins of Life, Meaning and the Universe Itself. Let’s take a moment and think about what this means in a science context.
Galileo, for example, observed that Jupiter had moons. This may seem a little banal, but the observation implied that Jupiter was a gravitating body just like the Earth, which set the stage for rethinking how things work in the solar system.
Isaac Newton showed that the force of gravity is universal, underlying both the motion of the planets and the way that things fall closer to home. This set the stage for thinking about the “Heavens” and the “Earth” as united under a common set of forces. And later, via the work of many astronomers over the course of time, analysis of starlight revealed that stars are made of the same kinds of atoms as we find here on Earth.
John Dalton demonstrated how different chemical compounds could be thought of as combinations of basic building blocks called atoms. In other words, there was a central unifying principle behind all of the differences we see. Something similar happened with Charles Darwin, who established the unifying principle of biological life by showing descent from common ancestors. In a similar way, those working in particle physics have shown us that every atom in the periodic table of elements is an arrangement of just three basic particles: protons, neutrons, and electrons.
Michael Faraday, Heinrich Hertz, and James Clerk Maxwell allowed us to see disparate phenomena like lightning, radiation, and magnets as being part of a central concept called electromagnetism.
Continuing the theme, Albert Einstein unified space and time, which in turn showed us that matter and energy are unified as well.
Great. Wonderful. But what’s the point?
Ways of Talking
All of these examples provide us with “ways of talking,” and a way of talking provides us with an ontology. This is a crucial aspect of how we come to understand whatever reality we are dealing with and convey that understanding to others. This is, in part, why it’s important for testers not to fall into the trap of, for lack of a better phrase, “semantics dismissal.”
A way of talking about things isn’t just some way of listing concepts. Instead, it will generally include a set of rules for using those concepts. It will also generally include the relationships among those concepts. Think of how that applies in your testing and development contexts, where we might be talking about abstractions all across the board, such as user stories, persistence mechanisms, object-relational mappers, acceptance tests, components, and so forth.
All of that has a situational context; each of those is an ontological aspect of the environment we are working in. Thus does an epistemology form, by which I mean we come to some shared understanding of what is justified belief as distinct from an opinion. This is the basis for our semantics and it’s how we base those semantics on what’s called evidentiality. And that just means a working assessment of the evidence for any statements we make.
Not relevant? Of course it is. It’s why a claim of zero defects is a fallacy, as just one example. It can be an opinion, and it might even be a justified belief, but it will most certainly fall short of a critical threshold of evidentiality.
Yet, fallacious as it may be, “zero defects” is certainly a way of talking about our context. And, in fact, every scientific theory is a way of talking about the context we are embedded in. For the above scientists, that context was the world or the universe. For us, in our careers, it’s our business domain, the technology that supports it and, crucially, the value others receive from what we provide.
When you don’t have common sight lines across your business domain and the technology layer that supports it, your environment can become epistemologically opaque. That manifests as opaque areas that make it difficult to reason about the system as a whole, and thus decision making and problem solving are compromised.
The same applies with duplication. When knowledge is duplicated, there are multiple places to reason about what we think we know and thus multiple places where those copies of knowledge can drift apart.
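To make that drift concrete, here is a minimal, entirely hypothetical sketch: one piece of domain knowledge (a minimum password length) encoded in two places, where one copy gets updated and the other does not.

```python
# Hypothetical illustration of duplicated knowledge drifting apart.
# The same rule ("a password must be at least N characters") lives
# in two modules that nobody keeps in sync.

# Copy one: the sign-up form's client-side rule.
MIN_PASSWORD_LENGTH_UI = 12

# Copy two: the backend rule, later tightened to 14 characters
# without anyone touching the UI copy.
MIN_PASSWORD_LENGTH_API = 14

def ui_accepts(password: str) -> bool:
    return len(password) >= MIN_PASSWORD_LENGTH_UI

def api_accepts(password: str) -> bool:
    return len(password) >= MIN_PASSWORD_LENGTH_API

# The drift made visible: a 13-character password passes the UI
# check but fails the API check.
password = "a" * 13
print(ui_accepts(password), api_accepts(password))  # True False
```

The point isn’t the code itself; it’s that each duplicated copy is a separate place where what we “know” has to be reasoned about, and the system’s behavior now depends on which copy you happen to be looking at.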
What’s interesting is that in our business contexts, our ways of talking can change. Remember how the scientific method can change? We’re not going to that level here; we’re simply looking at how our ways of talking within our scientific method can change. That distinction gets to the heart of the basis of testing, if we understand that the basis of testing is about observability and controllability, which gets into reproducibility and thus into predictability. (My plea for testability covered that aspect in some detail.)
Compatible Ways of Talking
Let’s go back to our scientists for a second and think about this in their context. We can easily say something like:
“There are these big ol’ things called planets and there’s also this huge ball of fire called the sun, which is a type of star. All of these things move through something called space. The planets, for their part, do a specific action — called orbiting — and they do that around the sun. That action leads to an observable thing — called orbits — and those orbits describe a particular shape in space called an ellipse.”
What do we have there? In essence, we have Johannes Kepler’s theory of planetary motion. Specifically, the one developed after Copernicus argued for the sun being at the center of the solar system but also before Isaac Newton argued for it all being explainable in terms of the force of gravity.
Why the “before” / “after” callout there? Because we generally say that Kepler’s theory is fairly useful in certain circumstances, but it’s not as useful as Newton’s. And Newton’s isn’t as broadly useful as Einstein’s general theory of relativity.
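One way to see Kepler’s “way of talking” as a compatible, if narrower, one: his third law says the ratio T²/a³ is the same for every planet orbiting the sun (with the period T in years and the orbit’s semi-major axis a in astronomical units). Newton’s theory later explained why that constant holds, but both ways of talking agree on the observable. A quick numerical check, using standard published figures (rounded):

```python
# Kepler's third law: T^2 / a^3 is (approximately) constant across
# the planets, and equals 1.0 in units of years and astronomical units.
# (a in AU, T in years; standard rounded values.)
planets = {
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

for name, (a, T) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")
# Each ratio comes out close to 1.0.
```

Kepler’s rule is perfectly useful for describing orbits; Newton’s gravity tells you why the rule holds and where it bends; Einstein’s relativity tells you where Newton’s account bends. Each is a way of talking that remains compatible with the others within its domain.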
Now this leads to an interesting philosophical point which, in turn, does very much relate to our context as developers and testers. Wilfrid Sellars, a philosopher, talked about something called the “manifest image.” This concept referred to the sort of folk ontology that is suggested by our everyday experiences. This is contrasted with a “scientific image” that is provided by the unified view of the world (and universe) that is established by disciplines predicated upon the scientific method.
Here’s a key point: the manifest image and the scientific image use different concepts and vocabularies. However, when all is said and done, they should fit together as compatible ways of talking.
And that, my friends, is very much what testing does in the context of, say, aligning development and product so that they have compatible — not identical; compatible — ways of talking about what we all deal with in terms of features that deliver value. This is very much what happens when we align understanding around the idea that there can be a phrase like “automated testing” — so long as we agree that not all testing can be automated. In fact, not even most testing can be automated.
Those nuances, right there, are a key part of the basis of testing.
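To ground the “not all testing can be automated” point, consider a minimal sketch (the function and the expectations are hypothetical): an automated check can only verify expectations someone already encoded, while testing is the activity of questioning those expectations in the first place.

```python
# Hypothetical example: what automation can and cannot do.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Automated checks: fast, repeatable, and only as good as the
# expectations we thought to write down.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(100.0, 0) == 100.0

# Testing, by contrast, asks questions those checks never will:
# What *should* happen for a 150% discount? A negative price?
# Is rounding to two places even the right business rule?
print(apply_discount(100.0, 150))  # -50.0
# That result passes no check and fails no check until a human
# decides whether it's a defect or a feature.
```

The assertions are “automated testing” in the narrow, agreed-upon sense; the questions in the comments are the much larger part of testing that automation can only encode after a human has done the thinking.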
And let’s not forget the idea of the “crippled epistemology.” I’ve been using that term freely but it’s very much borrowed from the essay “The Crippled Epistemology of Extremism” by Russell Hardin. The general idea is that we often lack direct or personal evidence for most of what we think; thus there is a lot that we don’t know and we have to rely on people we trust. Or … crucially … we have to investigate and find out for ourselves.
You can counteract crippled epistemologies through “cognitive infiltration,” yet another borrowed term, this one from Cass Sunstein, which he talks about quite a bit in his book Conspiracy Theories and Other Dangerous Ideas. The idea of infiltrating cognition is where you allow good information to drive out bad via investigation and discovery and, crucially, by the ability to frame concepts. So, again, we can say “automated testing” or “manual testing” are viable concepts (thus we avoid the denier problem) but we have to infiltrate the idea that automation can only do a very small subset of what falls under the banner of testing.
Fighting crippled epistemologies via cognitive infiltration is a key part of the basis of testing.
Everything we do in the act of testing hardware or testing software is very much about epistemology: what we think we know about what we are building and how we know it. This is where we align with our subject matter experts. This process builds up our ontology: what things fundamentally are within our business domain. And, of course, this brings up the ontogeny as we determine how concepts evolve and yet still remain fundamentally the same.
All of this is the basis of testing.