I want testers to stop trying to “solve” the problems they’ve allegedly been trying to solve for decades now. I want testers to start looking at testing as a discipline that has a broad-focus, wide-angle lens. I want testing to start solving the real problems, including the corners it has painted itself into. I want testing to get out of the reductionist and into the ecological. Let’s talk about this a bit.
I’ll admit to generalizing with this first point, but many in the testing industry never seem to get past the mechanistic stage. It’s always about “how many test cases” or “should I write a test plan or just an approach” or “how should I tie test subtasks in my tracking system to requirement tasks” and various other things. This industry has been around a long time and while developers are continuing to refine and hone their craft, it always feels to me like testers are rehashing the same material over and over again and very rarely innovating.
I’m not saying some of these issues aren’t important but when you’re asking the same questions the industry asked decades ago, and basically coming up with the same exact answers over and over again, it’s time for a reboot. The fact that this doesn’t seem to happen a lot in testing shows me that testers have become Newtonian reductionists. We are routinely “solving” the same problems over and over again by attempting to look at the constituent parts and see how we can get them to work in a clockwork-like fashion. And, with some modern embellishments, we basically end up with the same old machinery.
And then people wonder how and why testing is treated as clerical and/or bureaucratic. Or they wonder why “testing” becomes equated with a “manual labor” approach that just demands to be automated away — you know, so we can get to the real work.
I firmly believe, and see it reinforced almost daily, that a large part of the problem is that testing is not situated in a wider context of other domains that inform it, both as a practice and as a discipline.
“Is Like” Requires an Ecological View
A couple of years ago I talked about the tester as learner. But, really, we can simplify that. Specifically, we can recognize that how we learn — and by “we”, I mean all of us — is based on what things have in common. We quite literally depend on metaphor, on the recognition of patterns, on the recognition that something is ‘like’ something else, and then on the exploitation of that recognition to generate further insights.
John Ziman’s Reliable Knowledge: An Exploration of the Grounds for Belief in Science is an excellent book, and in it Ziman points out that scientific insights often arise from such realizations.
The behavior of an electron in an atom is ‘like’ the vibration of air in a spherical container, or that the random configuration of the long chain of atoms in a polymer molecule is ‘like’ the motion of a drunkard across a village green.
Now let’s consider this: there’s the reductionist view of reality and the ecological view of reality. I’ve said these terms a couple of times now but what do they mean? Well, much of our work in the testing field — including, I must admit, a great deal of my own — has been of the reductionist sort. This is the belief that you can understand reality by breaking it up into its various parts. John Lewis Gaddis, in his book The Landscape of History, says:
It’s critical to reductionism that causes be ranked hierarchically. To invoke a democracy of causes — to suggest that an event may have had many antecedents — is considered to be, well, mushy.
People often don’t like that “mushy” view and thus turn to reductionism as a means of control. An ecological reality, on the other hand, is one where you recognize that it’s hard to break things up into their component bits because so much depends upon so much else. Again, quoting Gaddis:
For while the ecological approach also values the specification of simple components, it does not stop with that: it considers how components interact to become systems whose nature can’t be defined merely by calculating the sum of their parts.
Speaking somewhat to this point, Clayton Roberts, in his book The Logic of Historical Explanation, makes a lot of good points about the overall ecological view. For example, in accounting for what happened at Hiroshima on 6 August 1945, we attach much greater importance to the fact that President Truman ordered the dropping of an atomic bomb than to the decision of the Army Air Force to carry out his orders. It’s a bit of a harsh example but that is also what drives home the relevance of the point.
Ecological Views Lead to Unification
Just recently I put up a post about how testing helps us understand physics. My wording in that title was deliberate. Rather than looking at how physics helps us understand testing, the focus was on how testing can help us frame other domains as well.
Now, I bet a lot of people thought that post was just me being entirely irrelevant. (A trait they may feel is shared by this post.) At a high-level the post was showing the unification of ideas between testing and physics, particularly if we take an ecological view of reality. So now let’s talk about the idea of “unification.”
The Basic Idea of Unification
The general concept of unification is fairly simple. If we have two theories that explain two apparently different behaviors of whatever it is we’re studying, it’s sometimes possible to replace both of those theories with a single theory. This single theory still manages to explain the different behaviors. This is what is meant by unification. It may seem counterintuitive, but generally the resulting “unified theory” will be simpler than either of the previous two theories. The reason is that the unified theory is revealing a deeper, underlying truth.
Some Examples of Unifiers
Consider Galileo Galilei. Around 1638, he realized that uniform motion concealed an underlying truth: moving at a constant velocity, there was no experiment you could perform that would tell you whether you were moving or standing still. This essentially unified the ideas of “moving” and “stationary.”
Consider Isaac Newton. Around 1665, he wondered if the force that kept the moon in orbit around the earth (and the planets around the sun) was the same force that tugged on objects on Earth, drawing them toward the ground, such as an apple falling out of a tree. This unification — of the forces of celestial mechanics and earthly mechanics — led to the formulation of a specific force called gravity. This made people realize we can treat the “heavens” with the same mechanics that we do events on Earth.
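Newton’s comparison can be sketched numerically (using modern approximate values, not Newton’s own figures). The moon’s orbital acceleration is

$$ a_{\text{moon}} = \frac{4\pi^2 r}{T^2} \approx \frac{4\pi^2\,(3.84\times10^{8}\ \text{m})}{(2.36\times10^{6}\ \text{s})^{2}} \approx 2.7\times10^{-3}\ \text{m/s}^{2} $$

which is about $1/3600$ of the $g \approx 9.8\ \text{m/s}^2$ an apple feels at Earth’s surface. Since the moon sits roughly 60 Earth radii away and $60^2 = 3600$, both accelerations fit a single inverse-square law:

$$ F = G\,\frac{m_1 m_2}{r^2} $$

One law, two apparently different behaviors: the falling apple and the orbiting moon.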
Consider Michael Faraday. Around 1831, he showed that if you push a magnet through a coil of wire, an electric current flows. Similarly, if you pass an electric current through a wire it can deflect a nearby magnetic compass. The idea was that electric currents create magnetic fields and moving magnetic fields create electric currents. This was electromagnetism, effectively unifying electricity and magnetism.
Consider James Clerk Maxwell. Around 1865, he refined Faraday’s work, showing that a changing electric field would generate a magnetic field and, conversely, a changing magnetic field would generate an electric field. Maxwell specifically showed that they would do so in an oscillatory manner. Thus was the idea of a self-sustaining electromagnetic wave born. This was a wave that traveled through space. And since it traveled, it had a speed. When that speed was calculated, Maxwell found it was the same as the speed of light. This meant that light was a form of electromagnetic wave. The unification here was the field of optics with that of electromagnetism.
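The speed Maxwell found falls out of two constants that had been measured in tabletop experiments on electricity and magnetism (the values below are modern):

$$ c = \frac{1}{\sqrt{\mu_0\,\varepsilon_0}} \approx \frac{1}{\sqrt{(1.26\times10^{-6})\,(8.85\times10^{-12})}} \approx 3.0\times10^{8}\ \text{m/s} $$

which matches the measured speed of light, even though nothing in the electrical measurements mentioned light at all.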
Consider Albert Einstein. Around 1905, he combined Maxwell’s results with Newton’s laws. Maxwell’s equations implied that the speed of light did not depend on the motion of an observer moving relative to the light. Einstein showed that you could reconcile the electromagnetic laws of Maxwell with the laws of motion of Newton — if you were willing to consider space and time dynamic properties of the universe. This unified not just electromagnetism with mechanics, but also space with time. This also unified the concept of mass with energy.
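That last unification is captured in a single relation: for a body at rest, its mass is itself a measure of its energy content,

$$ E = mc^2 $$

where $c$ is the speed of light. Because $c^2$ is enormous, a tiny amount of mass corresponds to a vast amount of energy.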
Unifications Show the Wider Ecology
In 1916, Einstein pulled a Galileo and said that the force experienced by an observer undergoing constant acceleration was indistinguishable from the force of gravity that Newton talked about. So gravity and accelerated motion were unified.
But it even got a bit more interesting. It turns out that if objects were in free-fall they would not feel any force of gravity at all. But this meant that gravity didn’t exist as a “force” in its own right; rather, gravity was the curvature of space. Objects traveling in a straight line in curved space appear to be drawn towards the center of curvature. This motion of being drawn along lines of curvature is interpreted as the force of gravity. So here the unification was the notion of gravity with the geometry of space itself. And since time was already linked with space, this meant gravity — or, rather, curvature — could impact time as well.
Please notice that as we keep going along these lines of thinking, the unifications get a little less intuitive, a little harder to reason about. But they are all critically important. Let’s consider one more stop here.
In the 1960s, when the study of particle physics was really heating up, there was a perceived symmetry between the particle associated with electromagnetism (the photon) and the previously discovered nuclear force. The particles associated with the latter force were given the odd names W+, W-, and Z. The idea was that all four particles would be massless — thus all symmetrical — at high energies. But as the energy decreased, this symmetry would be broken and the particles would take on different masses.
As it turned out, the photon was left with a mass value of zero but the W+, W-, and Z particles all ended up with mass. What this meant was that this nuclear force could only operate at short ranges. (In fact, that’s why it’s called “weak.”) Whereas massless photons, as part of the electromagnetic force, can carry light across the entire universe.
The point here is that the two forces of electromagnetism and the weak nuclear force were, in fact, the same force behaving in two very different ways at the low energies we are mainly aware of. But in a realm sufficiently energetic, and thus hot enough, the electromagnetic force and the weak force would merge into a single force. This force was eventually proven to exist in the 1980s and called the electroweak force.
Again, notice how even this one more leap up the unification chain took us into even greater realms of complexity — i.e., of being able to even recognize the unification, much less reason about it. Yet this greater complexity is actually about a concept that is providing more simplicity for the overall picture. Dealing with that balance takes good practitioners of a discipline.
Finding “Is Like” Takes Good Practitioners
What happens in all these cases is the use of imagination in terms of saying how something might be “like” something else. Unification, which is a refined form of saying what something “is like,” requires having a certain amount of intuition that you harness to gather insight. It’s important to note something that Andrew Thomas says in his book Hidden in Plain Sight:
The ideas by which unification is achieved are generally extremely simple ideas — anyone can understand these ideas. In fact, we could say that any unifying idea has to be simple.
Speaking to many of the examples I provided above, the author says:
It was as if these great unifiers picked up on something which was under our noses all the time, something which was missed perhaps because it was too simple. Something hidden in plain sight.
The Wide-Angle Lens of Testing
Okay, so the above was a broad outline of the ideas of “is like” and unification. But how do you make this actionable for a testing career? Well, I’ve tried to do my part by talking about a lot of seemingly unrelated things.
Bringing this around a little closer to home, in my Modern Testing posts I talked about the reduction of sources of truth. I asked people to consider: what does your world look like if you have only one source of truth? How much can we unify our concepts? If tests are really just refined statements of requirements, then can’t we unify them? Some of these ideas aren’t even really new. (Anyone remember Tests and Requirements, Requirements and Tests: A Möbius Strip [PDF link there] by Robert Martin?)
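One way to make that unification concrete is a sketch in which a requirement statement and its executable check live in a single artifact, giving one source of truth. Everything here (the `Requirement` class, the discount example) is hypothetical, invented purely for illustration:

```python
# A minimal sketch of "one source of truth": the human-readable requirement
# and its executable test are the same artifact. All names here are
# hypothetical, invented for illustration.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Requirement:
    statement: str              # the requirement, stated for humans
    check: Callable[[], bool]   # the same requirement, stated as a test

    def verify(self) -> bool:
        return self.check()


def apply_discount(price: float, pct: float) -> float:
    """Apply a percentage discount, rounded to cents."""
    return round(price * (1 - pct / 100), 2)


reqs = [
    Requirement(
        "A 10% discount on $100.00 yields $90.00",
        lambda: apply_discount(100.00, 10) == 90.00,
    ),
    Requirement(
        "A 0% discount leaves the price unchanged",
        lambda: apply_discount(59.99, 0) == 59.99,
    ),
]

for r in reqs:
    status = "PASS" if r.verify() else "FAIL"
    print(f"{status}: {r.statement}")
```

Frameworks in the BDD family (Cucumber, SpecFlow, behave) pursue the same idea at scale: the specification text is the test.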
I have long said that testing is most often in danger from its own practitioners rather than from outside forces. I think as a discipline that has long been “ruled” (for lack of a better term) by development and driven by developers, we need to adopt the approach of Albert Szent-Györgyi when he said: “Discovery consists of seeing what everyone else has seen, but thinking what no one else has thought.”
Beyond this, I believe it is important, and in fact critical, to hone our skills in ideas like deriving processes from structures, fitting representations to realities, learning to privilege neither induction nor deduction, and so on. The goal is to remain open to what insights from one field can tell you about another. These are concepts I plan to talk about more in the coming months.
The very notion of testing undergirds disciplines like physics, geology, historiography, paleobiology, chemistry, social science, archaeology, linguistics, and so on and so forth. Our discipline has an incredibly rich pedigree that can inform us. Let’s start acting like it.