The more I talk with testers, the more interesting it becomes to consider how the industry has evolved — and often how testers have failed to evolve with it. We still see testers talking about concepts in testing as if this were the early 1980s. Is this a bad thing? I think so, but let’s talk about it.
Apropos of my “talking about testing as if this were the 1980s”, there is a play from 1980 called “Translations” by Brian Friel and one part of it always stuck with me:
What is happening?
I'm not sure. But I’m concerned about my part in it. It’s an eviction of sorts.
We’re making a six-inch map of the country. Is there something sinister in that?
And we’re taking place names that are riddled with confusion and...
Who’s confused? Are the people confused?
And we’re standardising those names as accurately and as sensitively as we can.
Something is being eroded.
The context is that Owen comes to a certain small Irish town with a group of Royal Engineers. Their job is to map the Irish countryside. As part of this project, Owen helps this group “anglicize” the Irish town names. Yolland, however, who is one of the engineers, ends up becoming extremely interested in Irish culture and believes that the work being done is an act of destruction.
In one sense the play is about the relationship between language and culture. I risk drastically simplifying the play by focusing on just one aspect here: the erosion of a particular culture by the way it is talked about, but also in terms of what is done to it, particularly by those who are part of the culture. Well, in the words of Yolland, I believe something is being consistently eroded within the discipline of testing. This is largely based on how testers are able to articulate their own discipline, much less practice it.
We Blunder Often …
One of the reasons I have a concern about how testers speak about our discipline is summed up well by Stephen Toulmin in his book The Uses of Argument:
To think up new and better methods of arguing in any field is to make a major advance, not just in logic, but in the substantive field itself.
This is something I’ve found to be consistently lacking among the vast majority of testers I deal with. I have not found this to be the case with developers, by and large, who tend to drive many of the more innovative testing concepts. This reminded me of something else I read in Max Weber’s On the Methodology of the Social Sciences:
The poor condition of the logical analysis of history is shown by the fact that neither historians, nor methodologists of history, but rather representatives of very unrelated disciplines have conducted the authoritative investigations into this important question.
The thought isn’t quite consonant with what I’m saying since developers, as just one example, are not representatives of “very unrelated” disciplines. Still, the message resonated with me: others are often doing more, and better, thinking about testing than testers themselves are doing. We’ve certainly seen this in an industry where there is a push to equate testers with developers and conflate the roles.
… But We Can Learn From That
Another idea that resonated with me was that of Alan Simpson from The Wealth of the Gentry where he said:
Our present state of knowledge is one of mitigated ignorance. In such situations, the honest enquirer always has one consolation — his blunders may be as instructive as his successes.
I came upon this as I was also re-reading David Hackett Fischer’s Historians’ Fallacies, which talks about the mistakes people make in their thinking. The idea of the blunders was interesting to me. And this kind of gets into the point of this post, at least relative to its title. Fischer’s book in particular is a good guide to my thinking. So here’s what he says:
Assuming that this logic of historical thought does tacitly exist, the next question is how to raise it to the level of consciousness.
That’s largely been my focus as well. Assuming there is some “logic of testing thought”, and assuming that this is not displayed well by the testing industry as a whole, how do we bring it to the level of consciousness where we’re all talking with each other rather than at or past each other?
As far as a method to the madness, Fischer continues:
If there is a tacit logic of historical inquiry, then one might hope to find a tacit illogic as well, which reveals itself in the form of explicit historical errors. On this assumption, I have gone looking for errors in historical scholarship, and then for their common denominators, in the form of false organizing assumptions and false procedures.
So there it is: instead of finding the logic of historical thought by looking at the cases of it being practiced, instead look at the cases of it not being practiced and work from that. And this brings to mind the words of Willard Van Orman Quine in Methods of Logic:
Truths are as plentiful as falsehoods, since each falsehood admits of a negation which is true.
But why do this? Well, again going with Fischer’s idea:
First, it may clearly indicate a few mistaken practices that are not sufficiently recognized as such. Second, it might operate as a heuristic device for the discovery of a few constructive rules of reason.
So is there a “logic of testing thought”, similar to what is being sought in a “logic of historical thought”? Well, truth be told, I have no idea. But I do think so. And I do think that looking at the ways testers have traditionally thought wrong can be helpful. Wait! Who am I to say that testers have “thought wrong”? Fair point. But I’m basing that statement not just on my opinion but on a few facts that are demonstrably empirical in our industry.
Consider the fact that testing has routinely gone through periods where it becomes nothing more than a clerical or bureaucratic discipline. This is often when companies feel they don’t really need testing because it’s too much overhead. Then we routinely go through periods where testing is a “technocratic” discipline. This is often when companies feel that developers can do this stuff just fine. Testers routinely equate their work with “manual testing”, suffering the connotation that “manual” effort can be automated away. Thus is “testing” turned into a programming problem. We have a testing industry where the thought leadership, the methods, and often the very tooling is driven not by testers, but by developers.
Some might argue that this is a natural growth of the tester into developer. That’s an interesting argument and not one I’m going to tackle here.
But, yeah, in my view, testers have “thought wrong”. At the very least, they have communicated wrong, which has led to an industry thinking that they have thought wrong.
Reducing Scope of the Problem
Assuming you’ll at least give the above argument of mine a bit of credence, then does it make sense to investigate how this manifests?
One of our challenges is that there is a near infinity of wrong ways to do things but no one right way. There are, instead, a plurality of right ways. But it’s also a truism that of that infinity of wrong ways, we tend to see only a finite number actually occur with any sort of regularity. So we can reduce the scope of the investigation in that sense.
And this also speaks to a wider goal: it’s not just to look at the errors we tend to commit as part of our discipline, but rather the ways we start to slide into error. If I had to draw a relevant comparison, I might say that this is sort of like how in performance testing we often look for degradation points rather than outright breakage.
But Let’s Not Get Carried Away
Now, please note: what I’m advocating here isn’t a clarion call for logic at all costs and a “back to formulas and metrics” approach. As Fischer says:
Though logic can distinguish error from truth and truth from truism, it cannot distinguish a profound truth from a petty one. A good many historical arguments are objectionable not because they are fallacious but because they are banal, shallow, or trivial. As a remedy for these failings, logic is impotent.
That’s exactly how I see many tester discussions: dotting the same i’s and crossing the same t’s that we have been doing for well over thirty years now. Thus much of what I see is “banal, shallow, or trivial.” That’s a bit harsh and, obviously, like all such blanket generalizations, it should be taken with a healthy dose of skepticism.
But Let’s Elevate Our Dialogue and Thought
I go back, however, to something that Karl Popper said in Conjectures and Refutations:
The way in which knowledge progresses, and especially our scientific knowledge, is by unjustified (and unjustifiable) anticipations, by guesses, by tentative solutions to our problems, by conjectures. These conjectures are controlled by criticism; that is, by attempted refutations, which include critical tests.
And I believe this applies not just to how we do our work as testers, but also how we think about our work; how we test our own assumptions and biases in terms of our discipline and how we practice it as well as how we promote it.
It’s really important to keep in mind the idea of “conjecture” from Popper. This is not something that is amenable to or enhanced by any sort of logical analysis. We cannot micromanage creative thought. I talked about this quite a bit in a few earlier posts.
You might notice most of those posts are from many years ago. This is an area I was investigating early on and then I have to wonder if I fell into the lulling pattern of simply “moving with the industry” rather than applying a good dose of gadfly-like criticism while selectively adapting.
I’m still not sure about this, but my idea now, as it was then, is to push ourselves to look at some familiar issues in unfamiliar ways. We need to seek out cognitive friction.
In the book The Landscape of History, John Lewis Gaddis says:
History as a discipline is the means by which a culture sees beyond the limits of its own senses. It’s the basis, across time, space, and scale, for a wider view.
I do believe testers have to start thinking more like historians. This is why the books I quoted in this post are quite relevant to me. Speaking about history, John Dalberg-Acton (sometimes better known just as Lord Acton) said this:
History is not only a particular branch of knowledge, but a particular mode and method of knowledge in other branches.
I believe that is entirely true of testing as well.
Testing, as a discipline rooted in storytelling, narrative, and experimentation, is a mode of thought and being that has existed since humans began to reason about their place in the world. As such, it literally predates every other discipline that we work with, whether that be data science, business analysis, development, and so forth.
I’ll close with one more quote from Fischer:
To argue that there is a tacit logic of historical thinking is to assert that every historical project is a cluster of constituent purposes, and that each of these purposes imposes its own logical requirements upon a thinker who adopts them. Whether the purpose at hand is to design a proper question, or to select a responsive set of factual answers, or to verify their factuality, or to form them into a statistical generalization which itself becomes a fact, or whatever — it always involves the making of purposive and procedural assumptions that entail certain logical consequences. Every historian must learn to live within the limits which his own freely chosen assumptions impose upon him.
Replace “historical” in that quote with “testing.” Are testers aware of the cluster of constituent purposes on their projects? Are testers aware of how the logical requirements of those purposes demand a distinction between, say, developer and tester, as opposed to their conflation? Are testers aware of their various freely chosen assumptions? Are testers able to consistently, and persuasively, articulate the logical consequences of those assumptions? And are testers, as I fear, learning to live within the limits of certain very constrained assumptions that they impose upon themselves?
Welcome to the challenge of being a well-rounded tester in the modern industry.