Lately I’ve noticed that the whole “testing” vs “checking” debate is used more as a punchline than as a basis for any serious discussion of testing as an activity and tests as an artifact. Regardless of my perception, which may not be representative, I believe this distinction has not been very helpful. But let’s talk about it. Maybe someone will convince me I’m wrong.
Usually this semantic debate gets framed around whether “manual testers” are still needed, or as a polemic against automation. A useless polemic at that, given that automation is a viable strategy worth leveraging. Granted, most proponents of this distinction are not arguing against automation as a technique or as a form of test-supporting tooling. Yet that sometimes gets lost in their desire to engage in the semantic debate. It is also undeniably true that there is a trend in the industry toward unreflective use of automation. You just have to look at how many companies interview to see this in action.
The Basis of My Stance
Perhaps the problem I have with how this argument is framed is that I come from a scientific background. I worked at Fermilab for a while during its search for the top quark. I have also published scientific papers based on experimentation, and I have conducted experiments in artificial intelligence and machine learning contexts. I would argue I have conducted experiments in theological contexts as well: for example, analyzing extant ancient manuscripts to better understand the context in which they were written. My only point here is that experimenters in these fields don’t feel the need to break up “testing” (experimenting) into “testing” and “checking.”
For example, when we were analyzing graphs for top quark interactions or running conduit tests on the pipes or figuring out what kind of experiment to run in the first place, we always said we were “testing.” We said that even when a particle accelerator was doing all the work of colliding particles together at very high speeds.
But we did have a framing mechanism for all this and it’s one I’ve certainly found useful in the software testing world.
My Personal Framing Attempt
I’ve found it much better to frame testing as two distinct activities: an execution activity and a design activity.
When automation enters the discussion, the design activity is the one you don’t automate. You can’t. It involves humans putting pressure on design. They may encode the results of that pressure as automation, but it never starts out that way.
Further, when you treat testing as a design activity, you can ask people this: “When do we make most of our mistakes in software?” Answer: when we’re talking about what to build, and then when we’re building it. Those are two levels of design abstraction, and automation applies to neither of them.
This is how I’ve helped people see that human-based testing and tool-based testing do not have to fall victim to the Tyranny of the Or. There is a Genius of the And to be had there. (Heeding my own advice from many years ago.) It’s also how people can see that testing by humans will never go out of style. Automation is one particular technique we leverage within the context of testing as an activity.
Am I just saying the same thing a different way?
Now, someone could argue: “Well, ‘testing vs checking’ is a lot easier to say than ‘testing as a design activity vs testing as an execution activity.’” Perhaps it’s easier to say. But I’ve found it educates people less. Further, I’ve found it’s an argument many people aren’t as receptive to. I’ve found it’s more often a way to shut down discussion than it is to actually dig into the experimentalist aspects of testing.
This is why I question whether the argument, as framed, is a bit flawed. It has been in my experience. With the exception of some overseas companies (I write from the United States, where I primarily work), I often see eye-rolls or chuckles when a “that’s not testing, it’s checking” statement comes up. I’ve personally seen, and also heard about and from, testers who lost a lot of credibility with this argument.
Yet that credibility hit is a bit unfair. I still think there’s a good point in there: there are different aspects to testing, and “checking” could be said to be one of them. It’s positioning checking as something distinct from testing that I find often doesn’t sit well with people. And that’s mainly because the distinction isn’t really illustrative of anything at all.
Consider, as just one example, this write-up of testing vs checking. Look partway down the page for the “Testing” / “Checking” table. Does that breakdown really help? I’ve never found that it does.
Update to this Post (12/31/2017): Is that source I just quoted actually indicative? Well, some of the ideas it throws out there are those I have seen as the idea of “Testing vs Checking” has made its way from its core proponents to the wider community.
I will say that Michael Bolton in particular has specifically told me that this reference has “plagiarized and misrepresented” his work. As such, I want to make sure that’s known. But I also think it sort of reinforces at least part of my point. If what should be a simple distinction is so capable of being misrepresented, I’m not sure that argues against it being flawed. But in case people aren’t aware of the provenance of this idea, check out Bolton’s original Testing vs Checking and do note that it references an updated version as well. Also check out Testing and Checking Refined by James Bach.
Aren’t I painting with a broad brush?
It’s always easy to throw up straw-man arguments when you’re trying to make your own point. So, to be fair, one of the contentions is that by polishing up our terminology when talking about testing, we will make it clearer. That’s true; I agree with that. That, in fact, is why I make the distinction I made above.
But the corollary here apparently is that when many testers speak about testing, they oversimplify it and thus cause confusion about what testing is. Thus the “testing” and “checking” distinction. However, that “oversimplify” part is actually not what I tend to see.
If anything, I’ve seen the opposite: testers who listen to people who own, run, or work for consulting companies that want to propagate their business and thus focus on semantics over and above getting things done. Said testers regurgitate what they hear from these companies, often to the detriment of their credibility. I feel somewhat comfortable making this admittedly charged statement because I have run a test consultancy as well as worked at some.
So that’s what I’ve seen hurting the industry. And this does tie in — perhaps peripherally; perhaps centrally — to my point about some testing fundamentalism.
Now, all that being said, it is absolutely possible — and useful — to sharpen our terminology and make distinctions such that we can provide nuance to our discipline. I just feel that “testing / checking” is the wrong one. I don’t think it aligns with the many other disciplines out there that use an empirical and scientific methodology for experimentation.
And that experimentation is always referred to as testing, pure and simple.