I recently gave a presentation to developers and to engineering hiring managers who make decisions about hiring test practitioners. The talk came about because of recent hiring decisions, or rather the lack thereof. What was brought up to me numerous times was that testers were not being hired if they so much as hinted at the idea of testing as distinct from checking. So let’s talk about this.
Setting Up the Context
First, why was this happening? When I queried those I was engaging with, the answer was immediate and unequivocal: these testers routinely sounded like they wanted to debate semantics more than actually get work done.
That can be unfair and, quite a bit later on, I did reference my post on dismissing semantics. And yet, as many people who know me or who have read my work may know, I do in fact think that this “testing vs checking” distinction is a flawed argument.
But, on the other hand, while I do think it’s a flawed argument, it would not be one that — just by itself — would make me avoid hiring someone. Rather than provide a bunch of anecdotal statements about what was said to me in this context, let me instead show you how I started to talk with this group and how I framed all this.
My Schematic View of Testing
First I said that how I schematically view a test is as such:
- context, action, check (assertion / expectation), teardown
I said that all of that forms an interaction with an observation. Fundamentally, that’s it.
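To make that schematic concrete, here is a minimal sketch in Python. The `Cart` class and the test name are hypothetical stand-ins for any system under test; the point is only to show where context, action, check, and teardown each live inside a single test.

```python
class Cart:
    """A toy system under test: a shopping cart (purely illustrative)."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        return len(self.items)


def test_adding_an_item_increases_the_total():
    # Context: establish the state the test depends on.
    cart = Cart()

    # Action: the interaction we want to learn something about.
    cart.add("book")

    # Check: the observation, plus a judgment about that observation.
    assert cart.total() == 1

    # Teardown: release or reset whatever the context set up.
    cart.items.clear()


test_adding_an_item_increases_the_total()
print("check passed")
```

Note that the check is one line out of the whole test: the interaction and the observation around it are what make it a test at all.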
Notice the use of the term “check” in there? So how I framed this was that just like any experiment, a test has a component of checking. You have to make some observation (call it a “check”) and then make a judgment about that observation. But the “check” is very different from the totality of the “test.” It’s simply one component of a test, albeit an incredibly important one.
This could be seen as only a subtle difference from the position of those who advocate the “testing vs checking” argument. But it’s not. People who argue the “versus” are saying that testing is ontologically different from checking. I’m not saying that at all. And I think saying so is flawed.
What I am saying is that checking — or a check — is part of the overall ontology of a test. Without having a component of checking something, you don’t have a test. Again, change the word “checking” to “observing” and that becomes pretty clear.
I further told this group that only in the software testing industry do these kinds of debates tend to happen. You don’t find people in any discipline predicated upon experiments saying “experimenting vs testing” or “experimenting vs checking.”
When I framed the above — and note that it can be applied equally to automated tests or human tests — I was told: “Okay, yes, if testers had said something like that we would have been less concerned.”
Reframing Narrative: Tests and Checks
That is an example of framing a narrative and taking into account an emic/etic distinction. I was speaking to “outsiders” — at least to the craft of testing — and giving them an insider view. But, crucially, I was giving them a view that could resonate with them and would prompt further discussion rather than shutting discussion down.
I was also able to use this schematic narrative as a springboard to show that while my above schematic appears formulaic or algorithmic, it doesn’t solely have to be. It can be applied in an exploratory context as well. When we explore, we think and act experimentally which means the above holds.
In a scripted test, the above would be the same set of interactions and observations each time. In an exploratory test, the above would be more of a guide but would likely have many different interactions and observations, some of which we make up on the spot as ideas come to us.
In all cases, it’s really easy to say the following: When I’m doing testing, I’m applying some form of test, and all forms of test necessarily have checks.
That statement has the benefit of being manifestly true and resonates with just about any other discipline that has experimentation as a focus. Notice I’m using just as much semantics as someone framing a “testing vs. checking” argument but — I would argue — I’m using more relevant semantics. And I’m framing those semantics in a way that aligns with a lot of intuition.
What I just said is, of course, arguable. The reason I make the argument as I do is based on a great deal of feedback I have gotten over the course of many years from those “outsiders” to the testing craft. And even the “insiders” can often see the validity of this. I’m not saying this is “more true” than other arguments. What I’m saying is that it is a true argument and one that, in my experience, provides a better narrative for convincing the people who most need convincing.
Reframing Narrative: Quality
Let’s look at another example, one that came up in the same context as above.
The other thing that people told me they hear all the time from way too many testers is “Testing doesn’t have anything to do with quality” or some variation on that thought. This sentiment is not one that I think holds water. In fact, it’s demonstrably untrue. But rather than just say that, here’s how I reframe this.
Quality, in the context of a product or system, often refers to its ability to meet specified requirements, fulfill its intended purpose, and provide valuable outcomes to its users.
Testing plays a crucial role in assessing these aspects. This is often framed around the concept of risk. I talked about this quite a bit in my post “What Actually Is Testing?”
By identifying and addressing risks to those abilities, testing helps discover problems closer to when they’re introduced, thus often helping to improve functionality and enhance user experience, all of which contributes to the overall quality of the product or system. Note that I’m not saying testing, in and of itself, improves functionality or enhances the user experience. What I am saying is that it helps to do so. And thus testing does very much “have something to do with quality.”
The term “quality,” if we get a little reductionist for a moment, often refers to the validity, reliability, and overall value of our experimental results. This includes the accuracy of the data, the reproducibility of the results, the soundness of the conclusions drawn, and the contribution of the findings to the advancement of knowledge we have about the outcomes we provide.
Well-designed and well-executed experiments can produce coherent data that leads to meaningful insights that allow us the possibility of coming to trustworthy conclusions.
Coherent data and information are crucial for making informed decisions. And testing most definitely allows people to make informed decisions. Whether they do so or not is orthogonal to testing’s ability to allow them to do so.
After elaborating some of the above, I told my audience this: “I can already tell you what some test specialists out there will say. They will point to my above statements of improving functionality, and enhancing user experience and say: ‘But testing doesn’t do that! Other people do that. We just surface risks.'”
My response? Yes, that’s true, up to a point. But when we surface risks with our teams, that allows us the ability, collectively, to improve functionality or to enhance the experience. Of course, this isn’t a given. After all, we don’t have to listen to the results of our experiments. But, crucially, with appropriate testing in place, we can’t claim we weren’t informed of the risks. Assuming, of course, that we craft experiments (tests) that are effective and efficient at ferreting out various types of risks, by which I mean degraders for different qualities.
I also told my audience that one of the reasons I speak and think this way is because I don’t treat testing as just an execution activity. I also treat it as a design activity. By which I mean an activity that helps us, along with our teams, put pressure on design.
Thinking about software products as designed artifacts helps frame them as systems that build behavior via interaction. With testing, we can identify feedback systems to determine how particular states in a system change or how certain changes affect the overall state.
Think about that in relation to my “schematic formula” above. Crucial to these feedback systems is having checks as part of any tests — whether scripted or exploratory, human or automated. However we approach this, the schematic aligns with the idea of testing for outcomes.
Let me use a gaming example for a second. From the game designer’s perspective, the mechanics give rise to dynamic system behavior, which in turn leads to particular aesthetic experiences. From the game player’s perspective, it’s the opposite. Aesthetics set the tone, which is borne out in observable dynamics that stem from operable mechanics. Two different models of causality. Yet the way to test for the outcome can be identical.
Have I Reached Truth?
Am I saying this is the way it is? Am I saying I’m right in the above and any deviation from my thoughts is wrong? No, I’m not. But I’ll ask readers to be very cautious of those who act as such. And we all know there are very vocal test practitioners out there who act exactly that way. And these are usually the ones who are framing narratives that a new generation of testers is latching onto and using.
Those narratives are what are being noted by many non-testers. And those non-testers, many of whom make decisions about how, when, and to what extent testing should be applied to projects, are taking note that the narratives just don’t work. And thus those same people are finding testers — not testing, but testers — to be largely irrelevant. This speaks a bit to the abstract battle for irrelevancy that I talked about.
We are seeing the continued rise of a technocracy that often puts people second to the technology they use. This means it is critical that not just testing as an activity but test specialists who treat testing as a craft and a discipline are front-and-center in the industry. We can’t afford to be seen as irrelevant.