Testers and Received Wisdom

Like most disciplines, testing has some so-called truisms that get passed around, many of them blindly accepted. The problem is that our discipline requires a degree of nuance that the truisms — even if accurate — tend to gloss over. So let’s dig into a few of these.

Earlier I talked about the tradition and dogma that we often pass off as wisdom, and I also talked about broadening our wisdom. So let’s get a bit tactical here and consider a few specifics across the spectrum of testing, then gel all of these ideas into an overarching theme: that of providing a narrative.

Let’s set the stage a bit. Cem Kaner is someone that testers like to quote a lot. And this isn’t a bad thing. Cem Kaner knew his stuff and had many good things to say about the discipline of testing. I’ll take on two examples here.

Pools of Change Detectors

Cem Kaner referred to your overall test library as your “pool of change detectors.” And that makes sense, in a way, right? It even sounds good.

Now, if that’s in fact what your tests are, you have to run them frequently to detect changes and to see whether those changes introduced mistakes. And this leads directly to the idea of “more tests, more frequently.” The supposition is that this means “more feedback.” But is it?

Well, it can be. But it gets tricky. The tricky part comes in when we want to tighten our feedback loops as much as possible and when we introduce cost of mistake curves (rather than cost of changes curves). This gets into the idea of testing at different levels of abstraction.

We’ll talk about this a bit more with some other examples here. For now, let’s consider another Kaner quote.

Assist or Assure?

An oft-used quote from Kaner is:

“We don’t own quality; we’re helping the people who are responsible for quality and the things that influence it – Quality assistance; that’s what we do.”

And, interestingly, this one has testers either actively defending it or actively denouncing it. The trends lately, it seems, have been the latter. Yet, there is a certain amount of truth to this statement.

If we accept that quality is a viewpoint (it is) and if we accept that this viewpoint has objective and subjective components (it does), then we are led to the idea that we can’t “own” a viewpoint. We are also led to the idea that quality assurance (traditional term) is a distributed function across teams. Everyone on our teams has various insights into what makes a product or service provide value to people such that they can use it to accomplish their goals with the minimum of frustration.

Thus, in a very real sense, we are all responsible for the quality; it’s just that we have different areas of responsibility and different impacts on elements that influence the quality. Individually we are not responsible for all of the quality.

Where a lot of people get hung up is on the notion of the quote simply being a semantic shell game, turning “assurance” into “assistance.” And I get that because while semantics are important, debates about them can get in the way of getting work done.

But let’s consider that the definition of “assist” is “help (someone), typically by doing a share of the work.” Well, that is what we do, based on what I just said above. The definition of “assure”, on the other hand, is either “tell someone something positively or confidently to dispel any doubts they may have” or “make (something) certain to happen.” That first definition is not always what we do and the second is something we can’t do as a general rule.

Leaving aside the definitions, I just said a lot there in comparison to Kaner’s quote. You could argue that Kaner’s quote is simply a condensing of all that context I provided and thus is much simpler. But is it? That’s a trick for us in the industry. We have to be better at condensing our message but without oversimplifying it. That’s a balance I don’t believe specialist testers — myself very much included — have done a very good job at.

Okay, so there’s the lead-in. Let’s take a few other examples, not from Kaner but rather just from the wider thought-space.

Presence not Absence

Here’s another one that gets passed around a lot:

“Software testing proves the existence of bugs not their absence.”

Well, the fact is testing can do both. What matters is the scope over which the testing is considered. Assuming we’re not getting into mathematical notions of the term “prove” here, testing can prove the absence of certain bugs if the scope is small enough. Let’s say I give you this:
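Imagine something like this — a hypothetical sketch of mine, where any small, fully enumerable piece of logic would serve just as well:

```python
import calendar

def is_leap_year(year: int) -> bool:
    # Gregorian rules: divisible by 4, except centuries,
    # unless also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# At this small scope we can exhaustively check every input of
# interest against a trusted oracle (the standard library), which
# demonstrates the absence of bugs within that scope.
for year in range(1, 3001):
    assert is_leap_year(year) == calendar.isleap(year)
```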

Are you telling me you don’t think testing can prove the absence of bugs at this level of abstraction?

I harp on this because as testers we often have to test at different levels of abstraction. As the abstractions go up, the ability of a test — or even a set of tests — to prove absence becomes more and more difficult; eventually becoming impractical and from there straight on to impossible. So there is truth to this statement, but it’s a nuanced truth. And that nuance can help us understand how and where to frame testing.

Detection and Prevention

Here’s another one:

“Testing is about defect detection, Quality Assurance is about defect prevention.”

Both are about so much more than that.

Testing is (ultimately) about putting pressure on design, gaining insights, and then using those insights to enable business value. One of the effects of that is that we can detect certain defects, both in concept and in implementation; both in our ideas and what we turn our ideas into. But it’s important not to let the simple overshadow what makes testing a discipline.

Likewise, I would say that quality assurance is not “about” anything, per se. It’s a distributed function over many people and, if anything, it’s “about” the idea that quality is a complex viewpoint held by different people, at different times, and for different reasons — often subject to various human tolerances and technology sensitivities.

So what’s important there is figuring out how we can harness those different viewpoints of what quality means and come up with a shared way to reason about how to provide it, particularly in light of the intersection between humans and technology.

Agile Testing; Agile QA

I talked about how a possible false dichotomy was often propped up between agile and waterfall. Testers now debate what being an “agile tester” is all about or what “agile QA” is. Or, even worse, they don’t and just use those terms as if they actually meant something.

Agility is an approach to development. Testing is a discipline that is distinct from the approach. Quality is a viewpoint and Quality Assurance is a distributed function over teams or individuals that have different viewpoints about quality. So what does it really mean to call something “Agile QA” or “Agile Testing”?

It means pretty much nothing. Or everything (which can mean nothing).

I prefer to frame this as providing mechanisms that let us — as a delivery team — make better decisions sooner. That means removing some of the major enemies of quality: lack of transparency, churn, and duplication. That means (ideally) minimizing sources of truth and reducing artifact crutches.

I can do those things in both agile and waterfall. The places where I most often can’t do them are large organizations or heavily regulated ones. Those places often happen to be the ones doing more waterfall. But that’s often not so much a fault of waterfall as of the company in which waterfall is embedded. The same could be said of faulty agile practices. It gets even worse, in my experience, when companies try to “do both” and subscribe to the so-called “scaled agile” approach.

So I think it’s important to look not just at the approach being taken but the constraints (or lack thereof) from the organization on that approach.

This is a bit of what I was talking about with reframing agile. It’s focusing on how we help people make better decisions sooner. Doing so requires that we are reasoning about a manageable amount of scope. “Manageable” — in part — being defined by how hard it is to have a shared understanding of what quality means (how the product adds value) in the limits of that scope.

And there’s scope again! See how that could relate to the “absence or presence” issue and “detection or prevention” issue?

The bottom line is — or should be — that we specialist testers look for ways of (1) having a shared understanding of what quality means, (2) understanding the tolerances and sensitivities that impact or influence quality, and (3) understanding how to harness different views of quality among people in different teams or within cross-functional teams.

Now let’s turn the tables a bit.

Time Waits For No Tester

We often hear the idea that “there’s no time for testing.” This is received “wisdom” that we, as testers, sometimes get and are expected to understand and even agree with. But how do we counter it?

When I hear this view expressed, my first thought is to ask: “What do they mean by the word testing?” Because testing occurs at two different levels: design and execution. So which one are they saying we have no time for? If testing is being relegated solely to the execution activity, it’s quite possible that it is, in fact, taking too much time. So maybe we shift testing more towards its design aspect.

The second thing I like to bring up is that we are always testing. It’s not possible to be human, to talk about or build things, and not do some form of testing. It’s literally built in to how we turn ideas into working implementations. Perhaps it’s not recognized as testing; but it is, in fact, that. And, yes, sometimes that can be inefficient and, if that’s the case, we can talk about how to make it more efficient.

Testing has to be framed as putting pressure on design at various abstraction levels. There are those abstraction levels again; just like scope, we see this as a thread running throughout these bits of “wisdom.” If we treat testing this way, it can be framed around the idea that there are two points where we, as human beings, make the most mistakes: when we talk about what to build and then when we actually build it.

What that means is that testing can save us from talking about the wrong things and building the wrong things.

Are we sure we don’t have time for that?

Framing things that way tends to be a good way to counter the “wisdom” of saving time by doing away with testing.

Simplification Caveats

I get the idea of being simple to make a point. But our disciplines have a great deal of complexity to them, which is, in fact, what makes them disciplines. We need to be careful not to gloss over that fact. Glossing over it is what allows the industry to dismiss testing as just something “anyone can do.” After all, how hard can it be, right?

I get the need to be concise and avoid overcomplicating an issue. The problem is that simplification does not always preserve accuracy. And accuracy can matter. If people go in thinking “testing ensures quality” (another bit of so-called “wisdom”), then they can conclude that just having testing in place means we “have quality.”

But we know that’s not true. So then you end up having to explain the nuance anyway. But now you have a situation where you have to provide more complication to something that someone thought was simple. And that rarely goes over well.

It’s often better to start with a little complexity. Or, rather, think of it as nuance. Phrase the nuance in such a way that it encourages questions. Then provide more nuance around the questions. That way you make sure to get good habits of thinking in at the start.

I’m harping on this so much because, as we’ve seen in the testing industry, when accurate messaging doesn’t happen at the start, testing is marginalized, conflated with other roles, or deemed to be something “manual” that can simply be automated. You don’t know which of your material is going to be passed around and quoted. Hence the need for relatively consistent messaging that provides the nuance.

I guess the bottom line is: we shouldn’t state something that we know (or believe) is oversimplified — or worse, inaccurate — just because the accurate version requires facts, jargon, or explanations that won’t fit into our sound bites. We should instead think about how to better condense the accurate message.

Okay … But How?

I keep talking about getting better at articulating value and perhaps finding better sound bites. So how about I talk about how I do this. You’re certainly not expected to agree with my statements. But I hope you will agree that I am trying to frame statements in a way that is reflective of, and responsive to, some of the above received wisdom.

I treat testing as a mechanism that puts pressure on design at various abstraction levels. These levels can range from the human language we use to talk about what we're going to build to the various points where we actually build it.

This means testing, as a primary activity, is composed of two parts: testing as a design activity and testing as an execution activity. This democratizes testing a bit. Everyone is a tester, but not everyone is a specialist tester. That helps people avoid treating "QA" as a team (which it's not) and instead as a distributed function (which it is).

What specialist testers do is harness the various viewpoints of quality everyone in the project has and the inherent testing that everyone in the project does -- but then provide a systematic basis for how testing as a whole communicates a shared vision of quality. That requires helping all teams minimize sources of truth and reduce artifact crutches.

That specialist also provides tooling that supports testing when that makes sense, particularly at those different abstraction levels.

One thing to note is that I use the term “design” in two ways: to describe an aspect of testing itself (testing as a design activity) and to describe what testing acts on (putting pressure on design). I use the word “design” like this because designers and developers immediately cue into it.

Now, consider this: TDD is using tests to put pressure on code design. BDD is using tests (‘behavior’) to put pressure on business design.

Even if you don’t “do TDD” or “do BDD”, the ideas are still relevant. With this terminology, everyone is somewhat forced to associate design (not just implementation) with testing. I have yet to find a conversation that mattered where any of the above would have been served by using other distinctions, such as “testing or checking.”
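To make “pressure on design” concrete, here is a minimal TDD-style sketch. The `parse_price` function and its behavior are hypothetical, invented purely for illustration; the point is that the tests come first and force interface decisions before any implementation exists.

```python
# TDD-style: the tests are written first and put pressure on the
# design, forcing decisions about the interface (name, input type,
# return type, error behavior) before any implementation exists.
def test_parses_dollars_and_cents():
    assert parse_price("$3.50") == 350  # price expressed in cents

def test_rejects_malformed_input():
    try:
        parse_price("three fifty")
        assert False, "expected a ValueError"
    except ValueError:
        pass

# A minimal implementation shaped by those tests.
def parse_price(text: str) -> int:
    if not text.startswith("$"):
        raise ValueError(f"not a price: {text!r}")
    dollars, _, cents = text[1:].partition(".")
    if not (dollars.isdigit() and cents.isdigit()):
        raise ValueError(f"not a price: {text!r}")
    return int(dollars) * 100 + int(cents)

test_parses_dollars_and_cents()
test_rejects_malformed_input()
```

Even outside a strict TDD workflow, the point stands: writing the test first surfaces design questions (should the result be cents or a float? what counts as malformed?) earlier than writing the implementation would.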

I say that because by the time we get to tool usage (automation), people have no doubt about what is human level testing and the approximation of it that a tool can do. And, importantly for me, I provided that by expanding people’s vocabulary (what testing is) rather than reducing it (what testing is not).

Narrative Forces

I’ve talked about project forces before as well as the idea of tester narratives. Narrative is another force within projects. It’s one that testers, acting in the discipline of testing, can harness.

All of our projects have a cloud of narrative energy attached to them. The flow of story, particularly as part of a process, is very strong. It also becomes very inertial. Once set in place, it can be hard to change the story. So it’s critical that we tell the stories we want to tell, backed up by a solid narrative that others can get behind.

That narrative should be as simple as we can make it but not at the expense of compromising what the story actually is. There is a certain optimum balance for narrative in communication. By that I mean there is a certain amount of story complexity that is enough to be engaging while nuanced, but not so much as to be confusing.

As testers, we need to get better at the narrative and, ultimately, the stories that our narrative supports. And that, I believe, means getting better at framing — or reframing — that which we consider to be wisdom.


This article was written by Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.
