Scrutinize, Stabilize, Sustain

A lot of talk in the testing industry still focuses on the divide between “automation” and “manual testing.” A lot of talk also focuses on how much, and to what extent, developers do testing. Here I want to provide a short post that describes what I’ve done in my career, whether as an individual contributor, a manager of teams, or a director.

The reason for the title will be obvious momentarily. Having been reintroduced to college life recently, I remember the mantras you would use for your work, which were often something like:

  • Record, Reduce, Reflect
  • Review, Revise, Revisit

This post could probably be seen a bit as an extension of my reframing testing arguments post. In some ways, this might even be a compactification of some thoughts I had around the artifact crutch in testing. And maybe even a little around my applying test thinking with code.

I’m necessarily going to be simplifying things here. I’m doing this just to focus on the broad ideas rather than the implementation of them.

I like to do three things when I’m working with teams to build software.

  • Focus on testing (scrutinize)
  • Focus on testability (stabilize)
  • Focus on automation (sustain)

The idea here is really simple, although there’s a part that can get a little confusing. I believe you are testing all the time; it’s just a question of how much and to what extent. I won’t enter into that debate here. What I will say is that the scrutinize part is meant to capture the essence of a thorough examination and understanding of what’s going to be built, the context in which it’s going to be built, and the outcomes that we prefer.

This implies a careful and detailed analysis, which aligns well with the critical nature of testing activities, particularly the idea of exploration in a discovery context versus an elaboration context.

Whatever you do end up building, you make it stable by making sure that testability is one of your primary quality attributes.

In fact, I believe in only two primary quality attributes: testability and trustability. In my way of thinking, all other qualities fall out of those. This post isn’t the place to go into that distinction.

The “stabilize” is also where a lot of exploratory testing comes into play. You can make something testable but how do you verify you’ve done so? Keep in mind, this applies at different abstractions.

What’s “testable” to a developer in terms of code can be compromised in terms of testability when considered from the standpoint of that user interface that the code enables. Likewise, a user interface can be very testable but held together by code that is itself very difficult to test.
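The code-level side of this can be sketched with a small, purely illustrative example; the class and function names below are invented for this post, not taken from it. The point is that a calculation becomes testable when its collaborators are injected rather than hard-wired, so a test can substitute a stub for a live service:

```python
# Illustrative sketch only: DiscountCalculator, rate_lookup, and the
# tier names are hypothetical, used here to show code-level testability.

class DiscountCalculator:
    """Testable version: the rate lookup is injected, so a test can
    pass in a stub instead of reaching out to a real pricing service."""

    def __init__(self, rate_lookup):
        # rate_lookup is any callable mapping a customer tier to a rate
        self.rate_lookup = rate_lookup

    def price_after_discount(self, amount, customer_tier):
        rate = self.rate_lookup(customer_tier)
        return round(amount * (1 - rate), 2)


# A test exercises the logic with a stub; no network, no shared state:
def stub_rates(tier):
    return {"gold": 0.20, "standard": 0.05}.get(tier, 0.0)


calc = DiscountCalculator(stub_rates)
print(calc.price_after_discount(100.0, "gold"))  # 80.0
```

Had the rate lookup been a hard-coded call to a live endpoint, the code might still be “testable” in some technical sense, yet the testing built on top of it would be slow and brittle, which is exactly the abstraction mismatch described above.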

The sustain is the part where I put heavy focus on automation. And this is often a sticking point for testers who feel too much of the industry is predicated upon automation to the exclusion of testing. I agree that can happen. But only if you let it.

The scrutinize and stabilize in my above list cannot be automated. They require the deliberate, creative action of human beings. This means all of your primary testing occurs at the points where we make the most mistakes: when we discover what to build and when we elaborate on that discovery by building it. Finding as many of your mistakes as possible in those two areas is crucial to keeping your cost-of-mistake curve tight.

By that I mean, the time where you make a mistake and the time you find it should be very close together.

So the sustain part can be thought of as scaling the rote, algorithmic, scripted aspects of the prior decisions related to testing. What I want is humans who are constantly free to do the scrutinize and stabilize. Doing that means putting heavy emphasis on automation to do the sustain, which you can think of as scaling the testing.
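As a hypothetical sketch of what that looks like in practice (the requirement, threshold, and function names below are invented for illustration), the sustain part is a scripted re-confirmation of a decision the humans already scrutinized and stabilized:

```python
# Hedged sketch: "sustain" as a scripted regression check. The shipping
# rule here is a made-up requirement, not something from the post.

def shipping_cost(order_total):
    """Hypothetical agreed requirement: free shipping at $50 or more,
    otherwise a flat $4.99."""
    return 0.0 if order_total >= 50.0 else 4.99


# The rote, algorithmic part: once humans have scrutinized the
# requirement and stabilized the design, a machine can re-confirm
# the agreed behavior on every subsequent change.
def test_shipping_boundary():
    assert shipping_cost(49.99) == 4.99  # just below the threshold
    assert shipping_cost(50.00) == 0.0   # at the threshold
    assert shipping_cost(120.0) == 0.0   # well above the threshold


test_shipping_boundary()
print("regression checks passed")
```

Note what the script does not do: it never decided that $50 was the right threshold, or that a boundary was the risky spot. Those were the human scrutinize and stabilize decisions; the script only sustains them.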

Thus, as many testers are now starting to articulate, we don’t have to talk about “manual tester” versus “automated tester.” We can simply talk about testers. And one skill set of those testers is the ability not just to consider aspects of good development (building in testability) but also to bring effective programming skills (writing automation).

There’s more to say on this topic but here I just wanted to introduce the broad organizing principle: scrutinize, stabilize, sustain. That has been, and continues to be, how I lead teams in a testing context.

This article was written by Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.

3 thoughts on “Scrutinize, Stabilize, Sustain”

  1. It would be interesting to expand more on the nature of ‘sustain’. Is it just mimicking the published requirements? Is there any test which is not worth writing or which is not effective? Why is most test automation really dull?

    1. That’s a great question. Where my head is usually at is that once we’ve made it through figuring out the experience we want to provide (the outcome) and we’ve done the human level creative thinking and exploring, then we can scale (sustain) our rote confirmation that all of it still works as we make other changes.

      This is where automation can help greatly. It scales (sustains) testing so that testers can do the more important scrutinizing and stabilizing (that tools just can’t help us with). In general, automation requires a distinct source of truth and that would be the agreed upon requirements, with the idea being that those requirements should be the shared understanding of quality that we agreed to provide.

      To your question of a test not worth writing or that is not effective, I would say this: (1) any test that won’t tell you anything useful and (2) any test, that even if it does tell you something useful, you won’t listen to it.

      If either (1) or (2) hold, then I would question why taking the time to create the test even matters. In terms of test automation being dull, I think it should be. It’s effectively just acting as a way to check if we are continuing to provide what we said we want to provide or check if we’ve regressed in some way.

  2. It scales (sustains) testing so that testers can do the more important scrutinizing and stabilizing – have you really seen this happening in your experience?
    What I have experienced is that most people consider automation an end goal in itself, and more time and resources are spent on building and maintaining the automated suite, which rarely leaves time for the scrutinize and stabilize part.
    Also, do you think automation cannot help us in any way with the scrutinize/stabilize part, what some people refer to as partial automation?
