Much of the talk in the testing industry still focuses on the divide between “automation” and “manual testing.” Much of it also focuses on how much, and to what extent, developers do testing. Here I want to provide a short post describing what I’ve done in my career, whether as an individual contributor, a manager of teams, or a director.
The reason for the title will be obvious momentarily, but having been reintroduced to college life recently, I’m reminded of the mantras you would use for your work, which were often something like:
- Record, Reduce, Reflect
- Review, Revise, Revisit
This post could be seen as a bit of an extension of my reframing testing arguments post. In some ways, it might even be a compactification of some thoughts I had around the artifact crutch in testing, and maybe even a little of my applying test thinking with code.
I’m necessarily going to simplify things here, just to focus on the broad ideas rather than their implementation.
I like to do three things when I’m working with teams to build software.
- Focus on testing (scrutinize)
- Focus on testability (stabilize)
- Focus on automation (sustain)
The idea here is really simple, although one part can get a little confusing. I believe you are testing all the time; it’s just a question of how much and to what extent. I won’t enter that debate here. What I will say is that the scrutinize part is meant to capture the essence of a thorough examination and understanding of what’s going to be built, the context in which it’s going to be built, and the outcomes that we prefer.
This implies careful and detailed analysis, aligning well with the critical nature of testing activities, particularly the idea of exploration in a discovery context versus an elaboration context.
Whatever you do end up building, you make it stable by making sure that testability is one of your primary quality attributes.
In fact, I believe in only two primary quality attributes: testability and trustability. In my way of thinking, all other qualities fall out of those. This post isn’t the place to go into that distinction.
The “stabilize” part is also where a lot of exploratory testing comes into play. You can make something testable, but how do you verify that you’ve done so? Keep in mind that this applies at different levels of abstraction.
What’s “testable” to a developer in terms of code can be compromised when considered from the standpoint of the user interface that the code enables. Likewise, a user interface can be very testable but held together by code that is itself very difficult to test.
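To make the code-level side of that concrete, here is a minimal, hypothetical sketch of what “building in testability” can look like. Everything in it (the `get_total` function, the weekend surcharge) is invented for illustration; the point is the injectable seam, not the business rule.

```python
# Hypothetical sketch: the same behavior written two ways.
# All names and the surcharge rule are invented for illustration.

import datetime


def get_total_hardcoded(prices):
    """Hard to test: reaches for the real clock, so the weekend
    surcharge branch can only be exercised on an actual weekend."""
    total = sum(prices)
    if datetime.date.today().weekday() >= 5:  # Saturday or Sunday
        total = round(total * 1.10, 2)
    return total


def get_total(prices, today=None):
    """Testable: the date is an injectable seam, so any calendar
    condition can be exercised deterministically."""
    today = today or datetime.date.today()
    total = sum(prices)
    if today.weekday() >= 5:
        total = round(total * 1.10, 2)
    return total


# With the seam in place, a deterministic check becomes trivial:
saturday = datetime.date(2024, 1, 6)  # a known Saturday
monday = datetime.date(2024, 1, 8)    # a known Monday
assert get_total([10.0, 20.0], today=saturday) == 33.0
assert get_total([10.0, 20.0], today=monday) == 30.0
```

The design choice here is the whole point: the second version is not “better code” in some abstract sense, it is code where testability was treated as a quality attribute while building, which is what the stabilize focus asks for.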
The sustain part is where I put heavy focus on automation. This is often a sticking point for testers who feel too much of the industry is predicated on automation to the exclusion of testing. I agree that can happen. But only if you let it.
The scrutinize and stabilize in my above list cannot be automated. They require the deliberate, creative action of human beings. This means all of your primary testing occurs at the points where we make the most mistakes: when we discover what to build and when we elaborate on that discovery by building it. Finding as many of your mistakes as possible in those two areas is crucial to keeping your cost-of-mistake curve tight.
By that I mean the time when you make a mistake and the time when you find it should be very close together.
So the sustain part can be thought of as scaling the rote, algorithmic, scripted aspects of the prior decisions related to testing. What I want is humans who are constantly free to do the scrutinize and stabilize. To do that means putting heavy emphasis on automation to do the sustain, which you can think of as scaling the testing.
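A small, hypothetical sketch of what “scaling the scripted aspects” means in practice. The function under check (`slugify`) and its cases are invented for illustration; what matters is that the cases encode decisions a human made once, during scrutinize and stabilize, and a machine now sustains on every build.

```python
# Hypothetical sketch: a rote, scripted check that a human once repeated
# by hand, codified so it runs mechanically on every build.
# The slugify function and its cases are invented for illustration.

import re


def slugify(title):
    """Toy function standing in for real application code."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
    return slug


# The scripted, algorithmic residue of earlier human testing decisions:
CASES = [
    ("Hello, World!", "hello-world"),
    ("  Spaces  Everywhere  ", "spaces-everywhere"),
    ("Already-a-slug", "already-a-slug"),
]

for raw, expected in CASES:
    actual = slugify(raw)
    assert actual == expected, f"{raw!r}: got {actual!r}, expected {expected!r}"

print("all scripted checks pass")
```

Nothing in that loop discovers anything new; it simply re-confirms, cheaply and constantly, what humans already decided should be true, which is exactly why it frees those humans for the scrutinize and stabilize work.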
Thus, as many testers are now starting to articulate, we don’t have to talk about “manual tester” versus “automated tester.” We can simply talk about testers. And one skill set of those testers is the ability to consider not just aspects of good development (building in testability) but also effective programming skills (writing automation).
There’s more to say on this topic but here I just wanted to introduce the broad organizing principle: scrutinize, stabilize, sustain. That has been, and continues to be, how I lead teams in a testing context.