What do people tend to think of when they hear “tester”? More specifically, what do they think a person called “tester” primarily does? Arguably, more often than not, you’re going to hear something like “a tester’s role is to find bugs” or “a tester helps surface issues.” As an approximation, that may be accurate. But it’s only a very rough approximation. And a dangerous one. So let’s talk about this.
The danger of this “rough approximation” is that it leads to some other traps in thinking about quality assurance and testing. Those traps are:
- Quality is everyone’s responsibility.
- Only testers actually test.
- Quality assurance is a team.
- QA and Testing are the same thing.
I’ve already talked about the first and third points a bit. You can check out Is Quality Really Everyone’s Responsibility? and Quality Is As Quality Does, if you are curious. I’ve also talked a lot here and there about the second point, arguing that everyone tests but that there is such a thing as a specialist tester. I even blabbed a bit about the conflation issues.
So I don’t want to bore you by repeating things I’ve already said. Instead, as you see, I’ll bore you by linking to things I’ve already said!
Do Testers Find Issues?
Allowing for the fact that we do have to account for what someone means by “issue,” yes, that is one part of the role of a tester: to help people see where there are problems (issues). Those problems may be with what we build, but also with how we build it, how we talk about it, how we monitor it, how we deploy it, etc. Specialist testers look at internal and external qualities and the various risks (issues) that can surround those.
Specialist testers are investigators (finding issues from various clues), experimenters (seeing what issues might occur given certain conditions), and explorers (seeing what issues may exist in conditions we have yet to try or perhaps even understand).
Hopefully just by saying things that way, you can see what I mean by a “rough approximation” when it’s said that “testers find issues” or “testers hunt bugs.”
Do Testers Hunt Bugs?
As far as bug hunting goes, I think it’s most important to keep a broad definition of what a “bug” is. I rather like to say that specialist testers are looking for qualities, or the lack of them. There are external qualities and there are internal qualities. When we find those being compromised, we have found a “bug.” And compromises to those different qualities tend to threaten value in very different ways.
So “how to find (hunt) bugs” is really about learning what it means to be a good experimenter. How do you conduct a good experiment? How do you know when your experiment is good or bad (even as an approach)? How do you balance exploration with exploitation in your test strategy? How do you search for “bugs” around categories of error, meaning an understanding of where and how most humans create problems/issues/bugs when building complex things? How do you inject personae and workflows into your tests such that you capture both spatial and temporal bugs?
That’s where you get into specialist testing.
Testing is not done solely to “find bugs” nor solely to “check if bugs are fixed.” Testing is a way of surfacing risks, some of which may lead to bugs. Testing is also about helping provide a shared understanding of what quality means, given that quality is both subjective and objective. Thus the whole notion of what is and is not a “bug” can morph considerably.
Do Tools Find Bugs?
Well, this is one of those interesting discussions you can get into regarding automation and its efficacy. Here I actually want to focus a bit more on another term that gets thrown around a lot, but I’ll come back around and relate it to tools as well. So let’s look at that term first.
Do Testers Shift Left?
The earliest time bugs (issues/problems/risks) start creeping in is when we are talking about what to build. Whether you have a “requirements phase” or “sprint planning” or “use case sessions” — whatever. When decisions are being made about what to implement, that’s when we start seeing the introduction of bugs. This is all about how humans think, organize, plan, and communicate.
So no automation (tooling) is going to help with that. That sort of “bug hunting” requires humans reasoning about things with other humans. That’s using testing to put pressure on design at the business level, hence shifting left in that context. Tooling may shade into this in terms of using tools like SpecFlow or Cucumber in the context of approaches like BDD.
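As a minimal sketch of what that shading can look like (the Cart domain here is entirely hypothetical, and a tool like Cucumber or SpecFlow would express the Given/When/Then in a Gherkin feature file rather than in comments), a business-level conversation can be captured as an executable specification:

```python
# A minimal, hypothetical sketch of a business conversation captured as an
# executable specification. Run with pytest. In Cucumber or SpecFlow the
# Given/When/Then below would live in a Gherkin feature file; here plain
# comments stand in for that structure.

class Cart:
    """Hypothetical stand-in for whatever the team is actually building."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


def test_adding_an_item_updates_the_total():
    # Given an empty cart
    cart = Cart()
    # When a customer adds a book priced at 10
    cart.add("book", 10)
    # Then the cart total reflects that price
    assert cart.total() == 10
```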
The next place where bugs get introduced is when we build the thing we talked about. So this is where testing has to be used as putting pressure on design at the code and architecture level. That, again, requires humans and is one of the core aspects of TDD. But this is also where tooling shades into the mix a bit quicker, as the thinking behind TDD quickly becomes encoded as tests (either test-first or test-concurrent) that are executable as automation.
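A minimal sketch of that encoding, assuming a test-first flow and a hypothetical slugify function: the tests below would be written first, fail, and then drive the small implementation that follows.

```python
# Hypothetical test-first sketch (run with pytest). The tests are written
# before slugify() exists; once they fail, they drive the minimal
# implementation at the bottom of the module.

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_surrounding_whitespace():
    assert slugify("  Hello World  ") == "hello-world"


# Minimal implementation written only after the tests above were failing.
def slugify(text):
    return "-".join(text.strip().lower().split())
```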
Any automation here will certainly be unit based, but this is also where integration and integrated tests start to come into play and where we should be thinking about interface testing, contract testing, property testing, etc.
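As one concrete flavor of that, here is a minimal property-based testing sketch using the hypothesis library; the encode/decode round trip is a hypothetical example of the kind of invariant such tests can pin down across many generated inputs rather than one hand-picked case.

```python
# A minimal property-based testing sketch using the hypothesis library.
# The round-trip property (decode(encode(x)) == x) is a hypothetical example
# of the sort of invariant these tests explore with generated data.
import base64

from hypothesis import given, strategies as st


def encode(data: bytes) -> str:
    return base64.b64encode(data).decode("ascii")


def decode(text: str) -> bytes:
    return base64.b64decode(text)


@given(st.binary())
def test_encode_decode_round_trip(data):
    # The property: decoding what we encoded gives back the original bytes,
    # for any byte string hypothesis generates.
    assert decode(encode(data)) == data
```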
There are (at minimum) two groups of people here: the developers who are creating code based on representations, and the business analysts or product owners who are providing the representations in the first place. So the best time to be catching problems is the earliest time we are making them. That can vary quite a bit depending on how interleaved our business definition and development tasks are, which will likely depend on what kind of “agile” you find yourself doing.
But a key point to notice here is that the “bugs” here are not just the localized bugs that you might find in an implementation. They are the bugs in our thinking, in the way problems and their solutions are framed, in the way conditions are applied to complex systems, and so forth.
So there is some “shifting left” going on here in the sense that “tests” (even if called by some other name, like “behavior”) are being framed and executed earlier. Testers, as opposed to just tests, do have to “shift left” in this context along with developers. The industry as a whole is still grappling with what that actually means. Specialist testers need to be guiding much more of those discussions than they currently are.
So Back To Tooling
Selenium, Appium, Winium, and whatever else are only going to come in after these two areas I mentioned above are well underway. So, with this tooling, you are putting pressure on the design as it exists along a spectrum of implementation: different browsers, different mobile devices, different operating systems, etc. The question then is the tightness of the feedback loop, given that you are already past the “talk about it” and “implement it” parts.
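As a rough sketch of what that pressure looks like at this layer (the URL, element locators, and page are hypothetical placeholders), a Selenium check drives the built product through a real browser:

```python
# A rough Selenium sketch (Python bindings, Selenium 4 style). The URL and
# locators are hypothetical; the point is that this tooling only exercises
# the product as it already exists in a given browser.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("demo-user")
    driver.find_element(By.ID, "password").send_keys("demo-pass")
    driver.find_element(By.ID, "submit").click()
    # The check: did the implementation behave as the earlier conversations
    # and code-level tests said it should?
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```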
And Back to Shift Left
That feedback loop is really what “shift left” needs to be about. In fact, it’s really “shift left <--> shift right, shift left <--> shift right.” And so on. Only with that interplay, shifting between polarities, can you tighten the feedback loop and morph the cost of change curve into the cost of mistake curve.
Avoid Conflation
And thus we do come around to why it’s important not to conflate Quality Assurance and Testing. The easiest way to spot someone making this conflation is when they say something like “Testing ensures Quality.” That’s not true. You can have testing and still have mediocre quality. You can even have testing and have terrible quality.
What testing does is provide a mechanism by which measures of quality can be made demonstrable and empirical. That provides people with the means to reason about those aspects and make decisions about them, ideally as soon as possible. If we introduce something that threatens value, the feedback loop from when we introduce it to when we discover it should be very short.
This testing is done by everyone on the team. But test specialists — using the specialized discipline of testing — harness the various viewpoints of quality that exist and the various types of testing that are done. This is what makes quality assurance a distributed function among teams rather than a team in and of itself.
Beware Approximations
Look how much ground we covered in this post. Yet consider what I titled it. The title and the range of things we actually talked about are very different. That is the best way I could think of to demonstrate how the “rough approximation” is extremely rough, and potentially dangerous.