You’ll often see questions about the practice of software testing that essentially boil down to this: “Which is better: manual testing or automated testing?” Many engineering managers do view the world this way; they may dress the question up in more words, but that is what they are asking. And the answer they generally arrive at is: “Automated testing is better.” How do testers combat that? And should they?
This is where many testers take the consultant approach and start talking about how automation is not really testing; instead, it’s checking. It’s an argument that is almost guaranteed to hold no sway over non-testers.
Instead, I would argue that a better approach is to reframe the question a bit and ask it this way: “Which is better: humans doing testing or tools doing testing?”
When phrased that way, you see that the answer is that both are good, both are likely needed, but tools only do what humans tell them to. This is “obvious,” and we often think people should intuitively understand it. The problem I’ve found is that their intuition is compromised by a limited view of testing.
But if we start with the above premise, “obvious” as it may be, it lets us do another bit of reframing.
We can ask people to think about this: testing is about design and execution. The tools cannot do the design part; just the execution. So, in that case, since humans can do both (design and execution), they are clearly “better.”
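To make that design/execution split concrete, here is a minimal sketch using a hypothetical `slugify` function (the function, the checks, and their names are all illustrative, not from any particular framework). A human supplies the design: which inputs matter and what “correct” means. The tool supplies only the execution.

```python
# Hypothetical function under test. A human wrote it, and a human
# decided which of its behaviors are worth checking.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# The *design* of the test is human judgment encoded as code:
# choosing these inputs and these expected outputs is the thinking part.
def check_slugify() -> None:
    assert slugify("Hello World") == "hello-world"
    assert slugify("  extra   spaces  ") == "extra-spaces"
    assert slugify("MixedCASE Title") == "mixedcase-title"

# The *execution* is the only part a tool contributes: it can run
# check_slugify() on every commit, thousands of times, without tiring.
check_slugify()
```

The tool never decided that trimming extra spaces mattered, or that mixed case should be flattened; it just ran the checks a person designed.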
But, of course, tools can execute tests much more rapidly than humans and without getting tired. So a tool can be “better” in that limited context. Tools are also well suited to fuzz testing and generative testing, but not so great at exploratory testing.
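Here is a small, stdlib-only sketch of what that generative strength looks like (no real fuzzing framework is assumed; the function and properties are illustrative). The tool contributes volume and speed by generating random inputs, but the properties being checked are still human design decisions.

```python
import random
import string

def normalize_whitespace(s: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(s.split())

def fuzz_normalize(trials: int = 1000, seed: int = 42) -> int:
    """Feed random strings to normalize_whitespace; return the failure count.

    The *properties* (idempotence, no leading/trailing whitespace) are
    human design decisions; the tool only supplies speed and repetition.
    """
    rng = random.Random(seed)
    alphabet = string.ascii_letters + "  \t\n"
    failures = 0
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 30)))
        out = normalize_whitespace(s)
        # Property 1: normalizing twice changes nothing (idempotence).
        if normalize_whitespace(out) != out:
            failures += 1
        # Property 2: the result never starts or ends with whitespace.
        elif out != out.strip():
            failures += 1
    return failures

print(fuzz_normalize())  # → 0 for this implementation
```

A thousand generated inputs is trivial for the tool and tedious for a person; but notice that the tool cannot invent the properties, only exercise them.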
Also, human testing extends beyond just code. We also test requirements, for example; we test assumptions; we test our knowledge as a team. So, again, human testing would be “better” there because tools can do none of that.
My overall point here is that any such question has to be clear about its premise regarding what testing is and is not.
So the false dichotomy that I’m referring to here is this notion of a split between human testing and automated testing such that we can ask which is better in some categorical sense. They both have their domains of applicability. They should not be set up in opposition to each other.
When you perform testing as a human, you are engaging in a process that is part of every scientific discipline out there. You are part scientist and part artist. You engage in experimentation, exploration, investigation, and discovery. You help others discover truths, chart out areas of ignorance, and show how and when our minds fail us due to cognitive biases and errors, and when our technology fails us due to sensitivities and tolerances. You help people who are building complex things communicate better by understanding how and when humans make errors in their thinking and how technology can amplify those errors.
This goes beyond just having “an eye” for testing. It’s about having a skill set and mindset for experimentation and communication. It’s about having core skills related to analysis. It’s about understanding how and when testing puts pressure on design at appropriate abstraction levels.
All of that is human work. In the context of that work, we utilize different techniques.
So part of this is getting people to realize that automation is simply that: a technique. It’s one way of doing testing. And, even then, it’s only one way of doing a limited part of testing: that which can be treated as programmatic and/or algorithmic. But testing also has a very large human component because it is based on investigation, exploration, experimentation, and discovery. No automation tool — not even the so-called “AI-based” tools — can do any of that. Not everything can be done by automation, nor should it be. But automation does have a rightful place.
It’s finding that balance that matters. Finding that balance is how we avoid the false dichotomy of which is “better.”