The question in this blog title comes up often. The worst answer that can be given is: “When there are no more bugs.” It’s the worst answer because the inevitable follow-up is: “But how do you know?” On the other hand, some people, upon answering this, begin providing a very convoluted answer. Here’s my take.
I honestly don’t know that I ever stop testing, per se. After all, testing extends back into the specification process before implementation even occurs. Testing also extends forward into deployment and involves the application living in production, including such aspects as coordinating with support.
But that can sound like I’m avoiding the question. So if I reframe this a bit, I can ask not when I stop, but rather: When do I get to the point of saying “I’ve tested enough for this particular feature”? I would say that moment occurs when there is a preponderance of evidence strongly suggesting a low likelihood of finding any more sufficiently impactful bugs.
Here “sufficiently impactful” means bugs that would severely diminish, or entirely eliminate, the value people get from the product or service. That’s all, in part, a value judgment, and that judgment may change based on the type of feature or the context of the feature. Maybe it’s going to be part of a demo at a conference. Maybe it’s being provided to some early adopters. Maybe it’s part of a promised delivery to a third party.
One of my concerns, though, is that when this “when do I stop testing” question is asked, it also seems to be implicitly framed as treating testing solely as an execution activity. If you also treat testing as a design activity, then I would argue that you never stop.
Well, unless your company has stopped designing features entirely, which is unlikely.
I think it’s important for testers to keep in mind that testing is a broad-spectrum, continuous activity that — for specialist testers, anyway — never truly stops. As an activity, it simply undergoes shifts of focus and emphasis based on context.