The perceived value of testing is often tied to how test teams believe, and promote the idea, that they “test for quality.” I say that because I maintain that quality is value to someone whose opinion matters. You might want to check out my post on testing for quality, where I covered a lot of these points. What I didn’t cover in that post is the value of testing itself, and that value can be trickier to pin down than you may think. Let’s start with this: people will place different value on the quality they perceive. Do we agree on that? If so, I argue that the value people perceive in your testing will be in direct proportion to how much they believe your testing tells them what they need to know. Different people will need to know different things, given that their valuations of quality will differ in some cases.
What you end up with is a dynamic assessment of value, and it’s here that testers can start to think a bit more deeply about how what they do provides value.
The value of testing can be made tangible by recognizing it as something like a dynamic engine with four moving parts:
- Assessment of Product Quality: How accurate and complete is it?
- Cost of Testing: How reasonable is it? Is it within project constraints? Is there a good return on investment?
- Decisions: How well does the assessment of quality produced by testing serve the project and the business?
- Timing: Is the testing done soon enough to be useful? Can it be done quickly enough to be effective but still meet business commitments?
With that second point, incidentally, you might wonder how people view a “good return on investment.” As you would imagine, this can vary quite a bit. I believe an effective measure in this context is the amount of information gained per test. In any event, each of these parts is subject to components of assessment related to value and cost, and you can break those components down into a series of operational questions.
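Before getting to those questions, here’s a minimal sketch of what “information gained per test” could look like in practice. It assumes each test run reports the distinct observations it produced (covered branches, failure signatures, and so on) plus a rough cost; all of the names and data below are hypothetical.

```python
# A minimal sketch of "information gained per test." Assumption: each test
# reports the distinct observations it produced (covered branches, failure
# signatures) plus a rough cost in minutes. Names and data are hypothetical.

def information_per_test(test_runs):
    """Yield (name, new_observations, cost, ratio) for each test, in run order."""
    seen = set()
    for name, observations, cost_minutes in test_runs:
        new = set(observations) - seen  # only observations no test has made yet
        seen |= new
        yield name, len(new), cost_minutes, len(new) / cost_minutes

runs = [
    ("checkout_happy_path", {"branch:pay", "branch:ship"}, 2.0),
    ("checkout_bad_card",   {"branch:pay", "sig:decline"}, 1.0),
    ("checkout_repeat",     {"branch:pay"},                1.0),  # adds nothing new
]

for name, new, cost, ratio in information_per_test(runs):
    print(f"{name}: {new} new observation(s) in {cost} min -> {ratio:.1f}/min")
```

On this view, a test that only repeats what other tests already told you scores zero no matter how cheaply it runs, which is exactly the intuition behind the measure.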
Value
- How are we assessing and reporting the quality of the product?
- Are we sure our assessment of quality is justified by our observations?
- Are we aware of the stated and implied requirements of the product when we need to know them?
- How quickly are we finding out about important problems in the product after they are created?
- Are our tests covering the aspects of the product we need to cover?
- Are we using a sufficient variety of test techniques or sources of information about quality to eliminate gaps in our test coverage?
- What is the likelihood that the product could have important problems we don’t know about?
- What problems are reported through means other than our test process that our testing should have found first?
Cost
- How much does the testing cost? How much can we afford?
- How can we eliminate unnecessary redundancy in our test coverage? (One way to spot such redundancy is sketched after this list.)
- What makes it difficult (and thus costly) to perform testing?
- How might the product be made more testable?
- Are there tools or techniques that might make the process more efficient or productive?
- Would testing be less expensive overall if we had started sooner or waited until later?
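On the redundancy point above, here is a minimal sketch of one way to find candidates, assuming you can map each test to the set of conditions it exercises. The test names and coverage data are hypothetical, and subset coverage is only a heuristic: a “redundant” test may still be worth keeping for speed, diagnosis, or confidence.

```python
# A minimal sketch of spotting redundant coverage. Assumption: we can map each
# test to the set of conditions it exercises. A test whose coverage is a strict
# subset of another test's is a candidate for merging or removal. All names
# and data here are hypothetical.

coverage = {
    "login_valid":        {"auth", "session", "audit_log"},
    "login_invalid":      {"auth", "error_page"},
    "login_valid_mobile": {"auth", "session"},  # strict subset of login_valid
}

def redundant_tests(coverage):
    """Return (candidate, covered_by) pairs where coverage is a strict subset."""
    return [
        (name, other)
        for name, covered in coverage.items()
        for other, other_covered in coverage.items()
        if name != other and covered < other_covered  # '<' is strict subset for sets
    ]

for test, superset in redundant_tests(coverage):
    print(f"{test} covers nothing beyond what {superset} already covers")
```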
What both of these aspects speak to is allowing people, like senior management, to make an informed decision about what they consider quality. They can also allow business analysts to make an informed decision about what the customer considers quality. Or you can simply have the customer tell you this directly.
Here are the operational questions you can ask as a function of the value and cost questions above:
- Is the test process aware of the kinds of decisions management, developers, or customers need to make?
- Is the test process focused on potential product and project risks?
- Is the test process tied to processes of change control and project management (regardless of how informal they may be)?
- Are test reports delivered in a timely manner?
- Are test reports communicated in a comprehensible format?
- Is the test process communicated as well as test results? (People may know what we concluded, but do they know how we arrived at our conclusion?)
- Are we reporting the basis for our assessment and our confidence in it?
- Is the test process serving the needs of technical support, publications, marketing, or any other business process that should use the quality assessment?
When you start to think like this, meaning when you start to ask operational questions based on cost and value, you can also start to speak, in operational terms, to two questions that testers often feel are too difficult to answer.
“How good is our testing?”
- With respect to the questions above, are there any pressing problems with the test process?
- Is our test process sufficient to alert management if the product quality is less than they want to achieve?
- Are any classes of potential problems intolerable, and, if so, are we confident that our test process will locate all such problems?
“Is our testing worth improving?”
- What strategies can we use to improve testing?
- How able are we to implement those strategies? Do we know how?
- How much cost or trouble will it be to improve testing? Is that the best use of our resources?
- Can we get along for now and improve later? Can we achieve improvement in an acceptable time frame?
- How might improvement backfire and introduce bugs, hurt morale, or starve other projects?
- What specifically should we improve? Are there any side benefits (such as better morale) to improving it?
- Will improvement make a noticeable difference?
Something to consider is that the quality of a product can be judged good enough when the potential positive consequences of releasing it, or otherwise making it available, acceptably outweigh the potential negatives in the judgment of key stakeholders in the project, such as business staff or customers.
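As a purely illustrative way of making that judgment visible, here is a minimal sketch that weighs hypothetical consequences of a release. Every consequence, probability, and dollar figure below is invented for illustration; in practice these are stakeholder judgments, not precise measurements.

```python
# A minimal sketch of "good enough" as consequence weighing. All probabilities
# and impact figures are hypothetical judgments, not measurements.

positives = [  # (consequence, estimated probability, estimated impact in dollars)
    ("revenue from releasing this quarter", 0.9, 500_000),
    ("reputation gain from the new feature", 0.5, 100_000),
]
negatives = [
    ("support cost of known minor bugs", 0.8, 50_000),
    ("an escaped critical defect", 0.1, 400_000),
]

def expected_value(consequences):
    return sum(probability * impact for _, probability, impact in consequences)

upside = expected_value(positives)
downside = expected_value(negatives)
print(f"expected upside: ${upside:,.0f}, expected downside: ${downside:,.0f}")
print("release looks good enough" if upside > downside else "hold the release")
```

The point is not the arithmetic itself but that the inputs come from the people whose opinions matter, which is the “judgment of key stakeholders” part of the claim.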
I suppose it could be said that I’m promoting a utilitarian view of quality. That view is framed in terms of positive and negative consequences, and those consequences can only be determined to the extent that people can make informed decisions about the results of tests.
That is how testing is of value relative to quality.