Sometimes you’ll hear testers talk about how they “test for quality.” My problem with that phrase is that it’s potentially dangerous, and it becomes an even more dangerous expectation unless some ideas of what the phrase actually means are sorted out. Once you have an idea of what “test for quality” actually means, maybe you can craft intelligent processes around the phrase.
If you’re going to do that, make sure you know why you test and make sure you have some idea of what quality means. Without some thinking around those two areas, “testing for quality” is nothing more than something you say to make managers feel good.
Quality can be “simple” when you consider the viewpoints of two people that I respect.
- Quality is value to some person. That comes from Gerald Weinberg in his 1991 book Quality Software Management. I would add that the person in question is usually a person who matters. More on that later. People can (and do) debate this notion of quality all day long, but I’ve found it to be a truism and I think if most people actually think about it, they’ll agree.
- A bug is anything that threatens the value of a product. That comes from James Bach and the book Lessons Learned in Software Testing, but it’s really part of the “Agile Testing” credo. This definition stands in contrast to the position that something can only be called a bug relative to a formal specification.
In my opinion these definitions allow people who think about quality to be flexible rather than simply reactive.
A few things fall out of all this. First, value is generally what someone will do (or pay) to have their requirements met. This means that quality is inherently subjective. Decisions about quality are ultimately business decisions. So here’s what you really have to know: who has the power and authority to make those decisions on behalf of the business? Those are the people that really matter.
Another point to get out there immediately is that quality is not a thing; it’s the measure of a thing. Think of quality as a metric. Putting my cards on the table, I’ll say that the thing quality measures is excellence. I know that sounds trite, but it’s true. Okay, I hear you asking, but then how much “excellence” does a thing possess? Well, what do we mean when we say something is “excellent”? Excellence is usually said to be the fact or condition of going above and beyond; of superiority; of surpassing goodness or merit. Notice something there? It’s subjective. Just like quality. And that makes sense because, at least according to me, the one is defined in relation to the other. What this means is that anything that doesn’t measure up to your brand of “excellence” is, ipso facto, a problem. And that means you need to take a problem-oriented attitude when looking at quality. Bobb Biehl, in his 1995 book Stop Setting Goals If You Would Rather Solve Problems, describes well how a problem-oriented attitude is very different from a goal-oriented attitude. That’s important when you realize many people treat quality as a goal, which it’s not.
Okay, so quality isn’t a thing. And quality isn’t a goal. So what is it then? How about this:
Quality is getting the right balance between timeliness, price, features, reliability, and support to achieve customer satisfaction.
This is where I start to bring in the notion of “testing for quality” because if software quality is basically a combination of reliability, timeliness to market, price versus cost, and the feature richness on offer, then the testing that is done must exist in balance with these various factors. And this is usually where some notion of “Quality Assurance” pops up.
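To make that idea of “balance” a little more concrete, here’s a toy sketch in Python. The factors come from the definition above, but the weights, the 0–10 scale, and the quality_score function are my own illustrative assumptions rather than anything prescribed here or elsewhere; the only point is that pushing one factor up usually means trading another down, and it’s the overall measure that matters.

```python
from dataclasses import dataclass

@dataclass
class QualityFactors:
    """Hypothetical 0-10 ratings for the factors in the definition above."""
    timeliness: float
    price: float
    features: float
    reliability: float
    support: float

# Illustrative weights only; in practice these would be negotiated with
# whoever has the authority to make quality decisions for the business.
WEIGHTS = {
    "timeliness": 0.25,
    "price": 0.20,
    "features": 0.20,
    "reliability": 0.25,
    "support": 0.10,
}

def quality_score(factors: QualityFactors) -> float:
    """Weighted balance of the factors; a stand-in for customer satisfaction."""
    return sum(getattr(factors, name) * weight for name, weight in WEIGHTS.items())

# Shipping earlier (higher timeliness) at the cost of reliability may or may
# not improve the overall picture; the trade-off itself is the point.
early_but_shaky = QualityFactors(timeliness=9, price=7, features=8, reliability=5, support=6)
late_but_solid  = QualityFactors(timeliness=5, price=7, features=8, reliability=9, support=6)

print(round(quality_score(early_but_shaky), 2))
print(round(quality_score(late_but_solid), 2))
```

In this contrived example the two releases score about the same, which is exactly the kind of trade-off the definition is pointing at: testing doesn’t exist to maximize one factor, it exists in balance with all of them.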
So here’s an important point: The QA and/or Test team needs to be sure that all activities of the testing function are adequate and properly executed. The body of knowledge, or set of methods and practices used to accomplish these goals, is quality assurance. Quality Assurance, as a team of people, is responsible for making sure there is some notion of quality against which the product can be compared. Testing is one of the tools used to ascertain the quality of a product relative to that existing notion.
This ties into the excellence I mentioned before. How? Well, the following are what I believe to be good definitions of the fundamental components behind the goals of quality assurance.
- The definition of quality is customer satisfaction.
- The system for achieving quality is constant refinement.
- The measure of quality is profit.
- The goal of the quality process is to provide the most valuable things, working correctly, in the shortest responsible timeframe.
Quality can be quantified most effectively by measuring customer satisfaction, and that often translates into profit, which, in turn, translates into competitive advantage.
One formula that’s effective for achieving these goals is:
- Be first to market with the product.
- Ask the right price.
- Get the right features in it.
- Keep the unacceptable bugs to an absolute minimum.
There is a corollary to the first point, which is this:
- If you can’t be first, be really close to first.
There is a corollary to the third point as well, which is this:
- The “right features” are the required stuff the customer needs to do their job, plus some of the “extra” stuff that may be window dressing but does please the customer.
Finally, there is a corollary to the last point, which is this:
- Make sure your bugs are less expensive and less irritating than your competitors’ bugs. And if you don’t have competitors, make sure your bugs are not perceived as expensive and irritating.
In my experience, except in the safety-critical arena, the above is a viable and actionable formula for creating an excellent product or an excellent service. (I should also note that I am indebted to Marnie Hutcheson who hits on many similar points in her book Software Testing Fundamentals.)
Yet, with all this being said, testing has to show that it adds value and that it is necessary for product or service success. In my experience this is the only true route to counter the market forces that suggest testing can be left out of the process. A QA and/or Test team has to make a good cost-benefit statement for the folks who ultimately decide whether it’s all worth it. I think it’s really important to keep in mind that quality assurance, in a very real sense, is a political process and that means the notion of how much credence to give testing reduces to business decisions.
Keep in mind that the bottom line for testers is that the testing function must add value to the product. Testers must be able to demonstrate that value.
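To show the shape such a demonstration might take, here’s a minimal back-of-the-envelope sketch. Every figure in it is an invented placeholder (the testing cost, the defect counts, the escape likelihood, the cost per escaped defect); a real cost-benefit statement would substitute your own project’s numbers and be prepared to defend them.

```python
# A rough sketch of a cost-benefit statement a test team might bring to the
# people who decide whether testing "is worth it." All figures are placeholders.

testing_cost            = 40_000   # salaries, tooling, environments for the release
bugs_found_pre_release  = 120      # defects the testing effort caught
avg_cost_if_escaped     = 1_500    # average cost to fix and support a production defect
escape_likelihood       = 0.6      # assumed fraction of those bugs that would have escaped

# Estimated cost the organization avoided because testing caught the bugs early.
avoided_cost = bugs_found_pre_release * escape_likelihood * avg_cost_if_escaped

net_benefit = avoided_cost - testing_cost
roi = net_benefit / testing_cost

print(f"Avoided cost: ${avoided_cost:,.0f}")
print(f"Net benefit:  ${net_benefit:,.0f}")
print(f"ROI:          {roi:.0%}")
```

Even a crude statement like this turns “testing matters” into a claim the people making the business decisions can actually weigh.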
One way to allow a QA and/or Test team to develop a good cost-benefit statement, and to allow a testing function to add real credibility to product and service quality, is to recognize the need to use efficient formal methods coupled with effective metrics. Okay, maybe you can buy that, but I bet dollars to doughnuts that some of you cringed when you read “formal.” First of all, “formal” just means following a set of procedures. I don’t think anyone could argue too much that this is a bad thing. Assuming that makes sense, a question presents itself: what are “efficient formal methods” and “effective metrics”? Well, the first thing I want to state here is that no effective QA or test organization should use a term like “best practices.”
An important thing to keep in mind is that there are practices that are good enough given the context in which they are used. They might not be good enough in another context. So a notion of “best” is highly localized.