Sometimes you’ll hear testers talk about how they “test for quality.” My problem with that phrase is that it’s potentially dangerous, and it becomes an even more dangerous expectation unless some idea of what the phrase actually means gets sorted out. Once you have an idea of what “test for quality” actually means, maybe you can craft intelligent processes around the phrase.
If you’re going to do that, make sure you know why you test and make sure you have some idea of what quality means. Without some thinking around those two areas, “testing for quality” is nothing more than something you say to make managers feel good.
Quality can be “simple” when you consider the viewpoints of two people that I respect.
- Quality is value to some person. That comes from Gerald Weinberg in his 1991 book Quality Software Management. I would add that the person in question is usually a person who matters. More on that later. People can (and do) debate this notion of quality all day long, but I’ve found it to be a truism, and I think if most people actually think about it, they’ll agree.
- A bug is anything that threatens the value of a product. That comes from James Bach and Lessons Learned in Software Testing, the book he co-authored with Cem Kaner and Bret Pettichord, but it’s really part of the “Agile Testing” credo. This definition stands in contrast to the view that something can only be called a bug relative to a formal specification.
In my opinion these definitions allow people who think about quality to be flexible rather than simply reactive.
A few things fall out of all this. First, value is generally what someone will do (or pay) to have their requirements met. This means that quality is inherently subjective. Decisions about quality are ultimately business decisions. So here’s what you really have to know: who has the power and authority to make those decisions on the behalf of the business? Those are the people that really matter.
Another point to get out there immediately is that quality is not a thing; it’s the measure of a thing. Think of quality as a metric. Putting my cards on the table, I’ll say that the thing quality measures is excellence. I know that sounds trite, but it’s true. Okay, I hear you asking, but then how much “excellence” does a thing possess? Well, what do we mean when we say something is “excellent”? Excellence is usually said to be the fact or condition of going above and beyond; of superiority; of surpassing goodness or merit. Notice something there? It’s subjective. Just like quality. And that makes sense because — at least according to me — the one is defined in relation to the other. What this means is that anything that doesn’t measure up to your brand of “excellence” is, ipso facto, a problem. And that means you need to take a problem-oriented attitude when looking at quality. Bobb Biehl, in his 1995 book Stop Setting Goals If You Would Rather Solve Problems, describes well how a problem-oriented attitude differs from a goal-oriented attitude. That’s important when you realize many people treat quality as a goal, which it’s not.
Okay, so quality isn’t a thing. And quality isn’t a goal. So what is it then? How about this:
Quality is getting the right balance between timeliness, price, features, reliability, and support to achieve customer satisfaction.
This is where I start to bring in the notion of “testing for quality,” because if software quality is basically a balance of reliability, timeliness to market, price versus cost, and the feature richness being offered, then the testing that is done must exist in balance with these various factors. And this is usually where some notion of “Quality Assurance” pops up.
The QA and/or Test team needs to be sure that all activities of the testing function are adequate and properly executed. The body of knowledge, or set of methods and practices used to accomplish these goals, is quality assurance. Quality Assurance, as a team of people, is responsible for making sure there is some notion of quality against which the product can be compared. Testing is one of the tools used to ascertain the quality of a product relative to that existing notion.
This ties into the excellence that I mentioned before. How? Well, the following are what I believe are good definitions of the fundamental components of the goals of quality assurance.
- The definition of quality is customer satisfaction.
- The system for achieving quality is constant refinement.
- The measure of quality is profit.
- The target goal of the quality process is the most valuable things provided, working correctly, in the shortest responsible timeframe.
Quality can be quantified most effectively by measuring customer satisfaction and that often translates into profit which, in turn, translates into competitive advantage.
One formula that’s effective for achieving these goals is:
- Be first to market with the product.
- Ask the right price.
- Get the right features in it.
- Keep the unacceptable bugs to an absolute minimum.
There is a corollary to the first point, which is this:
- If you can’t be first, be really close to first.
There is a corollary to the third point as well, which is this:
- The “right features” are the required stuff that the customer needs to do their job and then there’s some of the “extra” stuff that may be window-dressing but does please the customer.
Finally, there is a corollary to the last point, which is this:
- Make sure your bugs are less expensive and less irritating than your competitor’s bugs. And if you don’t have competitors, make sure your bugs are not perceived as expensive and irritating.
In my experience, except in the safety-critical arena, the above is a viable and actionable formula for creating an excellent product or an excellent service. (I should also note that I am indebted to Marnie Hutcheson who hits on many similar points in her book Software Testing Fundamentals.)
Yet, with all this being said, testing has to show that it adds value and that it is necessary for product or service success. In my experience this is the only true route to counter the market forces that suggest testing can be left out of the process. A QA and/or Test team has to make a good cost-benefit statement for the folks who ultimately decide whether it’s all worth it. I think it’s really important to keep in mind that quality assurance, in a very real sense, is a political process and that means the notion of how much credence to give testing reduces to business decisions.
The bottom line for testers is that the testing function must add value to the product. Testers must be able to demonstrate that value.
One way to allow a QA and/or Test team to develop a good cost-benefit statement, and to allow a testing function to add real credibility to product and service quality, is to recognize the need to use efficient formal methods coupled with effective metrics. Okay, maybe you can buy that but I bet dollars to doughnuts that some of you cringed when you read “formal.” First of all, “formal” just means following a set of procedures. I don’t think anyone could argue too much that this is a bad thing. Assuming that makes sense, a question presents itself: what are “efficient formal methods” and “effective metrics?” Well, the first thing I want to state here is that no effective QA or test organization should use a term like “best practices.”
There are practices that are good enough given the context in which they are used. They might not be good enough in another context. So a notion of “best” is highly localized.
In many companies, the point above may not be that hard to get across. However, in the field of Quality Assurance and Testing, I would certainly maintain that, except in limited contexts, there’s often very little consensus about which methods or metrics are “best.” And even where such consensus exists, my experience is that there is no consensus on how and when to use those methods or metrics. Here’s what I think we can say while remaining intellectually honest: there are methods and metrics that are more likely to be considered good and useful than others, within a certain community, and assuming a certain context (such as a particular project) that the community is acting within or upon. The point here is that good practices are not a matter of industry popularity. Good practice is a matter of skill and context.
Now this brings me back again to the notion of testing and testers. Just as good practices are a matter of skill and context, the practice of testing is a combination of mind-set and skill-set. The best testers you’ll come across have very effective thinking skills. Thinking is, in fact, the focus of testing. Earlier I talked about how a test team collectively adds value. Well, here’s where a tester (as an individual) adds value: by thinking about things just a little differently than everyone else so that they can find problems that others are unlikely to find. Testers make informed decisions about quality possible because testers think critically and logically about products and the customers who use those products. Testing is about questioning a product or service in order to evaluate it. A tester must observe what the product does, when it does it, how it does it, and under what conditions the product changes what it does.
Testing is really about one thing and one thing only: making informed decisions about quality possible.
I can refer again to definitions from two people whose opinions I respect, to show that I’m not totally alone in this kind of thinking.
- “Testing is questioning the product in order to evaluate it.” (James Bach)
- “Testing is a process of technical investigation, intended to reveal quality-related information about a product.” (Cem Kaner)
The first definition is a good operational definition to use when you want to talk with and among skilled testers. The second is a good operational definition to use when you’re trying to underscore the idea that testers need skills.
The “questions” consist of various ways of configuring and operating the product. The product “answers” by exhibiting behavior, which the tester observes and evaluates. A good test results in a credible (empirical) observation of the product doing something. To evaluate a product is to infer from its observed behavior how it will behave in the field and to identify important problems with it. These problems are the things that threaten the product’s value to the people who matter: the business and the customers of the business. The “technical investigation” consists of the various means used to synthesize answers to those questions, and the basis for understanding what counts as an answer in the first place.
Asking questions and technical investigation are the two sides of testing that, interestingly enough, often work in conflict. It’s a good conflict, though, sort of like the conflict that drives narrative tension in a work of fiction. The two sides are this:
- asking the right questions
- getting the right answers
Here questions = tests and right answers = credible observations. Traditionally, testing has been more focused on getting the right answers, through carefully documented, controlled, and repeated tests. But — and here’s where the conflict comes in — there are so many questions worth asking that a tester has to weigh the trade-offs between taking ultimate care in getting the right answers and the danger of simply not doing enough testing (i.e., not asking enough questions). An effective test effort requires that everyone understand certain realities about how the business and the customers of the business place value on the product.
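To make the question-and-answer framing concrete, here’s a minimal sketch in Python. The product under test, a hypothetical `apply_discount` function, is invented purely for illustration; the point is the shape of each test: configure and operate the product (the question), then evaluate the observed behavior (the answer).

```python
# Hypothetical product under test: a simple pricing rule (invented for illustration).
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# A "question": configure the product with specific inputs and operate it.
def test_half_off():
    answer = apply_discount(10.00, 50)   # the product "answers" with behavior...
    assert answer == 5.00                # ...and the tester evaluates that answer.

# Another question, probing a boundary that could threaten value:
# a 100% discount should mean free, never a negative price.
def test_full_discount_is_free_not_negative():
    assert apply_discount(19.99, 100) == 0.00

test_half_off()
test_full_discount_is_free_not_negative()
```

Each test is one question; there are always far more questions worth asking than these two, which is exactly the trade-off described above.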
The goal of a test effort is not so much to determine what is and is not of value in a product. That’s a business decision. The goal of a test effort is to provide enough information to help people make informed decisions about whether the value they believe exists does, in fact, exist.
The above truly is an extremely important point to note because this notion relates testing to quality in an operational way. Why? Because it brings up the true conflict (as opposed to the surface one): cost versus value in testing. Here’s how I like to break this down:
Value in Testing
- The elements of the product that are exercised by the test.
- The probability that if a problem exists in the area covered, the test will reveal it.
- The degree to which the test can be trusted, because its results are valid and consistent across multiple executions of that test.
- The degree to which the information we want to discover exists; and the degree to which that information matters.
- The potential for the test to reveal unanticipated problems by challenging assumptions.
Cost in Testing
- Performing the test may prevent you from doing other tests.
- The cost of getting ready to perform the test.
- The cost of performing it as needed.
- The cost of getting someone else ready to run that test.
- The cost of keeping the test running.
- The cost of explaining the test, justifying it, and proving that it was performed.
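One way to operationalize the lists above is to treat each candidate test's value and cost as rough estimates and rank tests by their ratio. This is a hypothetical sketch, not a prescribed method: the names and scores are invented placeholders, and in practice a team would estimate value and cost from the factors listed above.

```python
# Hypothetical sketch: rank candidate tests by estimated value per unit cost.
# All names and numbers are invented placeholders for illustration.
from dataclasses import dataclass

@dataclass
class CandidateTest:
    name: str
    value: float  # e.g., coverage x probability of revealing a problem x how much it matters
    cost: float   # e.g., setup + execution + maintenance + reporting effort

def prioritize(tests: list[CandidateTest]) -> list[CandidateTest]:
    """Order candidate tests by value-to-cost ratio, highest first."""
    return sorted(tests, key=lambda t: t.value / t.cost, reverse=True)

candidates = [
    CandidateTest("checkout happy path", value=9.0, cost=1.0),
    CandidateTest("legacy import edge case", value=3.0, cost=6.0),
    CandidateTest("concurrent login stress", value=7.0, cost=3.5),
]

for t in prioritize(candidates):
    print(f"{t.name}: {t.value / t.cost:.1f}")
```

The ranking itself is trivial; the hard (and political) part is agreeing on the estimates, which is precisely the cost-versus-value conflict discussed here.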
So what’s the point of all this? Well, I think it’s an important one:
In software engineering it’s generally a truism that at every point in the life cycle of the product, it’s necessary to compare the present quality of the product against the cost of further improvement and the value such improvement would provide.
The same point applies for testing. A test team always has to look at the cost and value of doing more testing based on the informed decisions about quality that the current testing allows. It can’t just be assumed that more testing is good any more than it can just be assumed that the cost of further testing is too high.
I suppose testing could be said to be “best” if the assessment of quality is more accurate, the cost of testing is lower, the basis for making decisions is better, and the time frames are shorter. That all sounds great, right? In fact, “perfect testing” would probably be to instantly and effortlessly give the right information to allow any part of the business to make any necessary decision concerning the product. The only slight problem is that I’ve yet to see that perfection achieved, and I question whether it’s even possible. So the final important point here:
Effective “testing for quality” should be a process of developing a sufficient assessment of quality relative to stated values, at a reasonable cost, to enable wise and timely decisions to be made concerning getting a product into the hands of customers.
That really is one of the fundamental quality concepts to understand.