
Testers Relax … You Don’t Really Assure Quality

As a tester, the bad news is that you’re not assuring quality. That’s also the good news.

Despite the commonly used title of “Quality Assurance” for a testing team, you really can’t assure quality by testing. You can’t assure quality by gathering metrics. You can’t assure quality by setting standards. You can’t assure quality by unit testing all of your source code. Quality Assurance involves a continual process of minimizing risk and promoting competitive advantage. Quality is an always-evolving and malleable perception by the customer or user regarding the value provided by a product or service. Quality is not a static viewpoint that never changes, but rather a fluid perception that changes as a product matures (innovation) and other alternatives (competition) are made available as a basis of comparison.

To assure such a fluid concept, you need skilled people throughout the project life cycle who have time and motivation as well as an appropriate balance of direction and creative freedom. All of that is out of scope for a test group, unless that test group has a very broad mandate (and the skills to back it up). These things are within scope for product managers, development managers, and possibly business analysts. That being said, the test organization can certainly help in this process by performing a wide range of technical investigations. Just keep in mind that those investigations are not — and never will be — quality assurance.

There has been a definite shift in the testing industry since I started in it back in the early 90s. That shift has been in the extent to which requirements specifications are delivered and the extent to which testers play a role in crafting those requirements rather than merely interpreting them. However, some environments only practice these ideas halfheartedly, at least when it comes to testing. Yet there is usually still the idea floating around that “testing leads to quality” and thus that “quality assurance” and “testing” are really the same thing. Both of those ideas are false, in my opinion, and I’ll try to show why I think that.

First, let’s look at two aspects of the “testing leads to quality” idea.

  • Testing will verify correctness of the product. This is often what people want, but it’s impossible to achieve by testing. You can prove that the product is not correct, or you can demonstrate that you didn’t find any bugs in a given period of time using a given test strategy. However, you can’t test exhaustively, and the product might fail under conditions that you didn’t test. The best you can do (if you have a solid, credible test model) is assessment — test-based estimation of the probability of errors.
  • Testing will assess quality. This is a tricky objective because quality is multidimensional. The nature of quality depends on the nature of the product. For example, a computer game that is rock solid in terms of functionality but not entertaining is often considered a lousy game. To assess quality — to measure and report back on the level of quality — you probably need a clear definition of the most important quality criteria for your particular product, and then you need a procedure that relates test results to that definition.
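
To make that last point a bit more tangible, here is a minimal, purely hypothetical sketch of what “a procedure that relates test results to that definition” could look like. The criteria, weights, and scores are all invented for illustration; nothing here is a standard formula.

```python
# A hypothetical sketch: define the quality criteria that matter most for a
# particular product (here, a game), weight them, and relate test results to
# that definition. All names and numbers are invented for illustration.
criteria_weights = {
    "functional_correctness": 0.3,
    "performance": 0.2,
    "entertainment_value": 0.5,   # for a game, this criterion may dominate
}

# Per-criterion assessments on a 0.0 to 1.0 scale, derived from whatever tests
# or evaluations the team agreed actually measure each criterion.
test_results = {
    "functional_correctness": 0.95,
    "performance": 0.80,
    "entertainment_value": 0.40,
}

overall = sum(weight * test_results[name] for name, weight in criteria_weights.items())
print(f"Weighted quality assessment: {overall:.2f}")

# A game that is rock solid functionally but weak on entertainment scores
# modestly here, which echoes the point that quality is multidimensional.
```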

An Example of Simple Testing Leading To Complex Quality

Taking those two points above, let’s consider how complex this can get even with a simple example. It’s very important that you understand the problem you’re trying to solve. This usually involves making sure you read the specifications carefully, in whatever format those happen to be. You should know what data is to be input into the application and what form that data should be in. You should also know what processing has to be done to that data. Finally, you should determine what the expected output is supposed to be and what form the output must take.

For example, suppose you’re asked to test a simple program that simulates a calculator. The program takes in two numbers. Once it does so, the program outputs the sum of the numbers. Before you start doing anything, you have to define this problem more completely. For example:

  • Should the program ask the user to enter the numbers? Or does it just get them from a file? Or database?
  • If the program should ask, then how should the program ask?
  • What kind of numbers are to be input?
  • Must they be whole numbers or can they be decimal numbers?
  • Should the numbers be input one number per line, or can they both be input on one line?
  • After the numbers are input and added, how should the result be displayed?
  • Do you want the input numbers output along with the sum, or do you just want to output the sum?
  • If you want the input numbers as well as the sum to be output, how do you want that done: in column format or on one line?
  • Does the program loop at all? Does it read (or ask for) multiple numbers?

As you can see, there are many questions to ask even in this simple problem — and I didn’t even ask any questions about the processing, because adding is a simple calculation. Imagine how complex it would be to calculate financial numbers when liquidity is applied to hedge funds or to calculate changing X and Y coordinates in a missile targeting system. Notice also that I didn’t focus on any specific user interface aspects of this. The assumption might be a command line or a very simple UI. Again, imagine how the complexity scales as you have more in-depth applications with a correspondingly complex user interface.

The important point here is to ensure that you solve the correct problem. You need to make sure that you understand exactly what you have to do before you start the next step in the development cycle. If you’re not sure what you need to do, then ask the person who wants you to develop and test the program. The point is that just as developers can waste time solving the wrong problem, testers can waste time proving the wrong problem was solved. And thus does “quality” sometimes happen to us rather than be assured by us. So “assessing quality” and “verifying correctness” are not necessarily the simple concepts they might seem to be. They do not produce quality.

Producing Quality

When you hear someone say that they try to “produce quality,” what they usually mean is that they employ a methodology based on how strongly they believe quality is tied to the formality of the specification process within the development life cycle. What this leaves wide open is the nature of the methodology. Most people are familiar with the distinction between “waterfall” methodologies and “agile” methodologies. In terms of the waterfall approach, what the “produce quality” view generally suggests is that quality will be “assured” when software is produced via a process that does four things:

  1. Obtain a clear and comprehensive requirements specification.
  2. Construct a design specification from the requirements.
  3. Derive the implementation from the design specification.
  4. Demonstrate convincingly that the implementation meets the specification.

Does all that produce quality? If it does, is the implementation then what we use to “assess quality”? Does having this specification mean we can “test for correctness”? Well, my simple example with the calculator shows how many design and implementation questions can show up in just that limited context. So imagine the case with much more complex systems. In short: no. Merely producing the above information, and the very act of having it, does not by itself assure quality.

What about an agile approach? The view here generally suggests that quality will be “assured” when software is produced via a process that does these things:

  1. A customer (or business) user is present to discuss a favored implementation.
  2. The simplest possible solutions are created to meet the implementation needs for a small area of functionality.
  3. Feedback is received from the customer (or business) user on the solutions as they are developed.

Does that produce quality? As with the waterfall approach, we can once again ask whether the implementation is what we use to “assess quality,” and whether having this customer-derived feedback means that we can “test for correctness.”

As we all probably know, following a waterfall approach does not assure quality. However, contrary to what some agile pundits will tell you, following an agile approach likewise does not assure quality. Well, that’s just great. So what are we, as testers, supposed to do? I think we can go back to that “assessing quality” idea but with a shift of emphasis.

Assessing Quality, A Second Look

So let’s reframe the question here a bit from “How do we assure software quality?” to “How do we satisfy the customers of our software?” Let’s reframe the idea of “Do we have quality software?” to “Are our customers satisfied with the experience of our software?”

Many people’s understanding is that quality can be treated as a property that can be built into a system by following certain rules and procedures. But this understanding inclines those same people to forget that “quality” is an assessment that others make based on their experience with how well the software helps them do their work. The question on the minds of the users of software is not, “Is this software well-structured?” but “Does this software help me get more work done? Can I depend on it?” The greater the level of satisfaction, the more likely the customer is to say the software is of good quality.

An immediate and obvious consequence of this shift of the question is that the software developer and/or designer does not and should not declare that the job is “done.” Instead, the customer declares satisfaction (or dissatisfaction) with what the project has ultimately delivered. The job of the software developer/designer — and thus of the tester — is not done until the customer has declared satisfaction. There are probably many levels at which a customer can declare satisfaction but here are a few relatively obvious ones:

  • All basic promises were fulfilled. The customer assesses that the producer has delivered exactly what was promised and agreed to. This might be called “basic integrity.”
  • No negative consequences were produced. The customer uses the product for a while and encounters no unforeseen problems that cause disruption or serious losses. The customer assesses that the product’s design has been well thought out and that it anticipates problems that were not apparent at the outset. That this level is distinct from the previous one can be seen in the common phrases that occur when this level is not reached: “This may be what I asked for, but it’s not what I want!” and “It does what you all promised but I wish I’d have asked about what it would do in this situation!”
  • The customer is delighted. At this level the product produces no negative consequences and, in fact, goes well beyond the customer’s expectations and produces new, unexpected, positive effects. The customer expresses a large amount of delight, for lack of a better term, with the product and often promotes it to other people.

This all sounds great, but the problem comes in when you realize that “delight” is ephemeral if it’s based on the software itself. Having mastered (and come to take for granted) the new stuff, the user will expand their horizons and expect more. Will you be ready for the growing customer? “Delight” arises in the context of the relationship between yourself and the customer. The delighted customer will say that you’ve taken the trouble to understand their work and business, that you’re available to help with problems and to seize opportunities, and that you, as a provider of a product or service, generally care for the customer. A key point to get out of this is that users sometimes change their expectations, often in response to the new level of productivity made possible by the software itself.

Keep this in mind! The assessment of the customer evolves and changes and that means your own patterns of assessing quality must evolve and change; your notions of what “correctness” means will have to be modified.

Now realize that different customers will have differing opinions and you’ll see that just “testing for correctness” and “assessing quality” will become somewhat problematic.

Keep this in mind! You can never know the “actual” quality in some universal sense, because part of quality is determined by the eye of the beholder.

While the “actual” quality may elude you, it is the case that, by applying a variety of tests, you can derive an informed assessment of the results and of how those results should be interpreted relative to your organization’s notion of quality.

I should also note that the first level in the list above has been made unnecessarily difficult to achieve because the language used to describe business and organizational processes is often different from the languages currently used in specifications for requirements and design. For example, business processes are cyclic and comprise networks linked by requests their participants make of other people; specifications use certain specialized notation to describe input-output relationships of software components. Business processes have deadlines and are triggered by time-dependent events; temporality is difficult to express in the notation systems used for specifications. The customer and designer would be in a better position to know they are in agreement if the language of the specifications contained the same distinctions as the language of business processes. This is somewhat the point of the practice known as Domain-Driven Design (DDD) and it is also the basis of testing that utilizes acceptance criteria as early as possible.

Acceptance-Driven Assessment

If you want a good start at assuring quality, get your specifications in the language of the business. Since testing is a key component of the process (or should be) and is used to provide information on “correctness” and “quality” (or should be), that would seem to argue that test statements (or test specifications, as I prefer to call them) can be the shared language that bridges the business process and the requirement/design specification. What I believe this shared language approach can provide is a good place to put some emphasis, namely the idea of managing the relationship between risk and a shared understanding of what quality means for your product given the (often varying) needs of your customers.

An Important Point! A shared understanding of quality is where the stated and implied requirements come into play and where those requirements intersect with the business processes such that people can look at the intersection and say “Yes! That’s what I mean by quality.”

Requirements and/or specification development is not something that happens all at once at the start of a project. In this way, I think agile approaches largely get it “more right” than waterfall approaches. Generally, requirements are negotiated in the course of two simultaneous dialogues throughout the project life cycle. Those dialogues entail asking, “What do we actually want?” and “What can we actually build?”

I believe that the efficacy of these dialogues goes a long way towards determining the ultimate quality of the product. I think a good definition of requirements is that they are the set of ideas that collectively define quality for a particular product. These are your acceptance criteria. I then define testing as a process of developing the means to provide an assessment of product quality. These are also your acceptance criteria. In fact, they can be the same thing: your requirements need to be testable statements or testable specifications.
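
As one possible illustration, here is a minimal sketch of what requirements phrased as testable specifications might look like for the earlier calculator example. I’m assuming pytest and a hypothetical calculator module with an add function; a behavior-driven tool with Given/When/Then syntax would serve the same purpose. The test names state each requirement in the language of the business, and the test bodies relate a result to that statement.

```python
# A hypothetical sketch: acceptance criteria expressed as executable test
# specifications. The "calculator" module and its "add" function are assumed
# purely for illustration.
import pytest
from calculator import add

def test_the_sum_of_two_whole_dollar_amounts_is_reported_exactly():
    assert add(19, 23) == 42

def test_amounts_entered_with_cents_are_added_to_the_nearest_cent():
    # approx() sidesteps binary floating-point noise; whether that rounding
    # behavior is acceptable for money is itself a question for the business.
    assert add(0.10, 0.20) == pytest.approx(0.30)
```

Whether such statements live in a test framework, a feature file, or a requirements document matters less than the fact that the business, the developer, and the tester can all read the same sentence and agree that it defines quality for that piece of the product.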

The upshot is that your testing won’t “assure” quality but it will work with the deliverables that everyone agrees encapsulate what quality means for a given project and determine if that notion has been adhered to.
