
There Is No “Test Phase”

When people talk about the “testing phase” of a project, there is already a problem: the phrase implies testing is an activity relegated to a certain phase rather than a structural element of the entire project itself.

This idea of a test phase reminds me of James Bach’s analogy of driving a car and asking “When is the ‘look out the windshield’ phase of driving?” The obvious point is that there is no “look out the windshield” phase. You look out the windshield all the time! (Well, hopefully anyway.) It falls under the idea of “good and safe driving”, which you practice for the duration of the “project” (i.e., driving from A to B).

Yet a lot of people do like to “bucket” testing within a phase. Usually these are the same people who like to write twenty-page test plans that no one reads. An article I read a while back talked about aggressive and passive testing, and its real focus was one person’s stance — or viewpoint — on testing. And I do think that notion of an articulated stance towards testing is important.

So let me start articulating my own stance. In my view, you test at any point in the project where there is communication about what the project intends to create and how it intends to implement that intent. Likewise, the architectural level at which you test (unit, integration, or system) is a consideration of your overall test strategy.

Test Stance

This still leaves open a lot of room for how things are done. As with the article I just referenced on aggressive/passive testing, I think it is important for individuals — and test teams — to come up with a test stance that guides how they think about the testing they are doing at a given point in the project. To me, your stance speaks to the intent of your testing: not just why you are testing, but what you hope to find and how your testing interacts with the project at its current phase. Here are some examples:

  • Sympathetic Testing: Early in a project, you generally know the application doesn’t work all that well. Some features are still being thrown together. However, even relatively simple tests at this stage can do a few things. At a minimum, they actively introduce you to the feature, as opposed to just having you read a requirement document about it. Tests at this point can find bugs without you having to exert a ton of effort trying out every combination. The “sympathetic” part here is in relation to your stance: you are sympathetic to the idea that everyone knows the feature is not quite ready yet. So hammering developers with tons of bugs that they know exist is not the goal. The goal is discovery and, perhaps, helping the developers see what testing is going to look like as it gets more aggressive.
  • Aggressive Testing: As the major features get further along, your simple tests, while still useful to an extent, will tend to become less effective. Before, you were just checking whether the features worked at all; now you want to know whether they work well. You want to know if they work as intended. You want to know if they work under varying conditions (performance, data, security, and so forth). This stance is aggressive because your intent is to put the feature through its paces. You should expect a high bug count here.
  • Diverse Testing: As the features you are testing go through a few fix cycles, each feature starts to mature. Bug counts can go down here, either because you have found most of the bugs or simply because the remaining bugs are hidden and hard to find. What you can do is diversify your portfolio of testing techniques. Try to come at the feature from different directions, with more complex, subtle, or even somewhat unrealistic data conditions. Test sessions and pair testing are very effective at this point.
  • Meticulous Testing: Here any change is tested very carefully. You make sure anything that is to be released is fully and completely just how it should be, and you really drill into a particular feature, or even into one aspect of a particular feature.

Now, putting these in a list makes it seem as if they follow a chronological order. Not necessarily. These are stances, which means they can be applied at various points, at various times, and under varying conditions. You may actually find yourself hopping between a few in a given day — or even during a particular testing session with a particular feature. The key thing is to really notice what you are striving for with a given stance. For example, when I’m in my “aggressive” stance, I’m really applying a shotgun effect, hammering at the application and finding lots of stuff wherever I can. This is sort of like a writer getting a first draft on paper. When I jump into my “meticulous” stance, I’m really trying to focus in on the things I have found or suspect might be issues. This is like a writer going into editing mode.
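To make that shift in intent concrete, here is a minimal pytest sketch. Everything in it is hypothetical: the apply_discount function is a stand-in for whatever feature you happen to be testing, not anything from a real project. The first test reflects a sympathetic stance; the rest reflect an aggressive one.

```python
import pytest

def apply_discount(price, percent):
    """Hypothetical feature under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Sympathetic stance: one simple, representative check. The goal is
# discovery (does the feature basically do its job?), not a bug hunt.
def test_discount_happy_path():
    assert apply_discount(100.00, 10) == 90.00

# Aggressive stance: hammer the same feature with boundaries and
# rounding pressure, with the explicit intent of making it fail.
@pytest.mark.parametrize("price, percent", [
    (100.00, 0),     # boundary: no discount
    (100.00, 100),   # boundary: full discount
    (0.00, 50),      # boundary: free item
    (0.01, 33),      # rounding pressure on a tiny price
])
def test_discount_boundaries(price, percent):
    result = apply_discount(price, percent)
    assert 0 <= result <= price

def test_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.00, 150)
```

The point is not the specific checks but the change in goal: the same feature, approached first to get acquainted with it, then to put it through its paces.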

Test Level

Along with the test stance, I like the idea of what I provisionally call the test level. The idea here is that lower levels are simpler, less powerful tests, while higher levels are more complex, more powerful tests. This breaks the testing up into a sort of hierarchy that can complement the idea of a “test stance” and provide a little more meat for the discussion.

An important element of this for me is recognizing that functional testing is an approach that works at different architectural levels: unit, integration, and system. Those levels really break down into test techniques: unit testing, integration testing, and system testing. Within that, you will have areas of focused testing, such as performance, usability, security, and so on. So the “level” I’m talking about here is the nature of your tests within those architectural levels.
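As a rough illustration of that architectural distinction, here is a sketch with entirely made-up stand-in components, showing the same functional concern exercised at the unit level and then at the integration level. A system-level version would drive the deployed application end to end, which doesn’t reduce well to a snippet.

```python
# Hypothetical components; stand-ins for real application code.
class TaxCalculator:
    def tax(self, amount):
        return round(amount * 0.08, 2)

class Invoice:
    def __init__(self, calculator):
        self.calculator = calculator
        self.lines = []

    def add_line(self, amount):
        self.lines.append(amount)

    def total(self):
        subtotal = sum(self.lines)
        return subtotal + self.calculator.tax(subtotal)

# Unit level: one component, exercised in isolation.
def test_tax_calculation_unit():
    assert TaxCalculator().tax(100.00) == 8.00

# Integration level: two components collaborating.
def test_invoice_total_integration():
    invoice = Invoice(TaxCalculator())
    invoice.add_line(50.00)
    invoice.add_line(50.00)
    assert invoice.total() == 108.00
```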

  • Level 0 (Smoke Testing): I think just about every tester knows this one. These are simple tests to show that a product is ready for independent exploratory and scripted testing. This is basically a sanity check. The idea here is that if these tests fail, further testing is unlikely to be a good use of time.
  • Level 1 (Capability Testing): Here you’re generally testing the capability of each function of the product. Each function should, at the very least, be capable of performing its task. Convoluted scenarios, challenging data conditions, and even many function interactions are generally avoided at this level. This is definitely part of the “sympathetic” stance but also serves as the basis for the “aggressive” stance.
  • Level 2 (Activity Testing): These are tests that examine the capability and basic reliability of each function. When I say “function” and “activity” here, think of those as discrete things that users can do with the application. Data coverage becomes of interest here, as do more diversified techniques, such as boundary analysis, error handling, permutations of data, and so on. Convoluted scenarios and function interactions are still of less value here because the emphasis is on the function or specific activity itself. Whereas in Level 1 you just wanted to see if the functions were basically capable, now you’re extending that notion of capability and also seeing if the functions are reliable. This is definitely part of an “aggressive” stance but also serves as the basis for the “diverse” stance.
  • Level 3 (Workflow Testing): These are tests that involve interactions and flow of control among various activities to form scenarios or user workflows. Here you want to bring the full brunt of your diversity to bear on the application, but you do so within the context of full application scenarios. So, for example, while previously you might have very meticulously checked the functionality of some forms and you might have also checked some reports, now you are checking that a user who performs particular activities, using particular combinations of data, can view a report and find information of a very specific sort.

Note here that, as with the stances, the level numbering does not imply a chronological order to when such testing tasks are done. In fact, you’ll probably step into various levels at various times. Rather, the level speaks to the nature and complexity of what you are doing. While workflow testing is really a combination of the activities (which in turn are broken out from the various capabilities), the fact is that you are considering the application as a whole rather than as a series of parts.
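To ground the four levels, here is one more sketch. The OrderApp class is a made-up stand-in for an application, and each test is meant to typify the nature of a level rather than prescribe how such tests must be written.

```python
import pytest

class OrderApp:
    """Hypothetical application under test."""
    def __init__(self):
        self.orders = {}
        self._next_id = 1

    def is_up(self):
        return True

    def create_order(self, quantity):
        if quantity < 1 or quantity > 999:
            raise ValueError("quantity out of range")
        order_id = self._next_id
        self._next_id += 1
        self.orders[order_id] = quantity
        return order_id

    def report(self):
        return {"order_count": len(self.orders),
                "total_items": sum(self.orders.values())}

@pytest.fixture
def app():
    return OrderApp()

# Level 0 (smoke): is further testing even worth the time?
def test_smoke_app_responds(app):
    assert app.is_up()

# Level 1 (capability): can each function do its basic job?
def test_capability_create_order(app):
    assert app.create_order(quantity=3) == 1

# Level 2 (activity): boundaries and error handling for one activity.
@pytest.mark.parametrize("quantity", [0, 1000])
def test_activity_rejects_bad_quantity(app, quantity):
    with pytest.raises(ValueError):
        app.create_order(quantity=quantity)

# Level 3 (workflow): activities chained into a user scenario;
# place orders, then find specific information in a report.
def test_workflow_orders_show_in_report(app):
    app.create_order(quantity=2)
    app.create_order(quantity=5)
    assert app.report() == {"order_count": 2, "total_items": 7}
```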

My thoughts are still percolating on stances and levels and the most effective way to communicate my thoughts on this. The above is what I’ve been able to come up with so far.


