Previously I talked about the distinction between positive and negative tests and why I felt such a distinction was not only unnecessary but also potentially harmful. This is still a topic that interests me because I run into testers who still like to make these distinctions.
This is often the case with more junior testers, who read these terms in books and then try to incorporate the idea into their test design. In that previous post, I said:
“In a future post, I want to go through a few more examples around what I’m talking about…”
This post will be those “few more examples,” based on some discussions I’ve had with testers who have engaged with me on this topic. Before getting into specific examples, I’ll state up front my perhaps simplistic view of testing and how it works for me. Stripping away all talk of techniques, approaches, processes, and tools, what I want to do comes down to the following:
- I want to show that the application does what it is supposed to do.
- I want to show that the application does something it is not supposed to do.
Note the wording of the second: I want to show that the application does something it is not supposed to do. I prefer this to cumbersome phrasings like “I want to show that an application does not do anything that it’s not supposed to do.”
Distinctions: By Example
One focus here is considering tests designed to make a system fail; another focus is considering tests that are designed to exercise functionality that deals with failure. There is a subtle shift in thinking there and it’s one I find some testers don’t make. Or, rather, they often do — but not consistently.
As an example, even if a user gives some invalid input on a particular form, I would be expecting a “positive” result, such as a helpful error message for the user. So, in testing that out, I try those invalid data conditions and I’m looking for situations where the application either gives the wrong error message or allows the invalid input, thus giving no error message at all.
I bring this up because a specific and data-entry based example definition for “negative testing” is often given by saying that negative testing “involves the intentional entry of incorrect data to determine if an application will recognize that the data is invalid and take appropriate action.” In other words, the system will not do something (accept data) that it should not (accept invalid data).
Then, by this logic, “positive testing” is defined as the process that “involves the entry of valid data into an application to determine if its processing results are correctly produced.”
Note the distinction here is between valid data and invalid data. That is what the “negative” and “positive” refer to, as opposed to conditions of the system, at least in this case. So “positive testing” will verify that valid entries are handled correctly and “negative testing” will verify that invalid entries are handled correctly.
To me, that’s just general testing and I don’t have to worry about making too many distinctions between “positive” and “negative” because what I’m testing for is an answer to the following question: are things handled correctly (whether valid or invalid)?
That’s not “positive testing”; that’s not “negative testing.” That’s effective testing.
Lack of Distinction: By Example
Here I’ll present some examples that testers have often presented to me in terms of justifying why a distinction between “negative” and “positive” tests is not only possible, but, in fact, crucial.
Example: Invalid Input in a Field
Assume you have a form that contains a password field that only allows six numeric characters. The so-called “negative tests” would be an input of anything other than six numeric characters; the expected result would be that those entries are not allowed.
My testing of this password field would be to test if the field accepts six numeric characters. I would also try to enter more than six. (Perhaps it allows this but warns the user. Or perhaps there is a bound put on the field and no more than six can be entered.) I would also try to enter alphabetic characters or symbols. I would also try to enter a combination of numeric and alphabetic to make sure that the field is parsed correctly.
Note that in all cases I’m basically trying to see what the application does given a certain input. Whether I call those “negative” or “positive” tests is, to me, largely irrelevant. I am going to test valid and invalid conditions.
Let’s say that the application didn’t (or at least shouldn’t) allow this password input: jeff123.
So I type that input in the field and click a putative “Submit” button. Now let’s say that the application goes on its merry way and processes the input. Here the application did something it was not supposed to do. It was supposed to return an error but did not. How does a positive or negative distinction come out of that? The heuristics I follow are:
- If an error does not appear that should, I have a failure.
- If an error does appear that should not, I have a failure.
Again, hand-wringing over whether my tests were “negative” or “positive” in nature really doesn’t matter to me, simply because if I’m doing effective testing I’m going to be covering both the things some testers traditionally call “negative tests” and the things they traditionally call “positive tests” anyway. If I’m not doing that, my testing is not effective.
Going back to my previous statements, how I think of it is quite simple in terms of my test design:
- I want to test that the application does what it is supposed to (i.e., “123456” is accepted for the password field).
- I want to test that the application does something it is not supposed to (i.e., “jeff123” is accepted for the password field).
With these tests in mind, the following logic holds:
- In the first case, if “123456” was not accepted, then I have found an error condition.
- In the second case, if “jeff123” was accepted, I have found an error condition.
With the first error condition (“123456”), the application “does not do what it was supposed to do” (i.e., accept the input). With the second error condition (“jeff123”), the application “does something it is not supposed to do” (i.e., accepts an invalid input).
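Those two heuristics, applied to the six-numeric-character password field, can be sketched as a minimal check. Here `submit_password` and `check` are hypothetical names, stand-ins for the real application and test harness:

```python
import re

def submit_password(candidate):
    # Hypothetical stand-in for the application under test: it should
    # return an error message for invalid input and None when a
    # six-numeric-character password is accepted.
    if re.fullmatch(r"[0-9]{6}", candidate):
        return None
    return "Password must be exactly six numeric characters."

def check(candidate, should_be_valid):
    # Apply both heuristics to one input; return any failures found.
    error = submit_password(candidate)
    failures = []
    if should_be_valid and error is not None:
        failures.append("an error appeared that should not have")
    if not should_be_valid and error is None:
        failures.append("an error did not appear that should have")
    return failures

# Neither the valid nor the invalid input should surface a failure here.
assert check("123456", True) == []
assert check("jeff123", False) == []
```

Note that `check` never needs to know whether it is running a “positive” or a “negative” test; it only needs to know whether the input is valid and whether the observed behavior matched that.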
Now, stop for a second. If you are an experienced tester, you might be saying: “Why is he beating this particular horse to death? This is all obvious.” Okay, but if it is, then again, you’ll note I could describe all of this without recourse to labeling my tests “positive” or “negative.” If you agree that what I describe here, in broad strokes of course, is effective testing, then that implies the “negative”/“positive” distinction does not need to come into it. Yet it’s a meme that still persists in the test industry.
Now, I do realize some might argue that the basis of a “negative test” is trying to do something that no user in their right mind would do — and yet, they just might, whether accidentally or maliciously. Certainly it’s hard to make predictions about every form of real-world use, but it’s almost certain that real users will find ways of using the system that were not originally considered. From a security perspective, it’s also worth considering those malicious users, who understand characteristic weaknesses and seek to exploit them.
Here again, though: if I’m doing tests like this, such as testing whether an exploit works, what I want to see is if the malicious user can actually get into our system. Put another way, I’m checking whether our system adequately protects its resources from such attacks. There is no need here for a distinction between “positive” and “negative” testing.
Example: No Input in a Field
Let’s say you’ve got a situation where a field should not allow anything to be entered in it at all. Let’s further say that you know some programmers think to block keyboard entry actions but not necessarily paste actions. So a “negative test” here, according to some testers, would be to try to paste characters into the field instead of typing them with the keyboard.
Well, my first response to this is a recommendation: the field should probably be disabled or marked read-only. That’s how you handle such a situation in programming, because cutting and pasting is operating system functionality. I should note that this applies when the field should not allow any entry at all by the user, such as a text field that only displays text for the user’s information, not modification. Another option is to make the field a label, which a user could not paste into anyway.
So from an effective testing standpoint, I’m first asking why an action that we want to disallow isn’t simply made just about impossible to perform. That’s using testing as a design activity, albeit in a very small way.
The above presumes a field where nothing will ever be entered. But it may be the case that the user is allowed to type in entries in a particular text field but not allowed to cut-and-paste. Some password fields are established in just this way. In this case, your tests would be to make sure that clipboard events are not recognized by the control in question.
So here’s a breakdown of the testing heuristics:
- If the control allows pasting even when it should not, you’ve found a problem.
- If the control does not allow pasting when it should, you’ve found a problem.
- If the control allows pasting when it should, you have no problem.
- If the control does not allow pasting and it should not, you have no problem.
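That set of heuristics collapses to a single comparison: the observed paste behavior either matches the intended behavior or it doesn’t. A minimal sketch, with both facts reduced to booleans:

```python
def paste_behavior_has_problem(allows_paste, should_allow_paste):
    # A problem exists exactly when observed behavior differs from intent.
    return allows_paste != should_allow_paste

# The four heuristics above, as a truth table:
assert paste_behavior_has_problem(True, False) is True    # allows when it should not
assert paste_behavior_has_problem(False, True) is True    # blocks when it should allow
assert paste_behavior_has_problem(True, True) is False    # allows when it should
assert paste_behavior_has_problem(False, False) is False  # blocks when it should block
```

Nothing in that comparison cares whether the test that produced the observation was labeled “negative” or “positive.”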
With that simple set of heuristics in mind, once again I’m not sure why it would make sense for me to call certain tests “negative” and others “positive.” I would rather just make the distinction between valid and invalid entries, as an example, or valid and invalid actions, or valid and invalid data conditions.
Example: Numeric Data Only
Below is an example situation that I was presented with by a tester during a training course.
Condition: User’s password must only be numeric.
Negative Test Case: User’s password cannot contain alphabetic characters.
Input: User enters ABC123 and clicks OK.
Expected Result: Error message/warning about password.
Negative Test Case PASS if: error message shows.
Negative Test Case FAIL if: password has been accepted by application.
The question was then asked of me if this was an accurate way to look at things. As I did then, let me take the above example apart a little. First, let’s consider that idea of the “Negative Test Case,” which states: “User’s password cannot contain alphabetic characters.”
My first statement here was: why is this considered a “negative” test case? Forget, for a moment, the definition of negative testing that most testers like to throw around. Why is the test case, as stated above, negative?
Now let’s consider the distinctions made about pass and fail. The “Negative Test Case” passes if “error message shows” and fails if “password has been accepted by application.”
Okay, so if the error message shows up, the application has done what it is supposed to. That would be, by the strict definitions, a “positive” test case. Thus the “negative test” is a “positive test,” which is why I argued in my previous post on this topic that the distinction is based partly on intent and partly on outcome.
If the invalid password has been accepted by the application, you have an instance of the application (1) doing something it should not [accepting the password] and (2) not doing something it should [throwing an error message]. So the same test case covered “negative” and “positive” conditions simply by being executed. Note, of course, that the above, as it is worded, is really a test case that decomposes into multiple test conditions. The breakdown is perhaps better thought of like this:
(Test Case) User’s password cannot contain alphabetic characters.
(Test Condition 1) User enters ABC123
(Test Condition 2) User enters ABCDEF
(Test Condition 3) User enters 123456
(Test Condition 4) User enters //$#**
(Test Condition 5) User enters 123$$$
There could be others, but what I’m showing here is the breakdown rather than going for thoroughness. And, again, just to really drill home my already stated opinion, note that none of these test conditions is really “negative” or “positive.” Some are invalid and others are valid in terms of the data they are trying to enter. That, to me, is a much better distinction and that is what I think testers should be concentrating on.
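A breakdown like that translates directly into a table-driven test keyed on data validity rather than on a “negative”/“positive” label. Here `accepts_password` is a hypothetical hook into the application, sketched for the numeric-only condition:

```python
import re

def accepts_password(candidate):
    # Hypothetical stand-in for the application: the stated condition
    # is that the password must be numeric only.
    return re.fullmatch(r"[0-9]+", candidate) is not None

# The five conditions from the breakdown, tagged valid/invalid by data.
conditions = [
    ("ABC123", False),  # invalid: mixed alphanumeric
    ("ABCDEF", False),  # invalid: alphabetic
    ("123456", True),   # valid: numeric only
    ("//$#**", False),  # invalid: symbols
    ("123$$$", False),  # invalid: numeric plus symbols
]

for candidate, should_accept in conditions:
    accepted = accepts_password(candidate)
    # A mismatch in either direction is a failure; no other label needed.
    assert accepted == should_accept, f"{candidate!r}: accepted={accepted}"
```

The table makes the real distinction visible at a glance: each row records valid or invalid data and the expected handling, which is all the test needs.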
I’ve also talked with junior testers who say that they are “able to write test cases to test that the requirements had been met” but then say “I have no clue as to what else I should test.” They also tell me that, at this point, they’re usually introduced to “negative testing” as a concept. That is where I would differ in approach were I training this person. If someone comes to me with something like this, what I bring up are the valid and invalid conditions that should be sought. In other words, look for ways that error conditions can be generated. Those are your risk points. Those types of conditions can happen with valid inputs and invalid inputs. In terms of how that satisfies requirements (assuming you have such), note some important points:
- A valid input being handled correctly is an instance of meeting requirements.
- A valid input that is not being handled correctly is an instance of not meeting the requirements.
- An invalid input being handled correctly is an instance of meeting requirements.
- An invalid input that is not handled correctly is an instance of not meeting the requirements.
So what I try to do with testers is get them to think more about the root issue: the various test and data conditions that act towards proving out a particular test case. I want testers to think of the various combinations and permutations that are possible in a given area of an application and then think about how they can write tests that will exercise those various combinations and permutations. For example, in the above four points, what does “handled correctly” mean? Do we know? If we don’t know, then testing valid or invalid inputs may be compromised from the start.
So … what?
The only point I can end with here is the one I have repeated: I can perform all of the examples I talked about here without worrying about different and potentially ambiguous interpretations of a “positive” or “negative” test. It is my contention that these terms should be banished from the vocabulary of testers who focus on effective and efficient testing.