Ask a tester this: “You are given a login form. How do you start testing?” For the most part, you’ll find the answer falls along two distinct lines of investigation. You’ll either be told “First, I would try to log in successfully” or you will be told “First, I will try typing in an invalid password, then an invalid username, and then I’ll try entering nothing.”
This is not a question of whether or not the first tester would have also done those tests. They likely would have. It’s rather a question of what they started with: the success scenario or failure scenarios. So let’s talk about that.
Why does where they start matter? It may not. What I do know is that I have always opted for the second response. If someone presented me with that (admittedly brief) context, I would generally go into a litany of things I would test that I expect not to work and that I expect the application to handle gracefully.
In my second post on how the distinction between positive and negative testing makes no sense to me, I said this:
I want to show that the application does something it is not supposed to do.
So in the case of the login example, I’m specifically seeing if the application does something it should not: let me log in even though I’m providing invalid information. I’m also seeing if the application, when it correctly refuses to let me log in, fails to do something it should: inform me of the issue in a reliable way.
Building Up Trust …
But why do I focus on that first? To my mind, at the conclusion of those tests you will have at least some impression of how error handling and user messaging work in the software you are testing. Error handling and communication with users are often badly designed and implemented, and often poorly tested by programmers. Business analysts often forget this component as well and assume the developers will put in a message to the user that “makes sense.”
I should note that from a user perspective, no doubt you’d try to just log in with a valid account and, if that worked, you’d be happy. Sometimes that’s also exactly what a developer would try. Developers often assume the user will be rational and simply use the software as it was intended, without making some “dumb mistake.” But that’s the difference between thinking purely as a user would and thinking as a tester would. A tester is looking for the kinds of actions that will give them the most information in the shortest possible time while also giving them an idea of how much they can trust the application. Put another way: testers are looking for how the application will remove confidence and cause concern. This is the distinction between falsification and confirmation.
Seeing that error handling or user messaging is done poorly will tell you a lot. Seeing that a successful action occurs will tell you much less.
So note the focus here: it’s not so much that you are testing for errors per se; rather, you are testing to see whether you trust the application. Certainly, being able to log in successfully tells me something as well. But it tells me less than seeing how the application handles things when everything isn’t going according to plan.
In the book Beautiful Testing: Leading Professionals Reveal How They Improve Software, Chris McMahon says this (in the chapter “Software Development is a Creative Process”):
“Good software testers do not validate function. Their tools do that for them. Good software testers make aesthetic judgments about the suitability of the software for various purposes, exactly as does the editor of a book or the leader of a band or the director of a play. Good software testers supply critical information about value to people who care about that value, in a role similar to that of a book reviewer or a movie critic.”
Yup. And if you look at it fundamentally, what those various people are ultimately responding to is broken trust, as in “I trusted you to tell a good story” or “I trusted you to present a cohesive plot without glaring holes” or “I expected a performance where the people clearly knew how to play their instruments.”
The Testing Perspective
Let’s go with the idea that the testing perspective is a way of looking at any development product and questioning its validity. The person examining work products from this perspective conducts a thorough investigation of the software and all its representations to identify issues. That search is guided by systematic thinking, by intuitive insight, and by some good ol’ “gut feel.” Can this notion be refined a bit? Well, the testing perspective for finding issues has been delineated as follows:
- Skeptical (wants proof of quality)
- Objective (makes no assumptions)
- Thorough (doesn’t miss important areas)
- Systematic (searches are reproducible)
That list comes from the book A Practical Guide to Testing Object-Oriented Software by John McGregor and David Sykes. Their view is that anyone who adopts this perspective is a tester in mindset. Maybe. I think that list is a bit simplistic.
I believe that the goal is to foster a fascination with skepticism. I tell testers they should rely on skepticism to keep them interested but intellectually honest at the same time. That requires them to be as objective as possible (make the fewest possible assumptions, given what they know) and as thorough as makes sense, given that “important areas” covers a lot of ground. If I have zero trust, everything is an important area to me because I have no idea how things are going to work (or not work) or be communicated to the user.
Skepticism is not some theory of knowledge that would thus provide a strategy for “proof of quality.” Rather, skepticism is a technique for determining when something is not true, and thus for indicating when our knowledge or expectations are false. Skepticism is something like an attitude or an approach in that you leave open the possibility that much of what you hear is probably inaccurate to some degree; it’s just a matter of figuring out how, and whether it matters. In the testing world you add that much of what you see is probably not the best it could be, although it might just be good enough.
Emphasize Trust in Your Test Strategy
So what this comes down to is a strategy coupled to your framework for thinking.
Your test strategy is the distribution of the test effort over the combination of functional characteristics and functional areas, coupled with the known sensitivities (behavior, data, interface, context) of the technology stack you are working with. That distribution is constructed so as to find the most important problems as early as possible and at the lowest cost. Being able to log in tells me (importantly!) that logging in works with those credentials. Not being able to log in shows me a localized important problem. Not being able to log in and not being given effective messaging could be pointing to a systemic important problem.
As a tester you have to balance getting as much information as you can with quickly building up some idea of how much you can trust and how much you can’t. The amount of trust you have will absolutely guide many of your decisions about your test strategy.
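As a purely illustrative sketch of that idea, here is one toy way to picture “distribution of test effort”: weight each functional area by how little you currently trust it and by how sensitive it is, then spend the early effort where that weight is highest. The area names and numbers below are hypothetical, not drawn from any real project.

```python
# Toy model: each functional area gets a current trust level (0..1) and a
# sensitivity level (0..1). All values here are made up for illustration.
AREAS = {
    "login - error messaging": (0.2, 0.9),
    "login - happy path":      (0.8, 0.9),
    "password reset":          (0.4, 0.7),
    "profile editing":         (0.6, 0.3),
}

def effort_weight(trust: float, sensitivity: float) -> float:
    # Less trust and more sensitivity means more of the early test effort.
    return (1.0 - trust) * sensitivity

# Rank areas by where the effort should go first.
for area in sorted(AREAS, key=lambda a: effort_weight(*AREAS[a]), reverse=True):
    print(f"{area}: weight {effort_weight(*AREAS[area]):.2f}")
```

The only point of the sketch is that trust, or the lack of it, is an explicit input to where the effort goes rather than an afterthought.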
One reader, Aaron, responded by addressing the question at the beginning of the article (“You are given a login form. How do you start testing?”) like so:
I DO start with a simple confirmatory test that I expect to work. Why? Because if a simple confirmatory test demonstrates that the application doesn’t work, it completely invalidates any other ‘negative’ testing I might decide to do. I think it’s important to establish a benchmark first so you know what successful behaviour looks like, and you’ve demonstrated that the product can produce the successful behaviour, so that you know that any deviation from that behaviour is a result of the variables you are changing, vs just the default behaviour of the program.
Good points, Aaron. The problem is, as you say, what does successful behavior look like? Login is an interesting case because testers sometimes assume that if we can log in, the application must work. But given the limited context I provided, how would I know that just by the act of logging in? What if I shouldn’t have been allowed to log in? What if the system isn’t even checking for valid names and just logs me in no matter what I enter? What if I should have been given a confirmatory message saying I was logged in? I have not built up my trust of the application at all yet, and the best way to build up trust in anything is to see it when it’s being fed “bad” things. That’s a truism from engineering.
Keep in mind this example is abstracted away from “Well, I could ask a developer or check the requirements.” The focus here was simply on: you’re given a login form, what’s the first thing you do? For me, I’ve often found that starting with the things I know should fail, and seeing how the system responds, tells me a lot more than trying something that everyone tells me is (mostly) guaranteed to succeed. Note that my points are in line with yours: I try something I expect to work. Specifically, if I try to log in with invalid credentials, then I expect the application to work by not allowing me to log in and to respond with a message … and to respond with an appropriate message given the situation.
The benchmark I’m establishing here is not successful behavior but rather how much I trust based on the behavior I observe. If I click the “Login” button on a form having filled in no details, and the system lets me in, that tells me something. It tells me the check isn’t working and that there’s a huge security problem. If the system doesn’t let me log in but doesn’t tell me anything about why, that tells me something as well. User-friendly messages may not have been a priority. And if we don’t even provide simple messages for something like “Invalid username”, how likely are we to provide messages for very specific conditions, such as “Account is invalid” or “Database is not available”?
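To make those checks concrete, here is a minimal sketch of what falsification-first tests against a login form might look like. The `attempt_login` function and `LoginResult` shape are hypothetical stand-ins, wired here to a trivial fake, for whatever interface the real application exposes; the point is the shape of the checks, not the harness.

```python
from dataclasses import dataclass

@dataclass
class LoginResult:
    logged_in: bool
    message: str

def attempt_login(username: str, password: str) -> LoginResult:
    # Stand-in for the application under test: one known-good account and a
    # generic message for everything else. In practice this would be replaced
    # by a real driver (an HTTP client, browser automation, etc.).
    if username == "alice" and password == "correct-horse":
        return LoginResult(True, "Welcome back")
    return LoginResult(False, "Invalid username or password")

def test_empty_credentials_are_rejected():
    # Falsification check: the application must NOT do something it shouldn't.
    # Letting us in with no details at all is a huge security problem.
    result = attempt_login("", "")
    assert not result.logged_in

def test_invalid_credentials_produce_a_useful_message():
    # Falsification check: being kept out is not enough; the user also has to
    # be told, in some reasonably specific way, why they were kept out.
    result = attempt_login("no_such_user", "wrong_password")
    assert not result.logged_in
    assert result.message != ""
    assert "invalid" in result.message.lower()
```

Run with pytest, both tests pass against the fake; against a real application, either one failing immediately tells you which kind of trust you cannot extend yet.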
Now, all this being said, I agree: in reality there is a set of tests that we apply against an application, with both valid and invalid inputs and outputs. My focus here is simply on where people’s instincts are. The instinct of the more scientific testers I’ve run into is that of falsification rather than confirmation. That doesn’t mean testers who go the other route are bad, of course, and, in reality, we’re all doing both. Speaking anecdotally, however, I’ve found much more success with the testers who opt for the invalid scenarios first rather than the valid scenarios. I’m not entirely sure why that is yet.
To which came this reply: You say you’re not entirely sure why your anecdotal evidence is as it is. My guess is you have your answer: anytime you are asking for trust, you have to pick multiple data points. A single data point, like “successful login”, is not enough. That’s trust that the login works. But you’re asking for trust beyond the application. You’re asking for trust that the developers (or the business) thought about invalid scenarios. They often don’t. You’re asking for trust that the developers put in messages or responses tailored for those invalid scenarios. They often don’t. You’re asking for trust that the developers used effective communication skills in the verbiage of those messages. They often don’t.
You are combining multiple tests there: functional tests (does the login form even work), data integrity tests (does the data get recognized), security tests (can I get in even if I shouldn’t), and usability tests (does the application provide good messaging). You are looking for a disconfirming instance in each case, hence your falsification. All it takes is one confirming instance of these tests failing to say ‘no, it’s not working as it should’.
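As a final, purely illustrative sketch, those categories could be expressed as a small table of falsification probes against the same hypothetical `attempt_login` interface from the earlier sketch, each probe tagged with the kind of trust it is trying to break. The specific inputs are assumptions for illustration, not a recommended checklist.

```python
import pytest

# Reuses the hypothetical attempt_login from the earlier sketch. Each row is
# one falsification probe: (kind of trust being tested, username, password).
# A single failing row is enough to say "no, it's not working as it should."
PROBES = [
    ("security",   "",             ""),                # empty form must not get in
    ("security",   "' OR '1'='1",  "anything"),        # injection-shaped input
    ("functional", "no_such_user", "wrong_password"),  # plainly invalid account
    ("data",       "a" * 10_000,   "correct-horse"),   # absurdly long username
]

@pytest.mark.parametrize("category,username,password", PROBES)
def test_invalid_attempts_are_rejected_with_a_message(category, username, password):
    result = attempt_login(username, password)
    # Functional, security, and data angle: none of these attempts should get in.
    assert not result.logged_in, f"{category}: an invalid attempt was let through"
    # Usability angle: every rejection should come with some explanation.
    assert result.message, f"{category}: the rejection gave the user no information"
```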