Ask a tester this: “You are given a login form. How do you start testing?” For the most part, you’ll find the answer falls along two distinct lines of investigation. You’ll either be told “First, I would try to log in successfully” or you will be told “First, I will try typing in an invalid password, then an invalid username, and then I’ll try entering nothing.”
This is not a question of whether or not the first tester would have also done those tests. They likely would have. It’s rather a question of what they started with: the success scenario or failure scenarios. So let’s talk about that.
Why does where they start matter? It may not. What I do know is that I have always opted for the second response. If someone presented me with that, admittedly brief, context I would generally go into a litany of things I would test that I expect not to work and that I expect the application to handle gracefully.
I want to show that the application does something it is not supposed to do.
So in the case of the login example, I’m specifically seeing if the application does something (let me log in) that it should not (because I am providing invalid information). I’m also seeing whether the application, when it correctly refuses to let me log in, fails to do something it should (inform me of the issue in a reliable way).
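The failure-first checks described above can be sketched as a handful of tests. This is a minimal illustration, not anyone’s actual implementation: `login(username, password)` here is a hypothetical stand-in for the system under test, returning an `(ok, message)` pair where `message` is what the user would see.

```python
# Hypothetical system under test: returns (ok, user_facing_message).
VALID = {"alice": "s3cret"}

def login(username, password):
    if not username or not password:
        return False, "Username and password are required."
    if VALID.get(username) != password:
        return False, "Invalid username or password."
    return True, "Welcome!"

def test_failure_scenarios():
    # Invalid password: the application must refuse AND explain.
    ok, msg = login("alice", "wrong")
    assert not ok and msg, "expected a refusal with a message"

    # Invalid username: same refusal, same non-revealing message,
    # so an attacker can't tell which field was wrong.
    ok, msg = login("mallory", "s3cret")
    assert not ok and msg

    # Entering nothing at all.
    ok, msg = login("", "")
    assert not ok and msg

def test_success_scenario():
    # The confirmation check comes after the falsification checks.
    ok, msg = login("alice", "s3cret")
    assert ok

test_failure_scenarios()
test_success_scenario()
```

Note the ordering: the failure tests run first because they reveal how the application refuses and communicates, which is where the trust-building information lives.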
Building Up Trust
But why do I focus on that first? To my mind, you will, at the conclusion of those tests, get at least some impression of how error handling or user messaging works in the software you are testing. Error handling and communication with users are often badly designed and implemented, and also badly tested by programmers. Business analysts often forget this component as well and assume the developers will put in a message to the user that “makes sense.”
I should note that from a user perspective, no doubt you’d try to just log in with a valid account and, if that worked, you would be happy. Sometimes that’s also exactly what a developer would try. They often assume the user will be rational and simply use the software as intended, without making some “dumb mistake.” But that’s the difference between thinking purely as a user and thinking as a tester. A tester looks for the actions that will yield the most information in the shortest possible time while also giving them an idea of how much they can trust the application. Put another way: testers are looking for how the application will remove confidence and cause concern. This is the distinction between falsification and confirmation.
Seeing that error handling or user messaging is improperly handled will tell you a lot of things. Seeing that a successful action occurs will tell you much less.
So note the focus here: it’s not so much that you are testing for errors per se; rather you are testing to see if you trust the application. Certainly being able to successfully log in tells me something as well. But it tells me less than seeing how the application handles things when everything isn’t going according to plan.
In the book Beautiful Testing: Leading Professionals Reveal How They Improve Software, Chris McMahon says this (in the chapter “Software Development is a Creative Process”):
“Good software testers do not validate function. Their tools do that for them. Good software testers make aesthetic judgments about the suitability of the software for various purposes, exactly as does the editor of a book or the leader of a band or the director of a play. Good software testers supply critical information about value to people who care about that value, in a role similar to that of a book reviewer or a movie critic.”
Yup. And if you look at what those various people are fundamentally responding to, it is broken trust, as in “I trusted you to tell a good story” or “I trusted you to deliver a cohesive plot without glaring plot holes” or “I expected a performance where the people clearly knew how to play their instruments.”
The Testing Perspective
Let’s go with the idea that the testing perspective is a way of looking at any development product and questioning its validity. The person examining work products from this perspective utilizes a thorough investigation of the software and all its representations to identify issues. The search for issues is guided by both systematic thinking and some intuitive insights as well as some good ol’ “gut feels.” Can this notion be refined a bit? Well, the testing perspective for finding issues has been delineated as follows:
- Skeptical (wants proof of quality)
- Objective (makes no assumptions)
- Thorough (doesn’t miss important areas)
- Systematic (searches are reproducible)
That list comes from the book A Practical Guide to Testing Object-Oriented Software by John McGregor and David Sykes. Their view is that anyone who adopts this perspective is a tester in mindset. Maybe. I think that list is a bit simplistic.
I believe that the goal is to foster a fascination with skepticism. I tell testers they should rely on skepticism to keep them interested but intellectually honest at the same time. That requires them to be as objective as possible (make the fewest possible assumptions, given what they know) and as thorough as makes sense, given that “important areas” covers a lot of ground. If I have zero trust, everything is an important area to me because I have no idea how things are going to work (or not work) or be communicated to the user.
Skepticism is not some theory of knowledge that would thus provide a strategy for “proof of quality.” Rather, skepticism is a technique used to determine when something is not true, thus indicating when our knowledge or expectations are false. Skepticism is something like an attitude or an approach: you leave open the possibility that much of what you hear is probably inaccurate to some degree; it’s just a matter of figuring out how, and whether it matters. In the testing world you add that much of what you see is probably not the best it could be, although it might just be good enough.
Emphasize Trust in Your Test Strategy
So what this comes down to is a strategy coupled to your framework for thinking.
Your test strategy is the distribution of the test effort over the combination of functional characteristics and functional areas, coupled with the known sensitivities (behavior, data, interface, context) of the technology stack you are working with. That distribution is constructed so as to find the most important problems as early as possible and at the lowest cost. Being able to log in tells me (importantly!) that logging in works with that login. Not being able to log in shows me a localized important problem. Not being able to log in and not being given effective messaging could be pointing to a systemic important problem.
As a tester you have to balance getting as much information as you can with quickly building up some idea of how much you can trust and how much you can’t. The amount of trust you have will absolutely guide many of your decisions about your test strategy.