In my post on
porting development lessons to testing, I mentioned getting into what makes the testing role distinct from the development role. So let’s talk about this.
Since I have devoted my career to testing and quality assurance, as well as the promotion of those disciplines, I want people I work with to say “I’m glad you’re here because you are doing a job I can’t do” versus “I’m glad you’re here because you’re doing a job I could easily do but don’t want to.”
The Sharp Slope of Testing
If you’ve ever worked in game development, you know there is the so-called “magic formula” of a great game — easy to learn, difficult to master. This can be said about testing as well. James Whittaker brought up the paradox of testing when he said “The price of entry is low, but the road to mastery is deceptively difficult.” In fact, as James says in his book
Exploratory Software Testing:
“Most anyone can learn to be decent at [testing]. Apply even a little common sense to input selection and you will find bugs. Testing at this level is real fishing-in-a-barrel activity, enough to make anyone feel smart. But after the approach, the path steepens sharply, and testing knowledge becomes more arcane.”
It’s humbling for those of us in the discipline, but it’s important to keep in mind that even the most rudimentary knowledge of an application will let you find bugs. For that matter, a tester can stumble across bugs without intending to; after all, our users do it all the time. What this shows us is that you don’t necessarily need expertise to find bugs. A particularly buggy application can make a tester look good even if that tester has little skill.
Yet we should be heartened by the fact that the path of effective and efficient testing has a very sharp slope. After all …
- It’s not just a matter of being able to catch bugs. It’s a matter of achieving the most effective coverage with the minimum number of efficient tests that provide the highest yield of bug-finding capability.
- It’s not just a matter of being able to leverage automation. It’s a matter of choosing the right abstraction for automation that takes test thinking as input and provides checking as output.
- It’s not just about choosing tools. It’s about choosing tools that support testing in such a way that the cost of testing is lowered over time.
- It’s not just a matter of being aware of various test techniques. It’s a matter of choosing the right one from your toolbox, where the “right one” means the one that has the best chance of finding the most important problems quickly enough while expending the least time and effort.
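As one small, concrete illustration of that last point, here is a minimal sketch of boundary-value analysis, one technique from that toolbox. The `valid_age` function is a hypothetical example, not anything from this post:

```python
def valid_age(age: int) -> bool:
    """Hypothetical rule under test: accept ages in the inclusive range 18..65."""
    return 18 <= age <= 65

def boundary_values(low: int, high: int) -> list[int]:
    """For an inclusive range, test just below, at, and just above each
    boundary, since that is where off-by-one defects tend to cluster."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

if __name__ == "__main__":
    # Six targeted tests instead of probing the whole input space.
    for age in boundary_values(18, 65):
        print(age, valid_age(age))
```

The point is the selection strategy, not the trivial function: a handful of well-chosen inputs stands in for a much larger input space.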
So Let’s Get to the Uniqueness
I think the uniqueness of testing, as a discipline, goes beyond the pursuits I just mentioned, important as they may be. After all, developers, in their own context, are often doing the exact same things. Their artifacts and intentions may differ but I feel this is a matter of degree, not kind.
So what is the uniqueness? And, here, of course I’m speaking solely of my own views. I’m not purporting to speak for the discipline as a whole and I’m certainly not looking for agreement. This is the context I perceive and operate within.
Representations
I’m stuck on this idea that the key distinction has to do with when the representations we create have to reach across boundaries, by which I mean both people boundaries and technical boundaries.
Developers deal with code. Even their tests are essentially code. It doesn’t have to matter how much or how little of that code is understandable to a non-developer. And there does not have to be a one-to-one correspondence between what a developer writes and what a user experiences. Very small bits of code can provide outsized features. Incredibly large bits of code can be needed for relatively small features.
But a tester deals with multiple representations: requirements, tests, manual checks, automated checks.
Unlike a developer, we need a certain level of consistency between those representations. And those representations serve us not just for right now — for the feature being built — but as a way to reason about the system as a whole later.
Those representations have to take the form of models (say, paths of execution or workflows) that apply techniques distinct from the code itself: boundary conditions, orthogonal paths, all-pairs conditions, and so on.
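Since all-pairs is one of the techniques named above, here is a minimal sketch of the common greedy approach to pairwise test selection. The parameter set is hypothetical and the code is an illustration of the general technique, not anything from this post:

```python
from itertools import combinations, product

def pairwise_suite(params: dict) -> list[dict]:
    """Greedy all-pairs reduction: starting from the full cartesian
    product of parameter values, repeatedly pick the test covering the
    most not-yet-covered value pairs, until every pair is covered."""
    names = list(params)
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    # Every pair of values across every pair of parameters.
    uncovered = {(a, va, b, vb)
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: sum(
            (a, t[a], b, t[b]) in uncovered
            for a, b in combinations(names, 2)))
        uncovered -= {(a, best[a], b, best[b])
                      for a, b in combinations(names, 2)}
        suite.append(best)
    return suite

if __name__ == "__main__":
    # Hypothetical parameters: 2 x 3 x 2 = 12 exhaustive combinations,
    # but far fewer tests are needed to cover every pair of values.
    params = {"browser": ["chrome", "firefox"],
              "os": ["linux", "mac", "win"],
              "locale": ["en", "de"]}
    for test in pairwise_suite(params):
        print(test)
```

This is the kind of model that lives apart from the product code: it reasons about the input space itself rather than any particular implementation.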
Distill the Abstract
So a key idea here is to communicate abstract concepts clearly and concisely. There is a determination to find things out — to distill experience, to depict reality — that is as much artistic vision as scientific sensibility. Perhaps this harkens back to my thoughts around the
art and science of testing. By way of example, I tried to practice this idea in my thoughts on how
testing helps us understand physics. This is part of engaging in the idea of
“testing is like…” exercises.
In my view, and in my experience, the most engaging and engaged testers are the ones who have a desire and driving need to present material like that, particularly across disciplines. This isn’t to say you can’t be a good tester if you don’t do that. It’s just to say that the most distinctive testers are the ones who can and do.
Testing as a discipline puts a focus on using narratives as reconstructions of the processes that produced whatever structure we’re considering. These narratives vary in their purpose, but not their method. The method is to ask: “How could this have happened?” We then try to answer the question in such a way as to achieve the closest possible fit between representation and reality.
Sorting out the difference between how things happen and how things happened involves more than just changing a verb tense. It’s an important part of what’s involved in achieving that closer fit between representation and reality. And there’s a balancing act here because if we oversimplify causes we run the risk of subverting our narrative and we end up detaching representation from reality.
Levels of Thinking
We have to tie our narrative as closely as possible to whatever evidence has persisted. That’s an inductive process.
But we have no way of knowing, until we begin looking for evidence with the purposes of our narrative in mind, how much of it is going to be relevant. That’s a deductive calculation.
Composing the narrative will then produce places where more research on our part is needed, and we’re back to induction again. But that new evidence will still have to fit within the modified narrative, and so we’re back to deduction.
Keeping in mind that
inductive means reasoning from the particular to the general while
deductive means reasoning from the general to the particular, these are examples of what I claimed testers should be really good at:
shifting between polarities.
Further, I believe this leads us to adductive reasoning. Note:
not abductive but
adductive; a term I picked up from reading about the Black Swan theory.
Adductive reasoning means adducing answers to specific questions so that a satisfactory explanatory “fit” is obtained. A strategy for explanation, in this context, is an interactive structure of workable questions and the factual statements that are adduced to answer them. This strategy for explanation can also be called an explanation model, and modeling is a crucial skill for testers.
This leads into various aspects of what we consider to be truth.
Truth and Facts
Often, as testers, we are clarifying the circumstances in which the predictable becomes unpredictable. We are often showing that patterns exist, even when there appear to be none at all. Along those same lines, we are often showing that other seeming patterns are nothing more than epiphenomena driven by our human desire to impose pattern and meaning on just about anything. And, as testers, we often have to demonstrate, perhaps most crucially, that the patterns that are there can emerge spontaneously, without anyone necessarily having put them there.
Aligning this with my previous thoughts in this post, we encode all of this as representations. And the encoding part is important because we have to make sure that our sole, or even primary, recourse is not memory but rather a set of established facts. And it’s important for testers not to confuse facts with truth.
The movie “Memento” had a great quote from the lead character that could be applied to testers:
“They collect facts, they make notes and they draw conclusions. Facts, not memory. That’s how you investigate. I know. It’s what I used to do. Look, memory can change the shape of a room. It can change the color of a car and memories can be distorted. They’re just an interpretation. They’re not a record. They’re irrelevant if you have the facts.”
And yet, it’s not
just facts.
Facts and truth can be separate things. As a tester, I’m more interested in truth than I am in fact. (Side note: this means, as a tester, I must be comfortable with artifice when it serves my needs.) After we do our experiments, our weapons are often words. Even more important than words are images since humans are visual creatures. But my words often have to be the image; they have to provide pictures that demonstrate a scientific and an artistic representation of reality. Both representations are required because facts are flexible, and our images can point to a truth even if they don’t precisely portray that truth.
This is the intersection of testing with development, with business analysis, and with product ownership. It is in that intersection where the uniqueness of testing lies because, unlike those other disciplines, I do believe testing requires a flexible movement between all while still retaining its core function and thus core identity.
These are ideas I plan to continue exploring, along with improving my ability to articulate them.