Earlier I talked about describing my own role. I think what I said there is almost interesting. Interesting, for me that is. But I often find testers struggling to frame their value beyond “I find bugs” or “I help mitigate risk.” So let’s dig into this a little bit.
One of the questions I’ll often ask testers that I’m interviewing is: “What makes testing a really interesting job?”
And, I have to admit, it’s mildly dispiriting the kinds of answers I tend to get. Now, to be fair, it’s a question asking for an opinion, so it’s not like I can claim anyone’s answer is wrong any more than I could claim my response would be the “right” one. So where does the dispiriting aspect come from? It’s the same thing that a scientist — in any discipline — tends to feel when people don’t seem to understand how nuanced what they do truly is.
But that does assume, of course, that what they do is truly nuanced. After all, maybe they just do it “as a job” and the wider implications of that don’t matter as much to them. But here I’m talking to the tester who has chosen this as a career path — or is at least willing to do so — but struggles to really define their role as anything other than Bug Hunter or Risk Finder or even just Automater.
Escape the Descriptive Malaise!
Even a simple reframing can help. For example, I tell testers: “If you are good, part of what you do is spot ambiguity where it hides, ferret out inconsistency, and drag outright contradiction into the daylight.”
And this happens not just with some software product you are exploring. This happens with documents, with conversation, with anything wherein there is opacity about what we do or what we say when we, working as a team, build products or services. In fact, if you get a few drinks in me and/or catch me in a philosophical mood, I’ll frame what I do as this:
“I help reduce the epistemological opaqueness and ontological confusion that stems from the cognitive biases that all of us have when we try to build anything even remotely complex.”
But let’s face it: that’s a little difficult to fit on a business card. And as an elevator pitch goes, it might not be the most scintillating of conversation openers. But it is an extremely accurate representation and description of what I believe I do.
That’s what you do? Really?
I’ll grant what I just said can sound like a bit of artifice wrapped around a nice package of sophistry. But let’s break it down a little bit. As testers we do have to deal, front and center, with epistemology (how we know what we think we know), ontology (what actually is the case, as opposed to what people merely think is the case), and ontogeny (how something develops from one state to another and what it maintains or loses along the way).
And while you may not use those exact words, they should get you thinking about ways to frame what you do as contrasted with the much narrower things people sometimes think you do.
Frame Testing More Expansively
It will come as little news, and less shock, to most people reading this that testers often have to fight for their identity, even on so-called “cross-functional teams.” So now let’s jump into another area that I encourage testers to think about on that identity quest.
We, as specialist testers, understand and explore the cognitive biases that all of us humans have. And we do this because these biases are the causes of our errors. They are why we make mistakes; they are why we rely on faulty evidence; they are why we jump to conclusions; they are why we are subject to logical fallacies that undermine us even when we are giving our best efforts.
This is helping people — including ourselves! — to build up a map of the logic of why things go wrong. And once we all have a map, we can begin to understand how to navigate the terrain. At the very least, we can put up warning signs around the parts of the terrain that are dangerous.
My standard broken record phrasing here: testing is a design activity, not just an execution activity. But this is important. Testing doesn’t just happen after decisions have been made. Testing is what leads to certain decisions over others being made. Testing is what puts pressure on decisions that get carried over into design and ultimately into implementation.
It may not always be called testing, but I’m here to assure you that it very much is that.
So here’s another thing to consider as you tell people what you do. Testing — our discipline! — becomes an effective and efficient feedback system when it puts pressure on design at various levels of abstraction and does so via exploration that guides experimentation. Experimentation and exploration are two of the oldest means by which we, as human beings, create models. And models are one of the core means by which we understand and incorporate feedback.
All of that is what we, as test specialists, do. We don’t just execute tests. We understand the wider context in which testing as a discipline fits in. I talked before about test intersections so now let’s consider what I just said in the context of a specific example.
The Intersection of BDD and Testing
Testing, as a discipline, is about us recognizing the good in concepts like BDD, while also recognizing that even its creators are wrong when they say BDD is not about testing.
Now, for all I know, you may very much agree with that statement. If so, bear with me for a second. When testing is framed as, partly, a design activity and when that activity puts pressure on design, this means that at a certain level of approximation, requirements and tests merge into the same type of communication mechanism. That is one way that the collaboration around BDD has evolved: as living documentation and executable specifications.
But, at a slightly deeper level, this is about abstractions and these are things that testers have to be very good at working with and managing. In fact, one of the biggest challenges testers face is choosing, and effectively using, better rather than worse abstractions. If there’s one area that puts us into our own particular smoking crater of technical debt, it’s choosing the wrong abstractions. But the abstraction discussion comes with an understanding of how our different representations of testing are structured and organized.
Does it? Really? I think so. Consider how we might choose an effective abstraction for our test expression. One powerful question is: should our human tests and automated tests be written in the same expression layer? And, if so, are tools like Cucumber or SpecFlow too heavy an abstraction? And, if so, is that possibly due to us writing imperative rather than declarative scenarios?
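To make that imperative-versus-declarative distinction concrete, here is a hypothetical login scenario written both ways. (The feature, the field names, and the wording are all invented for illustration; they don’t come from any particular product.)

```gherkin
# Imperative: scripted at the level of UI mechanics; brittle and noisy.
Scenario: Successful login (imperative)
  Given I am on the "/login" page
  When I fill in "username" with "alice"
  And I fill in "password" with "s3cret"
  And I click the "Sign In" button
  Then I should see the text "Welcome, alice"

# Declarative: expressed in the language of the business rule.
Scenario: Successful login (declarative)
  Given alice is a registered user
  When she logs in with valid credentials
  Then she should see her dashboard
```

The declarative version tends to survive interface changes because the “how” lives down in the step definitions, not up in the specification, which is exactly where the abstraction question bites.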
As John Ferguson Smart eloquently argues, it’s important not to let automation sabotage BDD. But it’s equally important to realize that the value a lot of people see in BDD approaches is how collaboration is able to be encoded in a format that can be read by humans and automation.
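As a rough sketch of how that dual readability works mechanically, here is a toy step registry in plain Python. Everything here — the login domain, the step patterns, the function names — is invented for illustration; real tools like Cucumber or SpecFlow implement this same idea at scale, with far more robustness.

```python
import re

# Registry mapping regex patterns to Python functions. This is the core
# trick behind BDD tooling: human-readable lines dispatch to automation.
STEPS = []

def step(pattern):
    """Register a function to handle scenario lines matching `pattern`."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

# A trivial "system under test": an in-memory account store.
accounts = {"alice": {"logged_in": False}}

@step(r"a registered user named (\w+)")
def given_user(ctx, name):
    ctx["user"] = name

@step(r"(?:he|she|they) logs? in")
def when_login(ctx):
    accounts[ctx["user"]]["logged_in"] = True

@step(r"(?:he|she|they) should see the dashboard")
def then_dashboard(ctx):
    assert accounts[ctx["user"]]["logged_in"]

def run_scenario(lines):
    """Execute each scenario line by dispatching to a registered step."""
    ctx = {}
    for line in lines:
        # Strip the Gherkin keyword (Given/When/Then/And) before matching.
        body = re.sub(r"^\s*(Given|When|Then|And)\s+", "", line)
        for pattern, fn in STEPS:
            match = pattern.fullmatch(body)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise ValueError(f"No step matches: {line!r}")

run_scenario([
    "Given a registered user named alice",
    "When she logs in",
    "Then she should see the dashboard",
])
print("scenario passed")
```

The point of the sketch is not the mechanics but the dual audience: the scenario lines read as a specification to a human and as an executable test to the machine, which is the collaboration value people attribute to BDD.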
The Pivot Points of Testing
What I hope you can see here is that testing, as a discipline and activity, starts to get framed around a few pivot points:
- abstraction, expression
- structured, organized, discovered, referenced
And this provides a nice mantra for how testers must add value. We deal with and provide:
- Viable Abstractions, Effective Structure, Efficient Organization, Elegant Representations
And that first one is critical: finding the right abstractions. When you do that, you begin to figure out your structural and organizational principles. All of this is helping you decide your representations. And all of this is what provides a grouping strategy and an explanation model for your artifacts and activities. Those two concepts are, I believe, critical elements for testers to understand and be able to articulate.
And, Yes, Tools Aren’t Going Away
When it does come down to the tools that support testing — and it eventually will — testers must realize that when they introduce heavyweight practices to a project, they increase the project’s inertia and make it more resistant to improvement. A painstakingly detailed test library, for example, becomes exactly such a source of inertia: its maintenance costs discourage improvement of the product itself.
Testing Is About …
I’ve argued that I prefer a more expansive rather than narrowed view of testing, and one of the contexts I’ve argued that in is the current testing vs checking debate. I’ve also argued that in the context of the dogma and tradition we sometimes attach to ourselves. We also have to be careful of becoming fundamentalists or muddy thinkers.
A lot of these problems happen when people try to narrow what testing is about too much, particularly when proponents of intersecting approaches, like BDD, argue quite unequivocally that those things are not about testing.
Testing, like science, is not a body of doctrine. Ultimately it’s a method. And, again just like science, testing is a specialized way of investigating certain aspects of reality. (Reality Investigator! Now that can go proudly on your business card!) Testing, like science, is not a source of timeless truths. Science, just like testing, is entitled to — in fact, obliged to — change its mind when the facts so dictate. But crucial to providing that freedom to change minds is the ability to find those facts in the first place.
So the next time someone asks you what this whole “testing thing” is all about, I’m not asking you to agree with my words or ideas. All I’m asking is that you find your own ways to describe how exciting testing is and just how fundamental it is to the technical and human aspects of turning ideas into software and hardware that adds value to people’s lives.