As a specialist tester who has been doing this since the early 1990s, I find it interesting to follow the contours of a notoriously fractious discipline, one often populated by articulate but frustratingly argumentative practitioners. I say “frustratingly” not because argumentation is bad (it isn’t) but because that argumentation often shades into instinctive contrarianism and a ruthless, rather than pragmatic, skepticism.
I’ve seen testing, as a discipline, become increasingly dominated by technocrats on the one hand (who want to turn everything into a programming problem) and deconstructionists on the other (who want to turn everything into a binary semantic debate).
I’ve seen many testers struggle to stay relevant in their career — or even decide whether they want to stay relevant in testing — because they chose the wrong arguments, or simply regurgitated the arguments of others without any thought of their own.
Consider Some Examples
As an example, right now we have an industry of manual testing deniers. We have continued debate — well, not really debate; just acceptance — that it makes sense to delineate “checking” as distinct from “testing.” Now, you, Gentle Reader, may be one of those people. I’m not here to change your mind with this post. To be sure, I often revisit my own thinking on these ideas just to keep myself intellectually honest. I’ve even engaged on both sides of some of these arguments, such as talking about how automation is a technique, not testing.
My point here is that all of this is essentially putting constraints on our current history; our history as it is developing in an industry context that is reaching not just new heights of complexity, but new heights of complication. Read Samuel Arbesman’s Overcomplicated: Technology at the Limits of Comprehension for a nice distinction on those two terms.
Something I said when discussing the “death” of manual testing is a sentiment I still believe to be true:
“Testers need to broaden their horizons and understand that, in many cases, where they are choosing to wage their battles is not where the war is even being fought. Testers are often planting their victory flags on hills that no one was trying to take anyway.”
This, when removed from the language of metaphor, is, I believe, one of the key constraints of our current history. It is also why testing, speaking generally, has struggled to retain relevance in the industry when other roles — just about all other roles, in fact — have not.
A Driver of Constraints
Let’s consider one such driver as it manifests but also how we can harness it to break free of a constraint. While writing the second part of my Ode to Testability series, I ended on a note that I think needs more reinforcement in the industry. I wanted to abstract that out a bit for consideration here.
A specific, key question we grapple with in the industry is this: “When does developer work become tester work?”
How this question is engaged with, much less answered, has been a large part of what has constrained our testing history since the advent of Agile, and that constraint has continued with the focus on DevOps.
The question may not be framed so directly, but it’s often where the confusion lies. If we accept that “developer” is a broad term usually applied narrowly (largely to mean “programmer”), and that what people usually mean by “developer work” is “programming,” then we’re really asking a different question: “When does programmer work become tester work?”
That shift alone can mean a lot when you consider how practices like TDD or BDD are employed or approaches like DevOps are adhered to and what it means to put the word “continuous” in front of something like “integration”, or “delivery”, or “deployment.”
Further, when we frame things in that way, essentially breaking out of our constraints, we can ask a much more pointed question: “When does programming become testing?” And that question matters because we can turn it around. Specifically, in the context of automation, we can reverse it and ask: “When does testing become programming?”
These are very good questions to engage people with, exactly as I am stating them here.
On that last question, I’m not saying that testing becomes programming. It never does, in my view. And that’s the point! But a fun discussion to have is to follow up some of these questions with an idea like this:
“Code and tests are a consequence of applying knowledge into a format that can be executed.”
That “can be executed” part is crucial for getting into the intersection of testing and automation.
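As a sketch of that idea, consider how one hypothetical piece of domain knowledge (a shipping rule, invented here purely for illustration) ends up expressed twice, both times in a format that can be executed: once as production code and once as a test.

```python
# Hypothetical business rule: orders of $100 or more ship free.
# The same piece of domain knowledge appears twice below, both
# times in an executable format.

def shipping_cost(order_total: float) -> float:
    """Production code: the knowledge, encoded as behavior."""
    return 0.0 if order_total >= 100 else 7.99

def test_shipping_is_free_at_threshold() -> None:
    """Test code: the same knowledge, encoded as an executable check."""
    assert shipping_cost(100.00) == 0.0   # at the threshold, shipping is free
    assert shipping_cost(99.99) == 7.99   # just below it, the fee applies

test_shipping_is_free_at_threshold()
```

Neither artifact is the knowledge itself; each is a consequence of applying it, which is exactly where testing and automation intersect.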
Framing discussions as powerful questions or statements that challenge premises and conclusions is interesting. As Nassim Taleb said in his book The Bed of Procrustes:
“An idea starts to be interesting when you get scared of taking it to its logical conclusion.”
Indeed. That’s how we start to break out of the constraints of our history.
Constraints Have Consequences
All of what I’m saying here — and asking here — is much better (in my opinion, of course) than the current debates about “testing or checking” or the various other distinctions. Those debates are largely internal to testers, and I’m being charitable in calling them “debates.” To many outside the discipline they seem like nothing but internecine squabbling, all taking place while others get the real work done. Or so it is often perceived.
Too much hyperbole on my part, perhaps?
Well, the trends of the industry suggest not. And others’ perceptions of testers have continued to decline as our practitioners have become more fractious. This has been exacerbated by a rising “professional class” of test consultants (independent and company-based) since around 2009; a professional class that often doesn’t have as much “skin in the game” as it likes to claim.
In case that reference is unclear: lately a lot of test consultants have read Taleb’s book Skin in the Game: Hidden Asymmetries in Daily Life and apparently quote from it as much as they can. The irony being that their own “skin in the game” — when it comes to people’s careers or the companies those people work at — is negligible at best.
And, in case I come across as someone with an axe to grind, I say all this having run my own test company for the last four years, albeit one whose “sales pitch,” if such it can be called, was “Don’t hire consultants.”
As I wrote before, there is a lot that politics can teach us about testing, particularly regarding the specialist classes of people we end up with. That’s a consequence of our constraints and equally a driver to perpetuating those constraints.
I believe that all of what I say here has also been exacerbated by the fact that those driving the most innovation in testing (whether as approach, technique, or tool) have been developers. The output from most testers has been to tell people that “manual testing doesn’t exist,” or that “you’re not testing, you’re just checking,” and various other phrases. In fact, let’s look at another one of those.
Constraints of Relevance
Someone on LinkedIn recently quoted one of our well-known test professionals, who said this:
“Automation is not something you DO. Automation is something you USE to get something important done. Thinking of *automation* as the thing that you do risks displacing attention from the quality of that important something that is your actual mission.”
Yeah. Try having this discussion when you earn your paycheck from people who actually want you to get something done. Yet … let’s be fair … those providing that paycheck also (presumably) want that something done well and thus there is an underlying sentiment to the above quote that is valuable: don’t get lost in your tools.
But, given that, it’s usually much easier and much more effective to just say something like this: “Automation is just one technique that can be used during testing. Like any technique, it can provide certain efficiencies, but it also has limits.”
That’s a conversation most anyone can get behind because “certain efficiencies with some limits” is descriptive of pretty much any tool, anywhere at any time.
I provided another example of this same kind of thing when I talked about testing “wisdom”.
To break some of our constraints, we need to use words and phrases that foster better discussion, particularly in an industry where the reputation of testing (and/or testers) is already questioned.
Constraints of Semantics
Semantics are important. But not all semantics are equally important. Recognizing that is one of the ways that testers can start breaking out of the current constraints of their history and making sure they not only seem relevant to how the industry is developing but actually are relevant. It’s also a way that they can stop simply quoting whomever their favorite authority figure is and start thinking about what people in the industry are — or more likely, are not — saying.
As just one example of what I mean about semantics being important, I think some of those reframed questions I presented above let us start framing testing as two parts: testing as design activity and testing as execution activity. These are concepts and terminology that I find testers and developers can get behind. Even more importantly, I find discussions framed in this way are more productive and career-enhancing for specialist testers.
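That two-part framing can be made concrete in a small sketch. Here the design activity is the human decision about which cases matter and why (the table of cases), while the execution activity is the mechanical running of those decisions. The password policy, the function name, and the cases are all hypothetical, invented for illustration only.

```python
# Testing as DESIGN activity: a human decides which cases matter
# and records the reasoning behind each one.
DESIGNED_CASES = [
    ("abc",      False, "too short"),
    ("abcdefgh", False, "no digit"),
    ("abcdefg1", True,  "meets length and digit rules"),
]

def is_valid_password(p: str) -> bool:
    """Hypothetical rule: at least 8 characters and at least one digit."""
    return len(p) >= 8 and any(c.isdigit() for c in p)

# Testing as EXECUTION activity: mechanically running the designed cases.
def run_designed_cases() -> None:
    for value, expected, reason in DESIGNED_CASES:
        assert is_valid_password(value) == expected, reason

run_designed_cases()
```

The execution half is what automation can take over; the design half, the choice of cases and the reasoning column, is where the tester’s judgment lives.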
I talked before about how testers face the challenge of not being “such a tester.”
Part of this is us, as specialists in our discipline, building up a narrative intuition. Such an intuition means knowing the basic rules of narrative and having internalized them so thoroughly that you can immediately see when those rules are being broken or applied ineffectively.
This, I believe, is an evolving skill of good testers. I believe it is an entirely necessary skill of specialist testers. But we need to start applying it to our own discipline before these constraints become our prison.