The test discipline is in an interesting spot right now. Some testers are considered “too technical,” and the fear is that they don’t want to do the actual testing. Other testers aren’t considered “technical enough,” and the fear is that they put too much emphasis on the actual testing and not enough on the tooling around it. This should be a false dichotomy.
Should be. Yet often is not. To be fair, it is a dichotomy that often gets showcased by testers who prefer simply to write tools (or “automation”) at the expense of thinking about testing itself. And there are testers who have never jumped on any technical bandwagon to level up their skills such that they can make responsible decisions about what test tooling should be in place to support the act of testing. Some folks even believe that software testers should not code.
The reason I say this should be a false dichotomy is that any tester — or person taking it upon themselves to hire testers — should know that the test solutions they write are in service to testing as a discipline and tests as an artifact. To write test tools that provide demonstrable value, it is important that those tools consume tests. And let’s keep in mind a few facts.
- Those tests will not be reasoned about by the tool.
- The tool will not look for inconsistency, ambiguity or outright contradiction.
- The tool will not have engaged with developers and business analysts about whether what is being described has any correspondence with the reality of what will be developed.
- The tool will not make any value judgments about whether tests have too much or too little detail.
- The tool will not be able to incorporate exploratory aspects that allow for creativity and innovation in terms of how to produce further tests.
All of that is up to the tester and the test design process.
The output of that is what gets fed into the tools.
This should all be painfully obvious. And yet sometimes it seems like it isn’t to people who make decisions about testing and their test staff. Hard to blame them, though, when sometimes it doesn’t seem obvious to some testers either.
The Industry-Wide Catch-22
Companies who are hiring testers have to make sure to look at their candidates in a slightly more nuanced way than “seems too technical / seems not technical enough” but testers also have to do a better job of communicating this nuance as well. Admittedly, that gets tricky. Testers will try to get hired and then maintain their positions in a way that seems commensurate with what is being sought by their employer. This gets complicated when you have prospective employers who write positions that run the gamut from “a developer with a test background” to “a tester with a development background.”
We can “thank” Microsoft for conflating and confusing the issue with their Test Engineer roles and Google and others for continuing the trend with “Software Development Engineer in Test” (SDET) and “Software Engineer in Test” (SET). These companies essentially jammed roles together without necessarily understanding what each role brought to the other individually. This has often separated what should have been kept together and conflated what should have been kept distinct.
We have a Catch-22 here and, at least in my experience, it is hurting the testing discipline, not just in terms of how it is perceived but also in terms of how it is practiced. In fact, the perception feeding into the practice is the Catch-22.
Separate the Tooling from the Testing
In my post on effective, efficient, and elegant testing I discuss many characteristics of tests and the ability to write them. Would any tool do that for you? Certainly not. But would you want a tester who can eventually have the output of those ideas executed by a tool? Most likely the answer is yes.
What about if we consider some heuristics for test writing? How about these:
- Remove incidentals.
- No magic numbers (or allowed only with context clues).
- Single key action for any given scenario.
- Single or action-related observables.
- Know when to generalize test conditions over data conditions, and when to do the reverse.
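To make the heuristics above concrete, here is a minimal sketch of a test that tries to honor them. Everything in it is hypothetical: the discount domain, `apply_discount`, and `GOLD_TIER_RATE` are illustrative names, not from any real codebase, and the point is the shape of the test rather than the toy implementation beneath it.

```python
# Context clue for what would otherwise be a magic number.
GOLD_TIER_RATE = 0.10  # gold-tier members get 10% off


def apply_discount(subtotal, rate):
    """Toy stand-in for the system under test."""
    return round(subtotal * (1 - rate), 2)


def test_gold_member_receives_tier_discount():
    # Incidentals (cart contents, shipping, taxes) are removed:
    # only the subtotal matters to this behavior.
    subtotal = 100.00

    # Single key action for the scenario.
    total = apply_discount(subtotal, GOLD_TIER_RATE)

    # Single, action-related observable.
    assert total == 90.00
```

Note that the test condition (“a gold-tier member gets the tier discount”) is generalized away from any particular data condition; the specific numbers are local detail, not the point of the test.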
Can a “technical tester” write a tool that would just somehow do all of that without having practiced the art of test writing? The answer is no. But a tester who has thought about these things and leveled up their technical skills certainly could. What you ideally want is both. You want a tester who can think about these design heuristics and then apply them in a testing tool, often after first having to choose among the various tools out there. Which, in turn, implies the tester has a good understanding of what actually is out there.
Let’s talk a bit about the popular “outside in” approach of BDD. The idea here is (very roughly) the following:
- Start with a conversation
- Determine the business value
- Provide examples of use
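The third step, providing examples of use, can be sketched in plain code before any BDD tool enters the picture. The domain below (account withdrawals) and every name in it are hypothetical; a real team would capture these examples in a tool like Cucumber or SpecFlow, but the thinking comes first.

```python
# Business value (from the conversation): members can withdraw
# funds without going below a zero balance.


def withdraw(balance, amount):
    """Toy stand-in for the behavior under discussion."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


# Examples of use, phrased as Given / When / Then:

def test_withdrawal_within_balance():
    balance = 50                     # Given an account with $50
    balance = withdraw(balance, 20)  # When $20 is withdrawn
    assert balance == 30             # Then the balance is $30


def test_withdrawal_exceeding_balance_is_refused():
    balance = 50                     # Given an account with $50
    try:
        withdraw(balance, 80)        # When $80 is withdrawn
        assert False, "expected the withdrawal to be refused"
    except ValueError:
        pass                         # Then the withdrawal is refused
```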
The overall goal here is to start testing at the most responsible moment: when features are being discussed. That means treating testing as a design activity. Could a “technical tester” just start doing that using tools like Cucumber, SpecFlow, JBehave, or whatever else without having thought about these things to some level of depth? Well, actually, yes, they could start doing the above. But could they do it effectively and efficiently, as per my previous post? Could they do so while applying the heuristics of test writing I just mentioned? Highly unlikely.
Why? Because the test thinking that goes into using such tools in a way that doesn’t make them an unnecessary abstraction layer or a burdensome complicating element is the following:
- Create a ubiquitous language. That’s the basis of domain-driven design.
- Create testable scenarios based on examples of behavior. That’s the basis of example-driven testing.
- Use test specs to spot ambiguity, recognize inconsistency, remove duplication.
The basis of these principles requires a lot of thought by testers who have studied them. Implementing them in tools is actually relatively simple once you find the solution that allows you to do so, or once you write your own, should that be necessary.
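As an illustration of how simple the tooling side can be once the thinking is done, here is a toy sketch of the core mechanism behind tools like Cucumber or JBehave: a registry binding ubiquitous-language phrases to code. All names and the banking domain are hypothetical; real tools do far more, but this is the essential idea.

```python
import re

# Step registry: maps ubiquitous-language phrases to code.
STEPS = {}


def step(pattern):
    """Decorator that registers a step implementation for a phrase."""
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register


@step(r"the user deposits (\d+) dollars")
def deposit(ctx, amount):
    ctx["balance"] = ctx.get("balance", 0) + int(amount)


@step(r"the balance is (\d+) dollars")
def check_balance(ctx, expected):
    assert ctx["balance"] == int(expected)


def run(scenario):
    """Execute each line of a scenario against the registered steps."""
    ctx = {}
    for line in scenario:
        for pattern, fn in STEPS.items():
            match = re.fullmatch(pattern, line)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            # An unmatched phrase surfaces drift in the shared language.
            raise LookupError(f"no step matches: {line!r}")
    return ctx


run([
    "the user deposits 40 dollars",
    "the user deposits 2 dollars",
    "the balance is 42 dollars",
])
```

Notice that an unmatched phrase fails loudly: that is exactly where such tools help you spot ambiguity and inconsistency in the shared language, provided someone designed the language well in the first place.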
So here’s a heuristic for those who are hiring testers: if you have found a tester that has written their own tools and you are concerned that this is all they spend their time doing at the expense of “actual” testing, ask them how their tools were informed by their thoughts on good test design.
On the other hand, if you have a tester that seems to have done everything manually, you might then ask them how they would translate their stated test designs into various test solutions. How would they make a decision between competing tools? What would be “must haves” for the tools versus simply “nice to haves”? What tradeoffs would they be willing to make? How much time do they spend investigating solutions before choosing one?
And for you testers who are trying to get hired or trying to retain the independence of your position after having been hired, make sure you make the nature of these questions clear to people. Don’t always answer the question you are asked. Answer the question you should have been asked, thereby providing people with better questions to be asking.
And don’t settle for the false dichotomy.
It is critical for those looking for testers to see these nuanced distinctions and not make decisions based on some binary value judgment of “too technical, not in the trenches” vs “not technical enough, too much in the trenches.”
It is critical for testers to make sure these distinctions are front and center in any discussions when you are looking to convince prospective or current employers about the value of testing as a discipline and test practitioners as the ones to carry out that discipline.