Looking For and Valuing the Modern Tester

The test discipline is in an interesting spot right now: some testers are considered “too technical,” and the fear is that they don’t want to do the actual testing. On the other hand, some testers aren’t considered “technical enough,” and the fear is that they put too much emphasis on the actual testing and not on the tooling around it. This should be a false dichotomy.

Should be. Yet often is not. To be fair, it is a dichotomy that often gets showcased by testers who prefer simply to write tools (or “automation”) at the expense of thinking about testing itself. And there are testers who have never jumped on any technical bandwagon to level up their skills such that they can make responsible decisions about what test tooling should be in place to support the act of testing. Some folks even believe that software testers should not code.

The reason I say this should be a false dichotomy is because any tester — or person taking it upon themselves to hire testers — should know that the test solutions they write are in service to testing as a discipline and tests as an artifact. To write test tools that provide demonstrable value, it is important that those tools consume tests. And let’s keep in mind a few facts.

  • Those tests will not be reasoned about by the tool.
  • The tool will not look for inconsistency, ambiguity or outright contradiction.
  • The tool will not have engaged with developers and business analysts about whether what is being described has any correspondence with the reality of what will be developed.
  • The tool will not make any value judgments about whether tests have too much or too little detail.
  • The tool will not be able to incorporate exploratory aspects that allow for creativity and innovation in terms of how to produce further tests.

All of that is up to the tester and the test design process.

The output of that is what gets fed into the tools.

This should all be painfully obvious. And yet sometimes it seems like it isn’t to people who make decisions about testing and their test staff. Hard to blame them, though, when sometimes it doesn’t seem obvious to some testers either.

The Industry-Wide Catch-22

Companies who are hiring testers have to make sure to look at their candidates in a slightly more nuanced way than “seems too technical / seems not technical enough” but testers also have to do a better job of communicating this nuance as well. Admittedly, that gets tricky. Testers will try to get hired and then maintain their positions in a way that seems commensurate with what is being sought by their employer. This gets complicated when you have prospective employers who write positions that run the gamut from “a developer with a test background” to “a tester with a development background.”

We can “thank” Microsoft for conflating and confusing the issue with their Test Engineer roles and Google and others for continuing the trend with “Software Development Engineer in Test” (SDET) and “Software Engineer in Test” (SET). These companies essentially jammed roles together without necessarily understanding what each role brought to the other individually. This has often separated what should have been kept together and conflated what should have been kept distinct.

We have a Catch-22 here and, at least in my experience, it is hurting the testing discipline, not just in terms of how it is perceived but also in terms of how it is practiced. In fact, the perception feeding into the practice is the Catch-22.

Separate the Tooling from the Testing

In my post on effective, efficient, and elegant testing I discuss a lot of characteristics of tests and the ability to write them. Would any tool do that for you? Certainly not. But would you want a tester who can eventually have the output of those ideas executed by a tool? Most likely the answer is yes.

What about if we consider some heuristics for test writing? How about these:

  • Remove incidentals.
  • No magic numbers (or allowed only with context clues).
  • Single key action for any given scenario.
  • Single or action-related observables.
  • Generalize test conditions over data conditions.
  • Generalize data conditions over test conditions.

Can a “technical tester” write a tool that would just somehow do all that without having practiced the art of test writing? The answer is no. But a tester who has thought about these things and leveled up their technical skills certainly could. What you ideally want is both. You want a tester who can think about these design heuristics and then apply them in a testing tool, often first having to choose among a set of different tools out there. Which, in turn, implies the tester has a good understanding of what actually is out there.
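To make that point concrete, here is a minimal, hypothetical Python sketch of a checker for just one of the heuristics above, “no magic numbers (or allowed only with context clues).” The scenario text, the notion of a “context clue,” and the phrase list are all invented for this example; the point is that the tool can only flag what a tester first decided was worth flagging.

```python
import re

# Hypothetical format: a scenario is just a list of step strings.
# A tool can mechanically flag magic numbers, but only because a
# tester first decided that heuristic matters and encoded it.
NUMBER = re.compile(r"\b\d+\b")

def find_magic_numbers(steps):
    """Return (step, number) pairs where a bare numeric literal
    appears without a nearby context clue."""
    findings = []
    for step in steps:
        for match in NUMBER.finditer(step):
            # Treat "limit of N" or "maximum of N" phrasing as a
            # context clue (an illustrative assumption, not a rule
            # from the post).
            if "limit of" not in step and "maximum of" not in step:
                findings.append((step, match.group()))
    return findings

scenario = [
    "Given a registered user",
    "When the user enters 17 items into the cart",
    "Then the cart shows a maximum of 15 items",
]

for step, number in find_magic_numbers(scenario):
    print(f"magic number {number!r} in: {step}")
```

Here the second step gets flagged for its bare 17, while the 15 passes because of the “maximum of” context clue. The checker is trivial; deciding what counts as a context clue is the test-design work.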

Let’s talk a bit about the popular “outside-in” approach of BDD. The idea here is (very roughly) the following:

  • Start with a conversation
  • Determine the business value
  • Provide examples of use

The overall goal here is to start testing at the most responsible moment: when features are being discussed. That means treating testing as a design activity. Could a “technical tester” just start doing that using tools like Cucumber, SpecFlow, JBehave, or whatever else without having thought about these things to some level of depth? Well, actually, yes, they could start doing the above. But could they do it effectively and efficiently, as per my previous post? Could they do so while applying the heuristics of test writing I just mentioned? Highly unlikely.
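For readers who have not seen the format those tools consume, here is a hypothetical feature written in Gherkin syntax, the notation shared by Cucumber, SpecFlow, and JBehave. The domain and wording are invented for illustration; the comments map the lines back to the three steps above.

```gherkin
# A hypothetical feature; the cart domain is illustrative only.
Feature: Cart item limit
  In order to keep orders fulfillable        # the business value
  As a shopper                               # from the conversation
  I want the cart to enforce its item limit

  Scenario: Adding an item beyond the limit  # an example of use
    Given a cart at its limit of 15 items
    When the shopper adds one more item
    Then the cart still contains 15 items
    And the shopper is told the limit was reached
```

Note that the spec reads as a test, a requirement, and a conversation artifact at once, which is exactly what makes testing a design activity here.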

Why? Because the test thinking that goes into using such tools in a way that doesn’t make them an unnecessary abstraction layer or a burdensome complicating element is the following:

  • Create a ubiquitous language. That’s the basis of domain-driven design.
  • Create testable scenarios based on examples of behavior. That’s the basis of example-driven testing.
  • Use test specs to spot ambiguity, recognize inconsistency, remove duplication.

The basis of these principles requires a lot of thought by testers who have studied them. Implementing them in tools is actually relatively simple once you find a solution that lets you do so, or once you write your own, should that be necessary.
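The third bullet in particular becomes mechanical once a tester has decided what counts as ambiguous or duplicated. Below is a minimal, hypothetical Python sketch of such a spec checker; the word list, the spec format, and the sample scenarios are assumptions invented for illustration, not part of Cucumber, SpecFlow, or JBehave.

```python
# Words a tester (not the tool) has judged too vague to verify.
AMBIGUOUS = {"quickly", "properly", "appropriately", "correctly"}

def check_specs(scenarios):
    """Report ambiguous wording and steps duplicated across
    scenarios. scenarios: dict of scenario name -> list of steps."""
    report = {"ambiguous": [], "duplicated": []}
    seen = {}  # step text -> first scenario it appeared in
    for name, steps in scenarios.items():
        for step in steps:
            words = {w.strip(".,").lower() for w in step.split()}
            for word in words & AMBIGUOUS:
                report["ambiguous"].append((name, step, word))
            if step in seen and seen[step] != name:
                report["duplicated"].append((seen[step], name, step))
            else:
                seen.setdefault(step, name)
    return report

specs = {
    "checkout": ["Given a cart with items",
                 "When the user pays",
                 "Then the receipt renders properly"],
    "refund":   ["Given a cart with items",
                 "When the user requests a refund",
                 "Then the balance updates"],
}

report = check_specs(specs)
```

Running this flags “properly” as ambiguous in the checkout scenario and “Given a cart with items” as a duplicated step, which is a candidate for a shared background. Again, the code is the easy part; the word list embodies the test thinking.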

So here’s a heuristic for those who are hiring testers: if you have found a tester that has written their own tools and you are concerned that this is all they spend their time doing at the expense of “actual” testing, ask them how their tools were informed by their thoughts on good test design.

On the other hand, if you have a tester that seems to have done everything manually, you might then ask them how they would translate their stated test designs into various test solutions. How would they make a decision between competing tools? What would be “must haves” for the tools versus simply “nice to haves”? What tradeoffs would they be willing to make? How much time do they spend investigating solutions before choosing one?

And for you testers who are trying to get hired or trying to retain the independence of your position after having been hired, make sure you make the nature of these questions clear to people. Don’t always answer the question you are asked. Answer the question you should have been asked, thereby providing people with better questions to be asking.

And don’t settle for the false dichotomy.

It is critical for those looking for testers to see these nuanced distinctions and not make decisions based on some binary value judgment of “too technical, not in the trenches” vs “not technical enough, too much in the trenches.”

It is critical for testers to make sure these distinctions are front and center in any discussions when you are looking to convince prospective or current employers about the value of testing as a discipline and test practitioners as the ones to carry out that discipline.


This article was written by Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.

2 thoughts on “Looking For and Valuing the Modern Tester”

  1. The change or lack of it will be driven by the “testers” that adapt to the market needs vs the ones that educate employers. I find that there’s a big number of employers who are misinformed about testing, what it is, and the kind of testing they need, so they just use a template or take pieces of multiple descriptions to define what they need.

    Time to market plays a huge role as companies adopt a lazy and sometimes irresponsible implementation of “least viable product”.

    As long as we keep looking at testing as a stepping stone into programming the discipline won’t evolve as it should and I agree with you when you say that SDET or SET roles often offer the wrong approach. I think a tester being technical helps big time because it removes some of the abstraction and allows the tester to create different sets and types of tests. It also helps when dev and tester speak the same language even if it is on a high level.

    This is a huge challenge in a world where everybody has an opinion and the loudest one usually wins and most people just run with someone else’s idea instead of carving their own.

    Good post. Thanks for provoking thoughts.

  2. Hi Jeff and congratulations on the awesome thought provoking post!

    It’s a long topic and I will probably give my thoughts and a longer answer in a post on my blog, but I will give the short one here.

    I believe the most insulting thing about the dichotomy you are talking about is that once interviewed you are put into a box, either “technical” or “not technical enough,” or “automation” vs. “manual,” and it seems like you can’t reach the “other box” anymore. Which is silly, honestly.

    The good thing I see is more testers are getting their hands dirty in analyzing and challenging the division of manual and automation, or tech vs. non-tech, and I believe it’s the only solution we could offer as a community. It is true that a tester should be both technical and analytical, and that neither automated checking nor the intellectual process of testing does the job by itself. It’s a complicated process and it needs tools to perform it, but it also needs an expert to operate these tools in the most efficient way.

    Thanks for bringing this up and I hope we’ll see much more interesting thoughts on this topic.
