I previously talked about some heuristics for hiring test specialists. There was an assumption in that post that you do, in fact, want to hire specialist testers. But, of course, that is just an assumption. Perhaps you don’t. And before you say “But of course we do!”, let’s talk about this a little bit.
Before getting into this, I should provide the basis for how I think about this. In this entire post, I’m going to be skipping over entirely the “traditional” — for lack of a better term — aspects of being a tester. By this I mean writing test cases, test scenarios, creating and running automation, etc.
Those are important and are not to be dismissed. When hiring test specialists, those are aspects of the discipline you need to be ferreting out. But they are also the low-hanging fruit in the specialty of testing as a discipline. Testing is much more about the thinking than about the artifacts that result from that thinking. Hiring a test specialist means you agree with that or, at the very least, understand it.
And that’s a really important point because in an interview you may ask a test specialist something about a traditional aspect of the discipline but the answer you get will seem to be much more expansive than what you were necessarily looking for. You should be willing to embrace that but also challenge the specialist if you are having trouble seeing the relevance.
Interviewers typically want answers but, as you’ll see in this post, test specialists are often more about the questions. That disconnect alone is what I often see trip up interviewers who don’t understand what hiring a test specialist is all about.
My Focus on Specialist Testing
A goal I have with test specialists is I want to see how much, and to what extent, they can shake loose the notion of planning and delivering and replace that with exploring and discovering. I say that because test specialists know that the exploring and discovering will, in fact, lead to delivering. They also recognize that planning sometimes gets in the way of exploring and discovering and thus compromises delivery.
What I just said can be seen as utter heresy to many people in the industry. But I stick by it.
Planning — yes, even in so-called agile contexts — tends to reduce variability. Planning ultimately tries to smooth out the stressors that are actually valuable to a team because those stressors serve as a way to avoid being fragilized. In my experience, this is a particularly pernicious problem with teams adopting an agile mindset.
Test specialists know that the process of delivery is sometimes going to be messy and therein lies its strength. As a team, you’re forced to stumble into your answers by asking dozens or even hundreds of small questions and running dozens of experiments. The reason this can work is that you keep all elements of your work relatively small and build in cost-of-mistake curves. Maybe this should be what we mean by “agile,” but it’s often not practiced that way. (See my post on reframing agile for more on that.)
Here’s the bottom line: if you want test specialists, your team needs to be comfortable in a world of uncertainty, arguments, questions, and reasoning. Often this process will move faster than the Jira tickets or Trello boards you try to capture everything in. While this work most definitely can fit into sprints, test specialists are often best working outside of sprints or, more specifically, not being gated at all times by sprint cycles.
Okay, taking all that as a given, how do you “just get things done” if there’s a lack of emphasis on planning? What does it mean to rely on “exploring and discovering” in order to deliver value? To get into that, it’s worth asking this question: what does an exploratory and empirical organization look like?
A Culture to Support the Mindset
Being exploratory and empirical is a mindset. It’s a way of viewing the world and, even more specifically, a way of interacting with the world such that you get the information you want. This is important to understand because there might be a big disconnect between a quality-focused mindset — which relies on exploration and empiricism — and the way the organization operates; its “culture,” for lack of a better term. So let’s talk about culture for a second.
Charles Jacobs wrote an excellent book called Management Rewired: Why Feedback Doesn’t Work and Other Surprising Lessons from the Latest Brain Science and in it he says:
“Culture is a convenient way of thinking about patterns of behavior in an organization that aren’t hardwired by policies, procedures, or structure. Popularly, it has become a black box used to explain, in the absence of anything else, the failure of an organization to implement its strategy. The conventional wisdom, based on Aristotelian thinking, sees culture either as a thing or a set of observable behaviors.”
What’s interesting about that is that this idea of culture being quantified is also what tends to happen around the idea of quality. Quality gets treated with that same sort of Aristotelian thinking. Test specialists are going to challenge that. Test specialists recognize that quality needs to be a mindset of the organization just as Jacobs tells us that culture is a mindset. Going a bit further, Jacobs says:
“Because our mind-set is structured as a story, culture can be thought of as the collective story the members of an organization tell themselves, driving the way they perceive the world and act as a result. So what’s the story we tell?”
That story aspect is important. I talked about that a bit with the warning of “don’t be such a tester.”
Also helpful, I think, is this thought from Edgar H. Schein in Organizational Culture and Leadership:
“[Culture is] a pattern of shared basic assumptions that the group learned as it solved its problems that has worked well enough to be considered valid and is passed on to new members as the correct way to perceive, think, and feel in relation to those problems.”
Now, why am I talking about culture so much? I’m doing that because everyone might actually understand the quality-focused mindset and feel like they have the right culture to adopt it … but … they don’t know how to connect the two.
This can be a big challenge because in most organizations, thinking about quality using a scientific method is a larger change than it likely seems. Test specialists know that. And test specialists — the good ones — will tell you that. Better, they will be able to demonstrate it. But you do have to meet them half-way. You do have to be willing to listen. And you do have to be willing to think beyond the “traditional” aspects of testing as you interview such people.
Value of Test Specialists
Before hunting for and bringing on test specialists, you actually have to ask a serious question: does our company really desire test specialists? Many companies claim they want a quality-focused mindset. But there’s often a lack of understanding of the value of the quality-focused mindset that test specialists will bring. I talked about that above but let’s dig in a little more.
Testing is about using the scientific method to gain insights and thus help people reason about things better and thus make better decisions sooner. Asking good questions is the core of this scientific method.
Many teams, or even companies as a whole, feel that it’s better to be a group of people who know things … or at least act as if they do. They like teams who speak in very clear statements that seem to convey a high degree of certainty. This clarity is seen as more valuable than not knowing. That type of thinking is replicated in interviews: ask a question and expect an answer dripping with clarity.
Test specialists, however, deal in the realm of the opaque and the uncertain. That’s why test specialists are so fond of the scientific method.
This scientific method is a loop of discovery. Test specialists will work with delivery teams to ask interesting questions and then they will help research those questions. Test specialists will then use the research to come up with new insights. This means test specialists help delivery teams take an empirical approach to their work. Instead of planning, delivery teams need to adapt. Instead of relying on answers, delivery teams look for interesting questions.
Consistent with what I talked about above, I would add it’s not so much just informing the team about quality as it is telling a story about quality. And quality is a broad topic that has both objective and subjective components. When I say “telling a story” there, I mean that term deliberately. Good storytelling is the bridge between what you learn and what you can tell others. A lot of testers just inform; the really good ones — the specialists — tell a story.
But a lot of people in their professional capacity turn off their ability to tolerate stories. We love stories in our books and our movies. But we somehow abandon the power of the story in our professional contexts. A whole industry of consulting formed around the notion of helping corporations tell better stories.
The Questioning Focus of Test Specialists
One way — and I would argue, the most effective way — to look at the situation is that testing is ultimately helping people reason about projects and products better which means they can make better decisions sooner. But that means testing has to be an activity with a wide-angle lens, operative at those places where we make the most mistakes by virtue of the fact that we are imperfect humans building ideas with imperfect technology.
As such, I would and do argue that testing is fundamentally about asking interesting questions and delivering new knowledge based on exploration and experimentation around those questions. Testing thus focuses on discoveries that enable decision-making.
And questions are the key to discovery. That’s essentially the scientific method. It’s an empirical process of exploration.
A test specialist is not always someone who executes tests (although they should be good at doing so) or writes automation (although they should be able to). A test specialist is often very much like a research lead in a data science context: they lead the questioning and drive the research. A test specialist is someone who pushes the team to ask interesting questions and then helps the team adhere to standards of evidence that suggest how to get answers to those questions.
The Research Aspect of Test Specialists
I mentioned a research lead and I do think that test specialists take on many characteristics of such a role. But what does a research lead actually do? It’s generally agreed that research leads have three areas of responsibility: identify assumptions, derive questions, and know the business. In a data science context, research leads also democratize data, distribute results, and enforce learning. From a test specialist standpoint, I would say those latter three are slightly modified: democratize testing, distribute quality, and reinforce learning.
Just as data scientists and research leads do, test specialists focus on discovering and presenting interesting bits of data that support decision-making.
The Reasoning Aspect of Test Specialists
Test specialists work to avoid soft reasoning and shallow questions. That means they don’t settle for “easy” or “quick” answers. In fact, test specialists are highly skeptical of answers that are easy or quick. That said, sometimes certain things are relatively easy or quick. So it’s important for a test specialist to be a pragmatic rather than ruthless skeptic. This is something you would want to ferret out in an interview.
Test specialists work to build up their skills not just around critical thinking, but critical reasoning. That’s really important to look for in an interview because it largely determines the distinction between a specialist and a non-specialist.
Consider that the “critical” in critical thinking is about finding the critical questions that can chip away at the foundation of an idea. Ideas such as that our features are well specified or that our code is not degrading various qualities, to take just a few examples.
The “critical” aspect is about the ability to pick apart the conclusions that make up an assumed and/or accepted belief. It’s not just about judgment; it’s about the ability to find something that’s essential that provides the basis for varying judgments. That’s critical thinking. What about reasoning?
Reasoning is the evidence, experience, and values that support conclusions about whatever we are using to form judgments. Once you have the reasoning, you can start to look for critical questions, especially in those things that are not framed as questions.
For example, you might hear about the “always/never heuristic.” This is something that test specialists are good at noting in conversations or in documents, where something is explicitly or implicitly stated to be “always” or “never” the case. These are conclusions that have — allegedly — been reasoned to. So this is an area that specialists will use their own reasoning to pounce on.
Test specialists are also primed to listen for and respond to terms like “because of” or “and thus” or “and as a result” or “and therefore.” These phrases are all suggestive that what is to follow will be some form of reasoning that has led people to a conclusion. Thus specialists are gearing up their own reasoning to counter that.
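To make the always/never heuristic and those reasoning markers concrete, here is a minimal illustrative sketch of how one might mechanically flag candidate sentences in a document for a human reviewer to probe. The phrase lists and the `flag_sentences` function are my own invention for illustration, not a standard tool or anything the heuristic prescribes; the real skill lies in the questioning that follows, not the flagging.

```python
import re

# Hypothetical phrase lists: absolute claims and reasoning markers a
# test specialist listens for. These lists are illustrative, not canonical.
ABSOLUTES = ["always", "never"]
REASONING_MARKERS = ["because of", "and thus", "as a result", "therefore"]

def flag_sentences(text):
    """Return (sentence, matched_phrases) pairs worth questioning."""
    flagged = []
    # Naive sentence split on end punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        hits = [p for p in ABSOLUTES + REASONING_MARKERS if p in lowered]
        if hits:
            flagged.append((sentence.strip(), hits))
    return flagged

doc = ("The service always retries failed calls. "
       "Users can export reports. "
       "We cache results and thus the page loads quickly.")

for sentence, hits in flag_sentences(doc):
    print(hits, "->", sentence)
```

Running this flags the first sentence for its “always” claim and the third for its “and thus” reasoning, which is exactly where a specialist would start asking: always, under what conditions? Does the cache actually cause the speed?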
There’s a notion here of a test specialist being an instinctive contrarian. They are essentially ready to disagree with whatever is said — even if, on the surface, they agree with it! But the good test specialists have learned to temper that instinctive contrarian aspect and harness it around asking clarifying questions rather than just coming off as someone who disagrees with everything.
The Questioning Aspect of Test Specialists
In the book Data Science: Create Teams That Ask the Right Questions and Deliver Real Value, Doug Rose brings up the idea of “places to look for questions” for a data science team. Those places are listed as:
- Clarify key terms
- Root out assumptions
- Find errors
- See other causes
- Uncover misleading statistics
- Highlight missing data
This advice applies equally well to test specialists. In fact, I think test specialists often operate very much like data scientists just as they operate like the research leads I mentioned earlier.
What all of the above really boils down to is this: challenge evidence.
That is what test specialists do. All the time. They have a variety of techniques — people-focused and process-focused — that will assist them in doing this. In fact, Doug Rose indicates this aspect of challenging evidence in his book. He states that for every question we engage with, we should ask the following:
- Should we believe it?
- Is there evidence to support it? Evidence is well-established data that you can use to prove some larger fact. If there’s evidence, you should ask the third question.
- How good is the evidence, and does it support the facts?
As with any scientific method, however, it’s important not to get hung up on whether or not we prove or disprove things necessarily. What test specialists are looking for is evidence that is strong rather than weak. In legal terms, we might state that our evidence has to have greater probative value than it does prejudicial value. Meaning, essentially, the evidence has to help us reason about things better while minimizing our assumptions and biases.
Beware Test Specialist Nihilism
The above being said, it is important for test specialists not to get overly hung up on suggesting that there is no certainty at all, that the very notion of proof and disproof has little or no meaning, or that there are no facts, only interpretations of facts. I talked about that a bit regarding broadening our testing wisdom.
As an example of what I mean by this, a very well-known test consultant said this on Twitter:
“There is no experiment, there is no test for ‘it works’. There is no test for the future, for ‘always’. All I know is that *it hasn’t failed yet, as far as I can tell*.”
The problem here is that a “test of it works” and a “test for always” are not the same thing, and not too many people that I’m aware of suggest that a statement about “now” applies to all future conditions. And, yes, of course we can realize that something may fail at some point in the future for some reason. But that doesn’t really tell us much.
Ask yourself this: if you want to know someone’s current conclusion, which would you prefer: “From what I can tell, it works” or “It hasn’t failed yet, so far as I can tell”? Is there a difference between those two? There is, and a test specialist knows why and can articulate that difference.
I want to be clear that the test consultant I mentioned is not someone I consider a nihilist in this somewhat tongue-in-cheek sense. This is simply an example used for illustrative purposes.
Continuing with the example, a test specialist, in my opinion, would never engage with the above kind of comment because it would immediately call into question their relevance. As I pointed out in that Twitter thread, “It hasn’t failed yet, as far as I can tell” can also mean “It works, for now” which is the distinction I just brought up to you. So, yes, I do agree that this means that “it works … with qualifications.” The problem isn’t the qualifications; the problem is with the framing of them. A test specialist frames the argument such that people understand that qualifications on a result don’t stop us from making decisions in the present.
Can We Support a Test Specialist?
There was a lot of material in this post. I’ll close with a few questions you can ask yourself (or your team or your group or your company) when thinking about hiring someone whose specialty is testing.
If you have an existing test team, watch out if those testers are the types to dismiss things as just semantics. A test specialist will tend to get very frustrated with such a group if that group shows no ability to evolve their thinking around the idea that semantics can matter quite a bit. This applies to all groups but test specialists usually expect a bit more of testers.
Consider if your team or group or company is one that turns testing into a programming problem. By which I mean, you tend to fall into the technocracy. Testing — all or most of it — is abdicated to tooling, such as automation. Automation is a key skill; automation is a core strategy for better leveraging humans. Test specialists know that. They can work in that context. But they aren’t misled by what these tools actually provide us and they understand how those tools can be misused.
Consider if your team or group very quickly shows frustration with hard topics, like quality and testing. You will see this happen most often during sprint planning or grooming sessions, where people get frustrated by how testing discussions seem to go off on a lot of tangents, failing to recognize that those tangents are often the most critical aspects.
In general, seriously consider if there are individual and/or organizational biases against questions. Questioning is one of the first steps toward discovery. That said, do note that very few groups, teams or companies actually feel that they have a bias against questions. So what you have to be watchful for is the tolerance people seem to have for discussions and a lot of questioning.
Clear as Mud?
I have a whole series of posts related to a career in testing and a series of posts related to interviewing in the context of a testing career. There is definitely some overlap there but what I’ve tried to do here is present a viewpoint and set of arguments that is different from any other posts in terms of how I’m framing that viewpoint and those arguments.
Even given the length of this post, which is longer than I intended, there’s still probably much more unsaid than has been said. I do hope, however, if you find yourself in disagreement with what I’ve said here, you frame those as questions. I would love to discuss them with you. And if you find yourself in agreement with what I’ve said here, I hope you are skeptical of that and frame those as questions too.
If nothing else, I hope I’ve convinced you that hiring specialist testers is something that deserves consideration beyond just the idea of “I want to hire someone to test my software.” If you get the former but all you wanted was the latter, both parties are likely going to end up unhappy.