Test Interview Technocracy

I’ve focused on the danger of the technocracy before, which is where we turn testing into a programming problem. This has, in many cases, infused the interview process for test specialists as well. And yet automation is important! There’s no doubt that automation is a scaling factor and a way to leverage humans better. So that does bring up an interesting dynamic when you interview test specialists but where you hope to have some sort of programmatic focus.

Most people probably know I’ve talked about the interviewing problem for testers for quite some time.

I’ve brought up that technical test interviews are broken. They still are.

Rather than just complain about it, I’ve suggested focusing on broad skills for technical testers but also making sure that you interview testers as if that’s what you actually want. To that latter point, if you really want a developer (in the strict programmatic sense), then do everyone a favor and interview that kind of person.

But … do know that it is possible to interview a tester as a developer. Or, rather, as a type of developer. One that simply has a speciality in test rather than in code.

Getting to First Principles

Let’s first think about why we use automation in the context of testing. And to do that we can actually think about why we have fueled an industry with computers and code in the first place.

The problems we wanted to solve or provide solutions for became more complex. Just as we leverage construction machines to augment what humans can do, we leveraged computing machines to augment what humans can do. Of course this meant that we had to learn to think slightly differently about problem solving because a machine doesn’t work the same way that humans do. Humans can do complex things rather slowly. Machines can do simple things really quickly.

In the context of programmatic development, the job of the programmer is to harness the simple abilities of the machine to solve complicated problems.

Now consider this: outside the context of automation, it’s the tester’s job to harness the abilities of humans as they are working with machines to solve complicated problems. So right out of the starting gate, being a tester means being an observer of humans, and of machines, and of how the two interact via various mechanisms, such as … you guessed it … coding.

Getting to Second Principles

The above is what you should first be looking for in a test specialist, particularly if you want that person to have a focus around automation.

I say this for two reasons.

The first reason is obvious: automation is just coding. So if a test specialist doesn’t understand the types of mistakes that developers make when programming, what would prevent them from making those same mistakes during automation?

The second reason is perhaps less obvious. Let’s first frame it as a question: What gets lost when the human intuitive becomes machine expressive? It’s another way of asking how, or to what extent, our test expression changes because we choose to encode it as automation. Which is yet another way of asking what we lose when we take human testing and turn it into purely automated testing.

Bringing this around to the interview process, let’s say some hiring team gives you a test automation exercise. For the most part, I’m totally fine with this. As long as the focus is on determining how the test specialist brings their test thinking into the automation. This would be as opposed to simply looking at whether the test specialist knows all the design patterns or can solve algorithms that are of dubious value in automation anyway.

Let’s get even a little more specific. Some fundamental questions that should be asked around automation in this context:

  • Are the assertions/expectations clear and verifiable?
  • Are the inputs and outputs precisely specified?
  • Are error cases, both input and output, considered?
  • Is the test logically coherent?
  • Is the test readable and maintainable?
  • Does the test make it clear how its expected output relates to its input?
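As a concrete sketch of what those qualities look like in code, consider a test for a hypothetical `parse_price` function (both the function and the test are invented here purely for illustration):

```python
# A minimal sketch of a test embodying the questions above. The function
# under test (parse_price) is hypothetical, defined here only so the
# example is self-contained.

def parse_price(text):
    """Convert a price string like '$1,234.50' into a float."""
    if not isinstance(text, str) or not text.strip():
        raise ValueError(f"expected a non-empty price string, got: {text!r}")
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

def test_parse_price():
    # Inputs and expected outputs are precisely specified, and each
    # assertion makes the input/output relationship explicit.
    assert parse_price("$1,234.50") == 1234.50   # currency symbol and separator
    assert parse_price("0.99") == 0.99           # bare numeric string

    # Error cases are part of the test, not an afterthought.
    for bad_input in ["", "   ", None]:
        try:
            parse_price(bad_input)
        except ValueError:
            pass  # expected: invalid input is rejected explicitly
        else:
            raise AssertionError(f"no error raised for: {bad_input!r}")

test_parse_price()
```

Notice that a reader can answer every one of the bulleted questions just by looking at the test, which is the point.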

I can tell you how many test interviews I’ve been through that focused on automation and made sure of the above: it would be exactly zero.

Or, being accurate and fair, I can say that the above was never discussed with me as part of the exercise or challenge. Maybe it was considered by someone on their own but I was never really questioned about these ideas, even peripherally. Even the third bullet item there is often not considered very well in interview situations because it’s not just a matter of whether those cases are considered but how they are situated as part of the automation workflow.

Looking for Developer Thinking

Let’s consider this in terms of how we might interview a developer with a code speciality. Say, as part of the interview process, we are letting the developer write some front-end code for us, using a library of their own choosing. They can use whatever language they want.

That last point is important. Test interviews get this wrong. They assume that because they’re a “Java shop” or a “C# shop”, their exercises must require test automation written in that same language. As I’ve written before, your automation language does not have to be the same as your development language. If an interview situation requires a specific language, they have already shown that slight bit of inflexibility that, in fact, you don’t want in a test automation developer.

Anyway, let’s say our developer picks JavaScript and more specifically they go with React. So, in the context of the interview exercise, the developer has to throw together some mini-application that shows they have the ability to utilize React in an application, similar to how a tester might prove they can use Selenium in the context of automation.

That last point is important as well. I see so many test interviews where ultimately the role is about having you write automation using Selenium, but they have you going through some wider exercise that doesn’t really fully test your ability to use Selenium at all.

Back to our developer: once that borderline level of skill (creating a simple React app) is observed to be in place, there are certain things I would want to understand about this developer I’m interviewing, in terms of how they conceptualize code. Let’s consider some examples.

  • Does the developer seem to favor a declarative over an imperative style? (React embraces a declarative style over imperative by updating views automatically.)
  • Does the developer focus on a component-based structure using pure language? (React doesn’t use domain-specific languages for its components, just pure JavaScript.)
  • Does the developer seem to favor powerful abstractions? (React has a simplified way of interacting with the DOM, allowing you to normalize event handling and other interfaces that work similarly across browsers.)

You can see that I can ask questions here that are general enough to apply to whatever language and library the developer chose; the specifics are just there to tie the questions to the developer’s particular choice.

The same would apply for a technical tester using Selenium. Whether the tester has the ability to write a script that uses Selenium to drive a browser is extremely easy to determine very quickly. What ultimately matters, however, is the type of choices they make beyond that: patterns they do or don’t use; techniques they do or don’t use; and so forth.

Looking for Tester Thinking?

So we can see how to do this with a code development focus, where higher-level thinking is checked for along with some lower-level implementation. Why is that often not the case in technical test interviews that are focused on automation?

I’ll repeat that: in technical interviews for developers, we tend to look for the higher-level thinking that informs their coding practice. In other words, we want to see the wider level discipline they practice around coding.

We tend not to do the same thing for testing. And if you are a company, or a hiring manager, or a hiring team, that is falling into this trap, you are part of the technocracy problem. You are essentially perpetuating the problem of testing being turned into a programming problem.

Going with my above example, just to take one particular thing — abstractions — a good test specialist, in the context of automation, knows that you need to use just enough abstractions that you’re not doing more work than you have to but few enough abstractions that your test code is easy to read.
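To make that "just enough abstraction" idea concrete, here is a sketch of a thin page-style helper. Everything in it is hypothetical: `FakeDriver` stands in for whatever real driver (Selenium, say) a suite would use, so the example runs anywhere.

```python
# A sketch of "just enough" abstraction in test code. FakeDriver and
# LoginPage are hypothetical stand-ins, not any real library's API.

class FakeDriver:
    """Stand-in for a browser driver, so this sketch needs no browser."""
    def __init__(self):
        self.fields = {}
        self.clicked = None
    def type_into(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        self.clicked = locator

class LoginPage:
    """One thin abstraction: it names the intent and hides the locators."""
    USERNAME, PASSWORD, SUBMIT = "#user", "#pass", "#submit"
    def __init__(self, driver):
        self.driver = driver
    def log_in(self, username, password):
        self.driver.type_into(self.USERNAME, username)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

# The test now reads as intent rather than as a pile of locators ...
driver = FakeDriver()
LoginPage(driver).log_in("jeff", "secret")
assert driver.fields == {"#user": "jeff", "#pass": "secret"}
assert driver.clicked == "#submit"
# ... but there is no deeper tower of base classes and wrappers standing
# between the reader and what actually happens.
```

One layer buys readability; a second or third layer would start costing it, and that trade-off is exactly what you want an interview candidate to be able to articulate.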

You, interviewing the test specialist, should know that as well. But how are you testing for it? How are you checking whether your test specialist with their automation focus is able to situate code in a wider context?

So, in that test specialist context, what you want to be looking for is someone who feels comfortable looking at code and reasoning about code. While not being an enterprise developer, they should certainly understand what makes good code and what makes bad code, just as they understand what makes a good test and a bad test.

This is important because testability, a key driver of internal quality, sees a great deal of application in the context of code. This, too, perhaps not surprisingly, is something that technical test interviews are very bad at ferreting out.

The Automation / Testing Relationship

Automation is a technique that relies on the discipline of coding. Automation is simply one context for a coding activity. It’s also a context for a testing activity — but only a very limited part of it.

I think it’s easy enough to frame it like this:

Imagine you hired someone. And all that person could do is exactly what you told them. They could do it over and over. But they would never vary from what you told them. And even if they did vary, they would vary only in exactly the way you told them to vary. And they wouldn’t be observing anything except what you specifically told them to observe.

That’s automation, basically.

It simply does exactly what it was told to do and it can look for a range of things that it was exactly told to look for. Put another way: automation doesn’t think. It doesn’t reason. It doesn’t interpret. It doesn’t make value judgments. It won’t be curious about something it sees (or doesn’t see).

It really is as simple as that.

Automation can provide confirmation (this is working) or it can provide falsification (this is not working). It can only do this in the context of being purely formulaic; by doing nothing more than following a script. It may stumble upon a bug. It may even be a new bug. But it will not have reasoned its way to that bug. It will have simply encountered it. And once the bug is encountered, automation will not explore around it or attempt to understand it.
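A tiny illustration of that limit, using an entirely hypothetical response check: the script verifies exactly what it was told to verify and is blind to everything else.

```python
# A sketch of automation's blindness. The response data and the check
# are hypothetical, invented only for this illustration.

response = {
    "status": 200,
    "body": {"total": 100.0, "currency": "EUR"},  # a human tester might
}                                                 # notice the currency is
                                                  # wrong for, say, a US order

def check_order_total(response):
    # The script was told: confirm the status and the total.
    # It does exactly that, and nothing more.
    assert response["status"] == 200
    assert response["body"]["total"] == 100.0
    return "pass"

result = check_order_total(response)
# The check reports "pass". It will never be curious about "currency",
# because it was never told to look at it.
```

The check is doing its job perfectly; the point is that its job is purely confirmatory, scoped to exactly what it was scripted to observe.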

I think it’s basically as simple as this: automation is a technique of testing. It is a technique that allows teams to scale certain efficiencies, such as speed of execution and searches for regression.

Since it is a technique, automation supports human-focused testing but does not replace it.

But if that’s the case — and it is — then why do so many “technical test interviews” still not act as if it’s the case?


This article was written by Jeff Nyman

