It has been and continues to be my contention that many test and quality assurance interviews these days are handled terribly. I have seen, and participated in, interviews where candidates were barely tested for the wider aspects of how they think and approach problems at a human-focused level. Instead, the focus is almost entirely on how they think and approach problems at the code level. So let’s talk about that.
I’ll short-circuit one common objection. I do believe code-based interviews are great for developers. I do believe code-based interviews have a certain place for testers who will be focusing on automation. But in that latter case I rarely see code challenges that are predicated in any way on what you will most likely be dealing with. For example, I’ve rarely had someone say:
“This role will be helping us build our WebDriver automation. I see from your resume you have a lot of experience with that. Here’s a laptop. It has Ruby, Python, and Java installed. We also have Node.js on there. Pick whatever you want and talk me through how you write automation. Let’s use The Internet (http://the-internet.herokuapp.com/) for this.”
I say “rarely.” What I mean is I’ve had this style of code exercise presented to me only once in my entire career. And I loved them for it. They then questioned me on why I used certain code styles, how and why I used certain wait states, why I used (or didn’t use) certain patterns and so on. Perfect.
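To make concrete what “questioned me on how and why I used certain wait states” means in an exercise like this: a fixed sleep is a guess, while an explicit wait polls a condition and proceeds as soon as it holds. Here is a minimal sketch of that polling pattern in plain Python. The `FakeElement` class is a made-up stand-in for a real WebDriver element, so the example runs without a browser; in a real suite the condition would wrap something like Selenium’s `WebDriverWait`.

```python
import time

def wait_until(condition, timeout=5.0, poll_interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This is the explicit-wait idea behind tools like Selenium's WebDriverWait:
    instead of a fixed sleep (a guess), re-check the condition and move on as
    soon as it holds, failing loudly if it never does.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(poll_interval)

# FakeElement is a hypothetical stand-in for a WebDriver element: it only
# reports itself "displayed" after a few checks, simulating a slow page.
class FakeElement:
    def __init__(self, visible_after_checks):
        self.checks = 0
        self.visible_after_checks = visible_after_checks

    def is_displayed(self):
        self.checks += 1
        return self.checks >= self.visible_after_checks

element = FakeElement(visible_after_checks=3)
assert wait_until(element.is_displayed, timeout=1.0, poll_interval=0.01)
```

The interview conversation then becomes about the reasoning: why you poll rather than sleep, what a sane timeout is for the page under test, and what should happen when the wait fails.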
Anyway, moving along. I get a lot of interest, both positive and negative, regarding my career-style posts. One that often gets hit is where I talk about interviewing testers for broad skills. Putting my money where my mouth is, I’m going to remind folks of the test game I wrote for interviewing testers. The game is called “Test Quest” (because that apparently was the limit of my imagination for naming it). I described this a bit in the quest for testers.
That was my initial attempt at “gamification” for the interviewing process of testing candidates. But let me rationalize that out a bit more, in terms of the motivation.
You Want an Experimenter
First of all, one of the most important things I want to see from someone during an interview for a test position, whether it will be “technical” or not, is how they experiment. To do that, I need to provide them with a context in which to do so. That context has to be “wide enough” to tell me multiple things about the person simultaneously. After all, I have limited time and I need to respect both their and my time while remaining fair.
Code challenges can provide such a context, but they look at a very limited set of cognitive abilities, at least when you consider the mandate of what a tester, as distinct from a developer, should be.
There are certain behaviors you have to display when conducting an experiment.
- You need to determine if it is going well.
- You need to determine if it is going poorly.
- You need a way to amplify the parts that are going well.
- You need a way to dampen the parts that are going poorly.
Before all that you need to have a reason for believing your form of experimentation is likely to produce good results in the first place. Watching someone perform in the context you provide is part of how you see them exhibit these behaviors.
There’s also a certain temperament to conducting an experiment. You need to:
- Be able to question everything and take very little at face value.
- Be able to argue both sides of a point but don’t be needlessly contrarian.
- Believe that knowledge and self-control lead to wise decisions and act accordingly.
- Believe and act like emotional and mental clarity are necessary.
- Be able to maintain focus to rely on intuition that is guided by experience.
Experimental Thinking Guides Our Strategies
As part of your experiment, you will come up with a test strategy. This test strategy should be a guide to your test design. This is how good testers work.
So in the context of the interview, a crucial point is that you are seeing how the person internalizes these concepts because they’re not going to have time to write down a full-on strategy and design. I’ve had some candidates tell me they couldn’t start testing the “Test Quest” game until they did write all this out. And that told me quite a bit about how much they trusted their instincts, about how they operated on-the-fly, and so on.
The reason the strategy doesn’t need to be written down is because a test strategy is and should be your reasoning about why the tests you choose to do, particularly in a limited amount of time, are ones you consider the most effective and efficient for the experiment at hand. The strategy partly answers why you feel these tests, and not others, are adequate for fulfilling your most important goals for the given experiment.
Further, you need to be able to tell someone the results, in a meaningful way, in a timely manner, and in such a way that encourages people to take action. That is to say, the feedback you provide needs to be directed, relevant, and accurate.
Coding challenges, while good at testing algorithmic thinking and the ability to follow a problem through to a possible conclusion, are not as effective at surfacing all of this as what I’m proposing.
This All Works Because Typologies Matter
There is a certain typology, for lack of a better term, that you want to discover in testers. I talked a little about this — in a very limited context, admittedly — when talking about “tester as learner.”
Now, that’s about the testers themselves as opposed to the people using applications that testers are testing. But the key point is that knowing those attributes, those typologies that people exhibit, allows testers to apply them to hypothetical users and how users utilize an application or service.
Well, what better way to test this out than by having the tester act as the tester and as the hypothetical user? That’s what I tried to do with “Test Quest.”
And I absolutely believe that this is something you want to see during an interview, particularly with an experienced tester. You want to see how candidates for a test position take on the persona of a user, acting as that user, but also backing up that action with their experience as a tester. It’s critical to come to some answer about that.
Some Typology Examples
This typology business might seem a little abstract so let me drive it home a bit. As some people know, I’ve done a lot of game testing and I recently had a discussion with someone about the typologies of game players and how that might relate to the typologies of testers. Of import, however, is the recognition that we are really talking about types of persona.
Consider, for example, the “easily frustrated gamer.” Or the “don’t pander to/patronize me gamer.” Or the “please don’t challenge me too much gamer.” Even if you aren’t a gamer yourself, you can probably intuit that these are somewhat nuanced types of gaming style because what makes one gamer “easily frustrated” does not frustrate another. What’s considered a “helpful hint” to one gamer may be considered “dumbing down” or “pandering” to another.
So what you want is for your interview challenge context to have some built-in frustrations. Have some deliberate bugs. You’re not only seeing if the candidate finds these things and, crucially, recognizes them, but also how they respond to these things. I’ve had candidates with “Test Quest” get so frustrated at what they saw as “a lot” of bugs that they basically declared the game was no longer worth testing. Yet I’ve had others blissfully breeze by bugs, assuming “Well, that’s just how games are sometimes.” Either response tells me a lot.
I’ll give you another quick anecdote from my own career.
When testing MMO (Massively Multiplayer Online) games, you have the notion of players simply playing against the “environment” (i.e., the game itself) — called PvE — but also against each other — called PvP. With that understood, I had to consider the “hardcore PvP”, the “casual PvP”, and various levels of both types of player, from the ultra-competitive whose keyboard ends up lodged in their monitor due to a fit of rage to the relatively passive player who doesn’t bat an eye when their whole raid group wipes out, essentially rendering the last two hours of their game playing null and void.
Beyond even this, in certain such games, like Star Wars: The Old Republic or Guild Wars 2, the story is a critical component of how the game plays out in the context of PvE or PvP dynamics. This is in contrast to games like World of Warcraft where the vast majority of the story is not told in the game at all. This means it was critical for me to focus on the players who were there for the story itself, where the notion of PvE or PvP was slightly — or even entirely — orthogonal to any of their concerns.
That orthogonality of concern matters when you consider what the user experience is likely to be based on what you are finding during testing, particularly when you have to put that in context with the idea of getting something out the door, usually sooner rather than later.
The trick with a typology is that it has to also take into account motivation and desire on the part of the user. Well, when we consider testers and how they do their work, motivation and desire are equally important to understanding that person. And this is something you most definitely want to ferret out during an interview. And it’s not something that a code-exercise style interview will tend to do.
Motivation and Desire
Not only should test candidates have their own motivations and desires — which you do want to discover during an interview — but they have to align those with those of possible users, as well as purposely subvert those motivations and desires. That is a large part of what makes an effective tester: the ability to play a role, or adopt a persona, but then actively subvert that role at various points.
But do many interviews take that into account? I would say not. And certainly code-based ones do not, unless they are very cleverly done. The “compute bowling scores”, “generate Fizz and Buzz based on multiples of 3 and 5”, or “reverse a string” variety are not, to me, examples of those that are cleverly done, at least not when talking about a tester interview.
In fact, this kind of thinking about motivations and desires somewhat mirrors the notion of “test tours”, mostly applied in exploratory testing. So you might have the “requirements tour”, the “complexity tour”, the “variability tour”, and so forth. These are often broken down into “tourist” type ideas of various sorts: i.e., the person who goes to the business district, the one who goes to the historical district, the one who goes to the “bad side of town”, etc.
And this matters because one of the key things interviews for testers often don’t test for is the ability to do exploratory-style testing. Even automation specialists need to have the ability to think outside of and beyond tools that support testing. (Unless you are just hiring technocrat testers who like to reduce testing to a programming problem.) And tools cannot do exploratory testing, nor harness the results of that exploration as feedback loops to refine test strategies.
I think we would all agree that expectations are a powerful force for people. Expectation acts on our perception, bending and twisting it in ways that are ultimately measurable after-the-fact but nevertheless often imperceptible in-the-moment.
So not only do we tend to see what we expect to see, we also tend to experience what we expect to experience. The point being: expectations matter. We act on them, consciously or otherwise, and these actions have consequences. We appear to internalize these expectations so that they not only affect how we see others, they affect how we see ourselves. Our interpretation of information tends to be guided by our own self-interest and one of our self-interests is in maintaining the idea that our own viewpoints are correct.
Imagine how that plays out for all of us in an interview context!
So if I have a viewpoint, as a user, that certain applications are “very difficult” then I will have that expectation going in. I will be looking — as humans are wont to do — for confirming instances of how difficult something is, even if it’s not actually all that difficult. My expectations start to subtly shape the reality a bit.
And that matters in terms of how we consider testing applications, games or otherwise. We have to, as much as possible, look at what we test while adopting the expectations of various types of people. If you combine that idea with the idea of motivation and desire — which can compete with or reinforce expectations — then you start to get into the true typologies of users, again, gamer or otherwise.
And, once again, this is the kind of stuff you want to ferret out about a particular tester during an interview.
Cognitive biases are also important to look for. Confirmation and hindsight bias are two that affect testers and developers particularly.
From a cognitive standpoint, people are interesting. Beyond the obvious, what I mean is that there’s often an assumption that people tend to cluster around the same opinions. But in reality people more often tend to cluster around the same framework of analyses.
Those frameworks of thought are what provide a lot of the expectations, motivations, and then ultimately desires. So, in the United States, for example, this is one of the key reasons why we have such partisan style politics. It’s not that people are clustering around the opinions of, say, Republicans or Democrats. Rather, it’s that they buy into the same framework of analysis that their respective party of choice tends to cluster around. This then drives their expectations which in turn drives their motivations. (This is probably another way that politics can teach us things about testing.)
All of that is a long way of saying that groups of people tend to assign the same importance to the same sets of circumstances and cut reality into the same categories. Certainly the act of categorizing is necessary for humans; it’s a large part of how we survived as a species. But it’s also, in our modern societies, what fragments us. Usually along lines of race, gender, disability, sexual orientation, etc. And therein lines the problem: categorizing becomes pathological when the category is seen as definitive. This prevents people from seeing the fuzziness of various boundaries. And it certainly acts to prevent people from revising their categories.
So what does all that mean? It means the act of categorizing always produces reduction in true complexity.
Yet that false reduction in complexity is certainly something testers and developers are prone to. We think something is less complex than it really is — such as the sensitivities of a system or a product or a service — and thus this sets our expectations. Thus it guides our motivations for how we test.
There is a corollary to this, of course, which is that we sometimes see more complexity than there may be. In human thought patterns, this is often what leads to conspiracy-style thinking: we think that the “true events” must be more complex but because that’s not immediately obvious, there must be a vast conspiracy behind whatever it is we are looking at.
In those cases, we actually don’t see the underlying simplicity and so we end up having our expectations and motivations for testing (and development!) guided by that false view of complexity.
Why Does All This Suggest Interview Gamification?
The reason I think all this is relevant in particular to game thinking — including developing and playing games — is because with games we are provided a constrained world in which to explore. That constrained world immediately forces us to consider our expectations of what we are going to encounter. A game is no more and no less than any other application or service we could test. It just happens to have a more dynamic textual, visual, and sometimes auditory context.
That is what my game “Test Quest” tries to provide.
The point of it was to force testers to look at how their desire (to test the game during an interview) motivated certain behaviors. Those behaviors were informed by a shifting set of expectations. The expectations shifted because they learned more as they played the game. If they found the game to be buggy, because they explored it effectively as a tester, they began to have expectations about how “bad” the game was. But if they missed a lot of the bugs, or simply weren’t paying attention, their expectations were about how “good” — or at least how “not bad” — the game was.
So what I was looking for as testers engaged with that game was the framework of analysis they started to build up. That framework told me how they dealt with, or fell prey to, the various cognitive biases we are all capable of.
Beyond that, you are poking and prodding their mental processes, looking for a mixture of logic, imagination, and problem-solving. In that last case, I’m more often looking for general problem-solving rather than a specific competency. My focus in these cases is often on the reasoning because in these types of contexts, the reasoning itself often is the answer I’m seeking.
So What Was The Point of All This?
I’m hoping we, collectively, start to think better about how to challenge people who want to become part of our team, while also allowing them to challenge us. I also want to put some personality back into the interviews, by having these challenges be fun and engaging, and by allowing ourselves to show that we are people, not just bundles of skills.
The goal is to provide an execution context that uses multiple senses, and thus multiple styles of thinking, and to look for testers to parlay their (hopefully justifiable) hunches into testable hypotheses, perhaps negotiate a few false turns here and there, and then ultimately arrive at some (again, hopefully justified) answer. It might not be the answer you consider “right,” but what you’re looking at is how they arrived at their answer and whether they were able to consistently justify that answer and, perhaps most importantly, check if they were continually evaluating their thinking, refining it if need be.
I know I may sound like I’m tooting my own horn here a bit, as in “Hey, look at this great thing I created.” In fact, I think “Test Quest” in its current incarnation is largely crap. It does the job but probably not as well as it could. I want to take the next steps in this and create some interesting interview-challenge games. I particularly want to take the platform out of it and have these be as browser-based as possible.
I think there’s a lot of room to be creative in this area. And I know there are plenty of people much smarter than me out there who could take this idea further than I ever could. If you’ve done similar things, I would love to hear from you. If you have ideas about doing similar things, likewise: let’s hear it. And if you think everything I described here is flawed, I want to hear that too!