I seem to be on a rant lately about interview techniques for getting good testers. Here I’m going to back up a little further and explain what I do in order to find effective and efficient testers.
I recently went through some interviews with Amazon and, while this may sound a little arrogant, I was terribly disappointed in their interview process. Great, fantastic company; very substandard interview process. I say that because it was so … by the numbers. It was so “traditional” and predictable. I’m not sure exactly what I expected, but I guess I expected more creativity in the process.
But then again: who am I to talk? What do I do that’s so different or creative?
Fair question. So first I’ll talk about my own thoughts regarding some core high-level skills of testers. It is these skills, I believe, that you have to ferret out, because they are the hardest to train for. After that, I’ll talk about a particular interview challenge I give to candidates that I’ve found to be very effective in surfacing the skills I really want.
The Tester, Sub Specie Aeternitatis
Above and beyond all, what is of paramount importance for me is that testers are motivated, engaged, curious, and inspired to solve problems, draw conclusions, and generate new knowledge. Everything else can be trained for during employment. But all the stuff I just listed can be hard to “train” on. Instead, it should be what the person brings with them — at least to some extent.
Given some time in the industry, the most effective testers should have a precise, expressive vocabulary for their philosophy of testing. This means testers should have the ability to speak in precise non-technical and technical terms about their testing, as well as to articulate a story about their thought processes. They should be able to demonstrate that to you during an interview challenge. Even if the candidate is relatively “junior” in terms of time within the industry, candidates who have an instinct for testing will have an intuition for balancing precision with expressiveness.
Testers have to be good at moving, step by step, to a conclusion, and at moving through a series of conclusions to a specific result. Testers also have to be able to show how different conclusions at any step might change the result. This is best demonstrated through interview exercises that are based on exploration.
There are a few thinking fallacies that we human beings all fall prey to and these are things I like to test for in testers. Let’s consider some specifics.
- People tend to make naive observations of the past and treat them as definitive and representative of the future.
- People make the error of confirmation: we focus on pre-selected segments of the seen and generalize to the unseen.
- People distort silent evidence: what we see is not all there is, but sometimes we treat it as if it were.
- People commit the narrative fallacy: we fool ourselves with stories that cater to a distinct pattern.
- People commit the tunnel fallacy: we focus on a few well-defined sources of uncertainty or on overly specific attributes.
All of this is what I like to check for, at least to some degree, during the interview process. Rather than do code interviews for testers (which do have their place, but not usually in the simplistic way they are done), I try to provide exploration exercises that allow me to check for a series of traits showcasing thinking skills and problem-solving skills, while also letting me see whether the candidate can articulate at different technical levels.
Interview Challenge: Test Quest
I’ve done a lot of independent game testing for companies (see Testing Games is Hard!) and I’ve often found that testers who had a good mentality for testing games made some of the best testers in other venues as well. So I wrote a game called Test Quest, and I provide it to candidates as a means of looking for the traits I mentioned above. The game requires DOSBox, which is available for multiple platforms.
The game is a one-room game, so the candidate won’t have to spend a long time exploring the problem space. The game has a character that can be moved around via mouse or keyboard.
See the little guy there? That’s the character you control. The game also presents a typing interface for commands.
The game does have an interface to control the game, via menus as well:
The challenge is presented to the candidate as such:
“We’re competing in the casual gaming space, which is quite demanding with little room for bad games. In order to meet some upcoming gaming competitions for ‘short challenge games’, meaning they can be solved in about five minutes or so, we want to release our game ‘Test Quest’. We need to ferret out the bugs and also determine whether the game is challenging enough without being too challenging. I’m totally new to testing and you’re the experienced person, so I’m turning to you for help. I’d like to pair test with you.”
The candidate is given a set of requirements for the game, which include some design notes, as well as a game manual that users of the game will be able to download. With all that, they’re off to the races. These materials are kept short to make sure the interview session does not just focus on them. The candidate is most certainly allowed to ask questions about any of the material provided.
One thing I should note is that the candidate is given a walkthrough for the game as part of the requirements. That’s a key point. I’m not expecting the candidate to be a game player and I’m certainly not expecting them to solve puzzles in a game. Further, this game is not based on reflexes or any major hand-eye coordination nor is there a time limit, save that of the interview time.
What I am expecting the candidate to do is explore the game with a tester mindset. And by the end of the interview period I ask them one question:
“So do we release the game or not?”
What I observe them do over the course of exploring and testing the game helps me determine what evidence they used to come up with their answer. I should also note that I don’t just sit quietly watching the candidate. I encourage them to “think out loud” and I play the part of a fellow tester, developer, or business analyst, depending upon whom the candidate wants input from. I keep a running discussion going during the interview.
I also try to have a little fun with the candidate, joking a bit as we explore and test the game together. After all, if we have to eventually work together, I want to make sure that we get a good feel for each other’s personalities.
I expect the candidate to explore all of the provided material — the documentation, the requirements, the game — and simply show me how they interact with a problem space — particularly one they’ve never seen before — and eventually come to make decisions about that space based upon the testing they perform under time and resource constraints.
I also have the source code of the game available if candidates would like to take a look at it. It’s written in a Lisp-like language, so as to minimize the chances that candidates familiar with more traditional languages will have any sort of advantage. Here’s an example of what the code looks like:
(instance RoomScript of Script
  (method (handleEvent pEvent)
    Print("There's absolutely nothing important on the table.")
  )
)
Even without knowing Lisp, someone could probably get some idea of what is likely happening there.
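To give a flavor of how a script like that can hide a subtle issue, here’s a hypothetical sketch in the same Lisp-like style. This is not code from Test Quest itself; the instance name, the gDoorLocked variable, and the bug are all invented for illustration:

(instance DoorScript of Script
  (method (handleEvent pEvent)
    ; Hypothetical: respond to the player examining the door
    (if (== gDoorLocked TRUE)
      Print("The door is locked.")
    else
      ; Sketch of a bug: the unlocked branch never changes room state,
      ; so "opening" the door repeatedly has no effect on the game.
      Print("The door swings open.")
    )
  )
)

A candidate reading something like this doesn’t need to know the language; spotting that one branch changes nothing is exactly the kind of reasoning the exercise is after.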
There’s quite a bit more to say on this but I can’t give away too much information about the game because it’s still used at a variety of companies. Eventually, however, I’ll be writing a new type of game and I’ll be able to reveal more about this one, including making it available for anyone who wants it.
What Does This Tell Me?
I trust that even without a deep dive into the game itself, you can see why this might be helpful as an exercise. As you can probably imagine, the game is designed with some bugs: some obvious, some not so obvious. (The third screenshot above shows one possible bug. Can you spot it?) When and if the candidate stumbles across those, I ask them what kind of bug report they would write. At some points I’ll ask the candidate how they would write test cases or a test approach for the game.
The game also has a few elements that are designed to be a little frustrating. How someone deals with that matters to me. The game also has a few “traps” where a candidate could end up spending a whole lot of time, to the exclusion of time spent elsewhere. This helps me see how the candidate prioritizes their activities when they know time is limited. Given the initial scenario I start them off with, it also helps me see how they treat areas of an application that they consider risky.
Clearly the candidate is getting a lot of input thrown at them: there’s me talking with them, there are the documents they’ve been provided, and there’s the game itself. So what I want to look for in all of this are some traits that I believe good testers must have:
- Be thorough in reading.
- Be patient in listening.
- Be careful in questioning.
- Be thoughtful in evaluating.
- Be precise in checking responses.
Further aspects that I’m looking for:
- Showing precision of thought.
- Being articulate and persuasive.
- Identifying the salient points of an issue quickly.
- Having an evidential bias that demands proof.
- Separating emotion from analysis and decisions.
That last point is important to me. I want to see if candidates avoid emotional ties to a particular position. For example, some people get caught up in the “fun” of playing a game. This gamer position then takes precedence over a tester position. Some people get frustrated with the requirements or the game interface itself and begin to feel that the game is “not testable” or “should not be tested” given its current state.
Also important to me is seeing whether a candidate can accept a certain amount of ambiguity and use it as flexibility for creative and innovative approaches to testing.
What I ultimately want to see is that I’m dealing with a candidate that …
- … manages with data
- … supports with facts
- … convinces with evidence
- … makes appropriate operational distinctions
- … does not hide complexity behind a facade of simplicity
- … does not mask simplicity by a falsely assumed complexity
I want to see candidates who can parse a question (“should this game be released?”) down into its component parts, carving away the emotion and the confusion to leave only the thing they are dealing with. I want to see candidates who don’t get distracted by the problem before they understand its nature and which aspects of it need to be decided now versus which can safely be deferred until later. I want to see candidates who are comfortable with the “unfairness” of opacity on some projects and use that uncertainty to think about the allocation of risk.
Testers need the ability to ask the right questions, investigate the right issues, put together the pieces of the puzzle, and draw the right conclusions. They are thus curious, experimental, and analytical. Testers have a mode where they experience the joy of exploration (and discovery) coupled with the pleasure of the hunt.
Test Quest is my attempt to seek out those abilities in candidates.