Exploration, Requirements, and Computation

Let’s use this post to take stock of what I’ve done so far in the interactive series and also to talk a bit about exploration as a core technique of testing, particularly around the idea of requirements.

I’ve already talked quite a bit about exploration in a very general sense, applied to some specific contexts; see my earlier posts for some (relatively) condensed treatments of those ideas.

I don’t want to simply rehash things here, and I do want to keep this relevant to the running examples I have with interactive exploration, but there are a couple of different issues at play that I feel it’s important for testers to realize.

Exploration and Sensitivities

Testing a game story like the one we’re creating in the interactive series — and indeed writing a game story so that it is easy to test consistently — is an art in itself. That’s why so many of my exploratory testing posts focus on game-based thinking. I feel this kind of thinking is more amenable to driving the points home, even though the thinking applies to any application or service, not just games.

As you can probably imagine, just based on what you’ve seen so far, checking implementation thoroughness can be a very laborious process. As in most testing, the presence of actual bugs in the sense of “this breaks the application” is not the only thing you want to consider. You also want to check whether the game story has been constructed with a consistent amount of depth. As just a few examples:

  • Are there descriptions for everything the player might look at?
  • If you’ve implemented special verbs (like “photographing”), are there appropriate reactions for all the different objects that can be used with the verb?
  • Do most objects in a story have specific responses to contextually relevant actions (like “photographing” or “buying”)? A sketch of what such responses can look like follows this list.
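
To make that last point concrete, here is a minimal sketch, assuming Inform 7 (the language of our source text); the room, objects, and response wording are my own illustrative stand-ins, not the actual series source:

    The Italian Gardens is a room. "Ornamental ponds glitter under the plane trees."

    The fountain is scenery in the Italian Gardens.

    The player carries a camera.

    Photographing is an action applying to one visible thing.

    Understand "photograph [something]" as photographing.

    Check photographing:
        if the player does not carry the camera, say "You have nothing to take a picture with." instead.

    [The default report: every object gets at least this much of a response.]
    Report photographing:
        say "You snap a photo of [the noun]."

    [A specific response: the kind of depth a tester is probing for.]
    Instead of photographing the fountain:
        say "The light through the spray makes this one worth keeping."

The depth question above is really whether most objects get something like that last rule, or whether nearly everything falls through to the generic report.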

You may have noticed in the source text I provided so far that each location gives the player a certain amount of textual description, and some objects have descriptions as well.

One thing you want to do is look for missing descriptions, poorly worded descriptions, and entirely inaccurate descriptions. You also want to look for items that are referred to but have no descriptions, or that can’t be referenced at all. If you’ve examined the source text from the previous posts, you’ve certainly seen this. Consider that the rooms list “nannies” and “tourists” and “flower beds” and “trees”. None of those can even be referenced. (While you were at it, you might have noticed that Black Lion Gate has a couple of misspellings: “bewteen thre trees”.)
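
As a hedged sketch of one way to close such a gap in Inform 7 (the room name comes from the series; the descriptions are my own invented wording):

    Black Lion Gate is a room.

    Some trees are scenery in Black Lion Gate. The description of the trees is "Plane trees, mostly, lining the walk."

    Some nannies are scenery in Black Lion Gate. The description of the nannies is "They keep a practiced, distracted watch over their charges."

With scenery declared like that, EXAMINE TREES produces an actual response rather than the parser insisting no such thing exists.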

Beyond this, you want to look for things that players will try. You are always exploring the question: what do players expect someone should be able to do? This is a question we ask about any application we test and, of course, there are constraints in terms of what the application will allow them to do in the first place. But notice how I worded that: “What do players expect someone should be able to do?”

While I’m describing all of this in the context of this particular game, consider how this informs a lot of our exploration in testing. The idea is to have sensitivity to that which is poorly clued (“how do I use this?”), poorly described (“what the heck is this?”), or poorly implemented (“this doesn’t work!” or “this is horrible to use”).

During exploration we engage our emotions, our intuitions, our biases, our prejudices, our prior experiences, past conversations, and shared understandings, and we use all of that as a way to investigate and experiment.

So does this mean we do all this without requirements? Or do all those things become part of the requirements? Let’s broaden out our discussion here slightly.

Exploration Interleaved with Requirements

There are a lot of discussions that come up about exploratory versus scripted testing and even more about “testing without requirements”.

The interesting thing is that both issues tend to swirl around what is written down or encoded in some fashion. “Scripted” tends to mean (although it does not have to) something that controls the testing by being an artifact that is consulted or executed to perform “checks.”

“Testing without requirements” is the implication that you literally have nothing to go off of in terms of testing the application. This is a poorly framed concept from the start: there are always requirements. They may be implicit rather than explicit; they may be a set of shared verbal agreements rather than entirely written down. But they are always there.

Consider the game we’ve been looking at. With the “photographing” challenge, I didn’t give you any sort of requirements document by any stretch of the imagination. I gave you a scenario, and then we ended up constructing some scripts (the “test” commands). And, in many ways, those scripts started to become our requirements. Had that been a live session, you would have seen me working with participants as a developer, business analyst, product manager, and tester all at the same time.
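
For a concrete sense of the shape those scripts take, here is what an Inform 7 test command can look like; the test name and command list are illustrative rather than the actual scripts we built:

    Test photo with "photograph trees / photograph nannies" in Black Lion Gate holding the camera.

Typing TEST PHOTO during play then replays those commands in that context, which is exactly the kind of encoded artifact that starts to function as a requirement.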

The fact is that, if you were exploring, you were using a combination of all of the ideas I mentioned at the close of the last section as well as the feedback loop you got as you encoded exploratory ideas as scripted tests. You were shifting between what testers commonly like to refer to as “checking” and “testing.”

In my experience, working at and consulting with many different companies, the challenge is getting testers to realize that there are polarities (“checking” and “testing” perhaps being one set) that we have to shift between. It’s not an either-or dichotomy; it’s not a case of either you test or you check. It’s an interleaved set of activities that, together, provide a strong basis for investigation and experimentation.

Maps, Artifacts, Sources of Truth

Exploration is, in part, realizing that you don’t always have a map at the start. Or at best, you have a hazy one with a lot of “here there be dragons” references on it. That map can be constructed as you go, which means you have to be willing to let requirements evolve as opposed to having them all on hand at the start. Some testers have a really hard time with this concept. There are some good things that testers have to keep in mind if they want to remain reasonably sane:

  • Don’t assume that requirements are known upfront.
  • Don’t assume that all requirements will be easy to grasp just because they are written down.
  • Don’t assume that requirements, once specified, will not change for the duration of the project.
  • Don’t assume that if changes happen, they will be minor.

I was discussing these ideas with someone recently, and they said:

“Requirements that are not documented are like the verbal contract, it is not worth the paper it is printed on. After all, aren’t those requirements the reason for the work?”

I would, and did, argue that the requirements are not the reason for the work. That confuses causality.

The reason for the work is that someone, or some group, had an idea and they want that idea translated into reality because they believe it provides value to customers, which in turn provides value to the business. The requirements that may be drafted around that idea are a cartographic construct: a representation of that desired reality, and one that is likely to evolve. Further, in modern software development, that evolution may occur over the course of a single day as part of continuous deployment activities.

There’s an interesting corollary here with the “written down” and the “evolving” as they apply to the game story idea we’re looking at in the interactive series and to the general applications that get tested.

In the case of our game, the design defines a virtual world. For now keep in mind that each aspect of design constrains the imagination of everyone downstream. You are effectively constructing a box that may be difficult to think outside of. The more artifacts you have in place and the more sources of truth, the more boxes you create.

I talked previously about being careful with artifact crutches and about reducing sources of truth.

As we know, computers and programming languages are very good at interpreting low-level, raw logical conditions (e.g., “if the player carries the camera”). Computers and programming languages are not capable of determining how challenging (or frustrating) a given puzzle is, how poorly worded a description is, or how enjoyable a game is to play or an application is to use.
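
As a small sketch of that low-level strength, assuming Inform 7, a rule like the following can mechanically flag every thing that lacks a description (the rule name and message wording are mine):

    When play begins (this is the audit descriptions rule):
        repeat with item running through things:
            if the description of item is empty:
                say "AUDIT: [item] has no description."

What no such rule can flag is the description that exists but is dull, misleading, or simply wrong; that judgment is where the testing, as opposed to the checking, comes in.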

This is why we interleave checking and testing as part of our exploration. This is the distinction between testing as an execution activity and testing as a design activity. That’s how I would rather frame things: a spectrum of testing activities, rather than worrying about “checking” vs “testing”. In all cases, exploration, and the appropriate encoding of information as a result of that, should be what testers are focusing on.

The (Computational) Value of Testers

In my closing post of 2016 on my future in testing, I specifically talked about how testers, as a whole, need to better distinguish their talents from those of other disciplines.

Let’s start with something Professor Jeannette Wing said:

“Computational thinking confronts the riddle of machine intelligence: What can humans do better than computers? and What can computers do better than humans? Most fundamentally it addresses the question: What is computable?”

When I read this, it made me realize I’m often asking a similar question: what can testers do better than other specialists?

A few other quotes from Wing (all from her article “Computational Thinking”):

“Computer science is the study of computation: what can be computed and how to compute it. Having to solve a particular problem, we might ask: How difficult is it to solve? and What’s the best way to solve it?”

This sounds remarkably similar to what I do in my role as a tester. The problem is how we produce good software that adds value. That’s a difficult problem to solve on many projects, and we’re always looking for better ways to solve it.

“Computational thinking involves solving problems, designing systems, and understanding human behavior. Computational thinking is reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation.”

Again, this is exactly what I do as part of my overall testing. Regarding simulation, as just one idea, I think back to my use of models, rules, and features. Reformulating one problem into another that we know how to solve is, I believe, a key aspect of testability. It’s about recovering a part of the past (“how we solved something before”) and leveraging it for the future, which means dealing with that project singularity.

“Computational thinking is using abstraction and decomposition when attacking a large complex task or designing a large complex system. It is separation of concerns. It is choosing an appropriate representation for a problem or modeling the relevant aspects of a problem to make it tractable. It is using invariants to describe a system’s behavior succinctly and declaratively. It is having the confidence we can safely use, modify, and influence a large complex system without understanding its every detail.”

That last part, “without understanding its every detail”, is a key part of what we just talked about regarding not necessarily having a full map, but with the goal of making such a map (“make it tractable”, “choosing an appropriate representation”).

Beyond that, however, I do think this is how modern testers incorporate what they are (or should be) very good at into the concerns that most development organizations have. Examples here would be treating code as the specification, driving design in that context, and providing a lucid approach.

Now, why did I spend all that time quoting things about computational thinking in the context of exploration? I did that because computational thinking is often thought of as “having answers”, as “having a plan”, as “having an encoded set of instructions that guides our work.” But …

“Computational thinking is using heuristic reasoning to discover a solution. It is planning, learning, and scheduling in the presence of uncertainty.”

I would argue that “heuristic reasoning” to discover a solution is exactly what exploration is all about. This operates at two levels in my game story example posts: you can explore the game itself but also the construction of the game. You do this by drawing upon what the game presents, what the source code presents, and what is incrementally discovered about adding features (such as the “photographing” action) to the game.
