Dan Ashby recently posted his views regarding
a model for test strategies and heuristics. I really like the level of effort put into it, particularly because it was clearly distilled not just from thought but from thought based on experience. Let’s talk about this.
I don’t want to talk about Dan’s model in detail. I’m not here to praise it or critique it. You should go read about it and decide for yourself. What I do want to talk about is the kind of healthy evangelism that this showcases; this is evangelism that is not falling into the
fundamentalist trap or the
technocrat trap.
So what does it do?
Focus on Positively Shaping Mental Models
As Dan says, he went from one model (the Heuristic Test Strategy Model) to this new one. But what’s interesting is not the rationale for the change but the fact of modeling itself.
What I asked Dan was how he got others to conceptualize this model. After all, there’s a lot going on in that diagram. And often the people you most want to reach will need help understanding how to encapsulate the details.
But before getting to that question and Dan’s response, let’s first talk a bit about models.
Modeling In General
Models are, by their very nature, simplifications. I think most people would agree that no model can include all of a discipline’s complexity or the nuance of human communication or even thought that is part of that discipline. Inevitably
some information gets left out and some of that information is, in fact, going to be
important information.
So to create a model, we have to make choices about what’s important enough to include, simplifying the world into a kind of “toy” version that can be easily understood and from which we can infer important facts and actions. That inference is important! We must make it possible to recover the important bits — the nuance — that necessarily gets left out.
Modeling as a Way to Conceptualize
So my question to Dan was essentially this: if you had to distill this as a way to frame an understanding of testing, what are the important facts and actions? What is the distillation of the model? The post explains the moving parts more than adequately. But models are as much about the ability to reason and act within them as they are about the points of data that make up the model.
So what I’m really asking about is how the model can be internalized as a guide to action. This is particularly relevant if you happen to agree that testing is a democratized activity but that there is a specialist aspect to it as well. By this I mean that all humans test, no matter what. It’s hard-wired into our cognitive abilities. But some of us harness those cognitive abilities and learn to apply them in a way that goes beyond just the “average.”
Modeling Based on Experience
Many models start out just like Dan’s, with a series of hunches often backed up by some experience. These kinds of models tend to form in our minds because we, as practitioners, wonder what matters most in the area we’re dealing with or working in. Then we try to figure out which of those variables we should count and, finally, how much weight to give each of them.
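As a toy illustration of that last step, here’s a minimal sketch in Python. The factors, weights, and values are all invented for illustration; they are not taken from Dan’s model or anyone else’s:

```python
# A toy "what matters most" model: pick a few factors we believe matter,
# give each a weight reflecting how much we think it matters, and combine
# them into one score. Every factor, weight, and value here is hypothetical.

FACTOR_WEIGHTS = {
    "code_churn": 0.5,       # how often this area changes
    "defect_history": 0.3,   # how often it has broken before
    "user_visibility": 0.2,  # how directly users see failures here
}

def risk_score(factors: dict[str, float]) -> float:
    """Weighted sum of factor values, each normalized to the 0..1 range."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

# Usage: score one area of a product to decide where testing attention goes.
checkout = {"code_churn": 0.9, "defect_history": 0.6, "user_visibility": 1.0}
print(f"checkout risk: {risk_score(checkout):.2f}")  # -> checkout risk: 0.83
```

The point isn’t the arithmetic; it’s that writing the model down forces the choice of variables, and the weight given to each, out into the open.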
Dan indicated that, as part of his rationale, he wanted to “explain how context affects our test execution” but that, as part of that explanatory model, he “needed a deeper lens into how that pans out with different test approaches” with a focus on how those approaches “expanded into the continuous testing layers too.”
And that’s great! That all makes sense. As an explanation, it justifies the creation of the model. This is as opposed to when models are constructed such that they define their own reality and then use that reality to justify the results of the model.
I would argue that what Dan has done here is the start of an explanation model and even a grouping strategy. I talked about that quite a bit when I talked about
testing and model building.
My Framing of the Model
So, for me, what I did with Dan’s model is look at it this way (using terminology that I like and promote):
Testing is a design activity, an execution activity, and a framing activity. Testing (as discipline) puts pressure on design at various levels of abstraction: written requirements, conversations, code, applications. This provides for a shared understanding of what quality means. Testing (as activity) is executed by humans or tools which exercise implemented solutions to determine the value of the experience provided.
Testing frames all of this into a narrative that we use to tell the story of our company: how we work together to understand what quality means for various people at various times and in various situations such that we provide value-adding experiences for our customers.
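To make the “execution activity” part concrete, here’s a minimal sketch in Python. The apply_discount function, its rule, and the expected values are hypothetical illustrations, not anything from Dan’s post or from my framing:

```python
# A check that a tool can execute: it exercises an implemented solution
# and asserts an expected outcome. The function and its rule are hypothetical.

def apply_discount(price: float, loyalty_years: int) -> float:
    """Hypothetical rule: 5% off per loyalty year, capped at 25%."""
    discount = min(0.05 * loyalty_years, 0.25)
    return round(price * (1 - discount), 2)

def test_discount_is_capped():
    # A tool can verify that the cap is honored...
    assert apply_discount(100.0, 10) == 75.0
    # ...but whether a 25% cap is the right experience for a ten-year
    # customer is a judgment for humans, not for assertions.

if __name__ == "__main__":
    test_discount_is_capped()
    print("check passed")
```

The assertion is the part a tool can execute; determining the value of the experience, and telling the story about it, stays with people.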
That framing, to me, is what model-building needs to be: a way that we, as testers in a specialist discipline, can start to get past debates of
“checking vs testing” or whether
“manual testing is on its deathbed” and start having discussions about how to frame testing as a multi-faceted discipline.
This is a discipline that has a long history in just about every field of endeavor out there, which I talked about in
Testing is Like. At the end of that post, I said:
The very notion of testing undergirds disciplines like physics, geology, historiography, paleobiology, chemistry, social science, archaeology, linguistics, and so on and so forth. Our discipline has an incredibly rich pedigree that can inform us. Let’s start acting like it.
The Testing Model Gets Lost
But testing, when shoved into our technology context — particularly that of building software — has become something other than what it means in many other disciplines. It’s been marginalized, distorted, or conflated with other disciplines to a disturbing degree. It’s been questioned as something that only machines
should do or only humans
can do. Nomenclature wars have started over this, such as whether we talk about “manual programmers.” And if we don’t, why do we talk about “manual testers”?
Yeah, great. But none of that is the point!
Here’s one point. Testing matters — or should matter — to people outside of specialist testing roles because all humans do it. Not all humans specialize in it, however.
Here’s another point. Developers — and not testers — have driven most of the innovations in testing over the last decade, from ideas, to discussions, to tooling.
Testers Need To Choose Models…
That’s why I want us, as testers, to give people the vocabulary of our discipline, the nuance behind the vocabulary, and then show how that vocabulary can be framed in discussions where it doesn’t sound defensive or in denial of a reality that others perceive, such as whether “manual testing” does or does not exist.
But I also want to make sure that this vocabulary is consistent with the discipline as it is understood in those vast other arenas in which it plays out; arenas that, just like software testing, are rooted in investigation, experimentation, exploration, and understanding the biases and categories of error that all of us humans are subject to when we engage in complex activities.
… And Help Others Adopt Those Models
Modeling not just our practices but how those practices can be conceptualized and articulated is, I believe, key to this endeavor. This modeling requires us, however, to choose our battles well. All of us carry thousands of models in our heads, from the complex to the mundane. Those models tell us what to expect in various situations, guide our decisions, and allow us to understand responses.
So when we’re asking others — including those in our discipline — to understand what we’re talking about or why we’re making the distinctions we are, we’re asking them to adopt yet another model. As such, it’s critical that the models we ask people to adopt are able to fit in with their thinking and their experiences to a certain degree. And when our models challenge their own, we need concise ways to articulate why those challenges are not only healthy but productive.
I don’t think the link between testing and the problems customers face is as strong as we think. For both software and hardware, customers can tolerate a lot of problems.
Consider a low-end product and a high-end one. There are many customers who are willing to pay for the low-end product. Similarly, if the high-end product has some problems, it’s still better than the low-end one.
Many problems are handled by customer support. A 20-member dev team has a 5-member support team, which is busy all year. They can take care of many problems.
The problems are fewer for enterprise software (banking, telecom).
Defects are the problems that must be fixed, e.g., a software crash. Other problems can always be fixed later.
If you have good design, good development, and a successful business model, testing is dispensable. As a result, weaker testing approaches can appear to be successful. They are successful, since the weakness doesn’t affect the business. Or they are not really relevant: customers are impacted, but they are tolerant of the issues. If there are big problems, you can patch them.
As a result, good testing may remain marginal. It may be better to promote theories that you believe in, rather than trying to show one is better than the other.
Examples:
A fintech product won’t work on rooted phones. Some customers are affected. We didn’t think about it while testing and don’t show any message until the app is installed. Sorry. A few phones are not rooted, yet the software still complains.
Stock charts have some inaccuracies. We didn’t find them while testing; we didn’t think about that issue. It doesn’t really matter, since it isn’t the main functionality.
Firefox consumes more memory on Win 10. Buy more memory.
I agree that there is tolerance for issues among people. That’s really orthogonal to the point about modeling, however. For example, you say “If you have good design, good dev and a successful business model, testing is dispensable.” On the one hand, someone could argue that’s incorrect. Testing is being done as part of that design and that development. It may not be by specialist testers. But it is testing of some sort. And it is being done.
But I completely agree: this can be seen as a weaker test approach. Yet is it? If it works, how weak is it? And how do we know it works? By customer complaints and/or feedback. And if, as you say, tolerances can be high (and I would agree), that barometer may not mean much, or at least not as much as people think.
These are really good points, and that’s why I think it’s even more important to be able to look at how we model concepts. For example, consider, as Dan does in his post, how those approaches fit in a continuous context where, as you said, “if there are big problems, you can patch it.” So what does testing look like in a continuous context? If the changes are small enough at each point, that presumably means they are easy enough to test, even by non-specialists. And if that’s the case, can’t we just get by with that minimum of testing?
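As one way to picture that minimum, here’s a minimal sketch in Python; the function, its formatting rule, and the values are all hypothetical:

```python
# In a continuous context, each small change can carry an equally small
# check that runs on every commit. Everything here is hypothetical.

def format_balance(cents: int) -> str:
    """The small change under test: display balances with two decimals."""
    return f"${cents / 100:.2f}"

def test_format_balance():
    assert format_balance(1999) == "$19.99"
    assert format_balance(0) == "$0.00"

if __name__ == "__main__":
    test_format_balance()
    print("ok")
```

A check this small is easy for a non-specialist to write, which is exactly what makes the question above worth asking.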
So all of these points speak even more to the need for modeling testing.