My Future in Testing

I had two major series of thematic posts that I tried out this year: Modern Testing and Indefinito. The former was squarely focused on the tactical and the latter more on the strategic and perhaps even philosophical. In some ways these provided my focus as I find myself on the doorstep of 2017.

Incidentally, I should probably explain that the term “Indefinito” is being used in the sense of “indeterminate” or “unresolved.” I chose this in a similar way to how Nassim Nicholas Taleb used the term “Incerto” for his series of books. I’ll presume here that it goes without saying that I’m in no way comparing myself to Taleb, certainly not in terms of intellectual rigor nor even in writing style.

A Look Back at the Thematic

Having drawn this comparison, particularly one that may make me seem a bit deluded at best, I should probably try to explain a bit. After reading Taleb’s various books, a few points stuck with me.

  • “You are alive in inverse proportion to the density of clichés in your writing.”
  • “You exist in full if and only if your conversation (or writings) cannot be easily reconstructed with clips from other conversations.”

From the standpoint of what I tend to read about testing, I’m hoping that I avoided the standard clichés. I also have tried to be unique enough that people would not necessarily feel that I was just reconstructing the work of others in my own words. I also somewhat challenged my readers, and myself, to think about this:

  • “Newspaper readers exposed to real prose are like deaf persons at a Puccini opera: they may like a thing or two while wondering, ‘what’s the point?’”

I’m okay with people reading some of what I post and asking that very question: “What’s the point?” It goes along with the previous two quotes from Taleb. If you have to ask what the point is, it means I got you thinking about it and I certainly didn’t provide too much that was cliché or nothing more than a simple reconstruction.

In the book Admirable Evasions, I read this:

“The search for the elementary particles of human existence does not so much obey the biblical injunction seek, and ye shall find as follow the less glorious procrustean epistemological principle find, and ye shall seek. Once a metaphysically impatient person has found his guiding explanatory principle, he manages to explain everything by it: and the more he explains, the more intoxicatingly explanatory his principle appears to him.”

To a certain extent, I have found that in the writings of many of my fellow testers. A simple case in point is the oft-stated “agile tester.” Here someone has found “agile” as their guiding explanatory principle and they attempt to explain everything by it or in reference to it.

Remember that good old Albert Einstein said that any explanation should be as simple as possible, but no simpler. The problem is that, being human beings, sometimes our desire for the illusion of understanding is greater than the desire for understanding itself. Coupled with this is the idea that the desire for explanation tends to outrun the recognition of explanatory power.

Again, I think this pretty much sums up most of what I read about testing. And, to be fair, this may apply to my own work as well. I hope not, but I’d be lying if I said there weren’t plenty of biases that prevent me from seeing my own work that way.

As Francis Bacon said, “Reading makes a full man, conversation a ready man, and writing an exact man.” I’ve been trying to become more exact in my ability to articulate testing as a discipline but without necessarily using “exactness” to convey a false understanding. In The Bed of Procrustes, Taleb says

“We humans, facing limits of knowledge, and things we do not observe, the unseen and the unknown, resolve the tension by squeezing life and the world into crisp commoditized ideas, reductive categories, specific vocabularies, and prepackaged narratives.”

Again, I see this in spades with discussions of testing. But, also again and also being fair, it’s highly likely I’ve done the exact same thing. What I’m trying to do is (1) recognize that and (2) see beyond it. From the same book, Taleb says:

“Because our minds need to reduce information, we are more likely to squeeze a phenomenon into the Procrustean bed of a crisp and known category (amputating the unknown), rather than suspend categorization, and make it tangible. Thanks to our detections of false patterns, along with real ones, what is random will appear less random and more certain — our overactive brains are more likely to impose the wrong, simplistic narrative than no narrative at all. Our detection of false patterns is growing faster and faster as a side effect of modernity and the information age: there is this mismatch between the messy randomness of the information-rich current world, with its complex interactions, and our intuitions of events, derived in a simpler ancestral habitat.”

I believe this is very much the case with testing as a discipline as it is applied in an increasingly complex technical context where the notion of what it means “to test” grows blurry at the various boundaries where other disciplines kick in. A key part of handling this is via the narratives that we — as specialist testers — create.

I care very deeply about words and I very much believe in the power of language to convey ideas. As such I’m very cognizant that trite, oft-repeated phrases or concepts can, like slogans, become shopworn, especially those that lack analytical, historical, or descriptive power. I do believe this is exactly the position I see most testers in when they try to describe how what they do is different from what other people do, even though “everyone tests.”

A Look Forward – Provide the Narrative

In 2017 I want to look at bringing together the ideas from those thematic threads I mentioned before: Modern Testing, with its somewhat technical focus, and Indefinito, which focused on the thinking aspects and the correlations about how testing “is like” other disciplines. I want to start exploring what “testing” means in the modern era of development.

I want to do so not by joining the ranks of the semanticists, even when I agree with them. Yes, “testing” and “checking” are two different things. However, this rarely, if ever, provides any insight beyond a bunch of testers debating with each other.

But I also want to get the industry as a whole to question the notion of why “tester” and “developer” keep getting conflated. I want to understand where the technocrat focus is coming from. Not because I necessarily want to stop it, per se, but rather because I want to harness it while refocusing it.

A Look Back – The Technical Side

As just one example, I said the following in The Integration Pact:

“The first step to improving testability in an application is to establish a natural feedback loop between application code and test code, using signals from testing to improve application code.”

So here’s an example of what I would like to consider: Can we find an alternative for testing the interaction between components without resorting to “heavier” integrated testing? Well, what about focusing on the compatibility among interfaces (at boundaries) before focusing on integrated testing?
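To make that concrete, here is a minimal sketch in Python of what a boundary-level compatibility check might look like: before any integrated test runs, verify that the payload a provider emits matches the shape the consumer expects. The field names (`order_id`, `total`) are hypothetical, purely for illustration.

```python
# Hypothetical consumer-side expectation: the shape of the payload the
# consumer assumes it will receive at the boundary.
EXPECTED_FIELDS = {"order_id": int, "total": float}

def compatible(payload: dict) -> bool:
    """Check a provider's payload against the consumer's expected shape.

    This is a cheap boundary check: every expected field must be present
    and have the expected type. No running system is required.
    """
    return all(
        name in payload and isinstance(payload[name], typ)
        for name, typ in EXPECTED_FIELDS.items()
    )

# A conforming payload passes; a drifted one fails fast, long before
# any "heavier" integrated test would have caught the mismatch.
assert compatible({"order_id": 7, "total": 19.99})
assert not compatible({"order_id": "7"})  # wrong type, missing field
```

The point of a sketch like this is that incompatibility at the boundary surfaces as a fast, isolated failure rather than as a confusing breakage deep inside an integrated test run.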

Great! But developers have already long thought of this. Yet there’s still a lot of room for a specialist tester to deal with this concept at a different level of abstraction than developers do.

As another example, I would like to consider this contention: strong integration tests, consisting of collaboration tests (clients using test doubles in place of collaborating services) and contract tests (showing that service implementations correctly behave the way clients expect) can provide the same level of confidence as integrated tests at a lower total cost of maintenance.

Is that true? If so, what does it mean? How do testers deal with that contention? Do they simply relegate it to developers? Do they figure out how to converse with developers about it? Do they provide their own tooling to support the ideas?
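The contention above can be sketched in code. Here is a minimal Python illustration of the two halves, using a hypothetical `PriceCalculator` client and `RateService` collaborator (none of these names come from any real system): a collaboration test exercises the client against a test double, and a contract test checks that a real implementation honors what clients assume.

```python
from unittest.mock import Mock

class PriceCalculator:
    """Client that collaborates with a rate service (hypothetical example)."""
    def __init__(self, rate_service):
        self.rate_service = rate_service

    def total(self, amount):
        # The client assumes tax_rate(region) returns a non-negative float.
        return amount * (1 + self.rate_service.tax_rate("US"))

# Collaboration test: a test double stands in for the real service, so we
# verify only the client's logic and how it uses its collaborator.
def test_total_applies_tax_rate():
    double = Mock()
    double.tax_rate.return_value = 0.25
    calc = PriceCalculator(double)
    assert calc.total(100) == 125.0
    double.tax_rate.assert_called_once_with("US")

# Contract test: any real implementation must behave the way clients
# expect, here returning a non-negative float for a known region.
class RealRateService:
    def tax_rate(self, region):
        return {"US": 0.25}.get(region, 0.0)

def test_rate_service_honors_contract():
    rate = RealRateService().tax_rate("US")
    assert isinstance(rate, float) and rate >= 0
```

The claim being tested, then, is that if the collaboration tests pass (the client uses the service correctly) and the contract tests pass (the service behaves as the client assumes), you get confidence comparable to an integrated test without standing up the whole system.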

A Look Back – The Human Side

Okay, so there’s a challenge. Everyone tests. There is the idea of the democratization of testing, which I happen to believe in. But that can lead people to feel that “anyone” can do testing and thus everyone should. As specialist testers we need to understand how to not just deal with this polarity, but embrace it.

I also want testers to realize that in many cases, companies, hiring managers, and developers are thinking this:

“I find you valuable because … you’re doing a job I don’t want to do.”

As opposed to this:

“I find you valuable because … you’re doing a job I can’t do, at least not without putting in the effort you did to learn the discipline.”

That’s a problem. And no matter how popular you think you are or how well you think your company “gets it” when it comes to testing, trust me: this idea is endemic in the industry as a whole and latent within many, many practitioners of supporting disciplines in our industry.

William McNeill, in a great article called “Mythistory, or Truth, Myth, History, and Historians,” said “[our] practice has been better than [our] epistemology.” The idea here being that the methods were more sophisticated than the practitioners’ awareness of them. He was referring to historians, but I think the same could be argued of a large swath of the software development industry, particularly testers.

This is a blind spot many testers have. And, speaking of that, the book Blind Spot: Hidden Biases of Good People talks about the idea of “mindbugs.” These are basically ingrained habits of thought that lead to errors in how we perceive, remember, reason and, ultimately, make decisions. A very interesting area of study — and one I believe specialist testing has to focus on — is the extent to which our minds have designed wonderfully efficient and accurate methods that still manage to fail us miserably when we put them to use in different contexts. This is one of the key things that explains why projects still “fail” no matter whether we are waterfall, agile, post-agile, etc.

A key thing that testers have to understand is that judgment and decision making have a lot in common with perception. And, like perception, they are subject to illusions and those illusions give rise to various biases, prejudices, and fallacies.

A Look Forward – The Context of Testing

Testing, like development, deals with facts. Testing deals with current facts but also historical facts and it does so, I would argue, more so than other disciplines. The facts we use to reason and make decisions on our projects are like a skein of time, stretching from the past up to our current work. And there are vibrations, for lack of a better term, in the facts around us.

The book The Half-Life of Facts: Why Everything We Know Has an Expiration Date introduced me to an interesting concept: the mesofact. Mesofacts are facts that change at the meso-, or middle, timescale. This leads to what’s called the “invisible present.” I actually think this resonates quite a bit with the project singularity I wrote about, which is essentially how I see the invisible present.

At the simplest level, facts are how we organize and interpret our knowledge of the world.

Knowing how facts change, and thus how knowledge spreads, or how we adapt to new ideas as that knowledge spreads, is all important on our projects. If we can understand the underlying order and patterns of how facts change, we can better handle all of the uncertainty that we do have to deal with. Only by knowing the pattern of our knowledge’s evolution can we be better prepared for its change.

Due to a great deal of understanding that we have regarding cognitive biases, much of what each of us knows, even as it changes, has a clear structure.

The creation of facts operates according to certain principles that you may even call scientific. So too does the spread of knowledge. How each of us hears new information or how error gets dispelled, adheres to certain rules and dynamics of interaction between human beings, particularly as many of our bits of information get encoded in a technical context: source code, for example, or automated tests.

This is, I believe, one of the key areas of testing as it continues to evolve in our industry. In fact, I honestly think that “tester” and “testing” — as a specialist concept — probably need to go away in terms of how they are described. Everyone tests and everyone should. They can scarcely do otherwise if they are thinking human beings. But not everyone has the capacity, desire, or ability to act as a historian, a knowledge engineer, and a technologist. That, however, is what I do believe the future of testing is. At the very least, it’s what I want my future in testing to look like even as I continue to put a healthy focus on the technical aspects. (If you look at my GitHub repositories, you’ll see I’m building up a healthy ecosystem of tools that support testing!)

My ability to explore these ideas in 2017 is going to hinge on my ability to find a supportive environment in which I can formulate and utilize ideas day-to-day but also interleave the ideas from those thematic threads of Modern Testing and the Indefinito.


This article was written by Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.
