Modern Testing and Resilient Strategy

In two previous posts (on design pressure and sources of truth) I talked about the context that a modern test team often operates within. Here I’ll get more specific, particularly with regard to some strategic elements.

What I’ve talked about so far is the goal of just about all development projects, which is discovering exactly how a system is expected to contribute value to the business. I’ve at least skirted around the idea of using real-world examples to determine the minimum set of features that will allow this value to be delivered.

This goal is achieved via an ongoing conversation that progressively explores, elaborates, and expands a shared understanding of what needs to be delivered and why. It is critical that test teams add value by contributing to this process proactively, not just by reactively reading or executing something handed to them.

Resiliency and Pressure

There’s an aspect of resilience to this. Resilience is the intrinsic ability of some system (whether of people or technology) to adjust its functioning prior to, during, or following changes and disturbances, such that the system can sustain any required activities under both expected and unexpected conditions.

In previous posts I talked about the constant communication among team specialties as a way to put pressure on design such that options are generated and quick decisions can be made about how features are developed. Test teams must be resilient enough to support this kind of approach.

Practically speaking, this means test teams must not tie themselves to too many artifacts or to massive test suites, automated or otherwise, that are difficult to change based on the results of these conversations. Keeping in mind that, ideally, these conversations are iterative and incremental, test teams must maintain a minimal footprint in terms of artifacts like test cases and test tooling.

So a few key points tend to fall out of this, in terms of how I believe test teams need to think about acting:

  • You must ensure an easy correlation between behavior and tests so that both can evolve together. If we write behavior-as-tests, the correlation (traceability) is built in (see the sketch after this list).
  • Not all tests have to be written down. You can be condition-based, rather than case-based.
  • Manual Checks and Automated Checks can be the same artifact.
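
To make that first point concrete, here’s a minimal sketch, assuming a pytest-style runner, of behavior written directly as a test. The behavior statement and the check live in one artifact, so there is nothing separate to trace. The Cart class is a hypothetical stand-in for whatever your domain actually provides.

```python
# A behavior expressed directly as a test (pytest style). The test name and
# docstring state the behavior; the assertions are the check. Traceability
# between "what we said" and "what we verify" is built in because they are
# the same artifact. (Cart is a hypothetical domain class for illustration.)

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total_is_the_sum_of_its_item_prices():
    """Behavior: a cart's total is the sum of the prices of its items."""
    cart = Cart()
    cart.add("book", 10.00)
    cart.add("pen", 2.50)
    assert cart.total() == 12.50
```

Nothing here requires pytest specifically; the point is that the statement of behavior and the verification of behavior are one and the same thing.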

Regarding the first and second points, while I don’t currently take quite as hard a stance as James Bach, I do agree with the idea that test cases are not testing, a phrase that references an article of that name.

Regarding my third point, I use “checks” in the purely algorithmic action sense.

  • Testing is necessarily a search process that is curiosity-based and speculation-focused. This can’t be automated.
  • Checking is the process of applying algorithmic decision rules to specific observations in order to make evaluations. This can be automated. (A minimal sketch follows this list.)
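
As a minimal illustration of that distinction, a check is just a decision rule applied to an observation. The function below is a hypothetical sketch: it encodes one rule and renders a verdict, which is exactly the kind of thing that can be automated. Deciding that this rule matters in the first place, and noticing when it stops mattering, is testing, and that stays with humans.

```python
# A check: an algorithmic decision rule applied to a specific observation.
# It can render a verdict, but it cannot wonder whether the rule itself
# is the right one. That judgment is testing.

def check_response_time(observed_ms, threshold_ms=500):
    """Decision rule: a response is acceptable if it lands under the threshold."""
    return observed_ms < threshold_ms

assert check_response_time(320) is True
assert check_response_time(700) is False
```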

I’ve already talked about automation being a technique rather than testing so I won’t go more into that here. But I will say that when I mention manual and automated checks can be the “same artifact”, that does not necessarily equate to the idea of tools like Cucumber wherein scenario files are tied directly to executable code.
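
To give one hedged sketch of what I do mean: consider a condition table that a human can read and walk through manually, and that a small loop can also execute. There is no binding layer in the Cucumber sense; the artifact itself is both the manual check and the automated check. The discount_for function here is purely hypothetical.

```python
# One artifact, two uses. A human can read this table as manual check
# instructions; the loop below executes the same table as automated checks.
# Nothing is tied to a separate step-definition layer.

def discount_for(order_total):
    """Hypothetical rule under check: 10% off orders of 100 or more."""
    return 0.10 if order_total >= 100 else 0.0

# Condition-based checks: each row is (condition, input, expected result).
CONDITIONS = [
    ("order below the discount threshold", 99.99, 0.0),
    ("order exactly at the threshold", 100.00, 0.10),
    ("order above the threshold", 250.00, 0.10),
]

for description, order_total, expected in CONDITIONS:
    actual = discount_for(order_total)
    assert actual == expected, f"{description}: expected {expected}, got {actual}"
```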

Resilience Strategy

So let’s talk about the strategy here.

Let’s start with this: anything that can answer a question can be considered a specification. This means that development code — and ultimately production code — can be a representation of knowledge and a communication artifact. So can tests in the form of scenarios.

Going with everything I’ve said so far, this means a test team should want test artifacts to be written along with the code, in the same increments of time in which the code is developed. Keep in mind, of course, that much of what developers discover, they discover only when they are actually coding something. Also keep in mind that sometimes people can only make further decisions when they see some minimal set of working functionality. Operating at the “speed of decisions”, which means at the speed of coding that provides feedback, is a key aspect of a resilient strategy for test teams. If your team can do that, and if those tests can be directly automated when applicable, you have a chance of doing this effectively and efficiently.

Ah, but tests at what level?

My belief is that a test team should favor design decisions that make it easier for programmers to run more tests, since that tends to encourage them to run tests more often, and that mitigates some of the key risks associated with changing code. This means a focus on unit and integration tests. (Keep in mind the difference between integration and integrated, which I cover a bit in the testing intersections.) This also puts a focus on consumer-driven contracts in the context of emerging microservice-style architectures, as sketched below.
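
For that consumer-driven contract point, here’s a minimal hand-rolled sketch. This is not a real contract-testing library (tools like Pact formalize the idea), and the endpoint and field names are hypothetical. The consumer states the response shape it depends on, and the provider’s test suite verifies it still honors that statement.

```python
# A consumer-driven contract, hand-rolled for illustration. The consumer
# publishes the shape it relies on; the provider verifies against it.

# The consumer's declared expectation of GET /users/{id} (hypothetical).
CONSUMER_CONTRACT = {
    "required_fields": {"id": int, "name": str, "email": str},
}

def provider_get_user(user_id):
    """Hypothetical provider implementation under verification."""
    return {"id": user_id, "name": "Ada", "email": "ada@example.com"}

def test_provider_honors_consumer_contract():
    response = provider_get_user(42)
    for field, field_type in CONSUMER_CONTRACT["required_fields"].items():
        assert field in response, f"missing field the consumer relies on: {field}"
        assert isinstance(response[field], field_type)
```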

All of this is a means by which coding, as an activity in the service of development, becomes resilient.

Further, this means a test team has to consciously focus on pushing testing to the various places in the stack where it does the most good. What’s the most responsible time at which we can catch certain classes of problems?

One such time is clearly when everything is being coded and where different modules are being constructed with interfaces that will have to communicate with each other. Thus does the resilience of coding intersect with the resilience of tests that validate the code.

But there’s an earlier responsible time that we can also think about. That’s when we’re talking about how certain features add value. This is the time when we’re making decisions about whatever it is we think we’re going to build to make those features work. This is when we’re checking if we understand what we’re talking about so that we don’t waste time coding things we don’t need to.

Interleaved Activities

So then that leads to a core question test teams need to answer: How much are the “talking about it” and “coding it” phases interleaved?

The more interleaved they are, I would argue, the more resilient you are.

Your entire team — and here I mean everyone involved — can be iterative if you do small things and interleave activities. This is exactly how, in my mind, you get past the idea of a “QA team” and realize that quality emerges from the results of the actions of everyone on the team working toward a shared understanding. This is exactly how you democratize testing by making it an activity that occurs at various levels, whether that be coding or discussion.

And, ultimately, this goes right back to that minimized source of truth idea and the focus on testing as a design activity. If tests — or test thinking — drive design, I should be able to read a series of product-oriented, business-focused tests to understand the why and the what. If the tests are executable against the code, I should be able to drive them to understand the how.

This allows you to start mitigating the need for ancillary documentation that supports a project. No documentation can provide as detailed and up-to-date a description of the code as the code itself. Implementation code provides all the needed details, while test code acts as the description of the intent behind that production code.

Tests can be executable documentation that reflects what the code does when it is operating. TDD and BDD are currently the most common approaches to creating and maintaining this shared code base, shared in the sense of production code plus test code. This sharing is a key aspect of the resilience required.
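
Here’s a small, hedged sketch of what executable documentation can look like, using a given/when/then shape in plain test code (the Account class is hypothetical). Read top to bottom, it describes intent; run against the code, it verifies behavior.

```python
# Executable documentation: the test reads as a statement of intent and
# runs as a verification of behavior. (Account is a hypothetical class.)

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_a_withdrawal_beyond_the_balance_is_refused():
    # Given an account holding 50
    account = Account(balance=50)

    # When a withdrawal of 80 is attempted, then it is refused
    try:
        account.withdraw(80)
        assert False, "expected the withdrawal to be refused"
    except ValueError:
        pass

    # And the balance is untouched
    assert account.balance == 50
```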

Reflective and Resilient

With all of this being said, keep in mind the core strategy that I keep circling around: testing as a reflective design activity and as one part of an executable source of truth.

Once you’ve encoded the execution, you have a source of truth that can be executed regularly or on demand, ideally by anyone. This allows for another core element of this strategy: the democratization of testing.

But this gets into an interesting idea: how many tests do you need when the code itself is serving as its own specification? When you start having this discussion, it becomes easier to talk about “checks versus tests” and realize that testing is the human activity done via collaboration and communication, while checking is done via scripted and/or algorithmic activities.

When testing — as that human activity — is performed, it doesn’t create a series of testing artifacts. It creates a series of “checking artifacts.”

This kind of focus also gets into some considerations for automation. Specifically, I believe it means test teams should be focusing on the idea of a micro-framework: a framework just large enough to execute the particular checks it is asked to run. Since such frameworks are “micro”, they should be easier to maintain.
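
To ground that micro-framework idea, here’s a sketch of about the smallest thing I would still call a framework: a registry of checks, a runner, and a report. Everything beyond this (fixtures, reporters, parallelism) is opt-in weight, and leaving that weight out is exactly what keeps maintenance cheap.

```python
# A micro-framework sketch: just enough structure to register checks,
# run them, and report results. Small enough to read in one sitting,
# which is what makes it cheap to maintain and easy to change.

CHECKS = []

def check(fn):
    """Decorator that registers a function as a runnable check."""
    CHECKS.append(fn)
    return fn

def run_checks():
    failures = []
    for fn in CHECKS:
        try:
            fn()
            print(f"PASS {fn.__name__}")
        except AssertionError as error:
            failures.append((fn.__name__, error))
            print(f"FAIL {fn.__name__}: {error}")
    return failures

# Example usage (the check itself is hypothetical):
@check
def title_is_present():
    assert "Dialogs" == "Dialogs"

if __name__ == "__main__":
    raise SystemExit(1 if run_checks() else 0)
```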

You might also allow yourself more of a polyglot strategy, utilizing different languages to craft tooling that best supports testing by providing checks at an appropriate level of implementation.

Unpack the “Big Questions”

My thinking is evolving on this a bit, so I realize there’s a lot to unpack in what I’m covering here. I risk defocusing if I keep plodding along in this post. So I’ll start closing this post by framing the context that I’m engaging with. If there’s one thing I want you to take away from this, it’s to think about answers to a question like this: What does testing look like if we go with the idea of minimal documentation and development/production code as the most reliable specification?

Note that this question leaves open what “documentation” is and what a “specification” is. It’s a question that may seem simple at first but, again, there’s a lot to unpack in order to give it due consideration. These “Modern Testing” posts are one part of my attempt to do that.

One thing I think most of us could agree on is that in our modern software development contexts, the idea of managing scope isn’t about eliminating uncertainty by defining and locking down requirements as early as possible. The goal is rather a shift towards managing the uncertainty in a way that helps our teams collectively and progressively discover and deliver an effective solution that matches up with the underlying business goals.

This means that we want to reduce waste in order to reduce cost. What counts as waste in this case? At minimum: bug fixing, communication churn, and manual checking.

I believe that modern testing strives for a democratized view of testing our assumptions and the results of our decisions such that we enable fast and useful feedback and facilitate fail-fast, safe-to-fail changes.

This “fail-fast, safe-to-fail” experimentation model means that we want to drive the cost of bugs as close as possible to zero, because that means we can start reframing the cost-of-change curve as a cost-of-mistake curve. In other words, we do not put the emphasis on the cost of having to change. If we fail fast and discover quickly, that cost should be minimal. We instead put the emphasis on the temporal aspect of our work: how long the duration is between the time we introduce a problem (whether that be a bad assumption or faulty code) and the time we find that problem.

From Classicist to Modernist

I would argue that test teams framing themselves around these concepts look very different from the so-called “traditional” or “classic” test teams. What I want to do is encourage people not to agree with me, but rather to think with me on these subjects.

Testing is constantly in danger — often from its own practitioners — of being reduced to a clerical discipline. Quality Assurance, likewise, is then relegated to a bureaucratic discipline. This often happens in times of pressure, when the software industry as a whole undergoes rapid change, with testing and quality assurance practitioners struggling to keep up with that change.

Speaking as a person with a test and quality specialist focus, I sincerely and passionately believe that we have to unpack some of the bigger questions around our discipline, putting particular focus on those that challenge our assumptions and our ways of working.


