In my previous post on modern testing and resilience, I indicated that testing and quality assurance, as disciplines, spend a lot of their time in danger from their own practitioners. This is most often a problem when the disciplines are under pressure to change. Here I’ll focus on that a bit, with the understanding that these are purely my opinions, even if they seem stated as fact.
First, when do the disciplines feel pressure to change? Well, consider that quality assurance and testing operate in the context of a wider technical discipline, that of the development of applications, products or services.
The Morphing Context
In that wider context, for many years now we’ve seen the notion of what’s come to be called “shift left”, the idea of pushing testing further up (or down, depending on your viewpoint) the overall development stack of activities. Approaches like TDD and BDD have been used to drive this shifting. Do note that these approaches were established by and receive most attention from developers. Testers, as a whole, have had very little input into the direction their discipline goes.
The focus of TDD and BDD put pressure on the ideas of testing and where it “belonged.” These approaches were a response to the wider demand that more software be built more quickly, but without losing “quality,” whatever “quality” means for us at a given point in time. And that’s important, because quality is a shifting viewpoint and an emergent aspect. (Or, quality is as quality does, as I said a few years ago.)
Consider other recent pressures in the industry as a whole: the so-called “Big Data” movement, focused on data science, predictive analytics, and even machine learning. Consider the continuing focus on mobile devices and the platform fragmentation this has led to in the industry. Consider the continued and refined focus on APIs as microservice architectures and containerized environments become more prevalent. Consider the shift of web services and the “traditional” Service Oriented Architecture as those adapt to a world based on cloud computing.
The point is that you have a constantly shifting landscape. It is this landscape that developers have to live in all the time. So do testers. And since I believe quality assurance is something that multiple teams do, rather than just one team, this means quality assurance itself must also shift. Not in terms of what it means necessarily, but rather in terms of how the skills and attributes of multiple teams are harnessed to generate activities that allow quality to emerge and shift.
There are two ways that testers, speaking generically, can deal with this problem: adaptation or mitigation. In other words, either find a way to cope with the coming changes or find ways to prevent those changes from coming or being as severe as they could be.
In case there’s any doubt, I fully believe that testers need to adapt. They have had to do so for a long time. They have simply, by and large, failed to do so, abdicating the evolution of their discipline to developers.
Reframe for the Directly Actionable
Let’s start with the, hopefully not-too-contested, viewpoint that Testing and Quality Assurance are two very different things. Now, let’s ask ourselves this: what does Testing look like when it is framed as a product-oriented, development-focused, design-guiding activity? And where does Quality Assurance fit in with that view?
Let’s consider that every team working in the context of software development already does something very specific: they turn working ideas into software and use that software to create business impact. Great, but how? How do we all do that?
Well, we already know acceptance criteria — of some sort — guide development activities. So let’s ask this: What’s the minimum we can do to reframe the acceptance criteria as scenarios that are directly actionable?
What does “directly actionable” mean here? By this I mean that testers and developers, working together, can reason about the feature being discussed and provide options that allow the business to better understand the feature, in terms of how it will be developed in order to provide value.
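To make that concrete, here is a minimal sketch of what a directly actionable scenario might look like. The feature, names, and criterion here are hypothetical, and this is just one way to express the idea:

```python
# Hypothetical example: the acceptance criterion "a user cannot
# withdraw more than their balance" reframed as a scenario that
# testers and developers can execute and reason about together.

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Reject any withdrawal that exceeds the current balance
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount

def test_a_withdrawal_exceeding_the_balance_is_rejected():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the user attempts to withdraw 150
    # Then the withdrawal is rejected and the balance is unchanged
    try:
        account.withdraw(150)
        assert False, "expected the withdrawal to be rejected"
    except InsufficientFunds:
        pass
    assert account.balance == 100
```

The point is that the criterion is no longer just a statement to interpret; it’s something a tester and a developer can run, discuss, and use to surface options for the business.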
Okay, hold on. Let’s unpack all that.
The (Project) Hero’s Journey
Each development project is a journey of discovery. Can we agree on that? If so, let’s also agree that on these journeys the real constraint isn’t really time or budget. It’s not even some metric of man-hours for programmers or testers. Rather, the real constraint is the lack of knowledge about what needs to be built and how it should be built.
If we can at least agree on that, then what’s the job of a software engineering team? I would argue it certainly is not to know how to build a solution. I would instead argue that it’s knowing how to discover the best way to build the solution.
Discovery is the key. And discovery is, by definition, iterative and incremental.
The team’s collective understanding will naturally increase over the duration of whatever project is being worked on. Certainly everyone will become less ignorant over time. Toward the end of the project, a good team will have built up a deep knowledge of the user’s needs and will be able to proactively propose features and implementations better suited to the particular user base.
That said, this learning path is neither linear nor predictable. It’s hard to know what you don’t know, so it’s hard to predict what you’ll learn as the project progresses. Thus there is a certain amount of built-in uncertainty.
So let’s see if we can agree on something else. Let’s agree that the main challenge in managing scope isn’t so much to eliminate uncertainty by defining and locking down requirements as early as possible. Instead let’s agree that the main challenge is to manage this uncertainty in a way that helps everyone progressively discover and deliver an effective solution that matches up with the underlying business goals behind a project.
That sounds like I’m saying we all figure out a way to manage the uncertainty and reduce the risk that comes with it. And, indeed, that is exactly what I’m saying. But I think this goes back to what I said in the post about resilience: it’s critical for test teams to have a certain manufacturing mindset. In other words, reduce waste to reduce cost. Operationalizing that idea means, among other things:
- Minimize bug fixing, minimize churn, minimize manual testing.
- Drive the cost of bugs closer to zero.
- Morph the cost of change curve into the cost of mistake curve.
Ultimately, all of these things focus on feedback. Or, rather, on providing feedback such that you drastically lower the time between when problems are introduced and when they are found. You want to enable fast and useful feedback, which in turn allows you to fail fast. When you can fail fast, you have safe-to-fail changes. When your changes are safe to fail, you can worry less about failure and more about experimenting with ideas, creating working solutions quickly, and even abandoning work that is going down an unfruitful path.
Fast Feedback, Fewer Metrics
It’s a truism that the value of feedback is proportional to the speed at which you receive it. This means faster feedback is always preferable.
In the context of development projects, the relationship between delivery teams (testers, developers) and their stakeholders (product), and the feedback and communication involved in that, can be the difference between success and failure.
So, clearly, you need an approach to develop and support that relationship. This approach must account for technical and non-technical participants. This approach also needs to provide a mechanism for fostering collaboration and discovery through real-world examples. Finally, this approach must accommodate the context of project management, product documentation, reporting, and integration into the build process as a whole.
If you take this to a logical extreme, and you assume a very agile (by which I mean “iterative and incremental”) delivery process, you can frame all of your work around two key metrics: Feature Readiness and Feature Coverage.
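As a rough illustration only: I haven’t defined these metrics here, so assume, as a working interpretation rather than a canonical definition, that feature readiness is the share of a feature’s scenarios currently passing, and feature coverage is the share of acceptance criteria exercised by at least one scenario. A minimal sketch under those assumptions:

```python
# Sketch only: the definitions below are working assumptions.
# Readiness = share of a feature's scenarios that currently pass.
# Coverage  = share of acceptance criteria with at least one scenario.

def feature_readiness(scenario_results):
    """scenario_results: list of booleans, one per scenario (True = passing)."""
    if not scenario_results:
        return 0.0
    return sum(scenario_results) / len(scenario_results)

def feature_coverage(criteria_to_scenarios):
    """criteria_to_scenarios: dict mapping each acceptance criterion
    to the list of scenarios that exercise it."""
    if not criteria_to_scenarios:
        return 0.0
    covered = sum(1 for scenarios in criteria_to_scenarios.values() if scenarios)
    return covered / len(criteria_to_scenarios)
```

Under these assumed definitions, a feature with three of four scenarios passing is 75% ready, and a feature with one of its two criteria lacking any scenario is 50% covered.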
There’s more to that idea, and I’ll likely explore it in a different post, but think about what it means. The benefits of fast feedback should be reflected in the number of metrics you have to capture to tell you whether you are being effective and efficient. Metrics are often just another form of churn, quite frankly, and remember that churn is one of the things we want to minimize.
First-Class Citizens Are Your Specs
Let’s briefly go back to the relationship between delivery teams (testers, developers) and their stakeholders (product).
We normally hear some variation on this idea:
“Teams use conversations around concrete examples of system behavior to help develop a shared understanding of how features will provide value to the business.”
The theory being that this lets you determine which features matter and frames discussions about how those features should work. This means that the activity of defining stories places emphasis on a dynamic, iterative process that is designed to facilitate communication and a shared understanding of the problem space.
What I still think is missing is the ability to consistently harness the “dynamic, iterative” part. We still hit roadblocks because development and testing activities are not interleaved.
I believe you harness the dynamic and iterative aspects by minimizing artifacts, so that you come as close as possible to a single source of truth with built-in traceability.
Now, I’m certainly not saying anything too revelatory there. Except that usually this idea is framed as you working to align your requirements as closely as possible with statements of test. I’ve even advocated this very same thing. I’m not so sure this is the right way to advocate this any more. But, for now, just note that the idea is that these “tests” read more like specifications than so-called “traditional tests.” They focus on the behavior of the application, using tests simply as a means to express and verify that behavior.
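A hypothetical illustration of the difference: the same behavior checked with a “traditionally” named test versus tests that read as statements of behavior. The Cart example and names here are mine, not drawn from any particular framework:

```python
# Hypothetical illustration: the same checks written two ways.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self, prices):
        # Sum the price of each item currently in the cart
        return sum(prices[item] for item in self.items)

# Traditional style: named after the mechanics being exercised.
def test_total_1():
    cart = Cart()
    cart.add("apple")
    assert cart.total({"apple": 3}) == 3

# Specification style: named after the behavior being promised.
def test_an_empty_cart_has_a_total_of_zero():
    assert Cart().total({}) == 0

def test_the_total_is_the_sum_of_the_prices_of_the_items_added():
    cart = Cart()
    cart.add("apple")
    cart.add("pear")
    assert cart.total({"apple": 3, "pear": 2}) == 5
```

The specification-style names describe behavior an application promises, so a list of them reads like a specification rather than a test inventory.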
Uh … isn’t that BDD?
Hey wait! Good point. Aren’t I really just circling around the idea of executable specifications in the traditional BDD approach?
Executable specifications written in natural language are, as it turns out, a very poor way of communicating requirements. Note that I’m not saying requirements in natural language are a poor way to communicate; I’m saying executable specifications are.
The problem is that they conflate two areas of discipline, and regardless of discussions about how imperative or declarative these artifacts should be kept, it’s a constant balancing act to “speak the right way.” I’ve given lots of examples of this in my TDL posts. Yet, having said all that, it’s no doubt true that examples are a great way to clarify requirements.
I think a lot of our tooling around testing makes the mistake of pulling natural language down rather than pushing natural language up. I think this approach is how you build and maintain that relationship between stakeholders on a team — yes, even the non-technical ones who don’t want to, nor should be expected to, read code. Yet I fully maintain what I said before: code should be a first-class citizen and it should be the source of truth.
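A minimal sketch of what “pushing natural language up” could look like: the code stays the source of truth, and a readable specification is generated from it. The naming convention and report format here are my own assumptions for illustration:

```python
import inspect

# Sketch under assumptions: tests are the source of truth, and a
# human-readable specification is generated *from* them, rather than
# natural-language specs being pulled down into step definitions.

def test_an_empty_cart_has_a_total_of_zero():
    pass  # body elided; assume real assertions live here

def test_a_withdrawal_exceeding_the_balance_is_rejected():
    pass  # body elided; assume real assertions live here

def spec_report(module_globals):
    """Turn test function names into readable specification lines."""
    lines = []
    for name, obj in module_globals.items():
        if name.startswith("test_") and inspect.isfunction(obj):
            sentence = name[len("test_"):].replace("_", " ")
            lines.append("- " + sentence.capitalize())
    return "\n".join(sorted(lines))
```

Calling `spec_report(globals())` here would yield lines like “- An empty cart has a total of zero”, which a non-technical stakeholder can read without ever touching the code that produced them.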
This idea, when staged within the context of what I’ve talked about so far in this “Modern Testing” series, is where I’m convinced testers have to provide tooling that adapts not just to the immediate context of a shifting development landscape, but also to the shifting nature of how decisions are made on a project.
I’m leaving this post on a bit of a cliffhanger because all of this was very descriptive. What I need to do next is get prescriptive and say what our activities look like in the context of the truths I’ve been hoping to unpack.