I’ve recently had reason to give some training on my overall philosophy and rationale for introducing quality assurance and testing into modern development environments. I’ll put up a few posts as an attempt to gather my thoughts and check whether my thinking is consistent.
One thing I should clarify here: whenever I mention the word “artifacts” in this post, I mean anything that gets created as part of turning ideas into working software. This might be requirements documents, design documents, test cases, source code, wireframes, automation logic, and so on. These all become sources of truth, and I believe modern testers work to minimize them.
Also, when I say “modern development environments,” I mean those that place a heavy emphasis on the continuum of continuous inspection, continuous integration, continuous delivery, and continuous deployment. I’m presuming most people reading this will know what those terms mean, but just to remove ambiguity:
- Continuous inspection – the build system automatically analyzes the code and reports on its quality
- Continuous integration – changes are regularly merged and built against the main development line
- Continuous delivery – every successful build produces an artifact capable of being deployed
- Continuous deployment – deployable builds are automatically released to production
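To make the continuum concrete, here is a minimal sketch that models the four stages as a chain of gates, each feeding the next. All of the stage functions and the quality check are hypothetical placeholders of my own, not the API of any real CI system:

```python
# A sketch of the continuous continuum as a chain of gates.
# Every function here is a hypothetical placeholder, not a real CI API.

def inspect(code):
    """Continuous inspection: analyze the code and report on its quality."""
    return {"code": code, "quality_ok": "TODO" not in code}  # toy quality check

def integrate(result):
    """Continuous integration: merge and build against the mainline."""
    return {**result, "build_ok": result["quality_ok"]}

def deliver(result):
    """Continuous delivery: produce an artifact capable of being deployed."""
    return {**result, "deployable": result["build_ok"]}

def deploy(result):
    """Continuous deployment: release automatically if deployable."""
    return {**result, "deployed": result["deployable"]}

# A clean change flows through every gate and out to production.
result = deploy(deliver(integrate(inspect("def feature(): return 42"))))
print(result["deployed"])
```

The point of the sketch is the shape: each stage only passes along what the previous stage has already vetted, which is why this is a continuum rather than four independent practices.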
Most of this takes place in the realm of the DevOps movement. Actually, it’s not so much “emerging” any more; it has fully emerged. But a key aspect seems to be missing from the phrase “DevOps”: any mention of quality assurance and testing. But is it really missing?
Developers have to understand a lot of complexity about the environment they will be shipping software into, and production operations teams increasingly need to understand the internals of the software they ship. The “quality” aspect is right there, front and center. That said, to see it as front and center does require seeing testing as a mediating mechanism. And to see that, you must see testing as a design activity. Only when you truly see testing as a design activity at all levels do you start to open up your options.
Putting Pressure on Design
I believe that testing exists largely to put pressure on design. I believe that the best way to put pressure on design is not via a series of documents and/or artifacts that you create in tracking tools but rather via the collaboration and communication of a team hashing out and refining concepts as they turn ideas into working software.
In my overall approach, environments that focus on the “continuous” attributes should focus on collaborative construction, where several team members work together and produce artifacts that evolve continuously over time. This is where testing is at its most viable and also at its most democratized. In these environments, tests are a broad artifact: they are not there just to detect changes in behavior, but to specify and elaborate on the behavior.
Automation is a key strategy in environments that focus on the continuous spectrum. But it’s important to realize that what you are really automating here is the visibility of whether the software is good enough to go into production. You need to create just enough automation to answer that question, as opposed to getting bogged down in building monolithic “test automation frameworks.”
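As a sketch of what “just enough” automation might look like, consider a single gate that answers exactly one question. The specific checks and their names below are illustrative assumptions on my part, not a prescribed list:

```python
# A sketch of "just enough" automation: one release gate answering one
# question -- is this build good enough for production? The check names
# and their results are illustrative, not a real framework.

def good_enough_for_production(checks):
    """Return (verdict, failures) given a mapping of named check results."""
    failures = [name for name, passed in checks.items() if not passed]
    return (len(failures) == 0, failures)

checks = {
    "acceptance_tests_pass": True,
    "no_open_critical_bugs": True,
    "error_rate_within_budget": False,  # e.g., staging error rate too high
}
verdict, failures = good_enough_for_production(checks)
print(verdict, failures)
```

Everything the gate reports rolls up to a single verdict, which is the visibility you actually want to automate; the individual checks can change freely underneath it.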
This terrifies some test teams, particularly those who are a bit more “old school,” if you will, or who adhere to the Control and/or Factory schools of thought regarding testing.
Before we get into that, what gets you to this place I’m talking about? What gets you to that kind of environment?
One interesting exercise is figuring out how to minimize all the artifacts and tools that you use now. Consider what it would take for documentation or those artifacts to become unnecessary; that becomes the direction you should move in. You don’t want nothing at all. You want the minimum set of artifacts, ideally acting as a single source of truth with built-in traceability, that allows product, development, and testing to reason about and make decisions regarding the evolution of the software. You want just enough to encode decisions, while remaining flexible enough to let you change your mind about those decisions.
This means your development team as a whole focuses on minimum viable products (i.e., small features providing defined capabilities) using iterative releases with fast feedback loops and validated learning.
The goal is short cycles of activity that demand sustained focus and that allow for brief recovery if things don’t go so well. This cycle of focus and recovery builds rhythm and this rhythm builds momentum.
This also gives you the ability to fail without a great deal of fear. If you can fail fast enough that the failure isn’t too costly, you can learn from those mistakes and improve. Experiments, or so-called safe-to-fail probes, are particularly useful when you want your practices (the context of your activities) and your outcomes to emerge from a dynamic culture of collaboration and communication.
I firmly believe that safe-to-fail and fail-fast turn the cost-of-change curve into the cost-of-mistake curve. J. B. Rainsberger has said that if you drive the cost of a bug down to zero, it doesn’t matter how many of them you create, and I fully agree. The cost of change is generally not the problem; the cost of the mistake is. What matters is not how “late” in your process you try to change something, but how long it takes between when you create a problem and when you find it. That’s the feedback loop you want to shorten, and minimum artifacts are one of the ways you will shorten it.
But this does mean that you need strategies in place to preserve maximum flexibility at minimum cost. Once again, this is where I think modern testing steps in to help developers and product teams put appropriate levels of pressure on design.
Keep Agile and Lean
And this all certainly seems to be in line with agile and lean practices, right? After all, agile focuses on early delivery and accommodates changing requirements. Lean focuses on cheap delivery and minimizes waste.
In fact, let me digress here for a bit because there are two concepts that I think all testers should be aware of but often are not. The theory of options suggests you shouldn’t exercise your options before you have to. That’s agile. The theory of constraints is all about making systems efficient and avoiding bottlenecks. That’s lean.
Regarding the theory of options, most people will associate this with the idea of Real Options, where options are kept open until the last responsible moment, when a decision is required. Paying to extend options builds in time to learn about alternative solutions, which allows teams to make better choices.
Regarding the theory of constraints, consider the bucket brigade idea or the candy box value chain. The general idea is that a system is efficient when the production line moves at the same speed all the way through, preferably moving single items through at a time and avoiding batches, which form when things pile up.
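A toy model makes the batching point concrete. Assume a two-station line where each station takes one time unit per item and work is only handed downstream in transfer batches (the numbers here are illustrative, not from any real production data):

```python
# A toy model of a two-station line, illustrating why batches delay feedback.
# Each station takes 1 time unit per item; work moves downstream only in
# transfer batches of the given size. Illustrative numbers, not real data.

def time_to_first_feedback(n_items, batch_size):
    """Time until the FIRST item clears both stations."""
    # Station 1 finishes its first full batch after batch_size time units;
    # only then is the batch handed over, and station 2 needs one more unit
    # to finish the first item in it.
    return min(batch_size, n_items) + 1

for b in (1, 5, 10):
    print(f"batch size {b:2d}: first feedback at t={time_to_first_feedback(10, b)}")
```

With single-piece flow the first item clears the line at t=2; with a batch of ten, nothing reaches the second station until t=10, so the first feedback arrives at t=11. Larger batches mean longer feedback loops.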
The more artifacts you have in place, the more things tend to pile up. The more they pile up, the more difficult it becomes for a team to be dynamic and, I would argue, the more difficult it becomes for the team to put pressure on design.
Testing Push, Not Pull
So, speaking of that, let’s go back to that putting pressure on design idea. Let’s look at this from a test team perspective. Here’s one of the critical points that I think modern testers have to be aware of: we have to put pressure on design at a pace that matches how product teams specify and how development teams write code.
There are two paths here:
- From customer conversation down to where it intersects with code.
- From code conversation up to where it intersects with the user.
Testers need to feel comfortable working in both spheres of activity. In fact, I would argue tests are the connective glue that brings those two paths together, using testing as a communication mechanism and tests as a collaboration artifact. But this means testing and tests have to be approached with a wider angle lens of what they are and how they are applied.
This means that, in my view, testing, as a function, needs to be more push, rather than pull.
So, if there is a “Test Team” or a “QA Team”, then that team needs to proactively push information to the developer and product teams, as opposed to those teams reactively pulling information from testing. The reason is that if teams are pulling from testing, they are likely looking for test reports about execution. That’s testing as an execution activity, not a design activity. If testing is pushing to those teams, it means testing is informing, guiding, and encoding their decisions. That’s testing as a design activity.
This seems like a good place to end this post because, knowing me, I could easily go off in about twenty different directions here.
The main thing I hope is clear from this post is that I believe testing exists to put pressure on design. I believe that in modern software environments, testing does this with the minimum of artifacts and tooling, relying instead on the dynamic collaboration and communication of teams who share in the development process.
Certainly some elements of decisions need to be encoded as artifacts. But I believe effective and efficient testing serves to minimize those artifacts. In the next post in this “Modern Testing” series, I’ll dig into that a bit and talk about what I’ve found all of this means for test teams.