The general idea of a prison of representation is that you are locked into some means or method of being understood. This state of “being locked” can come from the past and, interestingly enough, from the future. I believe testing, as a discipline, is in danger of being in such a prison. Let’s talk about this.
I end my post on the
constraints of (testing) history by stating that
“we need to start applying [narrative intuition] to our own discipline before these constraints become our prison.”
I’ll expand on that here but you’ll have to forgive, or at least tolerate, a bit of a digression first. John Gaddis in his book
The Landscape of History talks about metaphorical ghosts that haunt historians of the future if the past is not liberated. He says, speaking of those future historians, that ultimately those ghosts are
“our own haunted spirits, locked up within a prison that’s a future in which no one respects or perhaps even remembers us.”
And looking the other way, into the past:
“We make the past legible, but in doing so we lock it up in a prison from which there’s neither escape nor ransom nor appeal.”
His overall point is that a liberation from the past and a (possible) future is often what’s required when a group or a discipline finds itself in a prison of representation.
The liberation from the past that I see as necessary is from the idea that testing is purely a clerical or bureaucratic activity that can be automated away or just handed to some artificial intelligence. The liberation from the future is two-fold: from the technocrats who turn testing into a programming problem and from our paid test consultants, whose careers are ostensibly about getting testers to think better but which actually make testers sound less relevant.
On that latter point, I see a certain “class” rising that, along with many good ideas and thoughtful criticisms, provides a bit of
fundamentalism but also a “professional class” akin to what I talked about regarding
what politics teaches us about testing. This is the professional class that causes the
future angst by near-constantly speaking in the negative about artificial intelligence, DevOps, automation, and so on. Basically: all of the things testers need to be getting excited about and learning about in order to be relevant in their career and, for many, to even have a career in the first place.
Developer Problems Are Tester Problems
In a previous post I talked about
testing fitting into a DevOps context. Let’s actually scale this slightly up — or maybe just laterally — from that context.
Consider the idea behind Conway’s Law. This is named after programmer Melvin Conway, who around 1968 gave us the idea that the architecture of a system will be determined by the communication and organizational structures of the company that builds it. In other words, what you design will tend to reflect how your teams and your company are organized.
However, there is an inverse of Conway’s Law that is also valid and is especially relevant to the now fully emerged distributed, service-based, and microservice ecosystems. This inverse law states that the organizational structure of a company is determined by the architecture of its product. In other words, how you design things will tend to shape how your company organizes itself.
What’s been interesting, however, is that even though we have notions of so-called “full stack” developers, the Inverse Conway’s Law has shown us that developers will be, in some ways, just like those distributed, service-based, or microservice type systems: they will be able to do one thing, and (hopefully) do that one thing very well, but they will be isolated — in responsibility, in domain knowledge, or in experience; maybe all three — from the rest of the ecosystem. At least to varying degrees.
Regardless of the extent to which this happens, when considered together, all of the developers
collectively working within these ecosystems will know everything there is to know about it. However,
individually they will still tend to be extremely specialized, knowing only the pieces of the ecosystem they are responsible for.
As a tester, you might say: “Well, that’s a developer problem.” I would say if it’s a developer problem, it’s also a tester problem.
But why is it a problem necessarily?
Tester Career Situated in Development
The situation I’m describing can pose what seems to be an unavoidable organizational problem, even though many fail to recognize it. Even though services — including and especially microservices — must be developed in isolation, leading possibly to isolated, siloed teams, they don’t live in isolation and must interact with one another seamlessly if the overall product is to function at all.
This requires that these isolated, independently functioning teams work together closely. While we pay lip service to this, and while some companies do it very well, it can still be difficult to accomplish, given that most teams’ goals and projects are specific to the particular service or distributed components they are working on. I’ve been in plenty of organizations adopting the so-called “Spotify model” where we each knew our own piece of the pie but often knew the other pieces only peripherally, if at all.
Even with a DevOps approach in the mix, there can also still be large communication gaps between service/component teams and infrastructure teams that need to be closed. Application platform teams, for example, need to build platform services and tools that all of the microservice teams will use, but gathering the requirements and needs from many microservice teams before building one small project can take a long time. Getting developers and infrastructure teams to work together is not always an easy task. And, again, that’s the case even with the alleged panacea of DevOps.
But what does this have to do with testers and particularly with testers articulating their value?
Well, what I just described is exactly one area where testing fits. Not just as something that produces artifacts that can execute but as a communication mechanism. Testing, as a discipline, provides a shared understanding of what quality means and how that quality is sought. That means there must be a focus on communication and collaboration.
And communication and collaboration are
exactly the problem I was just describing. This is something that developers, of the programmatic sort, are generally not trained to do very well.
Tester Career Situated in Project Clarity
Let’s consider another aspect to this. Technical debt tends to increase with developer velocity. The more quickly a component or service can be iterated on, changed, and deployed, the more frequently shortcuts and patches will be put into place. Even if those are done well, it still leads to varying paces between the groups who are making changes and deploying.
Organizational clarity and structure around the documentation and understanding of a component or a service can cut through this technical debt and shave off a lot of the confusion, lack of awareness, and lack of architectural comprehension that tend to accompany it. But what kind of documentation? Where is that understanding encoded?
Well, isn’t that another aspect of tests? Tests can act as a form of documentation. They can even act as a form of executable specification, should we want to do that. Those tests, if capable of evolving with the business rules and their implementation, can encode our understanding of how the application should function. We can get to the point where any variation in the execution of our tests means either our documentation is out of date or an unanticipated change has occurred.
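As a minimal sketch of what “tests as executable specification” could look like, consider a hypothetical business rule encoded directly as tests. The rule, names, and values here are all invented for illustration; the point is that the tests double as readable, living documentation of intended behavior:

```python
# Hypothetical business rule (invented for illustration): orders over
# $100 ship free; otherwise a flat $5.99 rate applies. The tests below
# are the executable record of that rule.

def shipping_cost(order_total: float) -> float:
    """Flat $5.99 shipping, waived for orders over $100."""
    return 0.0 if order_total > 100.0 else 5.99

def test_orders_over_100_ship_free():
    assert shipping_cost(150.00) == 0.0

def test_orders_at_or_below_100_pay_flat_rate():
    assert shipping_cost(100.00) == 5.99
    assert shipping_cost(20.00) == 5.99

if __name__ == "__main__":
    test_orders_over_100_ship_free()
    test_orders_at_or_below_100_pay_flat_rate()
    print("specification holds")
```

If the business later changes the threshold, a failing test is exactly that variation in execution: the encoded history no longer matches the present.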
And that’s important because what we get to here is the idea that testing is a bit of an arbiter of history.
In my post on the
shifting polarities of testing I said this:
“Now let’s say that testing as a discipline is the means by which our project culture sees beyond the limits of its own immediate senses. Without hyperbole or intentional grandiosity, I do believe that the human aspect of testing is the basis, across time, space, and scale, for a wider view.”
Okay, okay – I can hear it now. Some of you are saying: “Well, Jeff has
finally gone off the deep end here.” Bear with me.
In the book
Tacit and Explicit Knowledge there is mention of a specific idea.
“The idea is to generate a Google Earth-type view of the entire united domain that will make it possible to ‘zoom in’ on any area with ease and understand its relationship with all the other areas.”
Well, that’s just it! I believe that is one aspect of what testing, as a broad-lens discipline, does.
Testing offers a compression of historical experience that allows teams to vicariously enlarge current experience. Testing in the software world — just as testing in other disciplines — is a truth-preserving mechanism. More than that, it’s a truth-preserving mechanism that lets us reason about how products and services are created. As such, testing is the directed exploration and evaluation of how people turn ideas into working software, where “working” means providing demonstrable levels of value at as low a cost as possible.
And — bringing this around to the start of this discussion — that is what we are doing whether we are in a Conway’s Law or Inverse Conway’s Law situation.
Tester Career Situated in Knowledge
One of the key areas that testing deals with is epistemology, which focuses on “how do we know what we know?” And the reason that’s interesting is that both the philosophy of history and epistemology seem inseparable from the empirical study of time series data. Yes, this is another “bear with me” moment. Time series data just refers to a succession of numbers in time, a sort of historical document containing numbers instead of words.
There’s another context that’s interesting here. Differential equations — equations of motion — are the mechanism for studying the dynamics of something whose position depends on time. And, again, that’s an empirical study of what amounts to time series data.
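To make that concrete, here is a toy sketch (all values invented): the position of an object in free fall, sampled once per second, is exactly such a succession of numbers in time, and a simple difference equation, the discrete cousin of an equation of motion, is the mechanism that generates it:

```python
# Toy sketch: a time series is "a succession of numbers in time."
# Here the series is generated by a discrete analogue of an equation
# of motion (x'' = g). All values are illustrative.

G = 9.8  # gravitational acceleration in m/s^2

def fall_positions(seconds: int) -> list[float]:
    """Sample the position of a falling object once per 'second'
    using the update rules v <- v + G, then x <- x + v."""
    x, v = 0.0, 0.0
    series = []
    for _ in range(seconds):
        series.append(x)
        v += G
        x += v
    return series

# The resulting list is a tiny historical document made of numbers.
print(fall_positions(5))
```

The interesting move is reading the output in reverse: given only the series, we could work backward to recover the generating rule, which is the same act of inference that historians, statisticians, and testers all perform.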
What do all these things have in common? Well, epistemology, the philosophy of history, statistics, differential equations: they all aim at understanding truths, investigating the mechanisms that generate them, and separating regularity from the coincidental.
Still with me?
Now consider that our projects — the means by which we provide products and services that add value to people — are essentially batches of history. Our source code and our tests are “truths” that come out of that history. We, and the processes we saddle ourselves with, are the generators of those truths.
So while you might feel that I just went on a terribly wide tangent, if you think about what I just said here from a philosophical standpoint, you’ll see that it matches quite well with the communication and collaboration challenges that we face in our environments.
I said above that testing provides a shared understanding of what quality means and how that quality is sought. That means testing is not just looking at what exists but is dealing with how what exists comes into being. And how what comes into being changes but yet still stays fundamentally the same.
Consider the industry in which we are situated. There are processes by which we all take ideas in people’s heads, collaborate on those ideas so that we understand what’s in people’s heads, and then turn those ideas into working implementations. Along the way we are balancing the idea of and search for quality, which is both objective and subjective. And we are doing this under
project forces and the
project singularity.
Relevance?
But —
wait! — don’t I sound like that professional class I was moaning about earlier? Aren’t I essentially saying a bunch of stuff with dubious relevance? After all, try bringing up some of what I said here in your average meeting at work and you’re likely going to get some strange looks.
So I think part of what we, as testers, need to do is find a way to incorporate these very real aspects of philosophy — well, I believe they’re real anyway — and tie those to the fundamentals of what testing is as a discipline. Not just what
software testing is as a discipline; but
testing itself — an empirical method of exploration, experimentation, investigation, and discovery.
The Relevance of Testing Craft
Someone on LinkedIn asked me: “So how do we overcome this and build a larger body of strong testers who know their craft deeply?”
I think that’s a key question right there that helps us break the prison of representation.
My response was basically that the craft of testing stretches across empirical methods. Those methods are experimentation, exploration, and investigation. Those are definite skills.
While everyone can explore, experiment, and investigate, not everyone can do it well. Even more to the point, not everyone can combine those approaches in such a way that the process of discovery is harnessed and attached to the idea of helping teams generate insights. And even fewer can combine those approaches and then craft a compelling story about the questions that lead to the approaches and the possible answers that stem from them.
Testing is more than just a set of practices. It’s ultimately about having an empirical mindset. There is much out there on the tactical aspects of testing and quality assurance. And those are important. But tactics have a much shorter lifespan than a shift to a larger mindset about testing and quality assurance.
And it’s that larger mindset — the ability to articulate it
and turn it into relevant practice — that I believe will make sure
testing (as an activity) and, perhaps more importantly,
testers (as a specialized role), are not locked into a prison of representation in which an entire industry decides it no longer needs them.