I was recently re-reading Houston: We Have a Narrative by Randy Olson and I was struck by certain concepts there that reminded me how poorly framed testing often is, particularly by its own practitioners. Clearly an opinionated statement, of course, but I very much believe that many testers in the industry currently lack a narrative or are using a malformed narrative. And this is hurting the industry more broadly as we see quality problems get worse and worse.
The Context
This is a horn I’ve been sounding for a while. To put it very bluntly, testers often sound either delusional or needlessly contrarian. Yikes. A bit harsh, right? It is, so I’ll give two examples I talked about previously:
As testers embark on their contrarian journey, they have formed these various cultures, for lack of a better term, which I also wrote about previously:
Then we ended up with a slew of model literalists. Then we got some testing fundamentalism. We saw the rise of a professional class in the test industry (by itself not a bad thing at all), but we have to be wary of the lessons of politics.
And as a result of all this cultural implosion, I feel like testers have been locked into a prison of representation. It just happens to be a prison they created.
Negative Framing
Testers have gone down the path of framing everything as negations.
This has had an interesting effect lately. You’ll often see prominent testers nowadays on LinkedIn, apparently visiting every post they can to do nothing more than dismiss the ideas or “correct” the poster. To be sure, critique is valid and should be welcomed. Yet there is obviously a balance between thoughtful engagement and being so consumed by your own ideas that you simply must correct these errant people — but without actually helping them learn to frame a better narrative.
And that framing of the better narrative — “better,” at least, from your own viewpoint — is what I’m focused on here. Whether the narrative actually is better could only be measured by its impacts on the wider industry, particularly if you are seen as an important player in said industry. This is where the professional class in testing should, presumably, have been having an effect after all these years. But it seemingly has not.
I can already hear the objection many might have to that. I’ll revisit that point later.
One problem, as I see it anyway, is that testers like to frame a lot by negation. Consider a few of the arguments that make the rounds:
- “There are no test cases!”
- “You don’t write tests, you perform them!”
- “There is no automated testing!”
- “You don’t automate a test, you automate a check!”
- “There is no such thing as manual testing!”
Let’s say these are things you genuinely and truly believe and that you have a good basis for, grounded in your experience. Great!
But I would ask you to consider this. These are easy things to say when you are established in your career and you can get away with promoting ideas like this. But now put yourself in the shoes of people who are not in that position. And then look at what you’re asking them to promote by way of talking points. And then consider that this is the army you are mobilizing in the fight to shift the industry’s perception of testing.
You are equipping your army with arguments they are not prepared to have nor defend. My personal view is that these arguments are malformed and thus not worthy of being defended, but let’s leave that aside.
This Mindset Stifles Innovation
A guy named Jim Rohn said:
“The same wind blows on us all. The difference in arrival is not the blowing of the wind, but the set of the sail.”
I truly do believe testers have been focusing more on the blowing of the wind while developers have been focusing more on the set of the sail. What I mean by that is there has been much more innovation and experimentation around testing among developers than among actual testers. Whether that experimentation and innovation has been good, bad or indifferent, I’m not going to cover here. The fact that it exists at all, and far more in that context than it does with testers, is what I’m concerned about from a cultural, sociological and, I suppose, anthropological point of view.
In Lost in Test, I gave brief context to how physics right now is in a bit of crisis mode, and that’s because theory and experiment have drifted so far apart that we can’t actually test many of the ideas we’re coming up with. And since those ideas can’t readily be put to the test, we’re never quite sure whether they’re a way to drive innovation. What we end up with is camps of over-specialized (at best) groups.
But science, in large part, is all about exploring new ways of conceptualizing the world. At times, radically new. Science demands, as part of its practice, the capacity to constantly call our concepts into question. Science, at its best, rewards the visionary force of a rebellious, critical spirit, capable of modifying its own conceptual basis, and thus being capable of redesigning our world view, sometimes entirely from scratch.
I think some of our professional class in the testing world think they’re doing exactly what I just described. But they’re actually not, I would argue. What they are doing is creating a gap between theory and experiment such that practitioners are great at sound bites but not so much at actually moving the needle in the industry.
If you’re a tester, I would say this: ask yourself, as you look over the landscape of the industry for the last decade or so: where have you seen the innovation in testing happening the most? In the development context driven by developers? Or the testing context driven by testers? Don’t get hung up here on tooling or anything like that. The question is simply: who seems to be talking about testing in a way that, at the very least, gets an audience and spurs some experimentation?
Personally, I’ve seen the entire development industry shift, sometimes seismically, in its practices and thought around testing over the decades. Certainly since around 2005, and arguably a bit before that. (I actually see a lot of the trends from the 1980s reflected well in certain developer thinking.) That accelerated a bit in 2009. And then again in 2015. Again, whether those have been good or bad shifts is up for debate. It’s largely a mix of both because that’s reality. But the shifts showed an important thing: engagement about testing (and tests) that actually worked and led to experiments and thus to innovation.
Just like in the sciences. Go figure, right?
By contrast, the same arguments testers were having around 2005, they are still having today. In some cases, literally using the exact same phrases and arguments that didn’t work back then. And still convincing no one except themselves, usually in self-contained echo chambers where they repeat the same points to each other, often just quoting somebody else’s sound bite where that “somebody else” is usually a member of the professional class. And they end up with a bunch of those negations I stated above: “There is no test automation! There is no manual testing! We don’t write tests! These aren’t tests, they’re checks!”
Theory and Practice in Your Context
Testers have to stop and think about how they’re being heard. Just like a lot of developers did when they started talking more about testing and their managers were saying they should be focusing more on developing. “What’s all this ‘test’ talk you keep bringing up?!? Write some code!”
Because, yeah, that’s a thing that happened. Many developers were sounding the alarm on the increasing degradation of quality, and that’s why they came up with various tooling that tried to at least help them in that regard. Tooling because, yeah, these were developers. That’s the context they were situated in and the context in which they knew their arguments would receive the most hearing and the most traction.
And I very much do speak from experience here.
Theory and practice move largely in lockstep in a development context. Not so much in a testing context. Again, just like in the sciences right now. Particle physics is in its “crisis mode” because theory and experiment have become drastically disjointed. I very much believe that is the case with the testing industry and its primary practitioners.
I would challenge someone to point me to where testers — as test practitioners and test specialists — have demonstrably and empirically introduced concepts, solutions and ideas that have — again, demonstrably and empirically — shifted the industry. I’m not even saying “good shifting.” Just shifting at all.
- When I asked “Do testers understand testing?”, I often have to answer: “No, I don’t think so.”
- When I asked if testers are “forgetting how to test”, I often have to answer: “Yes, I think so.”
Let’s consider the framing and narrative bit that I mentioned earlier around one of the simpler negations many testers focus on.
“You Don’t Write Tests!”
There are two ways to frame this. One way might have you thinking that testers who say this just mean that you shouldn’t write tests. Perhaps they are arguing that the practice of having written tests is not all that necessary or leads to certain bad habits. That’s a fine argument to have. But that’s not what’s being said here.
Instead you’ll often hear it phrased as: “You don’t write tests, you perform tests.”
Well, yes, you do (or at least can) write tests. You also perform tests. It’s not an either-or. In the sciences, we often write down our experiments so we have a basis to frame and distill our thinking. We, quite literally, write these things down. We then perform those experiments.
Just like with tests. We can literally write them. Saying we don’t do this sounds delusional to people and is why testers who sit there and say this are often dismissed, at least in terms of relevance.
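To make that concrete, here is a minimal sketch of a test that is both written and then performed. Everything in it — the function, the values, the file name — is a hypothetical illustration of mine, not anything canonical:

```python
# test_discount.py -- a test we quite literally wrote down.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # Written: "a 10% discount on $100.00 should yield $90.00."
    # Performed: run `pytest test_discount.py` and the written test
    # is carried out.
    assert apply_discount(100.00, 10) == 90.00
```

The writing and the performing are distinct activities, but both happen to the same test. That’s the point.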
Note what’s happening here: testers frame this as a literal statement that it is impossible to write down a test. They seem to be saying that the ontological nature of a “test” precludes it being something that can be written down. These same folks will say that you can write down a “check.” But you can’t actually write a “test.”
A really important point here, however, is that this kind of argumentation — if I can charitably call it such — convinces very few people. Certainly not the people that most testers are hoping to convince, although sometimes I do question what audience they’re actually trying to reach with these statements.
Usually when you frame a narrative, you do so such that people will respond in a way that you want them to. So here, all you would be asking people to do is keep writing that thing they already write — but stop calling it a test and instead call it a check. This is an example of bad framing around a poor narrative.
That “test” versus “check” distinction takes us into automation and testing.
“There Is No Automated Testing!”
Here, just like those testers who say you literally cannot write down a test, testers are now saying you literally cannot automate testing. It’s not that you shouldn’t do this or that you should be cautious while doing so. No, to these testers, you are talking about a literal impossibility. Again, for those folks, ontologically, “testing” is not something that can be automated.
What can you automate? Checks!
I don’t want to risk mischaracterizing the arguments here. (I have talked about this topic in Testing vs Checking and Revisiting Testing vs Checking. And I talked about some of the basis for ontological reasoning in The Theseus of Testing.) So rather than risk that, I’ll just state what I do.
Rather than say “there is no automated testing,” perhaps testers can say: “There is a risk in turning testing solely into a programming problem.”
Framing it that way, just as I did, is a way to at least get hiring managers (if you’re interviewing somewhere you believe has a compromised view of testing) or your decision makers (if you’re in an environment you believe has a compromised view of testing) to ask you what you mean by that. It’s at least enough to get most people who feel testing is nothing more than automation to ask what that means.
And that’s key, right? Testers are (or should be) good at framing things. Testing, as a discipline, is often about framing a narrative. So in dealing with an industry that does often put automation front-and-center — and often well above and beyond testing — testers have to combat that by framing the narrative.
So please note what I’m saying here. I’m not denying there’s a problem; I’m denying that testers are engaging with the problem in a way that most people will respond to.
Note that my framing above doesn’t dismiss automation. It doesn’t even remove the practical need that people do have to learn automation as part of their skill set and thus be marketable in their career. But it does suggest that there are degrees to which automation can overshadow testing, where the mechanism (tool to support testing) overwhelms the meaning (effective testing). See that? Yet another way to frame the discussion.
The framing is crucial in order to engage in the discussion. And I very, very rarely see testers engaged in that kind of framing at all.
Another way to frame automation in this context is not just around testers learning tools so they can automate, but around learning the tools that developers already use to automate.
So if the developers are using, say, an integration testing framework and writing a series of integration tests — maybe contract tests, maybe API tests, whatever — there is value in knowing how those tools work, how developers use them, and how those tools can impact what good and bad tests look like.
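As a purely illustrative sketch — the endpoint, payload, and response shape here are assumptions of mine, not any particular team’s API — such a test might look like this:

```python
# test_orders_api.py -- a sketch of a developer-style API integration test.
import requests

BASE_URL = "http://localhost:8000"  # assumed service under test

def test_create_order_returns_the_created_order():
    payload = {"sku": "ABC-123", "quantity": 2}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)

    # At this abstraction, a good test asserts on the contract
    # (status code, response shape), not on implementation internals.
    assert response.status_code == 201
    body = response.json()
    assert body["sku"] == "ABC-123"
    assert body["quantity"] == 2
```

A tester who can read a test like this can have the abstraction conversation with developers on their own terms.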
Learning those tools not only frames testing as an activity that recognizes code-based testing — note that I’m not calling it “checking” — but also suggests a more important communication path: dialogue with developers about the right abstraction to test at, and about how to avoid losing test thinking just because we are using tools to support our testing and thus reducing the human dimensions of our testing to programmatic tests.
And that takes us right back to what test specialists want to start with: put testing first. What makes a good or a bad test is a style of thinking. How that style of thinking is executed — whether by human or machine — is secondary. If you think about testing poorly, it doesn’t matter whether your test is “manual” or “automated.” It also doesn’t matter if you call it a “test” or a “check.”
But, again, it’s all in the framing.
The Context of Automation
Now, admittedly, since automation is, demonstrably, something that has an outsized gravitational pull on much non-tester thinking, let’s keep on this topic for a bit. Specifically, let’s talk about “automation” for a second. Where does this apparently contentious word even come from? Generally, automation is framed as a labor-saving activity that “frees us” from what is perceived as toil. And that’s pretty much where the term came from.
Consider Adam Smith in his 1776 Wealth of Nations when he talked about “very pretty machines” that would “facilitate and abridge labour.” Eventually the idea of industrialization led to mechanization. This was seen as a boost to industrial productivity.
Consider Robert Hugh Macmillan in his 1956 Automation: Friend or Foe?, where he said earlier machines “replaced man’s muscles” but now will “replace his brains.”
Consider Bertrand Russell in his 1924 essay Machines and the Emotions:
“Machines are worshipped because they are beautiful and valued because they confer power; they are hated because they are hideous and loathed because they impose slavery.”
A key point is that mechanization was seen as an economic elixir for industrialization. From there we lead into the more modern history of automation on factory floors.
Were there concerns here? Of course!
It was seen very early on that automation can narrow responsibilities to the point that the work consists of nothing more than monitoring some output, like a factory line or something on a screen. It was certainly seen that the making of judgments — such as about quality — can be erroneously reduced to just a data-processing routine. And so that leads people to ask: how do we measure the subtle erosion of skill and engagement as automation rises? How do we measure the waning of agency and autonomy?
The point here is that automation, and the arguments that have taken place within its context, have existed for quite some time. But you will find many testers are completely ignorant of this history. Thus they are often unaware of how narratives — both negative and positive — were framed around automation.
As our technologies developed, machines could replicate our ends (in some cases) but without replicating our means. The strategies are different. The question then became whether the outcomes, for all practical purposes, were the same. For example, in the 1950s and 1960s, it was posited that if computers got quick enough, they could mimic our abilities to spot patterns, make judgments and learn from experience. From that we eventually got “machine learning.”
Machine learning is a context many more testers are getting involved with these days. And yet you’ll hear a lot of testers in the professional class issuing another negation: “machine learning is not really learning!”
I would argue that any worry about whether “machine learning is not really learning” is unproductive when you realize you just have to frame it as a distinction between “human learning” and “machine learning.” The same applies to “automated testing is not really testing.” It can be unproductive (as it has proven to be) unless you frame it as a distinction between “human testing” and “automated testing.” And, yes, a “written test” and “performed test” are two different things — but both are still a test.
In fact, that framing I’m showing there is crucial because it calls out a key point: some aspects of human activity can be automated. This is just fact. How we test can be automated — but only to an extent. There are parts of testing that are very algorithmic and, yes, we can “train” a computer program to do those parts. But that’s not the same thing as saying we can have a computer program perform all aspects of testing that a human can perform.
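Here is a minimal sketch of what I mean, with a hypothetical validation function and boundaries of my own choosing. The boundary sweep is the algorithmic part a machine performs well; deciding which boundaries matter is the human part:

```python
import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical function under test: ages 0 through 120 are valid."""
    return 0 <= age <= 120

# The machine tirelessly performs these checks on every run...
@pytest.mark.parametrize("age, expected", [
    (-1, False),   # just below the lower boundary
    (0, True),     # the lower boundary itself
    (120, True),   # the upper boundary itself
    (121, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    # ...but choosing these boundaries, and noticing what a failure
    # here implies about quality, remains a human act of testing.
    assert is_valid_age(age) == expected
```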
A computer program will not be curious. It will not investigate. It will not explore around edges and boundaries. It will not feel frustration, which can help us understand where quality is suffering. And it will only ever be able to recognize as issues those things it has previously been told are issues. And, yes, I state all this even as our artificial intelligence pundits attempt to convince us otherwise and suggest that we’re on the cusp of some post-modern singularity in testing. (See my AI category for more on that range of topics.)
So rather than testers engaging in this tiresome debate, how about we focus on a question that’s much more interesting? How do you take an unthinking, unreflective, unreasoning thing and use it in the context of a discipline that is all about thinking, reflection, and reasoning?
I very rarely, if ever, see testers engage in the discussion like that. I have seen even the most ardent automation supporters recognize that distinction, however, and be curious as to what it means for us as a team working to provide quality experiences. And if such a person isn’t curious about this based on this framing, then telling them they’re “checking” rather than “testing” really won’t matter anyway.
The Asymmetric Skew
I hope I gave an idea of what I consider to be good discussions to have based on the framing of small narrative hooks. A lot of testers today counter much of what I said by saying: “Well, the industry just doesn’t want to have that discussion.” And I would counter that argument by noting that developers who promoted a lot of testing ideas heard that same comment.
The authors of the following texts all heard that same argument or some variation on it.
- Developer Testing: Building Quality into Software
- Growing Object-Oriented Software, Guided by Tests
- Refactoring: Improving the Design of Existing Code
- How Google Tests Software
- How We Test Software at Microsoft
- Test Driven: Practical TDD and Acceptance TDD for Java Developers
- Test-Driven Development with Python: Obey the Testing Goat: Using Django, Selenium, and JavaScript
But they persisted. And they experimented. And they published. And they spoke in a language that they knew would be heard and at least listened to in order to start infiltrating ideas until those ideas became capable of ever more experimentation.
What hasn’t happened is a concomitant assist from the wider testing practitioner industry to backstop the more technology-focused experimentation and innovation of the developers.
Thus was a one-sided battle waged and thus did we get an asymmetry in how we practice testing in the wider industry. And that asymmetry has been quite detrimental.
I believe that this asymmetric skew — this imbalance, if you will — is perpetuated by the level of argumentation of many testers today. This is a large part of why I struggle to stay situated within the discipline, even though the joy of testing that I talked about is just as valid for me as ever and why my self-response to “why have you stayed in testing?” still resonates for me.
All this said, as I argued in the evolution of testing, I very much believe we need more pragmatic, as opposed to reactionary, evangelists for the discipline. I stand by what I said in the testing pedigree:
The very notion of testing undergirds disciplines like physics, geology, historiography, paleobiology, chemistry, social science, archaeology, linguistics, and so on and so forth. Our discipline has an incredibly rich pedigree that can inform us. Let’s start acting like it.

Part of “acting like it” is making sure testers are passionate, but responsible, advocates and evangelists for testing as a specialist discipline. Part of “acting like it” means testers show people expansive views of testing rather than views of testing framed around negations or what testing is not.