Nothing to Do with Testing?

You will hear a certain segment of people say things like “TDD has nothing to do with testing” or “automation has nothing to do with testing.” This is often an ill-framed argument. Let’s talk about why this matters.

I should note that what I’m describing here is, so far as I can tell, a minority viewpoint. But enough people might be influenced by it that I feel it incumbent upon me to address what is, in my admittedly strong opinion, a dysfunctional view that does more to harm testing than to help it.

Related to vs. Identical To

Let me first start with a New York Times Magazine article from 14 June 1992. The article was called “Could the Red Wolf Be a Mutt?”

The issue that the United States Fish and Wildlife Service was determining at the time was the status of the red wolf, an endangered species found only in Texas and parts of the southeastern United States. The question on the table was this: is the red wolf, in fact, a species, on a par with other wild canine species? Related questions: is the red wolf a subspecies? Is the red wolf the product of crossbreeding between coyotes and gray wolves, hence a hybrid, a mutt, and not a species at all?

The question was interesting because it pitted geneticists, who had determined by DNA testing that the red wolf was, genetically at least, a coyote-gray wolf hybrid, against field biologists (naturalists), who were certain that the red wolf was neither a coyote nor a gray wolf but a species in itself. Okay, great, but how is this related to the testing discussion? Well, some of those people who make the “nothing to do with” argument seem unable to distinguish between an idea that has something to do with testing and something that is testing. They are conflating the two.

Incidentally, there was a similar case in 1996, when genetic testing of guinea pigs suggested that these animals, long seen as members of the order of rodents, were not rodents at all. Members of an order are all supposed to have descended from a common ancestor; we say that they are monophyletic. Yet the genetic testing showed that guinea pigs couldn’t have descended from this putative ancestor. Hence, it seemed, we had to conclude either that a taxonomic order can allow for multiple lines of descent (that is, an order can be polyphyletic) or that guinea pigs are not rodents. The challenge was that the first conclusion would go against the standard definition of an order, while the second would go against accepted wisdom.

Again, the parallel here with some of those people making the “nothing to do with” argument is obvious to me. Either they fear we are disregarding standard definitions or they fear we are disregarding accepted wisdom.

Let me bring the point home here and then I’ll look at some examples. At one point in time, it seems, we “knew” what a red wolf was. We “knew” what a rodent was. Then, at another point in time, we weren’t so sure. Is the same happening in testing? It will if people conflate “has something to do with” and “is.” I’ve stretched that analogy to its breaking point, so let’s take two examples that have come up recently.

TDD Has Nothing to Do with Testing

The people you see saying this often have not actually practiced TDD in any capacity. As in literally not at all. What they are often doing is reacting against something they read or heard about or think they know, rather than actual lived experience. Now, whether or not the word “test” should have been used in the formulation of “Test-Driven Development,” the fact is: it was. We can all read the conversations between people like Kent Beck, David Heinemeier Hansson, and Martin Fowler and nitpick at the margins about details. That’s fine. Here’s something to keep in mind, however. The people who make decisions about the value of testing, particularly in their organizations, give about as much of a crap about those debates as they do about the status of the red wolf or the guinea pig.

Here’s how I approach this. Testing is, in part, about the notion of being testable and thus about the nature of testability. TDD does — or at least can — help put pressure on design. Putting pressure on design can lead to better testability. Better testability can enable more and better testing.

So, while TDD is not primarily about testing (the process) and while TDD is not primarily about tests (the artifact), it does — or, again, at least can — have impacts on both.

So, yes, TDD does have a lot to do with testing. That would be the case even if the word “test” were not in the actual name of the concept.

I’m all for coming up with better names for things if we think we have muddied the waters. But then have that discussion. That is a far more relevant discussion to have than the immediately disprovable “TDD has nothing to do with testing” discussion.

What I say here is why, incidentally, good testers — like good experimenters — are trained to avoid, as much as possible, speaking in absolutes. “Has nothing to do with” is just as bad as saying “has everything to do with.” Good testers are also trained to consider the differences between direct application and indirect application of terms. It’s a more nuanced way of thinking. More nuanced ways of thinking tend to have a better chance of convincing people, particularly people who need to be convinced the most.

Automation Has Nothing to Do with Testing

Automation. Nothing to do with testing. Is that true? In my view, no. Even if that weren’t my view, it would be foolish of me not to recognize that it is the point of view of many, and for good reason. However, let me show how I frame this a bit.

Test automation and testing are not mutually exclusive; rather, automation supports and enables some aspects of testing. Test automation is akin to using scientific instruments in experimental research. Just as a particle accelerator allows physicists to probe phenomena they couldn’t otherwise observe, automation allows testers to repeatedly and efficiently execute scripted tests, freeing their cognitive resources for activities like exploratory testing.

In this context, we wouldn’t say “particle accelerators have nothing to do with experimentation.” We would simply say that they are one part of experimentation, but certainly not one that replaces the full gamut of what a human brings to bear on the scientific discipline. The same applies to testing and automation.

There’s an obvious point here: even if in no other way, automation has something to do with testing from a sociological perspective. This is quite literally undeniable. Why? Because the two have been discussed together and evolved together in the software context since the early 1990s. So even if someone insists that test automation should have nothing to do with testing, there are these things called culture and history and social context (not to mention reality) that say automation does have something to do with testing. It has for decades. To state otherwise not only denies reality but denies an obvious reality.

I talked about deniers in another context previously. Denying obvious realities is a way to quickly be reduced to irrelevancy.

Some History of Tools, People and Experiments

This is where it also helps to have some inkling of history and what we can learn from it so, bear with me, and let’s do a little dive into that.

  • When Galileo used the telescope to make groundbreaking observations (moons of Jupiter, phases of Venus), some skeptics argued that the telescope was unreliable and distorted reality. Others believed relying on such a device removed the direct human experience of observing the heavens. The criticism was less about the instrument itself and more about whether it could be trusted to deliver “true” knowledge. Some feared that relying on tools might disconnect scientists from the natural phenomena they were studying.
  • Hooke’s Micrographia (published in 1665) introduced a world of “invisible” details, like the cellular structure of plants. However, some contemporaries argued that microscopes introduced artifacts or misrepresented nature, making them suspicious of the “reality” of what was being observed. We see here an early skepticism about whether instruments were clarifying or distorting nature, a tension that parallels fears of automation distorting testing outcomes.
  • In the eighteenth and nineteenth centuries, some chemists resisted the use of precise balances to measure chemical reactions, arguing that such reliance on quantitative tools might overshadow qualitative understanding or intuition. The fear was that overemphasis on measurements could make science “mechanical,” reducing the role of insight and creativity.
  • The rise of computational science introduced debates about whether heavy reliance on models and simulations replaced empirical observation and experimentation. For instance, it was asked: Could climate models replace direct environmental measurements? Were physicists spending more time programming than experimenting? The concern here was that scientists might mistake the model (or tool) for reality itself, similar to concerns in testing about relying on automation outputs without deeper investigation.

Now let’s say someone argued that all these instruments, in all cases, should have “nothing to do with” experimentation or science. Actually, none of them really did. But even if they had, that still wouldn’t have obviated the fact that those instruments did have something to do with experimentation and science. And the people who argued the latter were the ones who were listened to. Those same people, once listened to, had a readier audience for their next message, which was that these tools did not replace human beings.

This is similar to the approach teachers took to combat the idea of “automated teaching” as a human replacement. They didn’t argue that such machines had “nothing to do with teaching.” Instead, they focused on a narrative that had a chance of working.

Let’s stick on this theme for a bit. Paul Feyerabend, an Austrian philosopher, argued that scientific methods and instruments can become rigid dogmas, stifling creativity. He worried that instruments could constrain what we perceive as “valid” science, marginalizing alternative ways of knowing. Others have similarly debated the epistemological role of instruments: do they uncover truths, or do they create their own realities? These historical concerns mirror modern arguments about test automation. Just as some scientists worried that instruments might obscure or distort science, some testers fear automation reduces the creative, exploratory aspects of testing. Others worry about “black box” automation tools producing results without clear understanding or interpretation.

That matches up with history. As automation became more prevalent in labs (automated pipetting systems, chromatography machines, etc.), some scientists worried that researchers might lose hands-on skills or fail to fully understand the systems they were studying. The critique often boiled down to a fear of over-reliance: that tools might become a “black box,” distancing scientists from the core of their work.

Ultimately, one way to look at this is that, just as instruments didn’t replace scientists but transformed how science was done, automation doesn’t replace testers but redefines their roles, offering opportunities to combine creativity with efficiency. Key to this, however, was how scientists approached these debates. They didn’t approach them by saying these instruments had “nothing to do with science.” Instead, they indicated that the instruments had “something — but not everything — to do with science.” There was much that humans brought to the science enterprise that the tools, no matter their sophistication, could not replicate.

In case any of those testers I’m speaking out against happen to be reading this, all of the above is what I would consider a much better style of presenting these thoughts. The goal is to engage people, particularly the ones you most want to convince. All of those are public examples of scientific instruments that serve as tools for experimentation but do not replace the scientist’s role in interpreting results, refining hypotheses, or designing experiments. No informed person would say that those tools have “nothing to do with experimentation” or “nothing to do with science.” In the same way, no one should say that “automation tools have nothing to do with testing.”

The mere fact that automation is consistently discussed and debated within the context of testing reflects its integral role in the testing ecosystem. While automation is not testing itself, it unquestionably intersects with and influences testing practices, just as scientific instruments do with experimental research.

Framing a Narrative That Works

The more constructive conversation isn’t about whether automation has “anything” or “nothing” to do with testing, but how it can best complement human creativity and judgment in the testing process. Similarly, with TDD. The much more constructive conversation isn’t about whether TDD has “anything” or “nothing” to do with testing, but how TDD can help focus on design decisions, where said design decisions provide the basis for introducing more and better testability into what is being written.

I’ve written before how some testers need a better narrative. That remains true. Testers need to always be thinking about: what is the audience for whatever it is I’m promoting? Testers of the type I’m explicitly criticizing in this post are often writing for an audience that’s already predisposed to believe the message anyway. But those aren’t the people that most need to be convinced.

The people who do need to be convinced are the ones we tend to assume are most demonstrably hurting the craft or discipline, often on the assumption (perhaps a correct one!) that those people have a compromised view of what testing is, how testing functions, the role of technology in the context of testing, and so on. The good news here, as I’ve pointed out above, is that there is a long history of this very same debate playing out in different contexts — from teaching to numerous sciences — and that history shows us very well what works and what doesn’t when it comes to narrative framing and convincing the people who most need to be convinced.

As I close this post, note the possible irony: the people I most need to convince — those “nothing to do with” testers I speak of — probably won’t have read this. If they do, I can almost guarantee that they would have no solid refutations because, in most cases, from what I’ve found, they actually can’t do the practices they most condemn. Okay, so maybe I don’t convince that audience. That’s actually okay with me, because then the alternate strategy is just to convince everyone else that those “nothing to do with” testers, perhaps in a delicious irony, “have nothing to do with testing.”

This article was written by Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.
