Years ago I asked about what makes testing complicated. At the time I didn’t have a very distinct sense of the nuance between “complex” and “complicated.” But I think my instinct was accurate. So here I want to focus on what makes testing complex (which is often inevitable), and that can help frame what makes testing complicated (which is not inevitable).
Many companies I’ve been at are in a race to see how much like Spotify they can be, applying concepts like Chapters and Guilds. What I routinely see is companies getting this bit wrong, particularly around so-called “quality guilds.” So let’s talk about this.
I’ve often talked about the idea of tests putting pressure on design. I’ve also talked about this idea in the context of code-based testing. Here I want to revisit those concepts while including a cautionary tale about how testing at the code level has its own interesting challenges.
Thankfully most testers I come across do realize that the notion of having “zero defects” is, in fact, a fallacy. But the idea of something being “defect free” still persists in the wider industry, and it’s important to quash that perception. How I frame this when I encounter the thought differs a bit by context, so here I’ll give a brief overview of the various ways I respond.
My original title for this post was “Thinking Clearly About Automation” but I realized there was a wider ambit to that discussion. We have a technocracy that likes to turn testing into a programming problem, suggesting that “manual testing” (testing done by humans) should be automated away as much as possible. That’s a danger. Some testers have combated this by suggesting automation has nothing to do with testing. I believe that’s also a danger.
We often say testers have to “think like an architect” or “think like a builder” or, perhaps even, “think like a developer.” Here’s the problem: to actually think like any one of these people, you have to try to do something they do. So, really, you have to act like a developer. Let’s talk about this and where the testing relevance comes in.
Here I’m going to write one of those posts that I find the most fun but that many testers probably struggle with in terms of seeing how (or even whether) I’m being relevant. I want to talk a little about an aspect of testing that I think is consistently underused and consistently undersold in the industry: recovering a context that has been buried under years of major and minor decisions.
In my previous post on human and automated testing working together, I showed an example of test design by humans that could be executed by automation. However, the focus was on making sure that the automation helped the humans adhere to an unambiguous domain model of what is being tested. Here I’ll follow up on that practical example with some of the theory.
One of the obstacles to closing the gap between the principles of testing and the practice of testing is the mechanics of writing tests. This is particularly the case if you work in a business domain with a lot of complex business rules. It’s even more the case if you want to use automation. So let’s dig into this a bit with a case study.
I recently participated in a discussion around the idea of whether testers “own quality” in some sense. The answer to me is obvious: of course not. But an interesting discussion did occur as a result of that. This discussion led to my post about what it meant to own quality across the abstraction stack. But there’s a more systemic concern with this level of thinking that I want to tackle here.
I had no idea what to call this post. My focus here is on the notion of owning quality. As in: who does so? I won’t tackle all the nuances of that wider topic here. But, due to recent discussions, I did start to think about what it looks like for testers to own even limited bits of quality in an industry currently focused on some form of DevOps.
I find that many testers still like to “talk in quotes.” Meaning, they like to throw out quotes or sentences and then act as if they’ve said something profound. And maybe they have. But I’m seeing more of this lately without, it often seems, the necessary ability to think beyond the quote. Let’s dig in and see if I have a point to make.
So, not surprisingly, the AI test tooling community didn’t want to engage with my AI test challenge. They saw it as inherently unfair. And, to a certain extent, it could be. But what this really showcased was that people are talking about AI and test tooling with way too much hyperbole in an attempt to gain traction. So was my test challenge unfair? Is there too much hyperbole? Let’s dig in a bit.
This is not a challenge for testers to test an AI. Although that is a worthy challenge, one I tackled a bit. For right now, I want to propose a challenge for those promoting tools that claim to perform testing, particularly when the claim is that such tooling stands a chance of replacing human testers.