I recently participated in a discussion about whether testers “own quality” in some sense. To me the answer is obvious: of course not. But an interesting discussion did occur as a result, and it led to my post about what it means to own quality across the abstraction stack. There’s a more systemic concern with this level of thinking, though, that I want to tackle here.
I had no idea what to call this post. My focus here is on the notion of owning quality. As in: who does so? I won’t tackle all the nuances of that wider topic here. But, prompted by recent discussions, I did start to think about what it looks like for testers to own even limited bits of quality in an industry currently focused on some form of DevOps.
I find that many testers still like to “talk in quotes.” Meaning, they like to throw out quotes or sentences and then act as if they’ve said something profound. And maybe they have. But I’m seeing more of this lately without, it often seems, the necessary ability to think beyond the quote. Let’s dig in and see if I have a point to make.
A while back I talked about a possible test apocalypse and how you might respond to it. In honor of the current film Avengers: Endgame, I started thinking about this topic again. Here I’ll use the well-known “snap” of Thanos as my starting point, trusting that this makes the article title a little clearer.
The title of this article is actually a little too simplistic. It’s more about asking: “Will AI Truly Perform Testing?” Or perhaps: “Will AI Perform Actual Testing?”
I’m asking here: do testers understand testing? By which I mean: do testers truly understand testing? By which I mean … okay, you know what, let’s just dig into the basis of testing for a moment.
We all know the situation, right? We find an “issue,” but in many cases it comes down to determining whether that issue is something for the developers to fix or something for product to clarify. The question is often framed as “Is this a bug?” Hidden within that question is the protean nature of a “bug” in current software development.
This post continues directly from the first post, where I introduced the concepts as well as an application that can serve as a guide to some of the ideas. If you haven’t already, I highly recommend reading that first post for context.
In this two-part post, I want to cover the distinctions between integration testing and integrated testing, as well as between edge-to-edge and end-to-end testing. I also want to show how this thinking should lead testers to think more in terms of contract testing. And, finally, so all of this isn’t entirely theoretical, I’ll provide a React-based code example. That’s a lot to cover, so let’s get to it.
I still see many testers treating the number of bugs found as some sort of barometer of effective testing. But lately I’ve seen this framed around the “quality” of bugs found rather than just their quantity. Still, you have to be a bit careful here. Let’s talk about this.
You’ll often see questions about the practice of software testing that essentially boil down to this: “Which is better: manual testing or automated testing?” This is how many engineering managers view the world, and while they may dress it up in more words, that is the question they’re asking. And the answer they generally arrive at is: “Automated testing is better.” How do testers combat that? And should they?
In this post I want to explore how the theory of constraints can be combined with cost-of-mistake curves to consider how testing operates, first and foremost, around the concept of design. Keeping design cheap at all times is a value of testing that I rarely see articulated. So here I’ll give that articulation a shot.
My contention is that specialist testers know enough not to use the term “non-functional.” And if they’re in environments where the term is used, they seek to shift people away from that vocabulary. This is one of the ways I spot specialist testers. Let’s talk about my rationale for this and why I think it’s important.
The general idea of a prison of representation is that you are locked into some means or method of being understood. This locking can come from the past and, interestingly enough, from the future. I believe testing, as a discipline, is in danger of being in such a prison. Let’s talk about this.