Many companies I’ve worked at are in a race to see how much like Spotify they can be, applying concepts like Chapters and Guilds. What I routinely see is that companies get this bit wrong. Particularly around so-called “quality guilds.” So let’s talk about this.
I recently participated in a discussion around the idea of whether testers “own quality” in some sense. The answer to me is obvious: of course not. But the question sparked an interesting discussion, which led to my post about what it meant to own quality across the abstraction stack. There’s a more systemic concern with this level of thinking, though, that I want to tackle here.
In our testing industry we’ve borrowed ideas from the physics realm to provide ourselves some glib phrases. For example, you’ll hear about “Schrödinger tests” and “Heisenbugs.” It’s all in good fun but, in fact, the way that physics developed over time has a great many parallels with our testing discipline. I already wasted people’s precious time talking about what particle physics can teach us about testing. But now I’m going to double down on that, broaden the scope a bit, and look at a wider swath of physics.
In the United States we are currently going through one of our normal rounds of political craziness as we move towards a new election. This is not a political blog and I don’t want to add to the crazy. Thus this post will not discuss current political viewpoints, whether for or against, and will have nothing to do with current candidates. Rather this post will discuss one specific aspect of politics that has a historical context that relates to how our testing industry has evolved and continues to evolve.
A while back I talked about being cross-discipline associative. I did something similar when I asked what time travel could teach us about testing. Let’s see how this works with another domain entirely.
I’ve always been interested in the different ways that testers think and how those modes of thinking directly apply to the work testers do. What it comes down to for me is how people learn. This ultimately impacts how they evolve their career. And, in a somewhat loaded statement, how a tester has evolved their career tells me how useful they are going to be.
One of the worst things that can ever happen to a quality function or test team is a credibility gap. When perceptions of quality take a hit — whether internal or external — you are on a bad path. Here I don’t want to talk about how you get out of that situation. What I want to do is talk about how you can avoid getting into it in the first place. But I want to talk about it at the level that really matters. This means talking about morals and values. That’s tricky, because such discussions are fraught with peril and subjectivity. Yet we need to tackle them head-on.
For those of you who work in agile environments, maybe nothing I say here will be new. Even if you don’t work in an agile environment, you may have found yourself thinking along these lines but not quite sure how to articulate it. That’s a challenge I’ve faced myself, having to explain to people that the process gates you typically see in a “waterfall process” can be accommodated in an “agile process.” So let’s talk about that.
There is a notion in quality assurance and testing of a distinction between verification and validation. Verification asks “Are we building the product right?” Validation asks “Are we building the right product?” Some people use this very distinction to draw a line between the activities of quality assurance and the activities of testing.
In talking about test teams as inventors, I mentioned that Albert Szent-Gyorgyi said “Discovery consists of looking at the same thing as everyone else does and thinking something different.” I wanted to go back to that thought because it’s not the act of “thinking something different” but rather the act of “thinking differently” that really matters to me. This is even more so the case in an industry where testing and development continue to move closer together and, in fact, often merge.
In a previous post I talked about inventors as people who see and think differently. I also brought up the idea of inventors having to institute cultural change in some cases. What I didn’t do is explain how an inventor uses their skill set and mindset to get that change going. So let’s talk about that. Hopefully I won’t make too much of a fool of myself.
In various posts I’ve tried to show how I believe testers can invent solutions to problems they are encountering. These solutions do not always have to be tool-based in nature. Sometimes you are presenting a new way of thinking about processes, sometimes you are reframing problems, sometimes you are providing a hopeful vision, sometimes you are in fact coming up with ad hoc tools, and sometimes you are coming up with new techniques. At the core of all this, however, is thinking about problems and thinking about solutions. It’s about being an inventor.
I find myself in a philosophical mood today, and it’s based on some experiences with testers who simply don’t ask questions. As testers we have to ask questions. Lots of questions. We also have to recognize when we are getting answers. And we have to realize that sometimes answers aren’t easy to come by. Here I’m not just talking about the idea that no one has the answers. What about when answers just aren’t possible at all? Can that happen? Of course it can. It depends on the types of questions being asked and the context in which answers are expected. (See my time travel post for one example of this.)
So — just bear with me here — I want every tester out there to consider a question: what existed before the big bang?
Some testers — and most managers — like to talk about metrics. One thing that often doesn’t get discussed is what I call metric dissociation. Here’s my thought: metrics should be easily or directly interpretable in terms of product or process features. If they’re not, they are dissociated from the actual work being done, and there’s great danger of the metrics being misleading at best or outright inaccurate at worst.