I recently participated in a discussion around the idea of whether testers “own quality” in some sense. The answer to me is obvious: of course not. But an interesting discussion did occur as a result of that. This discussion led to my post about what it meant to own quality across the abstraction stack. But there’s a more systemic concern with this level of thinking that I want to tackle here.
I believe that semantics matter. I do realize not all semantics matter equally. But, still: semantics matter. It’s disappointing when otherwise intelligent people seem to dismiss something simply because they feel it’s just semantics. Let’s talk about this.
I talked before about tradition and dogma and not too long ago, on LinkedIn, I saw someone post yet another one of those bits of dogma in our industry without considering the context. The discussion that ensued showcased exactly the problem with simply regurgitating the “received wisdom” of others. So let’s talk about this.
The question periodically comes up as to what the difference is between “Quality Assurance” and “Testing.” And a disturbingly large number of test professionals will say that “Testing is a subset of quality assurance.” This is a terrible response. Let’s talk about that.
I’ve talked about the notion of test description languages quite a bit. A lot of these discussions get into debates about being declarative versus imperative, or focusing on intent rather than implementation. All good things to consider. But such “versus” terminology tends to suggest there is a “right” and a “wrong” when often what you have is “What makes sense in your context.” And you may have to flexibly shift between different description levels. Let’s talk about this.
One thing I often talk with testers about is a prime focus of our work: being credible reporters of useful and timely information in a diplomatically persuasive way. Coupled with that, I’m just coming out of a particular job wherein I feel my career took two steps backward, and I’m now in the process of regaining my forward momentum. The “steps backward” have to do with personal credibility, and it’s why I’ve been silent for a month or so.
I keep running into testers, and others, who like to categorize testing into functional and non-functional aspects. This needs to stop, in my humble opinion.
I was recently in a work environment where we had literally thousands and thousands of test cases that were stored in a tool called TestLink. A major problem was that there was very little impetus for the testers to ever really analyze all these tests or to ever question if TestLink was the most effective tool to be used. This was due in part to some members of management who, perhaps in fear of bruising egos or seeming too critical, basically said: “What we’ve done has worked.” When the testers heard that, they basically assumed: “Well, that means our testing has been good.” Eventually I came along and essentially argued that our test repository was a mess and that our tool of choice was not the most effective. Here’s a little of what I learned from that experience.
There is a distinction I want to make in this post regarding what you change in a test specification and how a test specification itself may change, in terms of the role it provides. That leads into a nice segue about how team roles also change. Here by “test specification” I mean the traditional “feature file” of BDD tools like Cucumber, Lettuce, Spinach, SpecFlow, and so on.
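For readers who haven’t worked with these tools, a “feature file” is a plain-text Gherkin document of roughly this shape. This is a minimal invented sketch, not an example from any project mentioned here; the feature, steps, and values are all hypothetical:

```gherkin
# Hypothetical feature file for illustration only.
Feature: Account withdrawal
  As an account holder
  I want to withdraw cash
  So that I can have money on hand

  Scenario: Withdraw an amount within the balance
    Given my account balance is 100 dollars
    When I withdraw 40 dollars
    Then my account balance should be 60 dollars
```

Each Given/When/Then step is matched to step-definition code by the tool; the feature file itself stays readable as a plain-language specification, which is exactly what makes its role on a team worth discussing.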
I have been introducing Cucumber to testers who have little exposure to such tools. I was looking at whether The Cucumber Book would be worth having around the office. And while it may or may not be, one thing I notice is that it (like most resources on Cucumber I find) doesn’t really address some of the heuristics regarding how you can start thinking about writing test specifications.
I’ve found myself in a position lately of having to explain a lot of concepts that are “obvious” to me. I found myself getting frustrated, but then I considered my own words regarding the “obvious” nature of Quality Assurance and I realized that maybe I wasn’t establishing the context of what I was talking about. So I took a step back and started to look at whether many of the testers I work with and meet these days are aware of, much less practice, the idea of requirements being tests; of acceptance test specifications that drive development; of specification workshops. As it turned out, no: most testers were not practicing these concepts, and many were not even aware of them as a shift in the dynamic of how testing can be done.
For those of you who work in agile environments, maybe nothing I say here will be new. Even for those who don’t work in agile environments, you may have found yourself thinking along these lines but not necessarily sure of how to articulate it. That’s a challenge I’ve found myself facing, where I had to explain to people that the process gates you typically see in a “waterfall process” can be accommodated in an “agile process.” So let’s talk about that.
One thing I can claim to know: as any company continues to grow its quality practices (not just its test practices), its challenges will grow. One of those challenges will be making sure that the company can operate in a so-called “agile” fashion while still building a solid base of actionable knowledge related to quality and testing. So let’s talk about that.