I was recently in a work environment where we had literally thousands and thousands of test cases that were stored in a tool called TestLink. A major problem was that there was very little impetus for the testers to ever really analyze all these tests or to ever question if TestLink was the most effective tool to be used. This was due in part to some members of management who, perhaps in fear of bruising egos or seeming too critical, basically said: “What we’ve done has worked.” When the testers heard that, they basically assumed: “Well, that means our testing has been good.” Eventually I came along and essentially argued that our test repository was a mess and that our tool of choice was not the most effective. Here’s a little of what I learned from that experience.
Now, of course, I didn’t just say to everyone: “Wow, our tests really suck. Who created all this? And this tool is awful. Who picked that?” Instead what I did was promote a more modern approach to testing based on a TDL-driven, executable specs-based repository, very much like what you would see in a “BDD shop.” I promoted this not just because I thought it was an effective approach but also because I believed it solved some real problems that I had perceived, both in our test content and in our test management tool. I advocated moving away from TestLink and, in effect, moving away from the style of test writing that made TestLink seem like the “right” tool for us.
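To make that contrast concrete, here's a minimal sketch of what I mean by an executable spec. Everything in it (the `Cart` class, its behavior) is invented for illustration; the point is that the test doubles as a readable description of intent rather than a record of steps filed away in a tool:

```python
# A minimal executable-spec sketch. The Cart class and the behavior it
# describes are hypothetical; the point is that the test itself reads
# as a specification of intent.

class Cart:
    """A toy domain object standing in for the system under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_two_items_yields_their_combined_total():
    # Given an empty cart
    cart = Cart()
    # When two items are added
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    # Then the total reflects both prices
    assert cart.total == 12.5


test_adding_two_items_yields_their_combined_total()
```

Because the spec executes, it can never silently drift out of date the way a test case stored in a management tool can; it either describes the system or it fails.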
This argument of mine was largely based on values. I’m going to approach what I mean by that somewhat obliquely. I say this because there was concern among some that their choices would be seen as irrational. But, as many of us know, many decisions that may seem irrational were probably rational according to a different set of values.
Starting With Things That Didn’t Really Matter As Much
The elephant in the room was that some of the testers disagreed (or perhaps agreed to differing extents) about the problems that were perceived in TestLink, whether with the tool or the content. There was also a vested interest in what had been produced in the past — not so much because it was working, but because of fear of how it would reflect on people if we largely had to throw away thousands of tests. There was a latent fear of leaving TestLink as well because most of the testers simply couldn’t conceive of their testing artifacts outside of the tool any more. The approach to test design and the test management tool had become intertwined.
What this was showing me is where people put value. There was value in a tool that could hold these thousands of test cases and report on their status, regardless of how easy it was to write tests in that tool, regardless of how flexible the tool allowed us to be in test expression, and regardless even of whether we should have a tool capable of holding nearly ten thousand test cases. People also valued how their past work was perceived more than they valued being critical of it now in the present. People valued sticking with the relatively certain past as opposed to moving into a somewhat more uncertain future. People valued having their past choice of TestLink validated over questioning whether TestLink still served our needs.
The problem with all these concerns, from my perspective, was that we were looking at something that was largely epiphenomenal. The question of how we “go from” TestLink to a TDL-driven, executable specs-based repository — assuming such was to be our path — had nothing to do with why TestLink was chosen as a tool and had little to do with how the content got into TestLink. As I promoted a new test writing approach, what got wrapped up in this tool/content issue was the issue of the TDL. And, naturally, everyone drew comparisons and correlations. The problem was that we risked being unfair to both tool and approach if we compared inaccurately or correlated too tightly.
This change in thinking had everything to do with the kind of approach that was likely to be most successful in taking our existing test knowledge (which just so happened to be encoded in tests in a tool called TestLink) and placing that knowledge in a different context — given what we knew about ourselves. And that comes right back to values.
Starting With Things That Really Mattered
A lot of people really wanted to focus on the debate of what test management solution was being used or what structured approach was used for test writing. But I argued that we really needed to take those things off the table for a moment. I argued that we should instead focus on the more operative questions which, to me, were these:
- What makes a good test?
- How do you recognize a bad test?
Quite literally all of our issues stemmed from the fact that not everyone seemed to agree on the answers to those questions. Or, actually, it was even worse: no one had ever bothered to really discuss it. So people didn’t even know if they agreed or disagreed about such fundamental questions. And thus they certainly didn’t know whether their disagreements (if there were any) were about fundamentals or simply about details. My point to the team was: if people were disagreeing on these things, it really didn’t matter where we wrote our tests or what approach we used.
Once you start settling on answers to those questions, you get into these further questions:
- Given our application and the test cycle we would like to have:
  - What are the most effective test techniques for finding bugs?
  - Of those effective techniques, which do we feel are the most efficient?
Clearly a team will try to establish techniques that are effective and efficient. But sometimes you may have to settle for effective, sacrificing some efficiency.
There is usually no single “best” technique. Rather, you’ll have a choice of good techniques, all of which require certain compromises. When you adopt techniques, consider the one or two that will best suit your needs with the fewest compromises that will cause you problems. Notice that making this kind of decision will be, in part, based on your values.
When I presented thinking like this to the team, I had to stop and ask: “Does everyone agree with that? What techniques do each of us use, either here or based on our experience? What are techniques that we’ve found more helpful (whether here or at previous employment)? What are techniques we have found less helpful (again, here or elsewhere)?”
The testers were entirely unprepared for this, as it turned out. And I realized it’s a good question to ask of testers: if someone asked what techniques you have found most helpful in your career, what would you say? A lot of testers like to throw off terms like “equivalence classes”, “basis path testing”, “orthogonal arrays”, “pairwise tests”, “test sessions” and so on. But then you have to ask: have they actually used those techniques? Or just read about them? Or watched someone else use them? This, too, tells you what they value. Does the person value direct experience over and above indirect experience? Does the person value simply having heard of a technique more so than actually applying it?
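As an example of what “actually used” looks like, here’s a small sketch of equivalence-class partitioning. The `validate_age` function and its boundaries are hypothetical; the technique is choosing one representative input per class (plus boundaries) rather than testing exhaustively:

```python
# A sketch of equivalence-class partitioning. The validate_age function
# and its valid range (0 to 120 inclusive) are invented for illustration.

def validate_age(age):
    """Accept ages from 0 to 120 inclusive; reject everything else."""
    return 0 <= age <= 120

# Instead of testing every possible age, pick one representative value
# from each equivalence class, including the boundary values.
cases = [
    ("below range",    -1,  False),
    ("lower boundary",  0,  True),
    ("typical valid",  35,  True),
    ("upper boundary", 120, True),
    ("above range",   121,  False),
]

for name, age, expected in cases:
    assert validate_age(age) == expected, name
```

Five inputs stand in for the entire integer domain, which is the efficiency argument for the technique; whether that trade-off is acceptable for a given function is exactly the kind of values question discussed above.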
Notice that with all of this we’re not even really talking about how to write tests, per se. We’re talking, so far, about how we express tests, which can be written or verbal. And how we express tests tends to reflect how we think about them in the first place.
And that was really the key issue I needed the team to be thinking about, over and above who wrote what test cases in the past and why TestLink was chosen as the tool to store them.
Speaking for myself here, every time I look at a test artifact, assuming I don’t know the tester who wrote it, I’m trying to determine — almost in a forensic way — how the tester thinks. I try to understand what values guide their expression and what ethic of investigation informs their thinking. And, obviously, I try to get that information from each individual tester but sometimes I have to do the equivalent of archaeology.
My overall point here being this: once you can answer the questions I posed above, you can then start to ask what kind of approach (not tool!) best supports you in:
- Writing what you said are good tests.
- Helping you avoid writing bad tests.
- Incorporating the techniques you said were effective.
Finally, when you have that pretty well understood, you have the basis for deciding what you actually need from a tool in order to support all of this.
Even if you were handed a tool because someone else bought it, nothing stops you from making the above analysis.
So my final point to the team was that when people heard my opinion on TestLink (or any tool), they should keep in mind that my opinion was based on the analysis I just described and on the promotion of a specific approach (in my case, a TDL-driven, executable specs-based repository) that is consonant with how I think, the values I try to perform my work by, and the ethic of investigation that I use to justify my work, not only personally but also in terms of what I believe it means to be a practitioner of quality assurance and testing.
As Above, So Below…
A while back I wrote about resilient teams and the values they have, so this post actually came about because I realized I did practice what I preach, at least in this instance. The previous article certainly had a broader focus than this one, but I guess that’s also part of the point in that, as they say, the devil is sometimes in those day-to-day details. How you relate your theory to your practice does matter. I guess that’s another one of those value things.