A Values-Based Approach to Systemic Test Team Problems

I was recently in a work environment where we had thousands upon thousands of test cases stored in a tool called TestLink. A major problem was that there was very little impetus for the testers to ever analyze all those tests or to question whether TestLink was the most effective tool for us. This was due in part to some members of management who, perhaps in fear of bruising egos or seeming too critical, basically said: “What we’ve done has worked.” When the testers heard that, they assumed: “Well, that means our testing has been good.” Eventually I came along and argued that our test repository was a mess and that our tool of choice was not the most effective. Here’s a little of what I learned from that experience.

Now, of course, I didn’t just say to everyone: “Wow, our tests really suck. Who created all this? And this tool is awful. Who picked that?” Instead what I did was promote a more modern approach to testing based on a TDL-driven (TDL being a Test Description Language), executable specs-based repository, very much like what you would see in a “BDD shop.” I promoted this not just because I thought it was an effective approach but also because I believed it solved some real problems that I had perceived, both in our test content and in our test management tool. I advocated moving away from TestLink and, in effect, moving away from the style of test writing that made TestLink seem like the “right” tool for us.
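To give a flavor of what that looks like, here is a minimal sketch of an executable spec in the Gherkin format that tools like Cucumber consume. The feature and its details are invented for illustration; they are not from our actual repository:

    Feature: Account withdrawal
      As an account holder, I want to withdraw cash
      so that I have money when the bank is closed.

      # The steps are written in the language of the business
      # domain, not as numbered UI clicks paired with a separate
      # column of expected results. The same text can serve as a
      # requirement, a manual test and, once bound to an
      # automation layer, an automated test.
      Scenario: Withdrawal within the available balance
        Given an account with a balance of 100 dollars
        When the account holder withdraws 20 dollars
        Then the account balance is 80 dollars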

This argument of mine was largely based on values. I’m going to approach what I mean by that somewhat obliquely, because there was concern among some that their past choices would be seen as irrational. But, as many of us know, decisions that seem irrational were often quite rational according to a different set of values.

Starting With Things That Didn’t Really Matter As Much

The elephant in the room was that some of the testers disagreed (or perhaps agreed to differing extents) about the problems perceived in TestLink, whether with the tool or with its content. There was also a vested interest in what had been produced in the past — not so much because it was working, but because of fear of how it would reflect on people if we largely had to throw away thousands of tests. There was a latent fear of leaving TestLink as well, because most of the testers simply couldn’t conceive of their testing artifacts outside of the tool anymore. The approach to test design and the test management tool had become intertwined.

What this was showing me is where people put value. There was value in a tool that could hold these thousands of test cases and report on their status, regardless of how easy it was to write tests in this tool, regardless of how flexible the tool allowed us to be in test expression, and regardless even of whether we should have a tool capable of holding nearly ten thousand test cases. People also valued how their past work was perceived more than they valued being critical of it in the present. People valued sticking with the relatively certain past as opposed to moving into a somewhat more uncertain future. People valued having their past choice of TestLink validated over questioning whether TestLink still served our needs.

The problem with all these concerns, from my perspective, was that we were looking at something that was largely epiphenomenal. The question of how we “go from” TestLink to a TDL-driven, executable specs-based repository — assuming that was to be our path — had nothing to do with why TestLink was chosen as a tool and had only a little to do with how the content got into TestLink. As I promoted a new test writing approach, the issue of the TDL got wrapped up in this tool/content issue. And, naturally, everyone drew comparisons and correlations. The problem was that we risked being unfair to both tool and approach if we compared inaccurately or correlated too tightly.

This change in thinking had everything to do with the kind of approach that was likely to be most successful in taking our existing test knowledge (which just so happened to be encoded in tests in a tool called TestLink) and placing that knowledge in a different context — given what we knew about ourselves. And that comes right back to values.

Starting With Things That Really Mattered

A lot of people really wanted to focus on the debate over which test management solution was being used or which structured approach was used for test writing. But I argued that we needed to take those things off the table for a moment and instead focus on the more operative questions which, to me, were these:

  • What makes a good test?
  • How do you recognize a bad test?

Quite literally all of our issues stemmed from the fact that not everyone seemed to agree on the answers to those questions. Or, actually, it was even worse: no one had ever bothered to really discuss them. So people didn’t even know if they agreed or disagreed about such fundamental questions. And, given that, they certainly didn’t know whether their disagreements (if there were any) were about fundamentals or simply about details. My point to the team was: if people were disagreeing on these things, it really didn’t matter where we wrote our tests or what approach we used.

Once you start settling on answers to those questions, you get into these further questions:

  • Given our application and the test cycle we would like to have…
    • What are the most effective test techniques for finding bugs?
    • Of those effective techniques, what do we feel are the most efficient techniques?

Clearly a team will try to establish techniques that are effective and efficient. But sometimes you may have to settle for effective, sacrificing some efficiency.

There is usually no single “best” technique. Rather, you’ll have a choice of good techniques, all of which require certain compromises. When choosing a technique, you need to consider the one or two that will best suit your needs with the fewest compromises likely to cause you problems. Notice that making this kind of decision will be, in part, based on your values.

When I presented thinking like this to the team, I had to stop and ask: “Does everyone agree with that? What techniques do each of us use, either here or based on our experience? What are techniques that we’ve found more helpful (whether here or at previous employment)? What are techniques we have found less helpful (again, here or elsewhere)?”

The testers were entirely unprepared for this, as it turned out. And I realized it’s a good question to ask of testers: if someone asked what techniques you have found most helpful in your career, what would you say? A lot of testers like to throw off terms like “equivalence classes”, “basis path testing”, “orthogonal arrays”, “pairwise tests”, “test sessions” and so on. But then you have to question whether they have actually used those techniques. Or just read about them? Or saw someone else use the technique? This, too, tells you what they value. Does the person value direct experience over and above indirect experience? Does the person value simply having heard of a technique more than actually applying it?
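As a concrete illustration of the difference between naming a technique and applying it, here is a sketch of equivalence partitioning expressed in Gherkin. The password rules (an assumed valid length of 8 to 64 characters) are invented for the example:

    Feature: Password validation

      # Equivalence partitioning: each Examples row stands in for a
      # whole class of inputs the application should treat alike.
      # Assuming a valid length of 8 to 64 characters: 7 and 65 sit
      # just outside the valid range (one per invalid class), 8 sits
      # on the lower boundary, and 30 represents the valid class.
      Scenario Outline: Password length is validated on registration
        When a user registers with a password of <length> characters
        Then the registration is <outcome>

        Examples:
          | length | outcome  |
          | 7      | rejected |
          | 8      | accepted |
          | 30     | accepted |
          | 65     | rejected |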

Notice that we’re not even really talking here about how to write tests, per se. We’re talking, so far, about how we express tests, which can be written or verbal. And how we express tests tends to reflect how we think about them in the first place.

And that was really the key issue I needed the team to be thinking about, over and above who wrote what test cases in the past and why TestLink was chosen as the tool to store them.

Speaking for myself here, every time I look at a test artifact, assuming I don’t know the tester who wrote it, I’m trying to determine — almost in a forensic way — how the tester thinks. I try to understand what values guide their expression and what ethic of investigation informs their thinking. And, obviously, I try to get that information from each individual tester but sometimes I have to do the equivalent of archaeology.

My overall point here is this: once you can answer the questions I posed above, you can start to ask what kind of approach (not tool!) best supports you in:

  1. Writing what you said are good tests.
  2. Helping you avoid writing bad tests.
  3. Incorporating the techniques you said were effective.

Finally, when you have that pretty well understood, you have the basis for considering what tool support you need in order to enable all of this.

Even if you were handed a tool because someone else bought it, nothing stops you from making the above analysis.

So my final point to the team was that when people heard my opinion on TestLink (or any tool), they should keep in mind that my opinion was based on the analysis I just provided here and upon the promotion of a specific approach (in my case, a TDL-driven, executable specs-based repository) that is consonant with how I think, with the values I try to perform my work by, and with the ethic of investigation that I use to justify my work, not only personally but also in terms of what I believe it means to be a practitioner of quality assurance and testing.

As Above, So Below…

A while back I wrote about resilient teams and the values they have, so this post actually came about because I realized I did practice what I preach, at least in this instance. The previous article certainly had a broader focus than this one, but I guess that’s also part of the point: as they say, the devil is sometimes in those day-to-day details. How you relate your theory to your practice does matter. I guess that’s another one of those value things.


This article was written by Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.

4 thoughts on “A Values-Based Approach to Systemic Test Team Problems”

  1. Hi: I’m the Team Leader of TestLink, and I’m very interested in understanding (if you have enough time) what the weak points of the tool are.

    I’m really open to understanding what things can be improved, or why it is awful.

    I think your article doesn’t take a clear, really clear, position regarding the tool, but there are hints that seem to point to TestLink as not a good tool.

    I would really appreciate any comments that can be analyzed in order to understand what can be improved.

    Best Regards,

    Francisco

  2. Greetings. Thank you for your comment. You mention “I think your article doesn’t take a clear, really clear, position regarding the tool” and that’s true, but mainly because the article wasn’t entirely about TestLink so much as it was about a particular type of solution.

    Why was TestLink not that kind of solution? I kind of say it in the article:

    “The question of how we ‘go from’ TestLink to a TDL-driven, executable specs-based repository…”

    TDL-driven, executable specs-based repositories are those wherein you don’t rely on test management in the way that many tools, like TestLink, promote by their very structure, such as enforcing numbered steps or keeping a separate set of numbered steps for expected results.

    Mind, nothing says this is bad in some categorical sense, and it certainly doesn’t imply that TestLink itself is bad or deficient in what it does. What this article was talking about was adhering to a structuring mechanism that requires test formats to be more flexible in some ways and, in fact, not constrained to a particular management solution, particularly if you follow solutions where requirements serve as manual tests and those manual tests are automatically converted to automated tests.
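    To make that contrast concrete, here is a small sketch (the login test is invented for illustration) showing the same test intent in both forms:

      Feature: Login

        # The numbered-step style that many test management tools
        # enforce through their structure, approximated here:
        #
        #   Step 1: Navigate to the login page.
        #   Expected 1: The login page displays.
        #   Step 2: Enter valid credentials and click "Log In".
        #   Expected 2: The dashboard displays.
        #
        # The same intent as an executable spec, where the action
        # and the expectation read as one flow and can be bound
        # directly to automation:
        Scenario: Successful login with valid credentials
          Given a registered user
          When the user logs in with valid credentials
          Then the user sees their dashboard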

    In the article I state “…had nothing to do with why TestLink was chosen as a tool and had only a little to do with how the content got into TestLink.”

    That was sort of my point: it wasn’t that TestLink was bad in what it was doing. It was simply predicated upon a particular notion of how tests are constructed — and to be sure, that past notion is still very much alive and well. But more BDD-based approaches tend to go a different route, which is the route I was advocating.

    The article also says:

    “So my final point to the team was that when people heard my opinion on TestLink (or any tool), they should keep in mind that my opinion was based on the analysis I just provided here and upon the promotion of a specific approach (in my case, a TDL-driven, executable specs-based repository)…”

    The “or any tool” part is key there. TestLink just happened to be the tool in use in this situation. But I could have said the exact same thing if the tool was, say, Quality Center. In my experience, tools like TestLink are simply not as amenable to the notion of TDL-driven, executable specs-based repositories wherein tests are, in many cases, emergent and directly tied to an underlying orchestration layer. That by no means makes TestLink a “bad tool”. You’ll note in the article that I say:

    “Now, of course, I didn’t just say to everyone: ‘Wow, our tests really suck. Who created all this? And this tool is awful. Who picked that?'”

    Again, I did NOT say the tool is awful. What I did argue was that the process had been reversed: a tool was chosen, and only much, much later was any thought given to the type of tool needed to support the kind of test design that was most effective, as opposed to deciding on the test design first and judging tools by whether or not they would support our chosen approach.

    I’m not sure if this helps clarify things at all. The best I can say is that TestLink is not an awful tool to me at all, any more than Quality Center is, or QA Traq, or whatever else. It’s simply that, in my experience, most test management tools do not support the more fluid test writing that uses an instrumentation and orchestration layer to combine requirements-as-tests, tests-as-specifications, and the direct conversion of manual tests to automated tests.

    1. Unfortunately, the term “TDL” (Test Description Language) is not used as often as it should be, but essentially it is any solution that allows you to craft a language that serves as requirements, manual tests, and automated tests. Even more specifically, it goes back to the idea of testing being a design activity and how that works in relation to domain-driven design. (If you want to go back even further, it could be aligned with Tom Gilb’s Planguage, which in turn goes back to the testing described in Competitive Engineering.)

      The most common modern application of these concepts is seen in the BDD-type solutions, such as Cucumber, JBehave, SpecFlow, Behat, Lettuce, etc. Note that this is not saying that Gherkin — which tends to underlie those tools — is a TDL. That’s a common mistake some testers make. Gherkin is simply a structuring element for a TDL. The TDL itself is business domain language that is used to construct intent-revealing tests in executable test specifications.
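      To sketch that distinction (both scenarios are invented for illustration), the Gherkin structure is identical in the two scenarios below, but only the second carries a business-domain TDL:

        Feature: Funds transfer

          # Gherkin structure without a TDL: the steps talk about
          # the user interface rather than the business domain.
          Scenario: Transfer via the UI (not intent-revealing)
            Given I am on the "Transfer" page
            When I type "500" into the "Amount" field
            And I click the "Submit" button
            Then I should see the text "Success"

          # The same Gherkin structure carrying a TDL: the steps are
          # business domain language and reveal the intent of the test.
          Scenario: Transfer between a customer's own accounts
            Given a customer with 1000 dollars in checking
            When the customer transfers 500 dollars from checking to savings
            Then both accounts reflect the completed transfer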

      I have written a few posts in the TDL category.
