In my previous post on human and automated testing working together, I showed an example of test design by humans that could be executed by automation. The focus, however, was on ensuring that the automation helped the humans adhere to an unambiguous domain model of what is being tested. Here I’ll follow up on that practical example with some of the theory.
One of the obstacles to covering the gap between the principles of testing and the practice of testing is the mechanics of writing tests. This is particularly the case if you work in a business domain with a lot of complex business rules, and even more so if you want to use automation. So let’s dig into this a bit with a case study.
I’ve talked about the notion of test description languages quite a bit. A lot of these discussions get into debates about being declarative versus imperative, or focusing on intent rather than implementation. All good things to consider. But such “versus” terminology tends to suggest there is a “right” and a “wrong” when often what you have is “What makes sense in your context.” And you may have to flexibly shift between different description levels. Let’s talk about this.
I’ve talked about “being lucid” and using description languages before. I have a whole category here devoted to TDL (Test Description Language) and I’ve worked to present examples that are not your standard “shopping cart”. In this post, I’ll cover an example of how I helped testers go from a “bad test” to a “good test.”
Previously I had talked about the craft of testing, focusing on the balance between the creativity of an artist and the methodology of a scientist. In both cases, however, the focus is on communication. To that end, it’s imperative we can express what we test in good English. (Or whatever the native language of your environment is.) So let’s talk about this key skill.
If you plan on using a BDD tool (like Cucumber, SpecFlow, Behat, etc.), you are going to want some guidelines for how, and to what extent, you allow parameterized and conditionalized phrases. This is an area that I’ve found can become a rat’s nest of bad habits unless you establish early on how far you’re willing to take these features.
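To make the concern concrete, here is a minimal sketch of disciplined parameterization in Gherkin. The feature, steps, and values are all hypothetical, invented for illustration: a Scenario Outline varies the data while the step wording stays fixed and readable.

```gherkin
# Illustrative only: the feature, steps, and values are hypothetical.
Feature: Account withdrawal limits

  # Parameterization lives in the Examples table, not in the prose
  # of the steps, so each step still reads as a plain sentence.
  Scenario Outline: Withdrawal is rejected when it exceeds the daily limit
    Given an account with a daily withdrawal limit of <limit>
    When the user attempts to withdraw <amount>
    Then the withdrawal is rejected

    Examples:
      | limit | amount |
      | 300   | 301    |
      | 300   | 500    |
```

The rat’s nest tends to appear when every noun becomes a parameter (“Given a &lt;type&gt; account with &lt;n&gt; &lt;things&gt;…”), at which point the steps no longer read as statements of behavior at all.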
There are many references out there that discuss Gherkin, which is a structuring element for your test description language (TDL). So what I’ll be offering here is really nothing new. This post is simply a relatively concise look at the TDL pieces and parts in terms of how these elements can be used in the context of Lucid or related tools.
Regarding my post on Seeking Requirements in TDL, a comment was made asking whether I was focusing on the conditions as part of the scenario and whether this tied into how I view requirements being made manifest in a test spec. The answer to both questions is “yes,” but since that response may require a bit of elaboration, this post is my attempt at that.
Regarding my previous post on this subject, a tester I work with asked me a great question regarding the readability of the TDL and the ability to discern the actual requirements from it. This post is essentially what my answer was. Whether my answer is good or not is up to the individual reader.
The goal of a Test Description Language (TDL) is to put a little structure and a little rigor around effective test writing, where “effective test writing” means tests that communicate intent (which correspond to your scenario title) and describe behavior (the steps of your scenario). Since those attributes should be what all statements of requirement strive for, this means that requirements and tests, at some level of approximation, can be the same artifact. That “level of approximation” is the point at which you get down to specifying the behavior that users find value in.
Let’s dig into this a bit more.
A TDL (Test Description Language) is a constructed language that we use to describe, and thus specify, our requirements as tests. Or our tests as requirements, if you prefer. This is what allows testing to be a design activity. What makes a style of writing a TDL is adherence to a structuring element and a set of principles and patterns that are used to guide expression.
Current forms of TDL swirl around various BDD concepts, such as Given-When-Then. But it’s clear that just having that structure in place does nothing for you by itself, because a lot of thought goes into how you want to express yourself. I’ve found many testers really struggle with this but, equally, I’ve found I struggle to adequately teach at what level to work with a TDL.
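As a minimal sketch of the point about expression, here is a hypothetical Given-When-Then scenario (the domain and wording are mine, purely for illustration). Note how the title carries the intent while the steps describe observable behavior:

```gherkin
# Illustrative only: the domain and step wording are hypothetical.
Feature: Invoice payment

  # The scenario title states intent; the steps state behavior.
  Scenario: Paying an invoice in full closes it
    Given an open invoice for 250.00
    When a payment of 250.00 is applied to the invoice
    Then the invoice is marked as paid in full
```

The structure is the easy part; deciding that “marked as paid in full” (rather than, say, a UI-level click sequence) is the right level of description is where the actual thinking happens.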
There is a distinction I want to make in this post regarding what you change in a test specification and how a test specification itself may change, in terms of the role it provides. That leads into a nice segue about how team roles also change. Here by “test specification” I mean the traditional “feature file” of BDD tools like Cucumber, Lettuce, Spinach, SpecFlow, and so on.
I have been introducing Cucumber to testers who have little exposure to such tools. I was looking at whether The Cucumber Book would be worth having around the office. And while it may or may not be, one thing I notice is that it (like most resources on Cucumber I find) doesn’t really address some of the heuristics regarding how you can start thinking about writing test specifications.