In a previous post, on testing and design pressure, I closed by saying that certain elements of decisions need to be encoded as artifacts, but that putting appropriate pressure on design meant minimizing those artifacts. Here I’ll talk a bit about that and what this means for test teams.
One phrase I like teams to get repeating (or at least thinking) is this: document ‘Why’, specify ‘What’, automate ‘How’.
This is not exactly a new idea, of course, but it’s a relatively concise mantra that I think is descriptive, as opposed to prescriptive, and allows for contextually relevant activities during development. In other words, it allows you to be flexible in how you approach documenting, specifying, and automating while still suggesting broadly what should be done.
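To make the mantra concrete, here’s a minimal sketch of one small rule expressed this way. All the names here (apply_discount, the discount rule itself) are invented for illustration, not taken from any real product:

```python
# Hypothetical example: the 'how' lives in code, the 'what' lives in
# the test assertions, and the 'why' is documented alongside them.

def apply_discount(total: float, is_member: bool) -> float:
    """Members receive 10% off orders over $100 (the 'how')."""
    if is_member and total > 100:
        return round(total * 0.90, 2)
    return total

def test_member_discount_over_threshold():
    # Why: membership should reward larger purchases (documented).
    # What: a member's $120 order costs $108 (specified).
    assert apply_discount(120.00, is_member=True) == 108.00

def test_no_discount_for_non_members():
    assert apply_discount(120.00, is_member=False) == 120.00
```

The point is not the discount logic; it’s that the why, what, and how each have a natural home, and none of them needs a separate heavyweight artifact.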
Overall, teams creating software have to work to understand the business value that lies behind a feature so that the team as a whole can objectively decide which features are really worth creating. Two things happen here:
- A product team is usually going to provide that business value in the form of a statement or two about why the work should be considered in the first place.
- Developers and testers, sometimes working with business analysts, will then provide possible solutions that will allow this business need to be realized. This is the ‘what’.
It’s very possible at all of these points to create a whole lot of artifacts: epics, stories, narratives, tasks, designs, test cases, and so on. The more of these artifacts you have, the harder it becomes to reason about the system. And the harder it is to reason about the system, the harder it is to discuss it and make decisions about it.
This is important, of course, because the goal is to end up with something that we can reason about and make decisions on in concrete terms. Because if we can do that, it means we can develop it and we can test it.
In the previous post I mentioned what I think is an important point in a modern testing context: test teams have to put pressure on design at a pace that matches how product teams specify and how development teams write code. There are two paths in this context: from customer conversation down to where it intersects with code and from code conversation up to where it intersects with the user.
When these “what” decisions are translated into code, each level of code in the application delivers value to some other piece of code, through its own behavior and its own interface. Eventually the sum of those values is delivered to the user interface, which ultimately delivers the value to the customer.
This gets into how approaches like BDD and TDD put pressure on design but at different levels. For all the discussion about these approaches, it really breaks down into some easy specifics:
- BDD is working from customer conversation down to where it intersects with code.
- TDD is working from the code conversation up to where it intersects with the user.
So those paths I brought up prior are essentially handled by the common approaches to merging test and development activities. Yes, merging. I believe that testing activities and development activities often become two entirely separate tracks of activities when, in fact, they should be done in tandem. When that happens, it means that source code and tests both act as sources of truth.
Assuming you have a strategy of automating whatever can reasonably and reliably be automated, then ideally common abstractions will unite test code and production code.
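As a sketch of what “common abstractions” can mean in practice, consider a small domain type that both production code and test code speak in terms of. The names (Account, withdraw) are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical shared domain abstraction: both production code and
# tests talk about an Account, not raw dicts or database rows.

@dataclass
class Account:
    owner: str
    balance: int  # cents, to avoid float rounding issues

def withdraw(account: Account, amount: int) -> Account:
    """Production behavior: withdrawals cannot overdraw the account."""
    if amount > account.balance:
        raise ValueError("insufficient funds")
    return Account(account.owner, account.balance - amount)

# Because the test uses the same abstraction, it reads as a statement
# about behavior rather than about storage details.
def test_withdraw_reduces_balance():
    before = Account("ada", balance=500)
    after = withdraw(before, 200)
    assert after.balance == 300
```

When the abstraction is shared like this, a change to persistence or transport doesn’t ripple through the test suite; tests and source code stay aligned as two views of the same truth.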
Not everything can be discussed in terms of code and tests, however. In order to achieve working software that delivers value, a bunch of stakeholders have goals which have to be met. These goals are tied to the stakeholders’ roles. Test teams need to work with Product and Development to do two things:
- Deliver the stakeholders’ goals by providing different capabilities.
- Create features that will implement the capabilities.
Just as no one group “assures quality,” no one group is responsible for each of those activities. That responsibility falls to all relevant teams entering into a dynamic cycle of communication and collaboration.
You could argue that, along with code and tests, this is yet another source of truth: a sort of business view of the application. What the source of truth here really is would be simply this: options. Product Teams provide and document the why, testing and development teams — using common abstractions — provide the what and the how. “Providing the what” is just another way of saying providing options.
A few points fall out of this:
- By chunking up from the features to the capabilities, we can give ourselves more options.
- By chunking down from goals to capabilities, we also give ourselves more options (different ways to implement).
“Chunking” is a questioning technique that’s used to vary the amount of information you have to take in. Chunking down means getting more specificity while chunking up means looking for a more generalized understanding.
It’s hopefully obvious to everyone that options have value. Those options will dictate in many cases the kind of code and tests we produce. Since code and tests should be aligned with business vision and need, my belief is that this means we can best preserve options if we start to merge all these sources of truth. Or, put another way, if we work towards a single source of truth with built-in traceability.
That always sounds good — but how do you do it?
Single Source of Truth
Well, this is where the modern test team, and thus the modern tester, must have certain skills around communication, both verbal and written, so that information can be gathered from discussions with product and developers and then encoded as tests. This is probably the one aspect of BDD that I feel has not been focused on very well.
What this communication fluidity means is that you help your teams:
- Specify for behavior, not implementation.
- Specify for confidence, not proof.
- Change a specification only when the behavior or contract being specified changes.
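The first of those points is easiest to see side by side. Here’s a hypothetical example (PriceList and its internals are invented) showing the same rule specified two ways, one coupled to implementation and one to behavior:

```python
# Hypothetical class whose internals happen to use a sorted list.

class PriceList:
    def __init__(self, prices):
        self._prices = sorted(prices)  # implementation detail

    def cheapest(self):
        return self._prices[0]

# Implementation-coupled: this breaks if we stop storing a sorted
# list internally, even though observable behavior is unchanged.
def test_prices_are_stored_sorted():
    assert PriceList([3, 1, 2])._prices == [1, 2, 3]

# Behavior-focused: this survives any internal change that preserves
# the contract "cheapest returns the lowest price".
def test_cheapest_returns_lowest_price():
    assert PriceList([3, 1, 2]).cheapest() == 1
```

Only the second test puts pressure on the design at the level of behavior; the first puts pressure on an accident of the current implementation.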
As a test team, however, you must keep in mind that your test suite is directly dependent on the feature set of the product. Basically, you, like developers, have a test-first approach where a new coding/testing task — yes, both! — can be created only when a change in the product happens: a new requirement, a change in an existing one, or a new bug. This clarification changes the first rule of test-first development, from “Don’t write any new tests if there is not a new coding task” to “Don’t write any new tests if there is not a change in the product.”
This also means that your tests focus on the ‘what’ — i.e., the part being specified. The ‘how’, which is made up of implementation details, is delegated to tooling when and where that is possible. Very simplistically speaking, the delegation to tooling is where tests become checks. This means that test teams have to focus on the following:
- Tests change with business rules; not necessarily implementation of them.
- Checks change with the implementation, not tests.
This gets into automation as a core focus and automation being checking rather than testing.
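One way to sketch that test/check split in code: the check states the business rule (‘what’) while a driver function hides the delivery mechanism (‘how’). The names here (submit_order and its stand-in logic) are invented for illustration:

```python
# Hypothetical driver: in a real suite this might drive an HTTP API
# or a UI. Here it is a stand-in, but the point is the boundary.

def submit_order(items):
    """The 'how': everything about reaching the system lives here."""
    return {"status": "accepted" if items else "rejected"}

# The checks encode business rules. If the delivery mechanism
# changes (say, REST to GraphQL), only submit_order changes; the
# rules below change only when the business rules change.
def test_empty_orders_are_rejected():
    assert submit_order([])["status"] == "rejected"

def test_orders_with_items_are_accepted():
    assert submit_order(["book"])["status"] == "accepted"
```

The checks change with the implementation of submit_order only at the boundary; the assertions themselves move only when the business rules move, which is exactly the division of responsibility described above.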
If the test team, working with product and development, can do this per feature and if everyone can keep the features small and incremental, all of this can be done with a primary focus on constant interaction between developers, testers, and product. Then you encode only the most relevant aspects of that communication and interaction as your source of truth.
If you can keep this cycle very tight — very lean, very agile — you have a chance of creating as close as possible to a single source of truth that encodes business intent (‘why’) and business understanding (‘what’). Both developers and testers will be responsible for creating automation platforms that allow the business implementation (‘how’) to be handled.
This is exactly where automation comes into play for a test team. Ideally your automation tightly reflects the source of truth discussed. This way automation does not become some separate source of truth, but rather stays reflective of the existing specifications, which is exactly what source code should do as well.
Note that “being reflective of the source of truth” is not the same thing as “consuming the source of truth,” which is what many BDD practitioners focus on with tooling like Cucumber, SpecFlow, jBehave, and many others.
BDD certainly has value other than for testing purposes, but it can be an expensive and brittle process compared to skilled testing. Most important, I believe, is stepping away from the tools and being sure not to confuse simple automated checks with large fixtures tied into restricted English protocols for specification.
Without going into too much detail, the reason I find this worrisome is because I find testers who focus on the BDD tooling tend to take modularity out of the code (where it belongs) and try to put it into the English (where it does not). I feel somewhat comfortable making what are admittedly broad generalizations like this because I have studied and practiced description languages for quite some time, as my TDL category will attest.
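As a hedged sketch of “modularity belongs in the code, not the English”: rather than encoding every role variation as its own English step (“Given an admin is logged in”, “Given a viewer is logged in”, and so on), one parameterized helper can carry the variation. All names here (login_as, the roles) are invented:

```python
# Hypothetical helper: one reusable, parameterized fixture instead of
# a family of near-duplicate English steps in a Gherkin-style file.

def login_as(role: str) -> dict:
    """Return a session for the given role; the variation lives in
    one place in the code, not scattered across English phrasings."""
    permissions = {"admin": {"read", "write"}, "viewer": {"read"}}
    return {"role": role, "permissions": permissions[role]}

def test_admin_can_write():
    assert "write" in login_as("admin")["permissions"]

def test_viewer_cannot_write():
    assert "write" not in login_as("viewer")["permissions"]
```

Adding a third role here is one dictionary entry, not a new English step plus a new fixture binding — the modularity stays where the tooling can enforce it.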
This and the previous post have talked about certain aspects of modern testing and here I’ve started to drill down into some specifics. In the next post I want to get much more specific about what I think all of these thoughts mean for a modern test team, particularly in terms of how it has to act as a strategic, not just tactical, component as part of the development process.