In my last post in this series, I talked about acceptance testing being a core intersection between product development and engineering. So let’s dig into that a little bit more. Specifically, I want to provide a prescriptive framing device that I’ve found to be helpful when getting delivery teams on board with these concepts.
Here are some high-level prescriptive goals:
- Use testing as a mediating influence to make sure that requirements, when elaborated, are encoded as testable statements.
- Those testable statements are further broken out by valid and invalid conditions.
- All such conditions can be slotted into examples that show the feature delivering value.
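To make those goals concrete, here is a minimal Python sketch of a requirement encoded as testable statements and broken out by valid and invalid conditions. The requirement, the `can_apply_discount` function, and the $20 threshold are all hypothetical, invented purely for illustration.

```python
# Hypothetical elaborated requirement: "A discount code can be applied to
# an order whose total is at least $20." Encoded as testable statements,
# broken out by valid and invalid conditions.

def can_apply_discount(order_total: float) -> bool:
    """Illustrative implementation of the hypothetical requirement."""
    return order_total >= 20.00


def test_discount_applies_at_or_above_threshold():
    # Valid condition: the order total meets the threshold.
    assert can_apply_discount(20.00) is True
    assert can_apply_discount(45.50) is True


def test_discount_rejected_below_threshold():
    # Invalid condition: the order total is below the threshold.
    assert can_apply_discount(19.99) is False
```

Each condition then slots into an example (the test) that shows the feature delivering value, which is what the framing device below builds on.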
With those goals in mind, here is that prescriptive framing device:
- Decide when discovery becomes elaboration.
- When elaboration occurs, spec workshops create feature specifications.
- The value of the feature is determined by feature injection: As a {who}, I want {what}, so that {why}.
- Define stories around scenarios of what users expect this feature to deliver.
- The size of stories is determined by the number of scenarios added to the feature specification.
- How difficult it is to come up with scenarios is an indicator of how much we don’t yet understand.
- Each scenario is an example; an illustration of the feature providing value.
- Define a set of concrete examples that illustrate key outcomes of the feature.
- Examples are an effective way of communicating elaborated requirements.
- Scenarios are acceptance criteria, but that does not indicate “done.”
- The acceptance criteria provide a shared understanding of what quality means for the feature.
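Pulling several of those items together, a feature specification pairs the feature-injection statement ({who}/{what}/{why}) with scenarios that act as concrete examples. Here is a minimal sketch in Python; the “saved searches” feature, the `SavedSearches` stub, and its rules are entirely hypothetical.

```python
"""Feature: Saved searches (hypothetical)

As a frequent shopper, I want to save a search,
so that I can rerun it later without retyping my criteria.

Each test below is one scenario: an example of the feature delivering
that value under specific conditions.
"""


class SavedSearches:
    """Stub system under test, invented for illustration."""

    def __init__(self, limit: int = 10):
        self.limit = limit
        self.searches = []

    def save(self, query: str) -> bool:
        if not query or len(self.searches) >= self.limit:
            return False
        self.searches.append(query)
        return True


# Scenario: a shopper saves a search and finds it again later.
def test_saved_search_is_available_for_reuse():
    searches = SavedSearches()
    assert searches.save("red running shoes")
    assert "red running shoes" in searches.searches


# Scenario: a blank search carries no value, so it is not saved.
def test_blank_search_is_not_saved():
    searches = SavedSearches()
    assert not searches.save("")
```

The number of scenarios that end up attached to a specification like this is what sizes the story, and the ones we struggle to articulate point at what we don’t yet understand.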
How the Focus Works
Any parts of a story that are elaborated — meaning that they specify behavior and thus require validation — must become representative examples of expected outcomes.
That thinking right there is critical.
You can use the examples in conversations and planning/grooming meetings to uncover assumptions, edge cases, corner cases, ambiguities, and inconsistencies that would remain hidden behind the high-level wording of a user story or be buried in the implementation details of code.
This is how we use testing to help us avoid discrepancies between our two primary specifications: code and customer expectations.
And this bit of reality is important. Code is ultimately what our customers interact with. They don’t interact with our tests or our user stories. They interact with what we deploy and, as such, code is the ultimate specification, no matter what else we produce during our project work. For the customer, the focus is, of course, the value they receive.
So, in this context, the examples, collectively, specify the direct value the feature brings to users regarding the behavioral functionality being described. They demonstrate how a desired feature should behave in different scenarios. Those scenarios can have a general or specific persona injected into them.
Those scenarios are fleshed out with test conditions and data conditions. Test and data conditions are constraints that you should follow when you test something. Scenarios are composed of these conditions and are written to make the conditions as clear as possible.
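To make the distinction concrete with the hypothetical discount rule from earlier: the test condition is the rule being exercised, and the data conditions are the specific values chosen to exercise it. A parametrized pytest test is one convenient way to keep the data conditions visible, as this sketch shows.

```python
import pytest


def can_apply_discount(order_total: float) -> bool:
    """Same hypothetical rule as before: discounts apply at $20 and above."""
    return order_total >= 20.00


# Test condition: the discount threshold is enforced.
# Data conditions: the concrete totals chosen to probe that condition,
# including the boundary value itself.
@pytest.mark.parametrize(
    "order_total, expected",
    [
        (19.99, False),  # just below the boundary
        (20.00, True),   # exactly on the boundary
        (20.01, True),   # just above the boundary
    ],
)
def test_discount_threshold_is_enforced(order_total, expected):
    assert can_apply_discount(order_total) is expected
```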
The format for writing examples should provide an effective structure for evaluating missing conditions, measuring shared understanding, or spotting potential mistakes.
- Here “mistakes” refer to quality degraders: ambiguity, assumptions, inconsistencies, and contradictions.
- Here the “shared understanding” is that of what everyone believes constitutes “quality” for the given feature.
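One format that serves this purpose well is a Given/When/Then shape, because each missing clause becomes a visible gap. The sketch below keeps that structure in plain Python comments; the “guest checkout” behavior, the `Cart` stub, and its messages are hypothetical.

```python
class Cart:
    """Stub cart, invented for illustration."""

    def __init__(self, items=None):
        self.items = list(items or [])

    def checkout_as_guest(self, email: str) -> str:
        if not self.items:
            return "rejected: empty cart"
        if "@" not in email:
            return "rejected: invalid email"
        return "order placed"


def test_guest_can_check_out_with_a_valid_email():
    # Given a cart with one item
    cart = Cart(items=["notebook"])
    # When the guest checks out with a well-formed email address
    result = cart.checkout_as_guest("pat@example.com")
    # Then the order is placed
    assert result == "order placed"


def test_guest_checkout_is_rejected_for_an_empty_cart():
    # Given an empty cart
    cart = Cart()
    # When the guest attempts to check out
    result = cart.checkout_as_guest("pat@example.com")
    # Then the checkout is rejected rather than silently succeeding
    assert result == "rejected: empty cart"
```

Writing the examples this way makes it easier to ask, in a grooming session, “what about an invalid email?” or “what about a cart of out-of-stock items?” and to notice when no one has the same answer.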
Is This BDD? TDD? Spec By Example?
I purposely didn’t use a lot of industry terminology around approaches with the above material.
Ultimately we’re dealing with primary sources of truth. This is where the idea of “executable specifications” and the idea of “living documentation” come together. And this leads us to yet another prescriptive formula:
- Accurate documentation, instrumented with tests, always in sync with the code.
This allows the delivery team, enabled by test specialists, to create documentation that evolves at the same pace as the code but that is also capable of being executed (by human or machine) against the code. This is how all of this remains relevant in a product development context.
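As one minimal sketch of documentation that stays in sync because it executes: Python’s built-in doctest module runs the examples embedded in a docstring, so the prose and the code fail together the moment they drift apart. The `shipping_cost` function and its free-shipping rule are hypothetical.

```python
def shipping_cost(order_total: float) -> float:
    """Orders of $50 or more ship free; everything else ships for $5.

    The examples below are the documentation, and they also run as tests:

    >>> shipping_cost(49.99)
    5.0
    >>> shipping_cost(50.00)
    0.0
    """
    return 0.0 if order_total >= 50.00 else 5.0


if __name__ == "__main__":
    import doctest

    doctest.testmod()  # re-runs the documented examples against the code
```

Whether the mechanism is doctest, a BDD tool, or plain unit tests matters far less than the property itself: the documentation is executable, so it cannot quietly fall out of date.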
All of that can fit within a BDD or TDD context. I do think it’s important for test specialists to align some of the industry terminology with what the delivery team actually does. This is also how the product development side of things is allowed to intersect with testing and with engineering. So, to take two of the terms I used above, I define them as follows:
- Executable specification: a requirement written as a test that illustrates and verifies how the application delivers a specific business value.
- Living documentation: a set of executable specifications instrumented by tests and capable of being executed against code.
A few other terms I mentioned above are ones I like to have defined for my delivery team:
- Specification: any way of expressing what a given feature should do to add value.
- Behavior: the outcome produced by functionality under certain preconditions.
- Design: the practice of making sure that change remains cheap at any point in time.
- Feature: a tangible, deliverable piece of functionality that helps the user achieve their goals.
- Acceptance criteria: a way to empirically judge whether a feature has been implemented correctly to deliver the promised value.
I’m generally not terribly concerned with how much our terms match the industry usage — although that can obviously be beneficial if the industry usage makes sense — but I am concerned with making sure the delivery team uses consistent definitions for day-to-day practice.
There are two things that come from following this approach, in my experience. One is fairly obvious: testable requirements. The other is not as obvious but has to do with metrics that actually matter. I’ll talk about those aspects in the next post in this series.