Here I’ll cap off my current round of “modern testing” posts by discussing a bit about the lucid approach that I’ve brought up along the way.
If I had to put a tagline to my idea of what lucid testing means, it would be something like “Creating Business Impact with Test Solutions” or maybe even “Turning Ideas into Working Software with Better Testing.”
Consider that any application is the end product at a given moment in time. That end product is a reflection of a set of features working together. The value of the application is delivered through its behavior, as perceived through its user interface. “Behavior” and “user interface” are critical there. That’s what your users are going to base their view of quality on. And, to be sure, “user interface” can mean browser, mobile app, desktop app, service (i.e., API), and so on. It’s whatever someone can use to consume some aspect of behavior of the application.
So everything comes down to the behavior and the interfaces and the consumers of those interfaces.
Focus on Why, What, How
The common theme that goes through a lucid approach is that of working from examples. And those examples are based on iterative communication and collaboration. This sounds like BDD, right? And to an extent, there are good practices there. But I find BDD can still lead people off the rails. I talked about this a bit in the BDD Lure and Trap but let’s consider another example.
Liz Keogh, a well-known proponent of BDD and a very effective speaker on it, talks about the “three heads of BDD”:
- Exploration by Example (what it could do)
- Specification by Example (what it should do)
- Test by Example (what it does)
The problem I have with this is that it seems to imply “test” — being some separate activity in this list — is not done via exploration and specification. Or, rather, that exploration and specification are not forms of testing. So this division, and it is one I find repeated in the wider BDD community, seems to treat testing solely as an execution activity rather than a design activity.
Along with that, there’s a large “test vs. check” debate going on in the wider test community right now. Some have argued that checking things after the fact is just testing, whereas anticipating things before the event is design. In other words, they are saying “testing” and “checking” are the same thing, and anything done prior to execution would be design.
Again, I think that’s a fundamental misconception and it’s one that numerous practices seem to be reinforcing. So rather than focus on a “check vs. test” debate or the “three heads” model of BDD, I’ve instead talked about something that Gojko Adzic brings up in the context of reducing large test suites:
- Document the why, specify the what, automate the how
But how do we do that?
Go Minimal or Go Home
To my way of thinking, the goal of the lucid approach — the very means by which our work is lucid — is to reduce sources of truth, reduce communication churn between project artifacts, and capture communication as examples that provide a shared understanding of what quality means for each feature that makes up the product.
But here’s what I’m shooting for: those examples are reflected entirely in code: production code that makes the examples work and test code that backstops the production code.
I want to minimize the need for too much documentation. I certainly want to minimize the notion of large epics that have numerous stories, which are broken into cases, which have various tasks applied to them, and so on. What I advocate for is quick work that is done in fast-feedback, fail-fast, safe-to-fail experiments. These experiments let us learn and uncover information and, ultimately, develop expertise about our business domain and how to express that domain in the form of working software.
Use Patterns of Conversation
This is where I think something that Liz Keogh says is right on target. She refers to conversational patterns in BDD. These patterns are in large part:
- Context Questioning (“Is there any other context, which, for the same event, gives us a different outcome?”)
- Outcome Questioning (“Is there any other outcome that’s important?”)
That, to me, is a good way of being lucid. Conversations about our domains tend to go better when we think about the outputs. There’s an important conceptual shift here. Yes, we think about what the user would see. But more importantly we think about what they are actually looking for. We think about why they are looking for it. Presumably it provides value to them. But what is that value?
That, right there, is documenting the “why.” That’s all I want to see from product teams initially. Then the developers and testers, working with the product teams, start to craft the “what.” We have conversations about what has to happen in order for the user to see that outcome. Ultimately, all of that is the context.
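To make that concrete, here’s a minimal sketch of how those two questioning patterns can be captured directly as an example table in test code. Everything in it is hypothetical: the discount feature, its rules, and the `discount_for` function are stand-ins for whatever your own conversations surface.

```python
import pytest

# A hypothetical discount rule, standing in for whatever the conversation
# uncovers: members get 10% off orders of $50 or more.
def discount_for(customer_type: str, order_total: float) -> float:
    if customer_type == "member" and order_total >= 50:
        return round(order_total * 0.10, 2)
    return 0.00

# Each row is one answer to the questioning patterns above: a context
# that, for the same event (checking out), gives a different outcome.
EXAMPLES = [
    ("guest",  100.00,  0.00),  # baseline context
    ("member", 100.00, 10.00),  # another context, different outcome
    ("member",  20.00,  0.00),  # yet another: below the qualifying total
]

@pytest.mark.parametrize("customer_type, order_total, expected_discount", EXAMPLES)
def test_checkout_discount(customer_type, order_total, expected_discount):
    assert discount_for(customer_type, order_total) == expected_discount
```

Each new answer to “is there any other context?” becomes another row, so the conversation and the test grow together.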
The outputs here can be thought of as consumer-driven contracts. This means that we start to think in terms of a humane registry. To quote one part of that:
The point of a registry like this is that it does a lot of automated work to get information, but presents it in a way that expects a human reader.
Note how these patterns, and whatever technology helps create the humane registry, are very different from the “patterns” of applying Given-When-Then scenarios all over the place or having automation consume natural language rather than produce natural-language reports of what the code is doing.
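As a small illustration of that “produce” direction, here is a sketch of automation rendering a plain-English registry entry from what a test run actually recorded. The function name and data shape here are my own invention, not any particular registry tool:

```python
from datetime import date

def render_registry_entry(feature: str, examples: list[dict]) -> str:
    """Render verified examples as a plain-English humane registry entry.

    The automation produces natural language from what the code actually
    did, rather than parsing natural language in order to drive the code.
    """
    lines = [f"{feature} (verified {date.today().isoformat()})"]
    for example in examples:
        lines.append(
            f"- Given {example['context']}, when {example['event']}, "
            f"the system {example['outcome']}."
        )
    return "\n".join(lines)

# Hypothetical data a test run might record:
print(render_registry_entry(
    "Checkout discounts",
    [
        {"context": "a member with a $100 order", "event": "they check out",
         "outcome": "applied a $10 discount"},
        {"context": "a guest with a $100 order", "event": "they check out",
         "outcome": "applied no discount"},
    ],
))
```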
Heuristic Rather Than Process
The idea of Lucid Testing is not a rigid process; rather, it’s a set of actionable heuristics that enable communication, collaboration, and actions that generate confidence.
This approach is, in some ways, aligned with the ideas of the context-driven school of testing. Specifically, a lucid approach is not about touting and enshrining so-called “best practices.” As stated well in a paper by James Bach and Michael Bolton:
We read the situation around us; we discover the factors that matter; we generate options; we weigh our options, and we build a defensible case for choosing a particular option over all others. Then we put that option into practice and take responsibility for what happens next. Along the way we are learning and getting better at this.
This means our choices are guided by dynamically evaluating context and selecting, designing, or adjusting our actions to solve the problems that we encounter.
And this means it’s critical for your test team not to display “The Emmett Effect”. By this I refer to the character Emmett in “The Lego Movie” where, at one point, he says: “Just tell me exactly what to do. And exactly how to do it.” I find way too many test practitioners have this mindset. You don’t want a test team that boxes their thought processes and thus limits their approach to coming up with solutions. You don’t want a test team that sees their activities solely in the context of execution at the expense of design.
My Current Canon
For those of you familiar with the Expanded Universe (EU) of books around Star Wars, you may know that when Disney took control of the franchise they relegated all of the EU to the status of “Legends”. Only material that was being created now, with the refined knowledge of a more shared universe of thought, was to be considered “Canon”.
That’s not quite where I’m at with my current views versus all of my past views as expressed in this blog. But, at the very least, there is a lot of my past thinking that is likely going to become akin to the “Legends” material. So let me spell out a bit of my current canon here.
Going fast is not possible unless the quality is under control at all times, so we need a testing strategy that says testing is a design activity and automation is a core part of the strategy.
Going fast in the long term is not possible unless the quality of the design and of the understanding of the business domain is under control at all times, so we need an executable source of truth strategy so that it’s possible to reason about the system at all times.
Going fast means we have to manage the complexity that comes from having large numbers of different moving parts. The goal is to do three primary things:
- Reduce sources of truth.
- Reduce communication churn and hand-offs between project artifacts.
- Capture communication as examples that provide a shared understanding of quality.
Talking through scenarios is how we discover whether we have a shared understanding or not: we use specific examples to illustrate our understanding or discover our ignorance. But we still have to encode that understanding in a way that doesn’t entirely rely on institutionalized knowledge. This encoding requires finding the right abstractions and sharing those abstractions across product, test, and development.
A focus on communication and collaboration provides the interplay of vocabulary and concept needed to express a shared understanding of quality. A versatile, shared team language and lively experimentation with that language are critical to this approach. This means there must be flexibility in how we express tests so we can best have discussions and reflect decisions. This means that our tooling must be able to support this flexibility. Fluent APIs and DSLs are not only useful for this but also make it easy for non-technical people to read, and sometimes even create, automation.
Our test tool solutions must provide a fluent interface. The rationale here is that a fluent interface is a semantic façade. This façade sits in front of any driver, library, or other low-level code to reduce syntactic noise and to more clearly express what the code does. With test solutions, this means your automated test code reads very nearly the same way a manual test would. Fluent interfaces also allow you to more readily create idiomatic patterns and recognize when you are violating those patterns.
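As a sketch of what that can look like, here is a hypothetical fluent façade in Python. The `driver` and the checkout feature are assumptions, stand-ins for whatever low-level browser or API client your tests actually use:

```python
class Checkout:
    """A hypothetical fluent façade over a lower-level driver.

    Each step hides the driver's syntactic noise and returns self,
    so the test reads very nearly the way a manual test would.
    """

    def __init__(self, driver):
        self._driver = driver  # e.g., a browser or API client

    def as_member(self) -> "Checkout":
        self._driver.login(role="member")
        return self

    def with_order_total(self, amount: float) -> "Checkout":
        self._driver.add_items(total=amount)
        return self

    def completes_purchase(self) -> "Checkout":
        self._driver.submit_order()
        return self

    def should_see_discount(self, expected: float) -> "Checkout":
        assert self._driver.displayed_discount() == expected
        return self

# Reads like the manual test it backstops:
#   Checkout(driver).as_member().with_order_total(100.00) \
#       .completes_purchase().should_see_discount(10.00)
```

Notice how a violation of the idiom, say, reaching into `self._driver` from a test, stands out immediately against the pattern.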
Ideally, the focus of the test technology solution should be on the widest possible test ecosystem with the most minimal technology footprint. This is what lets you preserve maximum flexibility at minimum cost.
Bring Development Into Testing
A large part of the industry focused on TDD as a way to bring testing down to development. That can be useful at a certain level of work but I think it’s critical that the reverse happens as well: bring development thinking up into testing. This is something that approaches like BDD have absolutely failed to do in my opinion.
Obviously we want to deliver software that adds value to users in reasonable time frames with agreed-upon levels of quality. “Agreed-upon levels of quality” means that we have a single source of truth with built-in traceability between what we talk about, what we develop, and what we test.
The most reliable way to do this is by treating production code and test code as the ultimate specifications.
Keep in mind the truism that projects leak knowledge. Knowledge comes in; but it doesn’t accumulate very well. However, if code is treated as the ultimate specification and a reflection of knowledge, then that knowledge is forced to accumulate when the knowledge becomes represented by code. Further, it’s accumulating as an executable resource which can push the knowledge back up, rather than relying on pulling the knowledge down.
This kind of thinking and approach brings many development practices, including unit test practices and pattern-based thinking, into the domain of the test team. This focus on test-as-code can have the same impact that was seen when we started realizing infrastructure can be code.
This approach forces test teams to surface good design decisions by encouraging the creation of solutions that are simple enough to make the tests pass, but no simpler. If the process of writing the test is laborious, that’s a sign of a design issue. Loosely coupled, highly cohesive code is easy to test. When those qualities carry over into test code, that code is much easier to create. Further, it allows you to model the domain much more effectively.
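Here’s a minimal sketch of what treating tests as the specification can look like, reusing the hypothetical discount rule from earlier. The WHY/WHAT docstring convention and the `specification` helper are my own assumptions, not an established tool:

```python
import inspect

def discount_for(customer_type: str, order_total: float) -> float:
    # Hypothetical rule from the earlier sketch: members get 10% off $50+.
    if customer_type == "member" and order_total >= 50:
        return round(order_total * 0.10, 2)
    return 0.00

def test_members_get_ten_percent_off_qualifying_orders():
    """WHY: loyalty is meant to drive repeat purchases.
    WHAT: a member ordering $50 or more receives a 10% discount."""
    assert discount_for("member", 100.00) == 10.00

def test_guests_pay_full_price():
    """WHY: discounts are a membership benefit.
    WHAT: a guest receives no discount, regardless of order total."""
    assert discount_for("guest", 100.00) == 0.00

def specification() -> str:
    """Push the knowledge back up: derive readable documentation from
    the executable tests instead of maintaining a second artifact."""
    tests = [obj for name, obj in globals().items() if name.startswith("test_")]
    return "\n\n".join(inspect.getdoc(test) for test in tests)

if __name__ == "__main__":
    print(specification())
```

The rationale lives next to the assertion that proves it, so the knowledge accumulates in an artifact that runs rather than in documents that drift.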
Lucid Charter
I’ll close here with my nascent team charter for all of this which will indicate where my future research and experimentation is going to lead me. As a team focused on engineering test solutions, we provide the following primary value:
- We work with the business and development to turn ideas into working software with demonstrable levels of agreed-upon quality within responsible time frames.
- We democratize testing by avoiding heavy tool sets, complex configurations, and elaborate processes.
- We focus on transparency by crafting solutions so that what developers do is reflected immediately in what the business sees. What the business needs is reflected immediately in what developers build.
- We focus on collaboration by crafting solutions that allow the business and developers to jointly explore business needs and the possible ways of answering those needs in a technical context.
- We focus on allowing our solutions to evolve as both developers and the business become more experienced with expressing the problems they are trying to solve and building focused solutions for those problems.
- We believe that testing is a design activity that works when a shared idea of quality is encoded in single-source-of-truth artifacts with traceability built in.
- We believe that code — production and test — is the ultimate specification.
It’s that last point I plan on exploring quite a bit more as I enter a different phase of my career. I plan on working on a specific implementation of a new Lucid tool that I hope will be reflective of the ideas I’ve been talking about. As with all of the best journeys, I have no idea where this is going to take me.
Jeff, thanks for the shout-outs here.
I think of testing as being something that you can only do with something that actually exists. Maybe I’m being a bit pedantic with that. I do believe in the role of testers in exploration and specification, for sure, and am pretty vocal about their importance.
I also see exploration, specification and testing as separate activities, but not necessarily as separate phases, though I do suggest it’s useful to be aware of which one of those you’re doing just so that you can communicate it better. Part of the reason why Dan North created BDD in the first place was because people were encountering confusion using the word “test” when what they were actually doing was analysis, from which tests emerged. That word “test” tends to make us think we’re in the solution space, when frequently we don’t even fully understand the problem yet.
I think it’s important to have testability, too, and that needs to be an important part of the conversation, even if that just means discussions about coherence in uncertainty. You can see I’m throwing a couple of other words out here, and maybe that’s the equivalent of what you’re referring to when you say that testing should be a part of exploration and specification. So, maybe we’re actually in agreement, but with different words and definitions?