A key question to ask is what you want to get out of your tests. I’ve already talked about how I think the purpose of testing is to find information and then communicate that information. That’s great and all … but … what do you actually want to get out of the tests themselves? The way you answer that question will shape your testing function to a large extent, particularly when it comes to picking supporting tools, such as for test management or test automation.
Don’t test that it does what it does…
This probably sounds obvious but I’ll say it anyway: a test shouldn’t just test that something does what it does. After all, what it does may be wrong. A slightly more formal way of saying this is test the intent, not the implementation.
I think it’s really important to keep in mind that a test is largely pointless as a quality check for the user interface or the code if that test solely replicates what the user interface or the code does. I say this because any bugs that found their way into the user interface or into the code will just be replicated in the test. In the case of code-based unit tests in particular, tests that just replicate code are very dangerous because they’ll give you a feeling of comfort, thinking that the code is tested properly, while really they are just saying the same wrong thing in a slightly different way.
Likewise, a good operating heuristic is to avoid writing tests by looking at the user interface or the code, as you may get blinded by the implementation details. This is one of the reasons why it’s better to write tests before the code, not after. That applies at both the application level and the code level. It also applies to the notion of writing the tests as the specifications, and vice versa.
I’ve seen many environments where specifications of some sort were simply not going to be written for a given piece of functionality. So the testers simply took the build they were given and wrote tests around what the application was doing. While you can argue this is “being agile” or “being nimble” or “just getting the job done”, it’s not effective testing practice.
Intent vs. Implementation
I brought up this idea of testing to the intent above, so I want to talk about that a bit. The notion of “intent” (what the application is intended to do) and “implementation” (what the application actually does) can serve as a good organizing principle across a wide swath of testing concerns.
As an example, instead of freely mixing pass/fail tests into any sort of UI automation logic, it would probably be more effective to separate your concerns. The tests, which say what the application should do, belong in a different place from the “fixture” logic, which says how the application should do it. So if you have that sort of thinking in mind, that should start to guide your selection of possible tools. After all, some tools simply don’t let you do this. Or they let you do it, but only in a very cumbersome way.
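To make that concrete, here’s a minimal sketch of the separation, assuming a hypothetical change-password page object and a fake application object purely so the example is self-contained. The test layer states the intent; the page class holds the implementation details.

```python
# Sketch: keeping the "what" (the test) separate from the "how" (the fixture/page logic).
# Everything here is hypothetical -- not tied to any particular UI automation tool.

class ChangePasswordPage:
    """Fixture layer: encapsulates HOW the application performs the action.
    In real UI automation, locators, clicks, and waits would live here."""

    def __init__(self, app):
        self.app = app

    def change_password(self, old_password, new_password):
        self.app.change_password(old_password, new_password)

    def notification_text(self):
        return self.app.last_notification


class FakeApp:
    """Stand-in for the application under test, only to make the sketch runnable."""
    last_notification = ""

    def change_password(self, old_password, new_password):
        self.last_notification = "Your password has been changed."


def test_user_is_notified_when_password_is_changed():
    """Test layer: states WHAT should happen, with no implementation detail."""
    page = ChangePasswordPage(FakeApp())
    page.change_password("old-secret", "new-secret")
    assert "password has been changed" in page.notification_text().lower()
```

If the UI changes, only the page class should need to change; the test, which encodes the intent, stays put.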
Your tests are also for providing a basis for verifying requirements, in whatever format they may be. Okay, so going with that, what people often call “tests” are really “examples” of how a piece of code — or a set of functionality — is supposed to behave. At the code level you end up with practices like test-driven development (TDD) but the successors to TDD are really of more interest to me: example-driven development and behavior-driven development. The focus here is really on tests — called “examples” or “behaviors” — as conversation pieces. In fact, you can think of “specific behaviors”, “sets of examples”, “application uses” and “tests” as the exact same things. But here’s the question: have you then chosen a tool that will allow you to treat them as the same things? More importantly, have you chosen a practice that will allow you to make these distinctions in an unambiguous way that can be easily explained to anyone?
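As a rough illustration of “tests as sets of examples,” here’s a sketch in Python with pytest. The password rule and the is_valid_password function are invented purely for illustration.

```python
# Sketch: a "set of examples" expressing one intended behavior.
import pytest


def is_valid_password(candidate: str) -> bool:
    """Hypothetical rule: at least 8 characters and at least one digit."""
    return len(candidate) >= 8 and any(ch.isdigit() for ch in candidate)


@pytest.mark.parametrize("candidate, accepted", [
    ("s3cretpass", True),        # meets both rules
    ("short1", False),           # too short
    ("longbutnodigits", False),  # long enough, but no digit
])
def test_password_rules_behave_as_intended(candidate, accepted):
    # Each row is an example of the behavior; together they read as a specification.
    assert is_valid_password(candidate) is accepted
```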
Rather than get too much into a tool or practice discussion here, I want to focus this on what the tests are for by considering tests in a specific context: acceptance-driven testing.
Acceptance Tests
In a previous post I talked about how acceptance tests are exactly that: tests. So here I’m going to give some examples of what I’m talking about when I use the term “acceptance” and relate that to the notion of “story” that often gets thrown around, particularly in places that call themselves “agile.” Each example will contain some “AT” lines, which stand for “Acceptance Test.” After each example, I’ll toss out some “Things to Notice.” Don’t worry too much about the format; instead let’s just focus on the content.
Example #1
Feature: Passwords can be changed.
Story: User password can be changed.
AT: User can submit a "Forgot Password"
AT: User is notified of "Forgot Password" {Content of message? Format of notification?}
AT: User can "Change Password"
AT: User is notified of "Change Password" {Content of message? Format of notification?}
AT: Admin can "Set Password" for user
AT: User is notified of new password {Content of message? Format of notification?}
AT: Admin can set Preview Password
AT: User is notified of preview password {Content of message? Format of notification?}
Here are some things to notice about this example.
- This is a set of elements for discussion, or “conversation” in agile-speak. There might be some cases where the whole feature set is thrown into your tracking system as one task for developers to work on; other elements may require separate tasks in your tracking system. In fact, there could be one ticket for the story as a whole and then separate, but associated tickets for tasks.
- Notice here how the Feature and the Story are largely the same. No need to really translate between them. (Although you might want to check out Testing’s Brave New World where I talk about distinctions between feature and story.)
- Notice how the Story + Acceptance Tests (AT) really spell out the Acceptance Criteria for the feature. So you really have Feature + Story + Acceptance Tests: all wrapped into one. Each AT can be a separate development task to disaggregate the story.
- Notice with each such grouping that you can talk about the implementation separate from the intent. Just looking at the above, you can determine at a glance if those are the only ways that a password can be changed. That’s the intent. The implementation discussion can then center on how you want to actually put those elements into action. A key point is that whatever design is decided on, it must pass the tests.
- Notice with the above that you can check whether any tests (which serve as drivers to development) are getting away from the feature/story. Along those lines, look at the last AT. That may suggest your feature/story needs to be more like this: “User public and preview password can be changed.” Or it may be showing that you need two stories: “User public password can be changed” and “User preview password can be changed.” Why might these be two stories rather than one? Perhaps it’s different functionality, or perhaps the outputs are different enough, or perhaps you simply want to disaggregate the tasks that way.
- The elements in braces show what needs to be discussed so that the acceptance tests can be fleshed out as functional tests. It’s at that level that you could consider writing them as behavior-driven, example-based tests.
I could write all of that in a text editor during a meeting or, if going totally retro, on a piece of paper. I don’t need a specific test management tool or BDD tool for that.
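If, later on, you did flesh one of those ATs out as a behavior-driven, example-based test, it might look something like this sketch. The UserAccount class is a hypothetical stand-in for whatever your application actually exposes.

```python
# Sketch: the AT "User can 'Change Password'" fleshed out as a behavior-style test.

class UserAccount:
    """Minimal stand-in so the example is self-contained."""

    def __init__(self, password):
        self.password = password
        self.notifications = []

    def change_password(self, old, new):
        if old != self.password:
            raise ValueError("old password did not match")
        self.password = new
        self.notifications.append("Your password has been changed.")


def test_user_can_change_password_and_is_notified():
    # Given a user with a known password
    account = UserAccount(password="old-secret")
    # When the user changes it
    account.change_password("old-secret", "new-secret")
    # Then the new password is in effect and the user is notified
    assert account.password == "new-secret"
    assert any("password has been changed" in n.lower() for n in account.notifications)
```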
Example #2
Feature / Story: Content can be published.
AT: Admin user can publish content for site
AT: Admin user is notified that content has been published {What notification?}
AT: User will see published content {How long?}
AT: System must set {whatever} flags
AT: System must update log file {Which log file? With what information?}
Two things to notice about this example:
- The {How long?} shows that the discussion can cover constraints and service-level agreements on certain functionality.
- The last AT shows that you can have business-facing elements and technology-facing elements to the story, as the sketch below suggests.
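Here’s a rough sketch of how a technology-facing AT can still be expressed as a test. The publish_content function and the log structure are invented purely for illustration.

```python
# Sketch: business-facing and technology-facing checks for the "publish content" story.

def publish_content(content_id, log):
    """Hypothetical publish operation that records an audit entry."""
    log.append(f"published content {content_id}")
    return {"published": True, "content_id": content_id}


def test_publishing_updates_the_log():
    # Business-facing: the content gets published.
    log = []
    result = publish_content("article-42", log)
    assert result["published"] is True
    # Technology-facing: the system recorded what happened and to which content.
    assert log == ["published content article-42"]
```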
Example #3
Feature / Story: Notifications to users must be consistent in appearance.
AT: Error/Problem notifications
AT: Success notifications
AT: Status notifications
This is a slight variation and there’s really just one thing to note here:
- Notice that you can discuss high-level stories like this. Also notice that decisions about what these notifications are would be referenced by other stories (tests), such as the one above about the user being able to change their password. There’s no reason that stories can’t have a hierarchy and be modular, just as tests can be; the sketch below shows the test side of that.
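Here’s a small sketch of that modularity on the test side: a notification-consistency decision captured once and then referenced by other tests, much as the notification story is referenced by other stories. The format rule is invented purely for illustration.

```python
# Sketch: one shared, modular check referenced by multiple tests.

def assert_notification_is_consistent(message: str):
    """Hypothetical shared rule: notifications start capitalized and end with a period."""
    assert message and message[0].isupper() and message.endswith(".")


def test_change_password_notification_is_consistent():
    assert_notification_is_consistent("Your password has been changed.")


def test_publish_notification_is_consistent():
    assert_notification_is_consistent("Your content has been published.")
```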
So I can hear someone reading this saying: “Okay, but wasn’t all this obvious? I mean, don’t testers write tests like this all the time?”
Yeah, in many cases, they do. The trick is when and to what extent those tests serve as input to development rather than output of what development hands over. The above test examples could all serve as specifications, or at least parts of specifications minus any narrative. These tests could all have been written very quickly in a design meeting with developers and business analysts such that the tests were written based on conversation about what is desired and how it should work.
Test writers often tend to focus on a series of test steps for any given test case when, really, all that’s needed — at least initially — is a simple set of statements indicating the intent of the functionality and how that intent can be verified as working correctly.
It sounds simple, but so few testers I meet and work with actually do it that way. I’m convinced that the biggest challenge test writers face (and test writers don’t have to be just testers) is the ability to narrow down to specific cases of use, abstract out irrelevant details, and provide effective combinations and permutations of intended behavior.
Test writers have to be nimble in terms of being able to write up these cases during the course of a specification meeting, willing to halt discussion to make sure that a particular point has been captured as a test, and able to mentally organize a series of test statements on the fly, sometimes changing the structural and hierarchical specifics of how the tests are put together.
Test writing is a skill that is teachable. Like any practice, it can be honed, focused, and sharpened. In my view of the world, this is what testing is for: the ability to capture information and encode decisions about quality as tests. The resulting tests are for communicating that what we intended is in fact what we implemented.
Simple. But far from easy.