Stories, Features, Acceptance Tests: Which Am I Writing?

If you’re a tester that works in an environment where your teams practice test-driven development (TDD), behavior-driven development (BDD), or some other variation that sounds really similar, you’ll quickly be learning that people toss around terms like “story”, “feature”, “acceptance criteria”, and “scenario.” What they often won’t toss around is the context for those terms. That’s context is important to have because it depends entirely on how the terms are used in your environment. While there are not necessarily standards, there are lots of good opinions that have led to good practices in context. Liz Keogh’s Acceptance Criteria vs Scenarios and Matt Wynne’s Features != User Stories are two good examples.

As a tester, what do you have to know? How involved are you in these things? Is all this just for developers? If you're writing stories, are you also writing tests? Are acceptance criteria the same thing as acceptance tests? Here's my take on a few of these questions.

I’ll first state that from my perspective just about everything comes down to a test. Not everything is a test; but, ultimately, it all comes down to some form of test. I say that because I want testing to be driving the thinking behind various concepts. Testing is organized skepticism and is an interactive, self-correcting dialogue that acts relative to a given goal. Testing will use various artifacts to achieve that goal, such as stories, scenarios, and so on.

So let’s talk about a story. Stories are often a way to decide on the value of work and then, based on that value, to decide which work is going to be planned out. A story gives your development groups a high-level view of what work they would be doing and what value that work has. When a customer (internal or external) wants something, they are going to say something like this:

My people deal with tasks. It would be great if the system would let me manage these tasks in some way.

What the client has just told you is the what. They’ve told you what they want. Another way to word it is that they’ve told you their business need. As you fulfill that need, what you’re effectively going to say back to the client is something like this:

We can give you the ability to create and edit tasks and then view the information about those tasks for specific people.

What you’ve just told the client is the how. What you’re effectively telling them is how to use the solution that you’ve built to get what they want. However before you get to the point of “how” — and in order for you to deliver what a customer truly wants — you have to turn that business need into a story. While the above statements could work, they can also leave a lot out. This is why user story or business story formats tend to be used. Here’s an example:

As a [X],
I want [Y],
so that [Z].

Here X is the specific person (or role) who will benefit from the feature, Y is some feature or feature set, and Z is the benefit or value of the feature. Here's an example with the generic details filled in a bit more:

As a [user role],
I need [the ability to do something],
so that I can [get some benefit or avoid some consequence].

It’s really, really important to discuss the “so that” part of the story, partly so that you can determine if the story is representing certain business values and thus giving your product development and application development teams a basis for prioritizing. All business decisions are presumably made upon values of the business, right? The most common values I’ve come across are:

  • Protect revenue
  • Increase revenue
  • Manage cost

The idea here is that if you're about to implement a story that doesn't support one of those values, chances are that you're about to implement a non-valuable story, at least when considered relative to other stories. This means the particular story in question is a good candidate for pushing down the backlog of requests. You'll sometimes hear the stories that are accepted through this process referred to as the "minimum marketable features." You'll mostly hear this phrase in the agile project management world, and it simply means those features that will yield the most value.

What’s also important to realize here is that you do not have a test. A user story is not a test. To make this a little more concrete, let’s consider a specific example.

As a project manager user
I need the ability to see active and completed tasks
so that I can determine what tasks are still being worked on.

That’s a story. Are there rules for what is and is not a good story? Generally yes, but people tend to apply them based on need rather than worrying about how everyone else is doing it. As an example, with the above story, it might be a little heavy on certain implementation details for some people’s liking. That’s the case for me. I think the user story makes a few too many implementation assumptions. I would prefer the above to be reworded like this:

As a project manager user
I need the ability to manage tasks
so that I can make effective decisions regarding work being done.

A story is good for a sniff test in terms of estimation, as in, "Hmm, this seems really large" or "This seems like a small amount of work." A story can certainly provide the basis for a high-level estimation. In fact, I like to have what I specifically call out as QA Estimates (as opposed to Test Estimates). You may have a story whose scope is not really known quite yet. The QA estimate is how long I think it will take us to get to a shared notion of quality. A test estimate is then how long I think it will take to execute tests against a working application such that we can show whether the shared notion of quality has been reached.

I want to make sure this point is clear. With the first story I gave above, it was pretty obvious that there was a desire for the ability to see two very specific types of tasks. That might imply a filter. That's what a customer might produce for a user story if they are working with developers or are being guided towards a particular solution. The second story was a bit more high-level in that neither the number of tasks nor the types of tasks was stated. What was desired was a general ability to manage tasks, which, beyond filtering them, could very well mean adding tasks, deleting tasks, editing tasks, associating tasks with entities in the system, and so on.

This brings up a point that I think is important: stories are about conversation and ideas. I like to have user stories worded such that they open up conversation rather than box it within predefined boundaries. That’s how I gauge whether my stories are too low-level. And, yes, admittedly this is often a judgment call and there’s lots of wiggle room. That’s not a bad thing.

So if the story is high-level enough, where do the exact details get specified? To my way of thinking, that’s where specifications come in. A specification can help you estimate the extent of the work by detailing the actual work that should be done. That “detailing of the actual work” is usually referred to as acceptance criteria. A given set of acceptance criteria, usually broken up by scenarios, will be written up for individual features of the story.

With my above example, “(Ability to) Manage Tasks” could be the story title. Features of this story would be “Add Task”, “Delete Task”, “Filter Tasks”, and so on. Each of those features would then be broken down into specific scenarios. For the “Add Task” feature I might have these scenarios:

  • “Add a valid task.”
  • “Add an invalid task (missing required fields).”
  • “Add an invalid task (date already passed).”
  • “Add an invalid task (associated to an inactive entity).”

A feature broken down into scenarios is what I refer to as a test specification.

The specification talks about a feature. It's a test specification because it tells you about the different ways to test that feature once people decide how the feature should behave. It serves as acceptance criteria because the point is that everyone is willing to accept these tests as defining what it means for this feature to be acceptable and thus working as intended.

So, to my way of thinking, a story’s behavior is ultimately its acceptance criteria. You could say that those criteria are broken down into acceptance tests. The test specification communicates the agreement about what is considered acceptable for each feature of a story.

So here’s what my example might look like:

Story:
As a project manager user
I need the ability to manage tasks
so that I can make effective decisions regarding work being done.

Feature: Ability to Add Tasks

Scenario: Add a valid task
Scenario: Add an invalid task (missing required fields)
Scenario: Add an invalid task (date already passed)
Scenario: Add an invalid task (associated to an inactive entity)

Feature: Ability to Filter Tasks

Scenario: Filter on "show active and overdue tasks"
Scenario: Filter on "show overdue tasks only"
Scenario: Filter on "show completed tasks only"

As a note, some testers might disagree with my lumping of adding invalid tasks within the feature of "Ability to Add Tasks", on the logic that attempting to add an invalid task would not, in fact, add the task. True enough. You have to decide if you want to call out another feature, such as "Inability to Add Invalid Tasks" or even "Error Handling for Invalid Tasks." This is where context comes in. You and those working with you have to decide what the appropriate organizational structure is.
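
For instance, if you did go that route, one alternative layout, using only the scenarios already listed above, might look like the following. The feature names here are just illustrative options on my part, not a recommendation:

Feature: Ability to Add Tasks

Scenario: Add a valid task

Feature: Error Handling for Invalid Tasks

Scenario: Attempt to add a task with missing required fields
Scenario: Attempt to add a task whose date has already passed
Scenario: Attempt to add a task associated to an inactive entity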

As a final thing, notice that I don't use the term "test specification" anywhere in there, or "acceptance criteria," or even "acceptance test." Those are sort of implicit to me. The document or issue ticket or whatever that communicates the above is a test specification. The combination of stated features and scenarios is the acceptance criteria. The details within each scenario that spell out how the scenario should behave are the acceptance tests.
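
To give a rough sense of that last point, here is a minimal sketch of the kind of detail that might sit inside the "Add a valid task" scenario. The specific conditions and wording are my own assumptions, meant only to show the shape such detail can take:

Scenario: Add a valid task

Starting condition: a project manager user is viewing the task list
Action: the user adds a task with all required fields filled in
Expected result: the new task appears in the task list as an active task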

Whether people accept those latter terms or not is largely incidental to me since I really only need them to focus on a bare minimum of structural elements: Story, Feature, and Scenario. I'll also note that everything I've described here is tool-agnostic. These concepts can be applied outside of any BDD-specific tool, like Cucumber or Concordion, that you may come across or be asked to use. Those types of tools are meant to encode what I've described here, but it's very important to realize that ultimately the tool does not matter; what matters are the concepts behind gathering this kind of information.
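
As one sketch of that encoding, here is roughly how the "Add a valid task" scenario could be expressed in the Gherkin format that a tool like Cucumber reads. The step wording is my own assumption, and in practice each step would need a matching step definition before anything could actually execute:

Feature: Ability to Add Tasks

  Scenario: Add a valid task
    Given a project manager user is viewing the task list
    When the user adds a task with all required fields filled in
    Then the new task appears in the task list as an active task

Notice that the Given/When/Then keywords are just one way of encoding the same Story, Feature, and Scenario structure described above; the concepts come first and the tool syntax follows.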

I’ll have another post that talks about what acceptance tests look like.

Incidentally, for another explanation of all this stuff, you might check out Dan North’s What’s In a Story. In terms of not focusing on the tools, you might check out Liz Keogh’s Step Away from the Tools.
