For those of you who have read my posts on test specifications, you know that I tend to focus on the idea of writing requirements as tests. This, to me, is a large part of the benefit of collaboration-based approaches such as those suggested by Behavior-Driven Development (BDD) practices. (I still prefer Business-Driven Development, but whatever.) What I like about this approach, among other things, is that it builds in traceability. And I like that because requirements traceability was always something I had an issue with as my career developed.
The perceived value of testing is often tied to how much test teams believe — and promote — the idea that they “test for quality.” I say that because I maintain that quality is value to someone whose opinion matters. You might want to check out my post on testing for quality, where I covered a lot of these points. What I didn’t cover in that post is the value of testing itself. The value of testing can be trickier than you may think. Let’s start with this: people will place different value on the quality they perceive. Do we agree on that? If so, I argue that the value people perceive in testing will be in direct proportion to how much they believe your testing tells them what they need to know. Different people will need to know different things, given that their valuations regarding quality will differ in some cases.
What you end up with is a dynamic assessment of value, and it’s here that testers can start to think a bit more deeply about how what they do provides value.
Sometimes you’ll hear testers talk about how they “test for quality.” My problem with that phrase is that it’s a potentially dangerous one, and an even more dangerous expectation, unless some idea of what the phrase actually means gets sorted out. Once you have an idea of what “test for quality” actually means, maybe you can craft intelligent processes around the phrase.
If you’re going to do that, make sure you know why you test and make sure you have some idea of what quality means. Without some thinking around those two areas, “testing for quality” is nothing more than something you say to make managers feel good.
I’ve worked with lots of testers who take on more work than they should because they believe the work they’re taking on is what testers should be doing. It’s often not a question of whether they should do this work given their other constraints, but rather just a focus on how they can regardless of those constraints. Granted, many companies seem to foster this with job descriptions that suggest someone needs to be a developer, DBA, business analyst, and tester all in one. A main aspect of my job as a quality assurance and test practitioner comes down to defining a role that is usually separate from the relatively anodyne job descriptions we’re all used to.
In my post on Stories, Features, and Acceptance Tests, I covered a lot of ground, but I didn’t really cover the acceptance tests themselves. The tests are quite important. Keep in mind that the goal is to have specifications that are clear, sufficiently testable and falsifiable, and in line with client expectations prior to significant development or testing. In essence, the idea is to work out how a system would be tested as a way to check whether the requirements give you enough information to build the system in the first place. You shouldn’t dive straight away into how to implement something, but rather think about how the finished system will be used — i.e., acceptable use — and then double-check the requirements armed with that knowledge.
Acceptance tests should reflect the customers’ perception of when the application meets their requirements. This does not mean that such tests must be defined by a customer or business analyst but it does often mean that such tests are discussed with them.
But how do you write them? What do they look like? Well, let’s talk about that.
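To make this concrete, here’s a minimal sketch of what an acceptance test can look like when written in a Given/When/Then shape. The domain here — an `Account` class with a withdrawal rule — is entirely invented for illustration, not taken from any real system:

```python
# Invented example: an acceptance test for a made-up "withdrawals may not
# exceed the balance" rule, written so the Given/When/Then structure is
# visible in the test itself.

class Account:
    """Minimal stand-in for the system under test (illustration only)."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70


def test_withdrawal_cannot_exceed_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer attempts to withdraw 150
    # Then the withdrawal is refused and the balance is unchanged
    try:
        account.withdraw(150)
        assert False, "expected the withdrawal to be refused"
    except ValueError:
        assert account.balance == 100
```

Notice that each test reads as a concrete example of acceptable use: a customer, a balance, an action, an observable outcome. That readability is the point — someone on the business side should be able to look at the comments alone and agree or disagree with the behavior.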
If you’re a tester who works in an environment where your teams practice test-driven development (TDD), behavior-driven development (BDD), or some other variation that sounds really similar, you’ll quickly learn that people toss around terms like “story”, “feature”, “acceptance criteria”, and “scenario.” What they often won’t toss around is the context for those terms. That context is important to have because what the terms mean depends entirely on how they are used in your environment. While there are not necessarily standards, there are lots of good opinions that have led to good practices in context. Liz Keogh’s Acceptance Criteria vs Scenarios and Matt Wynne’s Features != User Stories are two good examples.
As a tester, what do you have to know? How involved are you in these things? Is all this just for developers? If you’re writing stories, are you also writing tests? Are acceptance criteria the same thing as acceptance tests? Here’s my take on a few of these questions.
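One way I find it useful to separate the terms is by how general or concrete each one is. The sketch below uses an invented withdrawal rule purely for illustration; none of these names come from any standard, and your environment may well use the terms differently:

```python
# Invented illustration of one common way the terms get separated.

# An acceptance criterion: a general rule, stated once.
criterion = "A withdrawal may not exceed the account balance."

# A scenario: one concrete example that exercises the rule.
scenario = {
    "given": "an account with a balance of 100",
    "when": "the customer attempts to withdraw 150",
    "then": "the withdrawal is refused and the balance is still 100",
}

# An acceptance test: the scenario made executable against the system.
class Account:
    """Minimal stand-in for the system under test (illustration only)."""

    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def acceptance_test():
    # Given / When / Then taken directly from the scenario above.
    account = Account(balance=100)
    try:
        account.withdraw(150)
        return False  # the withdrawal should have been refused
    except ValueError:
        return account.balance == 100
```

The criterion is a rule you can discuss; the scenario is an example you can agree on; the test is the scenario made falsifiable. Whether your team treats those as one artifact or three is exactly the kind of context question the posts above wrestle with.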
As a tester the bad news is that you’re not assuring quality. That’s also the good news.