The notion of a “QA team” can set up unrealistic expectations about who “mandates” quality and who is “responsible” for quality. We come to know quality indirectly, through how things function and whether they provide value; not as a result of the actions of a single team. Does that sound silly? Let’s talk about it.
I’ll put my cards on the table: I don’t believe in the idea of a “QA team” at all. (I talk about this a bit in my post about “obvious” quality assurance.) I realize there are industries wherein there are regulations and requirements that a team called “Quality Assurance” must exist. I get that; I’ve worked in some of them. None of that changes my views here.
So here’s my view: there should be a quality assurance function that exists among collaborative teams that use shared artifacts — like tests — as a communication mechanism in order to build up a shared notion of quality that is operationally useful.
But let’s figure out what that means in relation to quality. I say that because when I make a statement like that, some people come back with: “But if we don’t have a QA team, how are we making sure quality is there?”
Okay, so to start this off, let’s consider something that many people (who should know better) often say:
“Testers should have a ‘test to break’ mentality.”
What usually follows on from that is some statement of what makes a “good tester”, with the idea being that “Testers should like to break software.”
Yeah, see, here’s the thing: testing does not break software. Software comes to a testing function broken. What testing does is prove that software is broken and provide information about the nature and context of what is broken. It may sound like I’m splitting semantic hairs there but, in fact, I believe those distinctions — and thus the expectations — are important. I’m reminded of this perhaps ‘colorful’ quote from Linda Wilkinson:
“Overall, an experienced test team can present bugs in such attractive wrapping paper (words) and ribbons (understanding of your problems and issues) that it will take you some time to realize the present you’ve just been given is really just an exceptionally large bag of poop. They’re actually just re-gifting; it’s the large bag of poop you gave them to start with, but somehow you didn’t realize it smelled quite that bad when you turned it over to them.”
Now, “broken” can mean some functionality is obviously broken, as in, say, a web page fails to render or some calculation fails to trigger. “Broken” can also mean that the page is rendering or the calculation is giving a result, but the page is “ugly” or the result is “questionable.” Speaking to the calculation, that issue then becomes: what should the result be? Do we even know?
A shared notion of quality, which a quality assurance function exists to foster, would indicate what an acceptable value for the calculation is. It would draw on all available resources to answer that question and then encode the answer in a test specification that serves as an executable requirement.
That’s a key point. To answer the questions that inevitably come up — and thus get that shared notion of quality — we require input from a business analyst and/or domain expert, a developer, and a tester. While working on that issue, all three roles are exercising the quality assurance function. They may have different roles but they are part of the same team and they are collaborating on a shared notion of quality that can be communicated with — and proven by — a test.
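To make “a test as an executable requirement” concrete, here is a minimal sketch. Everything in it is hypothetical — the discount rule, the $100 threshold, and the function names are stand-ins for whatever rule a real team would agree on. The point is that the agreement among the analyst, developer, and tester is the shared notion of quality; the test merely encodes it so it can be communicated and proven.

```python
# Hypothetical agreed-upon rule: orders with a subtotal over $100
# receive a 10% discount. The business analyst supplied the threshold,
# the tester supplied the boundary cases, the developer supplied the
# implementation. The test is the shared, executable record of that.

def discounted_total(subtotal: float) -> float:
    """Apply the agreed 10% discount to subtotals over $100."""
    if subtotal > 100:
        return round(subtotal * 0.90, 2)
    return subtotal

def test_discount_applies_above_threshold():
    # $150 order: the team agreed the customer pays $135.
    assert discounted_total(150.00) == 135.00

def test_no_discount_at_or_below_threshold():
    # Boundary case the tester raised: exactly $100 gets no discount.
    assert discounted_total(100.00) == 100.00

if __name__ == "__main__":
    test_discount_applies_above_threshold()
    test_no_discount_at_or_below_threshold()
```

If the business later decides the threshold should be $90, the conversation happens again and the test changes with it; the executable requirement morphs along with the shared notion of quality.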
What this means is that when we talk about “Quality” — as if it had a capital Q! — it’s important to realize there is an organizational, operational, and technological context to the work we do to prove that quality does or does not exist, at least to the extent that we would like it to.
Quality Is …
Can you do all this and have a strict team called Quality Assurance? Sure you can. As long as that team routinely brings together the relevant people from various other teams to discuss and answer questions about those organizational, operational, and technological contexts. So then what’s the problem with the notion of a Quality Assurance team?
Well, okay — let’s look at it this way. What does the QA team “assure”? I would argue that this team, by itself, is most assuredly not assuring quality. After all, what is quality?
I think it’s important for people to get around the “Quality as (False) Substance Problem.” What I mean by this is that people sometimes treat quality as if it were literally some sort of substance; something that could, in principle, be read from a gauge, and thus measured easily.
But quality is more like an optical illusion of sorts; a complex interplay of light and shadow, if you’re in a poetic mood. Much like subatomic particles in quantum theory, quality is a tessellation; a complex abstraction that emerges partly from what’s being observed, partly from the person doing the observing, and partly from the process of observation itself.
I really do believe that behind the veneer of metrics and coverage, quality is just a convenient gathering point for a set of ideas regarding what everyone is willing to consider “good enough for who it’s for.” What’s ultimately good enough is what impacts the bottom-line of the business positively and what allows the business to keep its competitive advantage while working effectively and efficiently.
… as Quality Does
And ultimately that is done by delivering more value than not to your users and customers — the people it’s for.
I belabor this point a bit because Quality Assurance is often tasked with saying when the software is “ready” by which is usually meant “done.” Well, that’s just it: software is only “done” when it’s out there in the field, delivering value to customers.
If you agree with anything I just said, ask yourself: do most “QA teams” that you’ve been part of or seen have that wide of a remit? Or even if they do, do they act like it?
My belief is that quality, when treated as a concept, is inhibited by various interrelated factors. I believe James Bach stated this quite well in The Challenge of “Good Enough” Software:
Quality is inhibited by the complexity of the system, invisibility of the system, sensitivity to tiny mistakes, entropy due to development or maintenance, interaction with external components, and interaction with the individual user base. … Creating products of the best possible quality is a very, very expensive proposition, while on the other hand, our clients may not even notice the difference between the best possible quality and pretty good quality.
I really think this is an important point because all of this goes to recognizing that quality is organic. Quality morphs. For me this means that the function of quality assurance is everything you would do or are capable of doing to minimize risk by understanding how quality morphs in your particular context and what role each team has in harnessing the creative forces that act to both support and disrupt the production of something that delivers value.
As James again says in his article:
Quality is the optimum set of solutions to a given set of problems.
I like it. The only caveat I would add is that quality is the optimum set of solutions to a given set of problems that people are having right now.
Now, I know what James said (and what I’m agreeing with) may sound a little trite. But think about it. Okay, yeah, it’s sort of like saying “quality is as quality does” but, in a way, that’s true!
Quality is an emergent, shifting perspective
Quality is not some end thing you achieve; it’s the sum total of everything you put in to begin with based on everything you expect the “qualities” (plural!) to solve … and it’s how everything you put in is actually put to use to provide value by solving real problems that real people are having right now.
Quality ultimately reduces to a viewpoint. So the ultimate questions about quality center around answering this: whose viewpoint?
That’s why I said quality is a bit like a tessellation: it’s a mosaic built up out of various viewpoints. Testers, developers, and business analysts need to be conscious of those viewpoints. Each viewpoint will bring with it different insight into the project, product, and process. Further, each viewpoint can serve as a type of reference for recognizing problems and dealing with those problems.
When I take all this into account, it means that for me the goal of our development projects is coming up with the optimum set of solutions to the problems that each viewpoint makes clear. That means gathering those viewpoints, finding the commonalities in those viewpoints, and ultimately encoding decisions based on those viewpoints as executable artifacts that not only show what we just did but provide a clue on how to keep doing new and better things.