I often frame whatever role I’m in as that of a Quality and Test Specialist. It’s not a term our industry agrees upon; normally people want the word “Engineer” somewhere in their title, as if that term somehow wasn’t terribly vague itself. So let’s dig into what I mean when I talk about being a specialist.
I’ve often described myself as a generalist with specialist tendencies. And I’ve talked a little about how that makes me a technology generalist. And I’ve talked a little about how I believe you can go about finding the specialist tester. I’ve even asked whether we should bother hiring test specialists.
In a similar and related vein, I’ve talked a bit about my “ideal role” in testing. And I’ve talked about the joy of testing, at least from my perspective.
As I was doing some thinking about my career recently, I started to look at all that and ask: “Okay, but how would I frame all this to an up-and-coming tester?” I’ll use this post to try to answer that.
Interpret and Study
My answer is: I wouldn’t frame “all this” to an up-and-coming tester. What I would do is give them some conceptual hooks to latch onto. I don’t happen to be Jewish, but I’ve read more than my fair share of the Babylonian Talmud. One part that always resonated with me was a statement attributed to Rabbi Hillel, who said:
That which is hateful to you do not do to another; that is the entire Torah, and the rest is its interpretation. Go study.
That’s from folio 31a of Tractate Shabbat.
My mantra for specialists is oddly similar.
Think and act experimentally at all times. That is my entire mandate to you and the rest will be interpretation. Go study.
But what is “the rest” here? What am I asking them to go study? I’ll try to answer that by framing what I do more operationally.
Costs of Mistake and Testability
I help people think more broadly and deeply about quality and testing. I do that either by creating an entirely new quality and testing effort from scratch or by refining, refurbishing, and overhauling an existing one. Sometimes the refining, refurbishing, and overhauling is almost equivalent to creating a new one.
One way I help people do that is by shortening their cost-of-mistake curve: from the time we make a mistake to the time we find it, how long is that? The longer that duration, the more problems we likely have and, certainly, the more risk we are exposing ourselves to.
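To make that concrete, here’s a minimal sketch (in Python, with entirely hypothetical data) of what measuring that duration can look like. If you can record when a mistake entered the work and when some testing activity surfaced it, the curve becomes something you can actually track:

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: when a mistake entered the codebase and when
# some testing activity actually surfaced it.
mistakes = [
    {"introduced": datetime(2023, 3, 1), "found": datetime(2023, 3, 1)},   # caught in code review
    {"introduced": datetime(2023, 3, 2), "found": datetime(2023, 3, 9)},   # caught in integration testing
    {"introduced": datetime(2023, 3, 5), "found": datetime(2023, 4, 20)},  # escaped to production
]

# The duration we care about: time from making a mistake to finding it.
gaps = [(m["found"] - m["introduced"]).days for m in mistakes]

print(f"Mean days from mistake to discovery: {mean(gaps):.1f}")
print(f"Worst case: {max(gaps)} days")
```

If that mean is shrinking over time, your curve is shortening. If it’s growing, your risk exposure is growing with it.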
One primary aspect of shortening that curve is helping teams see how to treat testability as a primary quality attribute. With testability comes controllability and observability. With those two aspects comes reproducibility. And with that aspect we can get some focus on predictability. I talked about a lot of this in my plea for testability.
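Here’s a minimal sketch of what that looks like in code. The names are hypothetical, but the shape is the point: a design that lets us control its inputs (including time) and observe its state is one we can test deterministically.

```python
from datetime import datetime, timedelta

class TrialAccount:
    # Controllability: the clock is injected rather than read globally,
    # so a test can set time to whatever the scenario needs.
    def __init__(self, started_at, clock=datetime.now):
        self.started_at = started_at
        self._clock = clock

    def is_expired(self):
        return self._clock() - self.started_at > timedelta(days=30)

    def status(self):
        # Observability: state is exposed for inspection, not buried.
        return {"started_at": self.started_at, "expired": self.is_expired()}

# Reproducibility: controlled inputs plus observable outputs means the
# same result on every run, on every machine.
def test_trial_expires_after_thirty_days():
    frozen_now = datetime(2023, 4, 2, 12, 0)
    account = TrialAccount(datetime(2023, 3, 1, 12, 0), clock=lambda: frozen_now)
    assert account.status()["expired"] is True
```

Every one of those hooks exists for one reason: to make mistakes cheaper to find.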
We become mistake specialists. And, as I’ve talked about before, I do believe this is an ethical mandate.
A corollary to this focus is making sure people don’t focus on a cost-of-change curve instead. Cost of change is a different thing, and it’s misapplied quite a bit. It’s not that it’s irrelevant. It’s just that it has so much wiggle room that it’s hard to define your operational specificity around it.
What a lot of this helps me do is keep teams from worrying about whether they’re “shifting left” or “spreading left.” In reality, you’re spreading testing both right and left. Essentially, you’re distributing various ways of doing testing across the broad spectrum of activities that delivery teams do: anywhere the team can make a mistake.
Design and Execution
I also help people see how to frame testing as a design activity, not just an execution activity.
Here when I say “design activity” I don’t mean just the idea of “test design.” What I do mean is the idea of putting pressure on design.
I talked before about this idea of testing to put pressure on design, as well as the broader idea of testing and design pressure.
What this does is allow me to steer teams away from never-ending debates about TDD and BDD and what they actually mean. What I ask teams to consider is that perhaps those approaches should just be thought of as ways we put pressure on our designs at different abstraction levels.
Putting pressure on design is a lot easier when you have testability in place. And finding mistakes in your design is a lot easier when you have a short cost-of-mistake curve. And having testability plus short cost-of-mistake curves means you are thinking and acting experimentally.
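To illustrate with a hypothetical example (the domain here is invented), the same small feature can have pressure put on it at two abstraction levels. Neither test below is about “doing TDD” or “doing BDD”; both are about forcing the design to justify itself:

```python
# A deliberately small domain, written to be put under pressure.
def apply_discounts(price, discounts):
    """Discounts compound rather than stack, and the result never goes negative."""
    for d in discounts:
        price *= (1 - d)
    return max(price, 0.0)

# Unit level (TDD-flavored): pressure on the design of one function.
# Writing this first forces a decision about how discounts compose.
def test_discounts_never_drive_price_below_zero():
    assert apply_discounts(10.00, [0.60, 0.60]) >= 0.0

# Behavior level (BDD-flavored): pressure on the shape of the workflow,
# phrased in the Given/When/Then terms the business would use.
def test_two_big_discounts_still_cost_something():
    # Given a ten dollar item
    price = 10.00
    # When two 60% discounts are applied
    final = apply_discounts(price, [0.60, 0.60])
    # Then the customer still pays a reduced, but nonzero, price
    assert 0.0 < final < price
```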
Avoid the Purely Technocratic Approach
I’ve talked about the dangers of the technocrat tester and, in that context, I also help people avoid turning testing solely into a programming or algorithmic problem.
The key point is the word “solely.” Programming and algorithms can certainly support certain testing activities, the most obvious being the automation of scripts that check for regressions. Framing things this way helps me avoid entirely fruitless debates about “testing versus checking” and instead focus on testing as a multi-faceted discipline, some parts of which can be automated. It also allows me to caution teams that the most important parts of testing cannot be automated.
In this context, one of the cautionary notes I make sure delivery teams understand is that while automation is allegedly put in place to free up humans, I often see entire teams dedicated to automation. That would imply that, far from freeing humans up, it has locked them into a perpetual maintenance mode.
In those contexts I help teams see how to keep the automated solution as the simplest thing that can solve the problem it’s being used for. Why?
Well, because we do have to maintain the solution. Whether that maintenance happens monthly, weekly, or daily, whatever solution and code we come up with are things that someone will need to debug, improve, troubleshoot inconsistencies in, and just keep running. And the more “sophisticated” our solution (note the quotes), the longer it generally takes to diagnose failures, the harder it becomes to troubleshoot, and the more frustrating it is to change its logic.
The point of pursuing simplicity in solutions (the simplest design and approach that still solves the problem) translates directly into less time spent maintaining solutions to problems we’ve already solved. That frees us up to solve more problems, bring more value to our company, and gain exposure to more and different problems.
Making sure we are exposed to more and different problems is a way of thinking and acting experimentally.
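To make that pursuit of simplicity concrete, here’s a hypothetical contrast. The simplest automated check that solves a regression problem often needs no framework scaffolding at all (the URLs are placeholders):

```python
import urllib.request

# The simplest thing that solves the problem: a plain check that the
# endpoints that broke before still respond. No custom framework, no
# page-object hierarchy, nothing extra to debug when it fails.
ENDPOINTS = [
    "https://example.com/health",
    "https://example.com/api/v1/status",
]

def check(url):
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    for url in ENDPOINTS:
        print(f"{'OK ' if check(url) else 'FAIL'} {url}")
```

When a check like that fails, diagnosis takes minutes. When a many-layered framework fails, diagnosis becomes a project of its own.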
Further, if I can reliably implement automation that supplements but does not replace humans, then I’ve probably done a good job with getting testability in place. And if automation truly helps support the rote tasks, it means I’ve probably set up the team to focus less on the execution activity and more on the design activity.
In fact, in that case, the distinction between “execution activity” and “design activity” starts to blur a bit. The distinction ceases to matter in direct proportion to how much your testing activities are framed as experiments.
Asking Good Questions
I help people on delivery teams ask questions like this:
- What kind of mistakes can I make?
- How can I catch those mistakes?
- How early can I catch those mistakes?
I help people on delivery teams build a system around potential mistake modes. (I don’t call them “failure modes,” which is the term you’ll often see used.) By that I mean those places where the chances of mistake are higher given certain conditions. Some of those conditions — in fact, most of them — are not technical in nature.
Instead those conditions tend to be wrapped up in how we think when we engage with complex things that we build and when we undertake various forms of decision-making under uncertainty. This has to do with the categories of error, and thus the cognitive biases, that we are all subject to where humans intersect with technology. If you’ve watched the Westworld series, you’ve probably heard the character Robert Ford say:
Evolution forged the entirety of sentient life on this planet using only one tool: the mistake.
Absolutely! Everything else was just materials, but the tool was the mistake. And seeing mistakes as a particular kind of tool is a way to think and act experimentally.
A very necessary aspect of this is that I help teams honestly ask and answer questions like this:
- Will I test at all?
- Have I made it as easy as possible to test?
- Will I listen to what the tests tell me?
That last question is absolutely crucial. If you aren’t going to listen to what your experiments — your tests — are telling you, then why run them?
To think and act experimentally, you have to be able to discern when experiments are going well (and thus augment them) or going poorly (and thus dampen them). To do that, you have to be able to listen to your experiments. Listening to your experiments is really only useful, of course, if you’ve set up good experiments to begin with. And good experiments, in our context, tend to mean recognizing that there is not one Quality (capital Q intentional) but rather many qualities.
Various Qualities
There is a distinction between what I refer to as internal qualities and external qualities. The latter are what we deliver to customers, or consumers, of whatever our solution is. The former are what we afford ourselves in order to provide what we deliver.
This very much exists in the realm of software development in general. We have external qualities such as security, performance, accessibility, usability, and so on. We also have internal qualities: maintainability, scalability, extensibility, understandability, and so on. Teams often sacrifice the internal qualities and that very much has an impact on their ability to deliver the external qualities.
All of those qualities, whether internal or external, can degrade over time. So one thing I help teams do is figure out how, when, where, and to what extent those qualities are degrading. Doing that requires putting testing in place at various points. Which points? Well, the points where we are most likely to make mistakes!
Doing this at all requires the ability to put pressure on design via testability. Doing this well requires having a short cost-of-mistake curve.
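As a minimal sketch of what watching a quality degrade can look like (the budget here is hypothetical, as is the function): encoding a quality threshold as a test turns slow degradation into a visible failure, at the point where the mistake gets made rather than months later.

```python
import time

def render_report(rows):
    # Stand-in for real work; imagine this quietly accretes cost over time.
    return "\n".join(f"row {i}" for i in range(rows))

# An external quality (performance) encoded as a test. If a change degrades
# it past the budget, the mistake surfaces in this run, not in production.
def test_report_renders_within_budget():
    start = time.perf_counter()
    render_report(rows=10_000)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"render took {elapsed:.2f}s against a 0.5s budget"
```

The same pattern works for internal qualities: a complexity ceiling, a dependency-count ceiling, a build-time budget. The test is the tripwire at a known mistake point.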
Distribute and Democratize
I help teams see how to distribute quality assurance and democratize testing. There’s obviously a lot to unpack — or interpret! — around such a statement. But it comes down to something fairly fundamental: we challenge ourselves to prove ourselves wrong.
Instead of looking for what we did right and verifying our belief, we try to establish where we didn’t do it right; we falsify what we believe. We implausify what we believe. (I talked a little about implausification in terms of getting lost in test.)
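Property-based testing is one concrete way to act this out. Here’s a sketch using the Hypothesis library, against a deliberately naive (and hypothetical) function. Rather than verifying a few examples we expected to pass, we state a belief and let the tool hunt for an input that proves it wrong:

```python
from hypothesis import given, strategies as st

def budget_split(total_cents, ways):
    # Deliberately naive: split an amount into equal shares.
    share = total_cents // ways
    return [share] * ways

# The belief we are trying to falsify: no money is lost in the split.
@given(st.integers(min_value=0, max_value=10**9),
       st.integers(min_value=1, max_value=100))
def test_split_conserves_money(total, ways):
    assert sum(budget_split(total, ways)) == total
```

Run that and Hypothesis quickly reports a minimal counterexample (something like total=1, ways=2), falsifying the belief and exposing the dropped remainder. That’s the experiment talking; our job is to listen.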
I’m talking about beliefs, notice. It’s a way to shift how experimental thinking is done. That’s why I always focus on quality along with testing. Some people feel testing has nothing to do with quality, whether directly or indirectly. I believe very much that it does. I act out that belief by getting more people involved in testing, and thus more involved in assessments around various qualities, particularly around where we think certain qualities are degrading.
And I encourage people to focus more on how we’re probably doing it wrong than on how we’re doing it right.
The idea of assessing qualities and having that be the ambit of one team (called “QA”) was always ludicrous. The industry is slowly learning that. So, ideally, I help teams avoid making that mistake. I thus help teams distribute the activity of quality assurance.
Testing is how we find out when we’re making mistakes. But we make mistakes in lots of places. And counting on one team to find all those has never been workable. The industry is slowly learning that too. So we have to democratize the activity of testing.
And Thus…
Let your guiding focus be to think and act experimentally.
The rest is interpretation of that statement.
Go study.
A reader commented:

“It’s worth leaving out thoughts such as, ‘The idea… was always ludicrous.’ I thought this post was better than the posts that are more focused on opinions, like ‘The Very Idea of Test Cases.’ It is, barring the few sentences about certain ideas being ludicrous.”

My reply: Why would I leave out a statement that I feel is the truth? No one team has ever realistically been able to hold a mandate to “assure quality” across the broad spectrum of what quality can mean. Attempting to achieve an impossible goal via an impractical method is ludicrous. Now, granted, that’s just an opinion on my part. But that’s what blogs often are. These aren’t peer-reviewed science papers. I’m not sure why being “more focused on opinions” would be troublesome to you, since this post is no more or less opinion-focused than the one you reference (The Very Idea of Test Cases).

Granted, it’s not my opinion that I do the things in this post; after all, I do. But it is certainly my opinion that these are the good and proper ways to frame a quality and test specialty. Which someone could certainly disagree with.

And regarding opinions, isn’t what you just stated to me here your own opinion? It’s hard to escape having opinions! And there’s most certainly nothing wrong with expressing them.