If I were being completely accurate, I would have to title this post something like “The Danger of the Companies that Frame Testing as a Technocratic Discipline and Hire Testers Who Reinforce This View”. But that’s a really cumbersome title to write! However, I believe that the technocrat tester is a big problem in our industry and many companies are reinforcing this problem. So let’s talk about this.
Let me get a few points out there that may be relevant.
- I’ve talked about the danger of the certified tester. But, honestly, that barely holds a candle to this other pernicious problem that I see getting worse.
- I’ve also talked about the value of the modern tester, separate from technical skills and how the concept of the SDET is becoming a dangerous conflation.
- I’ve talked about whether we’re defocusing our development and test practices.
- I already wrote about how I believe in the tester specialist who is a technology generalist.
- I’ve talked about how I have little respect — and probably even less tolerance — for the so-called technical interview that doesn’t consider broad skills.
- I’ve talked a lot about test solution architects, which is not an idea (it seems) some companies can get behind.
With the exception of the first post, all of that was in response to the current trend in the industry of putting a focus on so-called Software Development Engineer in Test (SDET) roles where the emphasis is on being a developer rather than a tester. Putting my cards even more clearly on the table here: I think how many companies hire for test positions today is not only ridiculous but, in many cases, harmful. In the Chicago market alone, this has reached borderline absurd proportions.
What’s The Focus?
The main culprit here is usually a focus on (1) developer interaction, (2) automation, and (3) an assumption that both of these tasks are best handled by a “developer with some testing background” rather than a “tester with some development background”. I see companies who have a lot of turnover in their roles because they get this distinction wrong. Yet they continue to adhere to it.
This is something companies must make clear: do you want a developer with some testing background or a tester with some development background? If you don’t know or you don’t understand why the distinction would matter, do yourself a favor: figure it out and stop wasting a lot of people’s time, including your own.
Let me throw out a few other points:
- I’m not against testers interacting with developers. I have a series of posts in my learning category about where I think testers should spend time learning development practices and tools.
- I’m not against automation. I have a huge category of posts on this blog about automation. I have a GitHub repo where, for better or worse, I’ve released some of my own open source automation tools.
Automated Code Developers — Or Testers Who Can Write Automation?
I’ve talked about how I know that automation is not testing. I still see and hear about (and sometimes talk to) companies that, during an interview for a QA and/or Test position, take the candidate through various coding exercises as if they were a developer. I recall one environment where someone was drilling me on the nuances of nodes in a LinkedList versus the looser structure of an ArrayList and why I would or wouldn’t use one or the other.
They never once got around to asking me how I test applications. Or what I think effective test techniques are. Or what I consider some of the key challenges when testing an application or a service.
I do understand that they are trying to determine if you can think “algorithmically” in some cases. But the fact is that as a person writing automated scripts, while some level of algorithmic thinking is important, most of what you are using are libraries that do much of the work for you. (Or you should be, if those libraries are out there.) And if those libraries are not out there, you work with the people on your teams who specialize in development and interact with them to help you out.
The same applies for when I interact with developers. I don’t need the entire store of their knowledge at my immediate disposal. Because, see, I don’t specialize in development. I specialize in quality assurance and testing. But I know that one key aspect of being able to do testing efficiently is the proper use of automation. So I learn enough automation tools, techniques and approaches in various languages so that I can most effectively use these tools.
But even all this is still focusing on the technology side. What I really want from testers is to know the tradeoffs in abstraction levels when you go from human testing to automated checking. What I really want to know is how testers work to spot ambiguity, inconsistency and outright contradiction — in requirements as well as in a functioning application.
Some of that test design and test thinking may get encoded in the form of scripts or other solutions in tools that support — but do not perform — testing. This is usually where the so-called “technical tester” comes in.
Do You Want a “Technical Tester”?
So what do you do as a “technical tester”? Well, in my view, you learn just enough, when you need it, in order to do a task that is related to the implementation of automated checks against your system or discussing code-based checks with developers. Many companies are simply not looking at situations in this nuanced way. I don’t even know that calling it “nuanced” is accurate; it’s simply common sense to me.
I honestly think some of these people building so-called “test teams” assume that you walk around with algorithms stored in your head that you just recall on demand. If I did that, maybe I would be a developer. But, quite frankly, I don’t store minutiae that I can look up via a simple web search.
What I do store in my head are the heuristics by which I approach testing and quality assurance, and how those disciplines interface with developers and business analysts. Further, I focus on how all test artifacts — including automated scripts — are reflective of a shared source of truth that indicates what the level of quality is across a series of features that, when taken together, provide business value for customers.
I still see and hear about places who interview testers — even if by the sadly misnamed idea of Software Development Engineer in Test — as if they want coders. This shows a complete and fundamental lack of understanding of testing, not only as it is applied in general but as it is applied in the context of collaboration and communication with developers and business analysts.
Okay, Okay — We Get It. So What?
The reason this concerns me is because I’m seeing an entire “generation” of testers who are being indoctrinated into this view. They are then becoming proponents of this view. They become really good developers and what they do is turn testing problems into programming problems.
Let me repeat that and be even more specific:
People who don’t really get testing — or don’t want to do it — tend to want to turn it into a programming problem instead.
What does it mean to “get it”? In this case, I mean that you have internalized a series of practices and ways of thinking that inform how you approach testing. I’ll come back to this at the end of the post.
What you end up with instead are a large group of practitioners who don’t “get it.” Thus you get a technocratic obsession promoted by people who like to write code and call that testing rather than dealing with the social discipline of quality assurance and testing as a whole.
Speaking from another bit of career experience, I worked in one environment where the entire test team was made up of coders who didn’t want to test at all, but wanted to be Java developers. They were just biding their time. The management not only supported this, they encouraged it. In fact, one of the rules for hiring anyone for the test team — regardless of role — was that they had to “be able to code.”
It is my belief that we are seeing an industry that is perhaps slowly — but no doubt effectively — supporting an abdication of tester responsibility by making assumptions that a developer mindset can and should do most of the testing when that is demonstrably false. We are seeing assumptions made that automation of any sort is testing when, in fact, it is absolutely not.
Roll Out BDD and TDD
This has become even crazier when you factor in how some developers-as-testers have gone the full BDD route. I’ve talked about the BDD trap, at least as I’ve seen it, but I’ve also talked about BDD as a form of human engagement which gets lost if you don’t, as Liz Keogh has said, step away from the tools.
BDD — and concepts like it — often come into the picture because certain technocrats think that it’s “cool” or career enhancing — the latter unfortunately often being true. What’s more, these folks often have an ideological commitment to a style of development wherein everyone tries to think of everything that’s going to be done before it’s done. That way we can write our “feature files” or our “executable specifications”.
What you’re actually doing – with TDD or BDD – is creating yet another layer of technology. You are creating a whole lot more code that is itself untested. Think about that: you’re actually adding to your cognitive and technical workload.
Okay, now, c’mon. I’m being purposely facetious here, right? Well, I’m not saying TDD and BDD have to be this way — but they often are.
Consider that TDD and BDD can be about writing test cases or test scenarios, but more often they become about automating a series of output checks before something is implemented. What they should be about is putting pressure on design at various levels of abstraction so that we can discover problems in our understanding of a shared idea of quality. These practices need to be about clarifying intent, encoding understanding, and pressuring design. And, to be sure, these technocrat practitioners say that’s what these practices do.
But what do you end up with? With TDD, you mostly end up with a set of code-based unit tests that some developer wrote — and maybe reviewed with another — that serve as automated checks of classes and methods. With BDD, you mostly end up with a certain constrained form of English that has “glue code” that matches those English statements to some code that, in turn, delegates down to more code by using word token extraction and regular expressions. And thus again do we basically get automated checks, usually working at the UI level.
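To make that “glue code” point concrete, here is a toy sketch of what BDD frameworks do under the hood: match a constrained English sentence against a registered pattern, extract word tokens with a regular expression, and delegate down to ordinary code. Every name here is invented for illustration; this is not any real framework’s API.

```python
import re

# A toy registry mapping step patterns to "glue" functions, mimicking
# how BDD tools bind English statements to code. Illustrative only.
STEPS = []

def step(pattern):
    """Register a glue function for a given step pattern."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"the user deposits (\d+) dollars")
def deposit(amount):
    # In a real suite this would delegate down to more code;
    # here we just return a result to check against.
    return {"balance": int(amount)}

def run(line):
    # Token extraction via regular expressions: find the first
    # registered pattern that matches and pass the captures along.
    for pattern, fn in STEPS:
        match = pattern.match(line)
        if match:
            return fn(*match.groups())
    raise LookupError(f"No glue code for step: {line!r}")

print(run("the user deposits 50 dollars"))  # {'balance': 50}
```

Note what this sketch makes visible: the “natural language” layer is itself more code — patterns, captures, delegation — all of which is typically untested.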
What gets lost? Well, what a good tester would tell you — and what most developers will not — is that the efficiency of authoring and evolving tests is just as important as, and arguably more important than, the readability of those tests. The efficiencies of authoring and evolving do not come built in just because you use tools like Geb or Rspec or Cucumber or whatever the flavor of the month is. Just making something readable does not mean it is expressive, nor that it reveals intent. Further, all the natural-language, business-domain-focused tests in the world do not necessarily give you those authoring and evolving benefits. In fact, it can often seem like the opposite.
I have written more than my fair share of test description language posts on this site, and one of the key problems I’ve had in every situation where I’ve applied these thoughts is getting business teams, testers, and developers to build up a picture of the application and how it is tested once these repositories of natural language tests get even slightly large.
For all that TDD and BDD talk about being design-guiding, I’ve rarely found this to be the case except with the simplest of applications. What I see are people claiming that tools like Cucumber are misunderstood because they’re referred to as “testing tools”, not realizing that testing is a design activity, not just an execution activity. Even developers who tout TDD rarely seem to make that distinction. And even when they do, they certainly don’t seem to practice it.
Why do I say that? Well, let’s wrap this post up with what I consider to be relevant points of test thinking that are getting lost in the shuffle to have testers be developers.
A Focus On the Danger
Earlier I mentioned that to not “get it” (referring to testing as a discipline) means failing to apply effective and efficient thinking about testing in a technology context. I said I would come back to that at the end of the post, and here we are.
First let me just reiterate something I indicated above but that bears repeating:
- You can’t automate testing, at least not if you believe testing is a sapient process that requires an engaged human brain that collaborates in equal measures with applications (learning them) and with people (talking to them). You can support the test process — which is always a human process — with tools. But those tools do not test anything.
This “automation” includes not just UI-level automation but also code-based automation. So a tester — as opposed to a developer — who doesn’t know every design pattern or doesn’t have a Computer Science level of algorithm knowledge is not a hindrance. One of the dangers of the technocrat tester — and those who enable them — is that such testers are considered a hindrance, at worst, or less capable of doing the job of testing than a developer, at best.
Let’s step into some basics for a second. A check is a repeatable process or method that verifies the correct behavior of an application or service in a determined situation with a determined input expecting a predefined output or set of interactions.
But … determining “correct behavior” here means, on one level, exercising the range of states and the combinations of operations. On another level it’s recognizing that all testable requirements are one or more data conditions, exercised by test conditions, that lead to observable results. Yet this in turn requires understanding how tests complement requirements. This happens when the tests are written to tell a compelling story that provokes the right questions and refines a shared understanding of what quality means.
This means that testing rules are decoupled from the implementation, so tests — or, rather, the discussion and communication and thinking that comprise testing — can easily evolve along with the business rules. This also allows us to write test scenarios before any code, even before the interfaces for that code are present.
The danger of the technocrat tester is that they tend not to be able to express any of the above.
A tester knows that requirements — like tests — help clarify intentions. Requirements — like tests — provide an unambiguous description of behavior. When this focus is backed by the human activity of treating testing as a design activity, you are led to these heuristics:
- Use the context of creating tests to see if a design is too rigid or too unfocused.
- Use the context of executing tests to stay focused on making progress.
- Use the context of feedback from tests to raise the quality of the system.
A set of guidelines for action are thus expressed that are something like these:
- Use a minimum set of processes and tools.
- Minimize translation paths between artifacts.
- Provide as unified a “source of truth” as possible.
- Don’t unnecessarily mix your abstraction levels.
The testing world these days is mostly about so-called “cross-functional” teams working in an iterative delivery environment, planning with user stories and testing frequently changing software under the tough time pressure of short iterations. Because of these pressures, a focus on making “testing faster” is put in place and that’s where the technocrat comes in, with promises of tool solutions that actually do testing, as opposed to simply supporting some aspects of the simplest parts of testing — i.e., the execution parts as opposed to the design parts.
No code or tool focus will help you answer the following questions, that the technocrats often barely consider:
- Does the application work well?
- Is the application usable?
- Is the application useful?
- Is the application successful?
The technocrats are often simply interested in the answer to one question:
- Does the application work at all?
And this is important because testing if something works well often leads to tangential information discovery. Unattended automated execution — whether code-based or not — won’t provide that learning. Nor will a focus on algorithms or design patterns. Those things will (possibly) impact code quality and that can in turn impact maintainability. That is certainly a viable measure of internal quality and it’s one that developers should absolutely be focused on to some degree.
But the information discovery we often need the most is at a level of abstraction that is removed from the code — and thus largely removed from the practices that are focused on code. And that means the “skills of a coder” are less important to a tester.
More To Come
This was largely a rant post, but this notion of the technocratic focus of testing is one that is deeply concerning to me. My goal here was to get my ideas out there in a moderately digestible, albeit not concise, form and then use that as the springboard to further refine my thinking on this topic.