
The Danger of the Technocrat Tester

To be completely accurate, I would have to title this post something like “The Danger of the Companies that Frame Testing as a Technocratic Discipline and Hire Testers Who Reinforce This View”. But that’s a really cumbersome title to write! However, I believe that the technocrat tester is a big problem in our industry and that many companies are reinforcing this problem. So let’s talk about this.

Let me get a few points out there that may be relevant.

  • I’ve talked about the danger of the certified tester. But, honestly, that danger doesn’t hold a candle to this other pernicious problem that I see getting worse.

With the exception of the first post, all of that was in response to the current trend in the industry of putting a focus on so-called Software Development Engineer in Test (SDET) roles, where the emphasis is on being a developer rather than a tester. Putting my cards even more clearly on the table here: I think how many companies hire for test positions today is not only misguided but, in many cases, harmful. In the Chicago market alone, this has reached borderline ridiculous proportions.

What’s The Focus?

The main culprit here is usually a focus on (1) developer interaction, (2) automation, and (3) an assumption that both of these tasks are best handled by a “developer with some testing background” rather than a “tester with some development background”. I see companies that have a lot of turnover in these roles because they get this distinction wrong. Yet they continue to adhere to it.

This is something companies must make clear: do you want a developer with some testing background or a tester with some development background? If you don’t know or you don’t understand why the distinction would matter, do yourself a favor: figure it out and stop wasting a lot of people’s time, including your own.

Let me throw out a few other points:

  • I’m not against testers interacting with developers. I have a series of posts in my learning category about where I think testers should spend time learning development practices and tools.

Automated Code Developers — Or Testers Who Can Write Automation?

I’ve talked about how I know that automation is not testing. I still see and hear about (and sometimes talk to) companies that, during an interview for a QA and/or Test position, take the candidate through various coding exercises as if they were a developer. I recall one environment where someone was drilling me on the nuances of nodes in a LinkedList versus the looser structure of an ArrayList and why I would or wouldn’t use one or the other.
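
For what it’s worth, the distinction they were drilling is look-up-able trivia. Here’s a toy Java sketch — my own illustration, nothing from that interview — of the tradeoff in question:

```java
import java.util.ArrayList;
import java.util.LinkedList;

public class ListTradeoffs {
    public static void main(String[] args) {
        ArrayList<Integer> array = new ArrayList<>();
        LinkedList<Integer> linked = new LinkedList<>();

        for (int i = 0; i < 5; i++) {
            array.add(i);        // append: amortized O(1), may reallocate the backing array
            linked.addFirst(i);  // head insert: O(1), just rewires node pointers
        }

        // Positional access: O(1) index math on the array,
        // O(n) node traversal on the linked list.
        System.out.println(array.get(2));   // [0,1,2,3,4] -> 2
        System.out.println(linked.get(2));  // [4,3,2,1,0] -> 2
    }
}
```

An ArrayList gives constant-time positional access at the cost of shifting (or reallocating) on insert; a LinkedList gives constant-time insertion at the ends at the cost of walking nodes for `get(i)`. Useful to know — but hardly the measure of a tester.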

They never once got around to asking me how I test applications. Or what I think effective test techniques are. Or what I consider some of the key challenges when testing an application or a service.

I do understand that they are trying to determine whether you can think “algorithmically” in some cases. But the fact is that, as a person writing automated scripts, while some level of algorithmic thinking is important, most of what you are using are libraries that do much of the work for you. (Or you should be, if those libraries are out there.) And if they are not, you work with the people on your teams who specialize in development and interact with them to help you out.

The same applies for when I interact with developers. I don’t need the entire store of their knowledge at my immediate disposal. Because, see, I don’t specialize in development. I specialize in quality assurance and testing. But I know that one key aspect of being able to do testing efficiently is the proper use of automation. So I learn enough automation tools, techniques and approaches in various languages so that I can most effectively use these tools.

But even all this is still focusing on the technology side. What I really want from testers is to know the tradeoffs in abstraction levels when you go from human testing to automated checking. What I really want to know is how testers work to spot ambiguity, inconsistency and outright contradiction — in requirements as well as in a functioning application.

Some of that test design and test thinking may get encoded in the form of scripts or other solutions in tools that support — but do not perform — testing. This is usually where the so-called “technical tester” comes in.

Do You Want a “Technical Tester”?

So what do you do as a “technical tester”? Well, in my view, you learn just enough, when you need it, in order to do a task that is related to the implementation of automated checks against your system or discussing code-based checks with developers. Many companies are simply not looking at situations in this nuanced way. I don’t even know that calling it “nuanced” is accurate; it’s simply common sense to me.

I honestly think some of these people building so-called “test teams” assume that you walk around with algorithms stored in your head that you just recall on demand. If I did that, maybe I would be a developer. But, quite frankly, I don’t store minutiae that I can look up via a simple web search.

What I do store in my head are the heuristics by which I approach testing and quality assurance, and how those disciplines interface with developers and business analysts. Further, I focus on how all test artifacts — including automated scripts — are reflective of a shared source of truth that indicates what the level of quality is across a series of features that, when taken together, provide business value for customers.

I still see and hear about places that interview testers — even if under the sadly misnamed idea of Software Development Engineer in Test — as if they want coders. This shows a complete and fundamental lack of understanding of testing, not only as it is applied in general but as it is applied in the context of collaboration and communication with developers and business analysts.

Okay, Okay — We Get It. So What?

The reason this concerns me is because I’m seeing an entire “generation” of testers who are being indoctrinated into this view. They are then becoming proponents of this view. They become really good developers and what they do is turn testing problems into programming problems.

Let me repeat that and be even more specific:

People who don’t really get testing — or don’t want to do it — tend to want to turn it into a programming problem instead.

What does it mean to “get it”? In this case, I mean where you have internalized a series of practices and ways of thinking that informs how you approach testing. I’ll come back to this at the end of the post.

What you end up with instead are a large group of practitioners who don’t “get it.” Thus you get a technocratic obsession promoted by people who like to write code and call that testing rather than dealing with the social discipline of quality assurance and testing as a whole.

Speaking from another bit of career experience, I worked in one environment where the entire test team was made up of coders who didn’t want to test at all, but wanted to be Java developers. They were just biding their time. The management not only supported this, they encouraged it. In fact, one of the rules for hiring anyone for the test team — regardless of role — was that they must “be able to code.”

It is my belief that we are seeing an industry that is perhaps slowly — but no doubt effectively — supporting an abdication of tester responsibility by making assumptions that a developer mindset can and should do most of the testing when that is demonstrably false. We are seeing assumptions made that automation of any sort is testing when, in fact, it is absolutely not.

Roll Out BDD and TDD

This gets even crazier when you factor in how some developers-as-testers have gone the full BDD route. I’ve talked about the BDD trap, at least as I’ve seen it, but I’ve also talked about BDD as a form of human engagement, which gets lost if you don’t, as Liz Keogh has said, step away from the tools.

BDD — and concepts like it — often come into the picture because certain technocrats think that it’s “cool” or career enhancing — the latter unfortunately often being true. What’s more, these folks often have an ideological commitment to a style of development wherein everyone tries to think of everything that’s going to be done before it’s done. That way we can write our “feature files” or our “executable specifications”.

What you’re actually doing – with TDD or BDD – is creating yet another layer of technology. You are creating a whole lot more code that is itself untested. Think about that: you’re actually adding to your cognitive and technical workload.

Okay, now, c’mon. I’m being purposely facetious here, right? Well, I’m not saying TDD and BDD have to be this way — but they often are.

Consider that TDD and BDD can be about writing test cases or test scenarios but more often it becomes about automating a series of output checks before something is implemented. What they should be about is putting pressure on design at various levels of abstraction so that we can discover problems in our understanding of a shared idea of quality. These practices need to be about clarifying intent, encoding understanding, and pressuring design. And, to be sure, these technocrat practitioners say that’s what these practices do.

But what do you end up with? With TDD, you mostly end up with a set of code-based unit tests that some developer wrote — and maybe reviewed with another — that serve as automated checks of classes and methods. With BDD, you mostly end up with a certain constrained form of English that has “glue code” that matches those English statements to some code that, in turn, delegates down to more code by using word token extraction and regular expressions. And thus again do we basically get automated checks, usually working at the UI level.
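
To make the “glue code” point concrete, here is a toy sketch of the mechanism — regular-expression matching with word-token extraction — that tools like Cucumber rely on. This is not Cucumber’s actual code, just my illustration of the delegation pattern:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GlueCodeSketch {
    // A "step definition": a pattern plus the code it delegates to.
    record Step(Pattern pattern, Function<List<String>, String> action) {}

    public static void main(String[] args) {
        List<Step> steps = new ArrayList<>();

        // Glue: the constrained English is matched, its tokens captured,
        // and the captures handed down to ordinary code.
        steps.add(new Step(
            Pattern.compile("the user adds (\\d+) (\\w+) to the cart"),
            tokens -> "cart now holds " + tokens.get(0) + " " + tokens.get(1)));

        String sentence = "the user adds 3 items to the cart";
        for (Step step : steps) {
            Matcher m = step.pattern().matcher(sentence);
            if (m.matches()) {
                List<String> tokens = new ArrayList<>();
                for (int g = 1; g <= m.groupCount(); g++) tokens.add(m.group(g));
                System.out.println(step.action().apply(tokens));
            }
        }
    }
}
```

Every “readable” step, in other words, bottoms out in a regex and a delegate — one more layer of code that itself has to be authored, evolved, and debugged.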

What gets lost? Well, what a good tester would tell you — and what most developers will not — is that the efficiency of authoring and evolving tests is just as important as, and arguably more important than, the readability of those tests. The efficiencies of authoring and evolving do not come built in just because you use tools like Geb or RSpec or Cucumber or whatever the flavor of the month is. Just making something readable does not mean it is expressive, nor that it reveals intent. Further, natural-language, business-domain-focused tests do not necessarily give you those authoring and evolving benefits. In fact, it can often seem like the opposite.

I have written more than my fair share of test description language posts on this site, and one of the key problems I’ve had in every situation where I’ve applied these ideas is getting business teams, testers, and developers to build up a coherent picture of the application and how it is tested once these repositories of natural-language tests get even slightly large.

For all that TDD and BDD talk about being design-guiding, I’ve rarely found this to be the case except with the simplest of applications. What I see are people claiming that tools like Cucumber are misunderstood because they’re referred to as “testing tools”, not realizing that testing is a design activity, not just an execution activity. Even developers who tout TDD rarely seem to make that distinction. And even when they do, they certainly don’t seem to practice it.

Why do I say that? Well, let’s wrap this post up with what I consider to be the relevant points of test thinking that are getting lost in the shuffle to have testers be developers.

A Focus On the Danger

Earlier I mentioned that to not “get it” (referring to testing as a discipline) means a lack of applying effective and efficient thinking about testing to a technology context. I said I would come back to that at the end of the post, and here we are.

First let me just reiterate something I indicated above but that bears repeating:

  • You can’t automate testing, at least not if you believe testing is a sapient process that requires an engaged human brain that collaborates in equal measures with applications (learning them) and with people (talking to them). You can support the test process — which is always a human process — with tools. But those tools do not test anything.

This “automation” includes not just UI-level automation but also code-based automation. So a tester — as opposed to a developer — who doesn’t know every design pattern or doesn’t have a Computer Science level of algorithm knowledge is not a hindrance. One of the dangers of the technocrat tester — and those who enable them — is that such testers are considered a hindrance, at worst, or less capable of doing the job of testing than a developer, at best.

Let’s step into some basics for a second. A check is a repeatable process or method that verifies the correct behavior of an application or service in a determined situation with a determined input expecting a predefined output or set of interactions.
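
That definition reduces neatly to code — which is exactly why checks can be automated. A minimal Java sketch (the discount example and the names are mine, purely illustrative):

```java
public class CheckSketch {
    // The "system under test": a determined situation and input...
    static int applyDiscount(int priceInCents, int percent) {
        return priceInCents - (priceInCents * percent / 100);
    }

    // ...and a check: repeatable, with a determined input
    // and a predefined expected output.
    static boolean check(int input, int percent, int expected) {
        return applyDiscount(input, percent) == expected;
    }

    public static void main(String[] args) {
        System.out.println(check(1000, 20, 800));  // true: the check passes
        // What the check cannot tell you: whether 20% is the right discount,
        // whether the rounding policy serves the business, or whether the
        // feature is usable or useful. Those are testing questions.
    }
}
```
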

But … determining “correct behavior” here means, on one level, exercising the range of states and the combinations of operations. On another level, it’s recognizing that all testable requirements are one or more data conditions, exercised by test conditions, that lead to observable results. Yet this in turn requires understanding how tests complement requirements. That happens when the tests are written to tell a compelling story, provoking the right questions that refine a shared understanding of what quality means.

This means that testing rules are decoupled from the implementation, so tests — or, rather, the discussion and communication and thinking that comprise testing — can easily evolve along with the business rules. This also allows us to write test scenarios before any code, even before the interfaces for that code are present.

The danger of the technocrat tester is that they tend to be unable to express any of the above.

A tester knows that requirements — like tests — help clarify intentions. Requirements — like tests — provide an unambiguous description of behavior. When this focus is backed by the human activity of treating testing as a design activity, you are led to these heuristics:

  • Use the context of creating tests to see if a design is too rigid or too unfocused.
  • Use the context of executing tests to stay focused on making progress.
  • Use the context of feedback from tests to raise the quality of the system.

A set of guidelines for action are thus expressed that are something like these:

  • Use a minimum set of processes and tools.
  • Minimize translation paths between artifacts.
  • Provide as unified a “source of truth” as possible.
  • Don’t unnecessarily mix your abstraction levels.

The testing world these days is mostly about so-called “cross-functional” teams working in an iterative delivery environment, planning with user stories and testing frequently changing software under the tough time pressure of short iterations. Because of these pressures, a focus on making “testing faster” is put in place and that’s where the technocrat comes in, with promises of tool solutions that actually do testing, as opposed to simply supporting some aspects of the simplest parts of testing — i.e., the execution parts as opposed to the design parts.

No code or tool focus will help you answer the following questions, that the technocrats often barely consider:

  • Does the application work well?
  • Is the application usable?
  • Is the application useful?
  • Is the application successful?

The technocrats are often simply interested in the answer to one question:

  • Does the application work at all?

Instead of just asking ‘does it work?’ when discussing overall tests, testers need to ask ‘how would you know if it actually works well?’ The focus shifts on trying to define what ‘good enough’ means in terms of user capabilities and benefits, without tying it to a particular technical solution or implementation.

And this is important because testing if something works well often leads to tangential information discovery. Unattended automated execution — whether code-based or not — won’t provide that learning. Nor will a focus on algorithms or design patterns. Those things will (possibly) impact code quality and that can in turn impact maintainability. That is certainly a viable measure of internal quality and it’s one that developers should absolutely be focused on to some degree.

But the information discovery we often need the most is at a level of abstraction that is removed from the code — and thus largely removed from the practices that are focused on code. And that means the “skills of a coder” are less important to a tester.

More To Come

This was largely a rant post, but this notion of the technocratic focus of testing is one that deeply concerns me. My goal here was to get my ideas out there in a moderately digestible, albeit not concise, form and then use that as the springboard to further refine my thinking on this topic.


This article was written by Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.

8 thoughts on “The Danger of the Technocrat Tester”

  1. Thank you for this. I will, of course, need to reread a couple of times to make complete sense of it.

    Am working alone and building Rspec testing into my RoR application. Saturation tests on every aspect would grind all other progress to a near halt. I am looking for a thoughtful approach which has an appropriate testing footprint, where the weight of testing is placed to best benefit. Have got several tests working but frankly the cost/benefit ratio for (say) validation tests is not great. Others might be better. With a sophisticated fine-grained pattern of permissions in the application, I am trying to weigh up what amount of BDD testing will justify its cost (measured in hours or days of my time).

    One of the things that has delayed my getting into TDD and BDD is the sheer struggle to make sense of the plethora of RoR testing tools available. What is each one? How do they work together, or conflict, or compete? Which one(s) to run with? Working in a large organization with standard decisions can be a pain, but some headaches are pre-solved for you!

    In my current predicament I am wondering whether to use two competing test mechanisms, each to answer a different philosophical question:

    minitest to answer the question: “Are the unit tests safe to proceed to system tests?”
    Rspec + Cucumber to answer the question: “Will this system quickly fall on its face in front of users?”

    Finally, thank you again: it is good to read a nuanced view that does more than tub-thump for the idea of testing everything everywhere.

  2. An eye opening post Jeff!

    I am on the journey of converting myself from a pure “manual” tester to an “automation” tester who can code.

    Aw man, I stumbled across this and now I will be spending my whole day reading through all of your older posts… which is a good thing!! I can’t agree more with you; this is awesome. I have watched the request for SDETs slowly migrate across the country to where it has almost pushed out traditional testers. And now, job postings specifically call out “manual testers”, when I’m not sure they are aware of what they are really asking for.

    Automation “checks” are not the silver bullet. We use tests written in Gherkin format as a way to standardize our tests, and it allows the Automation Engineers to use those exact tests in their ‘checks framework’. We couldn’t get the developers to develop to the tests, so that part was discarded, but the QA side remains in place. My justification for it remaining this way is that it allows the “manual” testers to be able to test what the Automation Engineers are coding. I always like to ask SDETs, “Do you really think you are a better coder than a developer, such that you are error-free but they are not?” It makes them stop and think. And so why do we trust what an SDET codes over a Developer? I have been laughed at before when I also ask, “Who tests the automation tests?” Well, the SDETs do. So they test their own code?? Seems problematic to me.

    Thanks again for the great article.



  4. Hello Jeff,

    your article is very good for the current state of the industry, but there’s a saying “Dress for the job you want not for the job you have”.

    What is your opinion on the AI progress now days?

    If Google DeepMind manages to play Starcraft II by using only the visual interpretation could it be used to test most of the applications?

    Who would be able to teach it how to do it: the testers able to code, or the ones focused on manual practices?

    What would an AI make better use of: BDD, TDD, or automated tests?

    Looking forward to your response,


    1. So one thing to note: the danger of the technocrat tester is the tester who focuses on tools to the exclusion of the human elements. These are the people who treat testing almost solely as nothing more than a programming discipline. That is a danger not just for the current state of the industry but for its foreseeable future, in my opinion. And that’s actually magnified if you bring “AI” into it.

      So on that topic …

      The difference between artificial intelligence and artificial sapience is one we don’t often talk about. And we should. Further, having a great deal of intelligence (artificial or otherwise) does not connote an emotional understanding of the wider venue in which something is operating. Let’s say some “AI” plays a game to completion. Okay, so what? Does that “AI” have the level of curiosity and ability to explore beyond the bounds of what it means to simply “complete the game”? Does it take satisfaction in the “job well done”? Does it “eagerly await a sequel” (Starcraft III, to use your example)?

      Does the “AI” — while using those visual interpretations — also afford moments of clarity as to why those visuals may or may not be appreciable to the human eye? Can it make distinctions about what is happening?

      You might say: “What does any of this have to do with testing?” Well, it’s part of what we do — particularly in the context of games — when we look at how something may appeal to human beings.

      So here’s a much better test. Have an AI play “Call of Duty: Black Ops”. I don’t want to spoil much so let’s just say: Would the “AI” anticipate the “Reznov reveal”? Could it? And if it did, how would it test the right pacing to make sure players are given enough information to have a chance of understanding the reveal just before it happened? (As a story and narrative experience tester, I can tell you this is exactly what some of the testing for that game was about.)

      This is the distinction between testing — as a sapient discipline — and checking, which is primarily an algorithmic one.

      My view of “AI” is that it’s largely reductionist, not emergent. Even seemingly emergent properties are traceable to the underlying parameters by which the AI is constructed. Sapience, on the other hand, is pretty much exactly the opposite of that.

      Now, let’s say we do design “artificial sapience.” You are talking about a life form, at that point, that would have so little in common with us, that it would probably scarcely notice us. Think about it: a distributed life form that has pathways that move at near the speed of light — but that can also operate at that speed. We humans would be unbearably slow to such a life form; so much so that it might not recognize we are a “fellow intelligent being with sapience.”

      And if we agree that “testing” is a human activity — as opposed to “checking”, which can be a computation — then that does have some interesting ramifications of what “testing” would mean to something that has “artificial sapience.”

      I sincerely appreciate your question. It mirrors something I’m currently studying, which is simply: how reductionist is testing? Or, put another way, how much of an “ecological” (i.e., wide) view is there to testing when it is conducted by human beings? One way of approaching that idea is from the other direction: looking at a non-human system that “reasons” about a system similar to how we might conceive of “testing”.

  5. Hello,

    I am so happy that you have replied 🙂

    There are examples of testers in the real world as well. For instance, ants. They are significantly simpler than us in terms of anatomy, but they do test: they apply a check against a set of parameters, share the results with their peers, and perform exploratory tasks very well.

    This weekend I watched the video “ANT WAR or SUPERCOLONY: New Yellow Crazy Ants” (I am not affiliated with the channel in any sort of form) and I noticed that if I map what I do at work on a new system against what they do in the video, I find many, many similarities. This is mostly because, in my head, for almost every type of input I have found, I have created a set of tests which ensure that I achieve a good enough result. Ants have the same. When their build “fails”, the ants get killed and they stop the research on that part of their world for a while. Later, one “brave” individual will try again. The difference in the “death” part is that we store the info and integrate it into our system for better use. Also, our “sharing” part is better because of the internet and such. As a whole we are a better organism. Phew 🙂

    How would we label this video: “Why Do These Liquids Look Alive?” From which point on do we call it curiosity? If it is related to intelligence, programming this is very easy. Very, very easy. Have any network and program the actions. Let it roam in a world of interactive objects. Make it choose a small section of the screen and apply any of the known actions. Make it log the results in a node. Make it try to apply the same action in another area. Log the findings. It is that easy; I think that this could be a weekend project for most developers. It would take me about a week because I suck at it. The problem in today’s world is not curiosity, nor the data, nor the linkage of data. It is the competition. All the companies develop their own sort of system because they compete with others on a different level. As soon as they reach a point where competition does not bring money, they will build on top of each other and complementary to each other (as an example, see the progress of the tool called Docker). That’s when the shitstorm will come toward humankind.

    As for the reply related to games: yes indeed, your examples and train of thought are correct, until the big companies release an all-purpose AI and neural network in which people will be paid to program behaviors. From that moment on, the system will know that it should also be curious about the new version, because the new version involves more sales, therefore more money, therefore it is worth it for Google/Microsoft/“you name it” to send such a prediction to the gaming company and cash it out. History has proven that when it comes to money or power, humans are suckers for it. The most under-appreciated will do their best to teach the machine as well as they can, just to bring down the ones who hurt them before.

    If I read our discussion entirely, I think I have one answer for “That is a danger not just for the current state of the industry but for its foreseeable future, in my opinion. And that’s actually magnified if you bring ‘AI’ into it.” The answer falls exactly on the technocrat testers, who would do anything for their own well-being to program something that proves to someone that they can program. Any other non-technocrat tester is happy with writing or using a tool that makes his job easier and sharing the knowledge in the community. On the other hand, the technocrats try to do everything programmatically because they want to show that they can.

    Honestly, I do not think that my testing is much different than my validation. The only differences would be:
    1) my curiosity
    2) the fact that I log my work in order to improve upon next time
    3) my research

    Of all the above, I have not yet replaced #3 with a system at work, and I still perform 1 and 2 myself because for some practices it is not feasible enough yet. But if it were, I probably would do it.

    Have a good day,

  6. I stumbled across this today while writing my performance review. I’ve always been a high-performing QA analyst, with 10 years of experience. I’ve served as scrum master on various teams and very much enjoy the analysis of requirements, i.e. making sure the code works well and making sure it’s functionality our clients want. My current company has changed every QA’s title to Software Engineer in Test and now only hires those with CS degrees. Amazingly and thankfully, they sent us to coding camp to learn Ruby for a month, and then it was off to the races: know how to automate on deadline within a year. It’s been rough, and seeing the transition in what is considered relevant and valuable as a QA has been somewhat disheartening. This was refreshing to read, especially on this day! You’ve helped me figure out some additional points to include in my review.
