Automation is a Technique, Not Testing

Marshall McLuhan said “We become what we behold. We shape our tools and then our tools shape us.” Anyone who seeks excellence in the testing craft must struggle with the appropriate role of tools. For testers, nowhere is that struggle more apparent than in dealing with automation.

Automation is a technique. When it’s treated as testing, that’s bad. It’s important for testers to get people to realize that even the term “automated testing” is a misnomer. It’s “automated checking.” In this I agree with the thoughts of James Bach and Michael Bolton in Testing vs Checking.

The reason for this, as those two guys state, is because testing is a sapient process and tools are not sapient. I know they are trying to get away from the “sapient” wording but I personally like it. The lack of this distinction is, I believe, one of the reasons why “test tools” are replacing testers across the industry, with the emphasis on the SDET role being just one manifestation of that.

It may sound like a semantic shell game, but, as James Bach says, it’s no different than how “programming” is more than just “compiling.” Would you call a compiler (the tool) a programmer (the person)? If no, then why call a checker (the tool) a tester (the person)? The sapient — i.e., reflective and thoughtful — activity of testing is not something a tool can do, just as a compiler cannot do the sapient activity of programming.

If checking is a tactic of testing, as asserted by the referenced article, then automation is just one of the tools for doing that checking. Automation is a particular context that allows checking to be done. And, to be sure, automated checks are a fantastic way to support a test process.

Testers Don’t Just Check

Let’s make sure that this point is drilled home. Consider some of the things testers do:

  • Look for missing concepts
  • Group ideas by commonality and focus only on variations
  • Split validation and processing
  • Summarize and explore important boundaries
  • Look for inconsistencies, ambiguities, and contradictions

Does anyone really think a tool will do that? Of course they don’t. But the way automated tools get used, as well as the heavy emphasis on such tools in many environments, ends up leading to a thought process where testing is considered nothing more than an execution activity. And that’s simply not true. Testing is a design activity.

So am I advocating against automation? Not at all.

Automation is a great technique to support the human-centric process of testing. But that’s what automation does: support it. Automation doesn’t define it. It’s important to avoid a technocratic, tool-centric approach and this is even more so the case in an industry where concepts like TDD and BDD put emphasis on “testing” within the context of various tools. Let’s consider “unit testing” as just one example. “Testing” was a bit of a misnomer for the particular programming activity that we call “unit testing.” The name itself makes it sound like it’s some activity that is distinct from and other than coding. And that, in turn, seems to have fostered an idea that “unit testing” is separate from design. Yet, it’s not at all.

What practices like TDD have tried to bring to the forefront is the idea that testing is part of design and an integral part of code. The idea of thinking about testing at this level was a great intersection of design, coding, and debugging. But when you conflate that activity with “unit testing” — i.e., the execution of those “tests” via tool — you start to treat it as an “after the fact” activity. The same applies to automation at the UI or service level, which is what some people have treated BDD as.

As a test specialist, you know that you don’t just get a tool — open source or otherwise — and “implement” it. In this era of rapidly shortening development cycles and frequent releases (sometimes daily), automation as a technique absolutely needs to be a focus. But only after the success criteria are put in place for it. You don’t automate simply because you want to automate. It’s generally less than optimal if your automation activities are driven by technical instead of business considerations.

Automation is Nothing More (or Less) Than Coding

It’s important to realize that while coding is a discipline, automation is not. Automation is a technique, one that relies on the discipline of coding. This technique is performed to gain fast feedback. But automation is only as “smart” as that which it consumes. What it consumes are tests — or, at the very least, test-thinking — that are the output of human thinking and deliberation.

Here’s what automation won’t do: it will not consider different ways to test a given feature. It will not think up combinations or permutations on its own. It will not weigh options and choose a particular test strategy that has the best chance of finding important problems quickly. It will not make determinations about how much time or effort to spend, particularly based on anything it “discovers.”

As a test specialist, all of what I just mentioned is your job! Even if you are a “technical” tester or a SDET. One of your goals is to invest in test solutions (some of which deal with automation) that will lower your testing cost over time while making sure you do not compromise your ability to find and communicate important issues in a responsible time frame.

The Discipline Part

A moment ago I said: “Automation is a technique, one that relies on the discipline of coding.” Automation is simply one context for a coding activity. Now, even though I’m speaking for myself here, I believe it’s important for test specialists with a technical focus to keep something in mind. And it’s simply this: I want writing test solutions to feel more like the first time I wrote a few lines of code and stood back in amazement at what I was able to make happen with very little effort. I want the coding process to feel more like creating and less like trying to satisfy the strange inner workings of a browser, a REST service, a compiler, or whatever other mechanism I’m working with. At the same time, I want an environment that helps me make the design choices that make test solutions easy to create and understand from the start, and that continue to be the right choices to keep my solutions easy to test, extend, and maintain as they grow larger.

This is really important! And this is where choosing your automation solution language and ecosystem matters. Many, many (many!) times, it does not matter if the language you are writing the test solutions in matches the language the developers are writing the production code in. Yet it is routinely the case that companies will say they want a “Java test solution” or a “Python test solution.”

Now, maybe there are good reasons for that. But often it’s simply because they want someone to use the same language as the developers.

And why is that? To be fair, there are practical elements to this. Often developers are using a so-called “mainstream” programming language like Java or C# and it is perceived to be easier to find automaters who are similarly familiar with those languages because of their mainstream status. Another argument is that those languages are “powerful” and the underlying implication is that you shouldn’t be using “toy” or “scripting” languages — like Ruby, JavaScript, Python — for something as important as test automation.

Ah — but remember it’s not test automation; it’s simply code that executes checks. But beyond the semantics, let’s talk about power for a second.

The Power

As most programmers and developers will tell you, years have been spent tolerating baroque, complex tools that were the only way to get the power we needed. Years have been spent accepting reduced power for some sanity-enhancing simplification of the programming model. It’s why some companies still choose very expensive tool solutions like QTP. And, again to be fair, some trade-offs are truly fundamental. Power vs. simplicity is not one of them, however.

Power is what makes the easy stuff easy, and the hard stuff possible. In the context of writing automation, power is what enhances my productivity, enables me to fulfill my goals, and keeps me engaged while doing so. Power is sufficiency to perform the tasks I want to undertake. To feel powerful as a programmer, you need to build on a substrate that is itself capable and widely deployed. Then, your tools must give you full, unrestricted access to that power.

The Language

Let’s talk about choosing a language for a tool solution, particularly automation. Let’s say your situation is that your company has a REST (or SOAP) service. Sitting in front of and alongside those services are some browser-based applications. Further, there is a mobile component as well. You can use pretty much any programming language to execute checks against this kind of application suite. So when it comes down to it, what do you choose? Clearly you’ll focus on the language you are most familiar with. But let’s say you are familiar with multiple languages. I already said any of those languages will likely support the automation you need, so what drives your decision then?

Well, here’s what it is for me. First, let me throw a couple of quotes out there that are no doubt very familiar to you but have, since my early days in the field, driven my thinking on this subject.

  • Frank Lloyd Wright: “An architect’s most useful tools are an eraser at the drafting board and a wrecking bar at the site.”
  • Albert Einstein: “Everything should be as simple as possible, but no simpler.”
  • “Music is the silence between the notes.” (Attributed to Mozart, Debussy, Copland, Cage, Adams and just about every notable composer.)
  • Antoine de St. Exupéry: “Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away.”

Regarding that last quote, what matters to me in automation solutions is what syntax can be taken away from the scripts while still leaving the meaning intact. I want a language that supports that ability: to remove as much boilerplate and cruft as possible, leaving me with the essence of what I want. That pretty much rules out Java and C# for me right at the outset.

One of my major goals is simplifying syntax and making each line easier to comprehend. That simplicity of expression will encourage you to compose programs that are, in turn, simpler and easier to comprehend. Your programs will become less complicated and not so intertwined.

A focus on simplifying and on comprehension often means you stand a better chance of keeping your checks and your framework entirely separate. There is thus much less chance of accruing technical debt. So my point here is that simplifying small things, like syntax, can lead to simpler big things, like entire programs. Further, a more natural syntax not only makes your programming life easier, it lets you focus on the problem you are expressing. And in the case of automated checks, what helps most is the extent to which those checks correspond to test thinking.

So let’s take a quick look at what I mean here. In most automated solutions, the most straightforward way to instantiate page objects in calling code is to create new objects each time you want to interact with a page. In Ruby, as just one example, you can use blocks and automatically instantiate pages as needed:
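Here is a minimal, self-contained sketch of that idea. The `on_view` helper and the stubbed `LoginPage` are invented for illustration; in a real solution (Symbiont, Watir, and so on) the page object would drive an actual browser:

```ruby
# A hypothetical LoginPage, stubbed so the sketch runs standalone.
# In a real framework this class would wrap browser interactions.
class LoginPage
  def login_as(user)
    "logged in as #{user}"
  end
end

# Instantiate a page object and yield it to the caller's block, so
# calling code never litters itself with explicit Page.new calls.
def on_view(page_class)
  page = page_class.new
  yield page if block_given?
  page
end

on_view(LoginPage) do |page|
  puts page.login_as("admin")  # => logged in as admin
end
```

The block form keeps object creation out of the calling code entirely, leaving only the intent.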

Consider that same bit of logic in a language like C#

To do this, I’m actually using the “using” construct in a way that some people frown upon. My point here is that, when power of execution is not an issue, I choose automation that has a degree of what I’ll call, for lack of a better term, elegance. All I mean by this is high signal, low noise. The programs are short. Short programs are cheaper to build, easier to deploy, and cheaper to maintain. The programs are concise rather than terse. Let’s consider another example, this one regarding how a page object looks.

Here’s one in Java.

Here’s that same thing in C#:

And here’s that same thing in Ruby:
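As a rough, self-contained approximation of the style I mean (the mini-DSL and element names here are invented for illustration, not Symbiont’s actual API):

```ruby
# A tiny, hypothetical page-object DSL: class-level declarations
# generate reader methods, so the page definition stays declarative.
module PageDSL
  def element(name, locator)
    locators[name] = locator
    define_method(name) { self.class.locators.fetch(name) }
  end

  def locators
    @locators ||= {}
  end
end

class LoginPage
  extend PageDSL

  element :username, id: "username"
  element :password, id: "password"
  element :login,    id: "login-button"
end

page = LoginPage.new
puts page.username.inspect  # the declared locator for the username field
```

Everything in the class body is a declaration; there are no constructors, annotations, or getter boilerplate to wade through. That is exactly the “high signal, low noise” quality I’m after.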

Now, to be fair, there are better ways I could have done this in Java and C# and there are worse ways I could have done it in Ruby. The above are showing a bare minimum of what is possible with the least amount of work using my own automation solution called Symbiont. My overall point with these examples is simply to say that irrelevant complexity quickly becomes dangerous complexity. Simple components — like that simple and concise Ruby page object — allow systems to do what their designers intend, without also doing other things irrelevant to the task at hand.

Did I Stray From the Topic?

You might be wondering if I’ve gone off the path. I started talking about automation and its context as a checker rather than a tester, and thus you do automated checking rather than automated testing. But then somehow I ended up in the weeds talking about the “power” of various languages and showing code examples. What the heck happened here?!?

Well, that was sort of my point. Notice how easy it was to slip from one context to the other?

Everything about automation is related to coding: choosing ecosystems and languages and then, using those languages, applying certain patterns, like the page object pattern. None of that, however, has anything to do with testing. Even when I write some logic that calls my “login as admin” method in the above examples — that’s still not testing. It’s simply checking to see if the script can log in using certain credentials.
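To make that concrete, here is the entirety of what such a check amounts to, sketched in plain Ruby with the login behavior stubbed out (the class, method, and credentials are all invented for illustration):

```ruby
# A check is: perform an action, compare an observed value to an
# expected value decided by a human ahead of time. Nothing more.
class StubLoginPage
  VALID = { "admin" => "s3cret" }.freeze  # invented credentials

  def login(user, password)
    VALID[user] == password ? "Welcome, #{user}" : "Invalid credentials"
  end
end

page     = StubLoginPage.new
actual   = page.login("admin", "s3cret")
expected = "Welcome, admin"

# Pass or fail is all the tool can report; it cannot ask whether these
# were the right credentials to try in the first place.
puts(actual == expected ? "CHECK PASSED" : "CHECK FAILED")
```

Note that the expectation was supplied by a human. The tool compares; it does not decide what is worth comparing.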

The testing part comes in when I think about what kind of logins I should be trying, perhaps based on permissions or roles. And what an invalid login is. And what kinds of messages the user should get for valid and invalid logins. And how should those messages be displayed so that we have consistency in our success, warning, and error displays to users. All of that automation — no matter what language — will not think about those things for me. And, only in the loosest sense, will it test those things for me. It will simply check them.

But it’s very easy for many people to get lost in all that “coding stuff” (the second part of my post) and confuse it with the idea that you are doing “testing” in an automated fashion (the first part of my post). All this being said, I do believe that automation as a technique, particularly if it plays a large part in supporting your overall test strategy, must be of a sort that can consume tests. It’s up to the human beings in that process to make sure it is consuming the right kinds of tests.

A Final Bit of Realism

Let’s face it: going around telling everyone that they “talk wrong” when they speak about “automated testing” is not a way to get people to listen to your argument. I’ve used the phrase “automated testing” routinely and will no doubt continue to do so. All this being said, however, part of being a specialist in your discipline is to recognize how words are used and to make sure you clarify the context of your activities.

I’ve been in situations where I’ve given the “automation is checking, not testing” spiel but I have not then become a phrase zealot and shot down anyone who used the words “automated testing” in a sentence. I’m less concerned with someone using the phrase “automated testing” if I know for a fact that they have no illusions about it: automation simply does the grunt work (test execution) and has nothing to do with the important work (test design).

I do think you can take language policing a bit too far and there is a balance to be struck here. As a test specialist (always assuming I actually am one; I’m not always sure), my goal is to keep searching for that balance. This article is one attempt to do so.


About Jeff Nyman

Anything I put here is an approximation of the truth. You’re getting a particular view of myself … and it’s the view I’m choosing to present to you. If you’ve never met me before in person, please realize I’m not the same in person as I am in writing. That’s because I can only put part of myself down into words.

If you have met me before in person then I’d ask you to consider that the view you’ve formed that way and the view you come to by reading what I say here may, in fact, both be true. I’d advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.


2 Responses to Automation is a Technique, Not Testing

  1. Jim Hazen says:

    Jeff,

    I’ll go one step further on the basic premise of automation not being testing.  It isn’t even automated checking.  It is a technique to automate the “execution” of a check or other action to exercise the target software.

    Food for thought, people. Think about it.

    • Jeff Nyman says:

      Interesting. That might be pushing the semantics too far for me but that’s not to say your point is invalid by any means.

      Consider that a human can do pure checking as well. I would argue that a human that follows a rote script is really doing nothing more than checking. In that case, assuming little to no thought beyond completing the script, that’s a manual check. So if that same script is fed into a tool that performs the execution, it’s simply that same check, but automated. So in both cases you have execution of a check: one done by a human, another by a tool.

      The notion of where checking crosses into testing (or vice versa) is definitely interesting to me. It’s also interesting how far to carry the semantics and how much the distinction can and should guide thought. Checking seems to be something we do to confirm or disconfirm something. Which is why “checking” works great for the technique of regression. But we don’t necessarily expect checking to reveal new insights to us or provide a lot of new information. After all, we know what something should (or should not) do and we’re simply checking if it does (or doesn’t) do it.

      Testing, however, seems to be predicated upon the idea that we’re not just confirming or disconfirming, but seeking a shared notion of understanding regarding not just how a feature works but how it works in combination with other features, and whether it makes sense as part of a user experience, and whether we are able to describe the feature’s operation in a way that makes sense. Testing is, in my view, predicated upon the search for insight, speculation, inconsistency, ambiguity, contradiction, and so on. Testing doesn’t always require execution, which is why it can be a design activity. Checking, however, does require execution. So I’m still led back to the idea that this “execution check” can be manual or automated.
