The Abstract Battle for Irrelevancy

An interesting discussion came up on LinkedIn recently regarding whether automated tools “find bugs,” and I found that discussion to be a good example of exactly what is wrong with a lot of our testing industry these days. I find testers are fighting ever more abstract battles and becoming less relevant as they do so. But maybe I’m the one who’s wrong on that? Possibly! Let’s dig in.

First, to be fair to all participants, here’s a link to the conversation. You can see my opening comment in there as well as others who responded.

I’m providing that because I don’t want to mischaracterize anything. You can read everything for yourself should you wish to dig in.

There’s one thing I want to call out. In one of my comments, which is easy to miss, I said this:

I should note: I’m not even saying what the testers are saying is wrong. I’m just saying it’s not as relevant as other things they could be saying.

That is the core issue for me with a lot of test discussions I see and why I think I move a bit orthogonally to many of our vocal testers in the industry. I see a trend of not saying the more relevant thing and instead putting up strawman arguments and then attacking those strawmen as if they weren’t some abstract thing we created just to attack.

In line with my sentiment, I’ve posted things like Don’t Be Such a Tester and Testers, We Need a Narrative partly in response to this continuing slide of at least a vocal segment of testers into irrelevancy. And, what’s worse, a lot of junior testers may be listening to them because they sound authoritative.

Irrelevancy. Yikes. Now, that is a strong word but I do very much mean it and I do use it very deliberately. Being fair, I also allow for the notion that this idea could be applied to me as well. I have an entire series of posts here that I call my indefinito series that could easily be seen as entirely irrelevant to anything having to do with testing.

I was trying to think of ways to frame this immediate discussion and here’s what I came up with.

Chess or Gettysburg?

The idea of a setting — where something takes place — allows for varying degrees of abstraction or concreteness. The presence of a setting is one of the necessary criteria that differentiates wargames from general games of strategy.

It feels appropriate to bring this context up because a lot of testers seem to feel that testing is always in a battle for its very existence in the wider industry. (This reminds me of my long-ago post Winning Battles, Losing Wars.) I’ve even heard one vocal tester describe it as a “culture war.”

Consider the game of chess. The board on which a chess game takes place is completely abstract, right? It’s just sixty-four squares with no relationship to any real or imagined terrain. That is the terrain I see a lot of testers occupying with their “arguments.”

Consider a wargame like Gettysburg. This has a setting which characterizes both the forces in conflict and the locations where the battles take place. It’s real and practical. That’s where I believe testers need to reside.

In that LinkedIn thread, you’ll find people saying things like:

“The automation tool didn’t find a bug, a human who wrote the tool actually found the bug and even then it wasn’t really a bug until a human classified it as a bug.”

Now maybe that resonates with you. Maybe that sounds exactly right to you. Keep in mind the part I quoted above: I’m not necessarily saying this statement is somehow categorically wrong. So you and I might still agree. But maybe the above statement doesn’t just resonate with you and sound right to you; maybe you also believe this is what should be said. If so, that’s where you and I would start to disagree.

Here’s what I think when I hear the above — and what I know, from a whole lot of experience, non-testers also think: you’re boring me. You seem irrelevant. Put more charitably, there are more relevant things you could have said.

Sheesh, tough crowd, right? But here’s why I say that. I know the tool didn’t, on its own, decide to find a bug. And just about everyone I work with or have ever worked with knows that too. We know that a human had to program the tool to recognize situations where something that previously worked no longer works.

I also know the deviation we found might not end up being a bug, or might not be our bug. So do the people I work with, and so do all the people I’ve ever worked with.

We know all this because we’re operating in a specific setting (Gettysburg) and not some abstract space (chess).

A lot of our vocal testers (who, incidentally, also seem to be the most negative-sounding ones if you watch them on public venues like LinkedIn) seem to assume everyone is playing chess when they’re really playing Gettysburg.

And someone playing Gettysburg, but acting as if they’re playing chess, is irrelevant.
