Cross-Discipline Associative? What does that even mean? I don’t know. It sounds good, though, right? What I mean by it is that testers take information from one area of thought (a discipline) and attempt to apply it to their own area of thought. (Eating my own dog food, I tried this when I talked about ubiquity or looking at military history.)
Being perhaps a little more relevant here, I’ve come to the conclusion that just as we expect developers to learn from testers, more focus on this needs to occur from the other direction. I’ll be the first to admit: my ideas are still forming on this. Or, rather, my ability to express these ideas intelligently is still forming.
There are four operative contexts that led to this post: (1) my career is in software testing and quality assurance, (2) my attempts to learn to be a better programmer over time, (3) my son, at my urging, learning C and C++, and (4) my apparent inability to avoid becoming embroiled in debates about “technical testers” and whether testers should learn a programming language.
This is all coming about because of some concerns I have in terms of what I see practiced among many testers. It’s not so much that they can’t test as it is that they can’t think around testing to really get into the meat of their discipline.
Along with this, I’ve had the pleasure of working with some really incredible developers. I’ve also worked around some really bad ones. And when I tried to distill what it was that differentiated them, it really came down to how they thought around the problem they were solving. That was then coupled with their ability to utilize various techniques within their discipline.
I then realized that I learned a lot about testing in my own forays into being a good programmer — which is one thing — and then being a good developer — which is an entirely different thing. Around this same time, I started talking with my son who is interested in learning to write games. As I was teaching him about C and C++, I started getting into the basics of programming and that’s where my ideas started to gel a bit in terms of how this is relevant to — and necessary for — testers.
All of what follows is, of course, opinionated based on how I see the world. I’m open to this viewpoint morphing, particularly since I know I’m making what might be some oversimplifications here. So if I sound like an idiot in what I say, at least we can all have hope that maybe I’ll learn that as I say it.
A Little Development Stuff …
At their heart, programs deal with two things: data and actions. (Substitute “procedures” or “algorithms” for actions if you prefer.) So consider C. C is a procedural language. That means it emphasizes the algorithm side of programming. Conceptually, procedural programming consists of figuring out the actions a computer should take and then using the programming language to implement those actions. In this view, a program essentially prescribes a set of procedures for the computer to follow to produce a particular outcome.
… Relates to Some Testing Stuff
Now, that sounds a lot like writing a test, right? Testers tend to think of a set of actions that lead to some outcome. Yet in a recent work experience, I was in a situation where testers had written many tests with very vague outcomes, the equivalent of “and then nothing should go wrong” or “and then the values should change” or “and then check all other places where that value shows up.”
So just thinking in a procedural fashion does not, by itself, make effective test writing possible any more than procedural programming necessarily makes for a good program.
Further, I started to notice that testers who tend to be very algorithmic or procedural also tend to avoid revealing the intent of their tests.
The Limits of Procedural Thinking in Development
It’s certainly the case that early procedural languages, like FORTRAN and BASIC, ran into organizational problems as programs written in them grew larger. For example, programs in those languages often relied on unrestricted branching statements (such as GOTO), routing execution to one or another set of instructions depending on the result of some sort of test condition.
This led to lots of problems, one being that it was often nearly impossible to understand a program simply by reading it. You could execute it and see what it did, but sometimes what it did could not be correlated to what the code said. Because of this, modifying such programs was a scary thought. It was better to work around the existing code or in fact just write entirely new code.
The Limits of Procedural Thinking in Testing
This is more than just kind of like what I see testers do. It’s exactly like it. Sometimes testers aren’t quite sure what a test does, but it must do something; after all, why is it there if it doesn’t? But they don’t quite know if they can add to it without changing its intent. So the thinking seems to go like this:
“Well, maybe I could just write some overarching stuff around what’s there like maybe saying ‘… and while doing all this, also check the reports.’ Or maybe I’ll just write another test, and even if it’s doing the same thing as a few other tests, well, whatever. At least it’s there. A little duplication never killed anyone, right?”
Programming Gets Structured
In response to these procedural issues, computer scientists developed a more disciplined style of programming called structured programming. As just one example of how this was employed, structured programming limits branching (choosing which instruction to do next) to a small set of well-behaved constructions, such as for loops, if-else statements, and so on.
Testing Gets Structured?
Consider how many testing formats evolved into keyword-based constructions, where you have something like columns on a spreadsheet called “Screen, Data, Action” and each row indicates a test step. Or consider recent attempts at BDD-type formats — a test description language (TDL), essentially — wherein the structure of tests is limited to being put in place with clauses such as Given, When, Then.
Granted, there is actually a world of difference between those two evolutionary approaches in testing. One current issue is that testers tend to treat them as if they were the same, but that’s definitely a story for a different day.
Programming Starts to Get Modularized
Along with this more structured approach to programming that was being promoted, top-down design also played a part and was promoted as well. The idea here was to break a large program into smaller, more manageable tasks. You basically develop program units called functions to represent individual task modules. If one of those tasks was felt to be too broad — doing too much, for example — you could divide it into yet smaller tasks. You continued with this process until the program was compartmentalized into small, easily programmed modules that had a very distinct purpose. These modules were called to do one thing and they returned a defined output based on the input they were given.
Testing Starts to Get Modularized?
The above sounds like a test case, right? In contrast to a program, you make sure that your test case doesn’t branch off to test a series of conditions. Yet it’s not that simple, is it? Sometimes related conditions can, and probably should, be tested together. That’s the basis of an “and” in a test. “The cost value should be reflected in the UI form AND the cost value should be reflected in the report.”
The problem is that many testers do not just write test cases that way. They will write them with “or” conditions as well.
And, perhaps a more pernicious problem that I’m leading up to: this focus on the procedures forces testers to think algorithmically, not just in the design of tests but in their execution. I’ll come back to this point. But it’s crucial to what I’m getting at here: if testers see how procedural thinking developed in software design and construction, they can start to see how that has been mirrored in many cases with test design and construction.
Programming Needed To Modularize Design, Not Just Construction
Needless to say, the structured programming techniques reflected a procedural mind-set, thinking of a program in terms of the actions it performs. Structured testing did something very similar.
A problem started to develop though. Eventually computers got more complex. They became much smaller, but certainly more powerful. They could thus do more. Thus the problems they tried to solve became more complex. Enter the era of large-scale programming.
Eventually programmers realized that rather than trying to fit a particular problem to the procedural approach of a language, they wanted to attempt to fit the language to the problem. The idea that eventually developed was to design representative forms of data that corresponded to the essential features of the problem. This was the basis of object-oriented programming.
Unlike procedural programming, which emphasizes algorithms, object-oriented programming emphasizes the data. So, for example, a class is a specification describing one of those new data forms and an object is a particular data structure constructed according to that plan. In general, a class defines what data is used to represent an object and the operations (actions) that can be performed on that data. So if I’m dealing with plans in a clinical trial, I may have a Plan data form (class). If I’m dealing with liquidity in hedge funds, I may have a Hedge Fund data form (class). Or, when my son is designing an RPG, he may have an NPC data form (class).
The point is that the object-oriented approach to program design is to first design classes that accurately represent those things with which the program deals. You start speaking (and coding) in the language of the domain you are working in.
Does Testing Modularize Design?
What about tests that do this? We have test models that provide a way of thinking about users or entities in our application, such as Purchase Orders or Invoice Reports or whatever. In terms of representation, testers have to focus on what are generally called scenarios. Those scenarios effectively say that this User will create a Purchase Order and that information will show up on an Invoice Report.
Or do they? Many testers I encounter seem not to write in the language of the domain or, at the very least, they vary the domain terminology enough that it can be unclear what is specifically being talked about in a test or set of tests. Here, of course, the idea of domain-driven testing and a ubiquitous language are front and center.
But just like in programming, a class may be unrepresentative of the thing it is modeling, just as a test scenario can be. Or a test scenario can be limited, such as not trying certain permutations. I’ll have more to say about scenarios like this in a bit but right now I’ll just note that designing a useful, reliable class can be a difficult task — just like doing this for a test scenario can be.
Top-Down, Bottom-Up; Algorithmic, Heuristic
Incidentally, the process of going from a lower level of organization, such as classes, to a higher level, such as program design, is generally called bottom-up programming. The idea being that instead of concentrating on tasks, you concentrate on representing concepts.
Yet so many testers do not do this and, again, I argue that it comes from thinking too algorithmically. This is as opposed to thinking heuristically, which I believe good computer science concepts help you do, and what I believe object-oriented analysis and design foster.
The ironic thing is that most people, in their general affairs, don’t think algorithmically about much of what they do. People do tend to think heuristically. For example, if you have to get the snow off your driveway, you don’t list out all of the steps required. You simply start doing the work, relating data to actions. (“Lots of snow. Might need snow blower. Better put on coat. Check gas real quick — yep, we’re good. Start it up.”) It’s a very fluid mental process.
Here I have a requirement (“get rid of the snow on my driveway”) and I factor that down into some data (“snow blower, coat, gas”) and actions (“wear coat”, “check gas”, “start snow blower”). But, again, it’s a fluid, heuristic process that can rely on algorithms when need be but does not actually put them front and center.
Relating the Programming to the Testing
I just said that I believe computer science concepts, particularly object-oriented analysis and design, help foster this kind of thing. So here’s where my thinking is at on that:
- OOP: the binding of data and methods into a class definition.
- Testing: the combination of data conditions and test conditions into a scenario.
- OOP: facilitates creating reusable code.
- Testing: composable sets of tests that can be mixed and matched based on need.
- OOP: allows for “information hiding,” which safeguards data from improper access.
- Testing: “hide the implementation details.” Talk in the language of intent, revealing what the test is checking (the interface) rather than the mechanics of how each task (action) is carried out.
- OOP: allows for polymorphism, which lets you create multiple definitions for operators and functions, with the programming context determining which definition is used.
- Testing: guiding scenarios (or tours, to use a popular technique) that take on different facets based on the context in which the tour is being carried out.
- OOP: allows for inheritance, which lets you derive new classes from old ones.
- Testing: composable tests that can be constructed on demand and, in some cases, emerge based on need.
The craft of programming, at least in part, is the factoring of a set of requirements into a set of data structures and functions. The craft of testing, at least in part, is the factoring of a set of requirements into data and actions. To me, there is a parallel there.
Yet that very process that guides us so well in life — “get snow off driveway” — seemingly gets turned off when testers look at their own work artifacts. And that’s too bad. I’m finding that many testers seem to be stuck in the procedural, algorithmic way of thinking and are not able to progress to a more heuristic approach that looks at the entirety of a problem.
Just as developers have learned to think in terms of concepts before tasks, such as with object-oriented programming, testers have often not made that same mental shift. That’s why I say I do believe testers should learn aspects of computer science, not so much so that they can program in C++ or Java or whatever, but rather so they can learn the thinking of design and construction. Because that is what testers do: they create a tangible, constructed work product based on design thinking.
But what teaches testers to do that and to do it well? Not much, from what I’ve seen.
To put a little basis around this, I mentioned scenarios before, so I’ll stick with that here. I do this because thinking heuristically can matter a lot when performing scenario-based thinking.
The book Scenarios, Stories, Use Cases: Through the Systems Development Life-Cycle says the following:
“Scenarios allow us to take a backward glance. They use a simple, traditional activity — storytelling — to provide a vital missing element, namely a view of the whole of a situation.”
The book also says this:
“The scenario perspective looks at what people do and how people use systems in their work, with concrete instead of abstract descriptions, focus on particular instances rather than generic types, being work- and not technology-driven, open-ended and fragmentary rather than complete and exhaustive, informal, rough, and colloquial instead of formal and rigorous, and with envisioned rather than specified outcomes.”
This is fluid thinking that I believe testers have to engage in as they consider scenarios and think around the area they are testing. The book is talking about those thinking skills in the context of system development but, in reality, I have found no better way to internalize those skills than to work as a developer.
Yes, analysis will be about refining details; about making precise what is only vague. Analysis will suggest precision and completeness with respect to what you are working on. But before that there is the thinking that goes on. Again, quoting the book:
“Scenarios are basically holistic. Whether in terse and summary form, or written out at length in a carefully studied sequence — or even in a complex analytical framework with multiple paths ingeniously fitted together — the scenario is in essence, a single thing that conveys a human meaning. And that meaning is enhanced by the reader from their own experience; the story told in the written scenario slips effortlessly into the context of the network of meaning in the reader’s mind.”
That kind of ability to tell a story, to craft a narrative, is something that I think is lacking in many testers, at least those I tend to meet. (Maybe I’m just hanging out with the wrong crowd. Or maybe I missed my calling and should have been a developer.)
And finally the book says:
“Narrative can fulfill these diverse purposes because sequences of events in time are at the heart of our ability to construct meaning.”
Construct meaning. Ah, yes. The point of tests is to communicate … to tell a story, if you will. Those tests must express meaning because those tests are a way that business analysts, developers, and testers can build up a shared notion of quality.
I’m going to slip away from programming for a second here and show a few different associations that come to mind.
I’ll preface these associations by saying that there is an interesting dichotomy that often appears between the act of testing and the tools that help us test. For example, I have run into many testers who have trouble thinking outside of — or around — the testing tools they use. In these cases, testers often focus on how they use their tools, which increases their algorithmic mindset, rather than focusing on why they might need a certain tool. Beyond tools, this even occurs with test techniques, where testers will focus on the exact how (algorithm) of a particular test technique but sometimes without due consideration of why a given test technique should be used in a given situation.
Film and Screenwriting
I think there is an analogy here with film and screenwriting.
Screenwriting (form) and film production (function) are often taught separately. This has often created a divide where there should be a bond. Technical tools have become separated from their end, which is story. Many high-budget films employ dazzling effects but story gets forgotten or, at the very least, compromised. Story has taken a back seat to technical wizardry and style.
A film craftsman knows how to create a specific shot, but the director knows why. Part of a director’s required knowledge is to understand the technical properties of film and then employ them creatively to advance the story. Without the connection between content and technique, you are watching two disjointed parts; the result, more often than not, is a technical exercise.
Directors know how to use the tools and certainly don’t eschew the use of tools. But they also know the limitations of those tools and do not allow the tools to dictate the design such that it compromises what is most important.
Apply this also, perhaps, to fiction writing.
A scene in a film or a book tells a small story with a beginning, middle and end. Each scene involves specific actors, props, direction, and dialogue. To allow for exploring alternatives, most screenwriters or novelists allow the early versions of their scene to be rough sketches of the idea. These are scenarios: ways that the scene could play out. As the final draft or actual filming gets closer, the details of the scene become more precise and thus the scenarios (for that scene) become fewer in number.
A (functional; operational) scenario is a simulation of what happens within the boundaries of a specific scene. The same should apply to a test scenario when looking at a specific feature. Yet the goal here is to focus on scenarios that have a business boundary rather than a product boundary.
The point here being that a writer is not so concerned with how a scene is written — whether in screenplay, stage play, or novel — but rather with why the scene has meaning and why the structure it has is necessary. There’s not an algorithm for writing a scene necessarily; there are heuristics.
In acting (play acting or on film), an actor relies on another actor to best emote through a scene. They have to react to what the other person is doing. It’s a feedback loop. An emotive event is something that happens when there is an emotional response. Likewise, a business event is something that happens to which there is a business response. The response to a business event is a business use case. You discover requirements by building scenarios for the business event response.
The trick here is that you do rely on emotions and how other people are responding to you and your ideas. You see where they are confused, you see where they are upset, you see where they are happy. As testers, if you are emotionally frustrated while acting out a scenario because it’s a pain to do, then perhaps your users will feel that same way. Harnessing emotion — something actors are trained to do in the context of a narrative — is an important skill.
I believe in everything I say here yet I also believe I have not found the best way to express it. One common element of those last bits — about screenwriting, fiction, and acting — is the ability to convey meaning and a shared experience. Part of that means an accurate representation of a particular reality that is being dealt with.
Those “squishy” aspects have to be merged into the more technical aspects of the discipline — both programming and testing. I think when you do get to a point where that conceptual merging starts to take place, you get to where an actual merging of programmers and testers happens. When that happens, you start to create a true development team, two aspects of which just happen to be programming and testing.