There’s plenty of talk about how developers should think more like testers. But there’s far less discussion about the corollary: testers learning to think more like developers. So let’s talk about that.
When I say there is not as much emphasis on “testers learning to think more like developers”, some of you may think back to my thoughts on the
rise of the technocrats. Isn’t the whole SDET concept the idea of asking testers to think like developers?
I would argue no, it’s not. It’s largely been about forcing testers to become developers. And therein lies some of the problem because it conflates “doing” with “becoming.” While I have written about the dangers of testing turning into a technocracy, I’ve also written about
testers writing tools and a series of
learning posts, mostly focused on technical learning. I’ve also talked about
testers spiking with tools, similar to how developers perform “spikes” for exploration and learning.
It
is important for testers to better accommodate and understand the mindset of developers. You can start to understand a mindset by
doing, as opposed to
becoming. So here I want to talk about that idea: about testers better understanding the developer mindset, particularly when it comes to approaches like TDD. And I pick TDD in particular because sometimes this approach is used as a framing device to suggest we don’t need specialist testers.
The main point of this post is in the final section (
Developers and Testers: In Harmony). But I need to set up some context for that to make sense.
Frame Your Thinking
So how do you “
do but not
become?” Well, clearly writing automation is a form of coding activity. Writing any sort of tooling that supports testing is, in fact, a good way to get more in line with developer thinking. Notice I didn’t say “developer skills”; I’m focusing here only on developer thinking. You can learn to think like developers without having to gather all of their skills.
I want to frame developer epistemology, not developer ontology. I want to do this because it’s important for testers and developers to see how alike we are in the former, while still retaining a great deal of autonomy and specialization in the latter.
So let’s see if we can frame our thinking around a different kind of exploration: that which developers routinely do. This will, I hope, provide better understanding of a generalized developer mindset and how, in many respects, it isn’t that far off from a tester mindset.
The Developer Way of Testing
Here’s my experience: developers rarely have trouble writing code. But many do find it extremely hard to write tests
entirely before writing code. And many others question whether it’s actually necessary to be “test first.” Here I’ll point back to my post on how
test-driven is not necessarily test-first.
I believe that one reason for this is that the act of programming, separate and distinct from the wider act of “development,” is really a series of small experiments carried out in sequence, every single time you sit down to code something. Programming — and, again more broadly, development — is an act of continuous discoveries.
So, tester, put yourself in the shoes of a developer …
You write some code. You more than likely will see it fail initially. You might play around with it a bit, read up on whatever it is you’re trying to do, try to code it a bit more, maybe try out a few different paths.
Eventually, through small trials and errors, you get the code to work. This is a very organic process. Even with my broad simplifications here, this
is effectively how code gets written. This, ultimately,
is how applications, made up of numerous features, get created.
Developers and Testers Explore
Testers often say they have to explore
something in order to actually perform the act of testing.
That “something” can be a working application just as it can be a set of requirements. What you need is, basically, anything — abstract or tangible — that can be exercised with inputs and outputs, and about which observations are possible. Those observations can be nothing more than how a person responds to questions about what something should do. That person is someone you can “feed” inputs to and get output from.
If we have literally nothing to go on — how could we test? We really can’t. There has to be something we can latch onto, reason about, and observe.
So let’s get back to our developers here. Let’s say you’re a developer and you don’t have a clear picture of the code. How, in that instance, can you write tests first? This is like asking a tester how they can explore when they don’t necessarily have the full understanding.
As that developer, while it’s true that you may not have the code you need — obviously, since that’s what you’ll be writing — it’s also true that you
do have the surrounding context (i.e., other code). You also have your experience. And you probably have some idea of what it is you want to code in the broad strokes; for example, the type of feature you are building and the abilities it must support.
Is that enough to be “test first”, though? That’s a question with very contextual answers. But here’s what I think many developers do, at least.
Testing is Continuous Discovery
First, you take small steps. If a test appears too hard to implement, chances are you’re taking on more than you should. To implement a feature using test-first coding, you would really have to work through a series of tests. This could be as few as one or as many as … well, as many as it takes. The actual numbers don’t matter too much.
Rather the goal is to think of each test as a step in an experiment. Each step is small, is distinct, and moves you forward in your knowledge.
I cannot emphasize those last two points enough. That is effectively a form of exploration.
Does TDD help here? This is an important point for testers to understand about the developer mindset. This question reflects the difference between testing for design (as a design activity) and testing for verification (as an execution activity).
In strict TDD you would avoid writing a test that you expect to pass, because a passing test doesn’t normally drive you to change the code. And that is what TDD is essentially about: using tests to probe not just behavior but the context in which that behavior is situated, and to put pressure on the design. The idea is that each test should take you from one delta of working code to the next delta of working code with a small additional capability.
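To make that rhythm concrete, here is a minimal sketch in Python. The `slug()` function and its tests are invented for illustration, not taken from any real library; the point is the shape of the process, where each test fails first and drives one small delta of working code.

```python
def slug(title: str) -> str:
    """Turn a title into a URL slug, grown one small test at a time."""
    # Delta 1 (driven by the first test): lowercase the input.
    # Delta 2 (driven by the second test): replace spaces with hyphens.
    # Delta 3 (driven by the third test): trim surrounding whitespace first.
    return title.strip().lower().replace(" ", "-")

# Each assertion below was written, and failed, before its delta existed:
assert slug("Hello") == "hello"                  # drove delta 1
assert slug("Hello World") == "hello-world"      # drove delta 2
assert slug("  Hello World ") == "hello-world"   # drove delta 3
```

Each assertion is a small experiment: it states an expectation, fails, and forces a change that moves the code from one working delta to the next.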
Developers Experience Difficult Test Writing
As a developer, if a test is hard to write for a given method, maybe that method you’re designing is not as cohesive as it should be. Maybe it has way more coupling than should be in place. The difficulty of the test experiment is a potential warning sign that you may have to break the method you have in mind into smaller methods and drive each one of them through tests.
Here what developers are doing is using exploration and experimentation to better figure out how to isolate tests. And that act of isolation is putting pressure on overall design by forcing the developer to think about how the parts of the system should communicate with each other, how much they should “know” about each other, and so forth.
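As a hypothetical illustration of that pressure on design, consider a report method that was hard to test because it fetched data, computed totals, and formatted output all at once. The names and rules below are invented; the sketch shows how breaking the method apart makes each piece testable in isolation.

```python
def compute_total(line_items):
    """Pure calculation: easy to test with plain data, no setup needed."""
    return sum(qty * price for qty, price in line_items)

def format_total(total):
    """Pure formatting: also easy to test in isolation."""
    return f"Total: ${total:.2f}"

def report(fetch_line_items):
    """Thin composition layer. The fetcher is injected, so a test can
    pass a stub instead of touching a database."""
    return format_total(compute_total(fetch_line_items()))

# Each isolated test is now a small, cheap experiment:
assert compute_total([(2, 3.0), (1, 4.0)]) == 10.0
assert format_total(10.0) == "Total: $10.00"
assert report(lambda: [(2, 3.0), (1, 4.0)]) == "Total: $10.00"
```

The difficulty of testing the original all-in-one method was the warning sign; the decomposition is what the tests forced the design to reveal.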
Developers Experience “Too Many Tests”
You, as a developer, might find that you’re coming up with a relatively large number of special cases and boundary conditions. This is a clue. It often indicates that the team chose the wrong concepts and abstractions for the underlying model. Put another way, the business model is not being reflected in the implementation model as well as it perhaps could be.
Or maybe all these special cases and boundary conditions are telling you that parts of the system are too tightly coupled to be considered in isolation. This might also mean that the system, as it is being designed, is not well aligned with the processes it’s trying to automate or the problems it’s trying to solve.
Discovering an overly complex model during testing can be quite useful. And the earlier that’s done, the better. Instead of accepting the situation and trying to fight the large number of boundary conditions with test management techniques, teams can use this test proliferation as a signal that they need to start remodeling the underlying system or, at the very least, better consider the abstractions they are modeling in the first place.
Personal Example
I have a Stardate Calculator as part of my
Veilus application. (If you want to see it, you have to sign in as “admin / admin” and go to the Stardate Calculator page.) In working with testers on this, what I hope they find is that there can be a massive proliferation of tests based on the specific stardate used as test data. Does that mean the stardate is an incidental? Meaning, is any particular value of consequence over any other particular value? That hopefully leads testers to consider whether perhaps the interface should be referencing an episode title or episode number from one of the many
Star Trek series. And that, hopefully, leads to a question of whether the stardate calculator is solving the wrong problem.
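For flavor, here is a hypothetical sketch of what that proliferation looks like. This is not the actual Veilus implementation; the conversion rule and the values are invented purely to show the shape of the problem.

```python
def stardate_to_year(stardate: float) -> int:
    """Toy conversion, invented for illustration: treat each block of
    1000 stardate units as one year starting from 2323."""
    return 2323 + int(stardate // 1000)

# The proliferation: every case varies only the incidental stardate value.
cases = [
    (41153.7, 2364),
    (42073.1, 2365),
    (43152.4, 2366),
    (44001.4, 2367),
    # ... dozens more, differing only in the input value ...
]
for stardate, expected in cases:
    assert stardate_to_year(stardate) == expected
```

If none of these values is of more consequence than any other, the proliferation is telling us about the model, not about risk, which is exactly the clue that the interface might be solving the wrong problem.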
Developers Focus on Inputs and Outputs
As stated above, treating too many boundary conditions as a signal that the model needs changing helps teams create better software architecture and design, which leads to systems that are much easier to test.
Along with this, you start to better consider the boundaries of the components that make up your feature and, ultimately, your application. Better decoupling between those different components of the model will lead to more focused tests that are more expressive of the problem domain. This is because those tests can now focus on the workflow of data. And a workflow can better be conceptualized around a persona: a person gaining value from the use of your application via the model it expresses.
Developers Focus on Examples
By removing interdependencies, creating better interfaces and higher-level abstractions, we can avoid a combinatorial explosion of inputs and outputs. From a pure test artifact perspective, this allows us to replace large state-transition tables and complex diagrams with several isolated sets of focused key examples. This means that there are fewer tests needed for the same risk coverage. And
that means it’s faster to check the system against such tests, and easier and cheaper to maintain them.
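Here is a sketch of “focused key examples” as table-driven tests. The discount rule and the names are hypothetical; the point is the shape: a few expressive examples per decoupled rule, instead of one giant state-transition table covering every combination at once.

```python
def discount(is_member: bool, order_total: float) -> float:
    """Hypothetical rule: members get 10% off orders of $100 or more."""
    if is_member and order_total >= 100:
        return round(order_total * 0.10, 2)
    return 0.0

# Key examples, one per meaningful distinction -- not every combination:
key_examples = [
    ("member at threshold",        True,  100.00, 10.0),
    ("member below threshold",     True,   99.99,  0.0),
    ("non-member above threshold", False, 150.00,  0.0),
]
for name, member, total, expected in key_examples:
    assert discount(member, total) == expected, name
```

Three well-chosen examples cover the same risk as a full combinatorial table here, and each one reads as a statement about the business rule rather than about the mechanics of the code.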
Personal Example
I have a Project Overlord example, also as part of the
Veilus application. (If you want to see it, you have to sign in as “admin / admin” and go to the Project Overlord page.) I also provided a
series of examples for Overlord functionality.
One of the key goals I want from testers with that example is identifying commonalities in structure. This is often a valuable first step for discovering meaningful groups. Each group can then be restructured to show only the important differences between examples, reducing the cognitive load.
If the underlying model is difficult to describe with a relatively small number of key examples, it might be good to try alternate models. Try a few different approaches and compare them to see which one leads to fewer scenarios and clearer structure for examples.
Overly complex examples, or too many examples, are often a sign that some important business concepts are not explicitly described. A small set of core examples that are easy to understand and at the right level of abstraction is much more effective than hundreds of tests that give the appearance of coverage while actually adding false complexity. This applies whether you are testing at the unit level, the integration level, or a higher system level.
Developers and Testers: In Harmony
The key benefit of better software models is not actually in easier testing — it’s in easier evolution. By reducing the overall complexity of software models, we get systems that are easier to understand, so they will be easier to develop. They will also be cheaper to maintain, because changes will be better localized, and easier to extend because individual parts will be easier to replace.
This is one of the key areas where specialist testers and developers need to be able to meet and understand the various tradeoffs, assumptions, and constraints.
Both testers and developers can probe and evaluate potential models by experimenting with test design before programming even starts. Both groups, in that case, are doing development. Both groups are doing testing. Everyone is doing that in the context of quality assurance, which is not a team but a distributed function that recognizes testing is democratized. Everyone tests, but not everyone is a tester.
Eventually, in this harmony, there will be a divergence: the developers will go on to their speciality, which is the programming portion. Testers will go on to their speciality, which is not just “more testing” but is … ah! But is
what? This gets into what makes the testing role something
uniquely distinct from that of the development role.
The fact that testers often cannot even frame that question well, much less offer viable answers, is why industry and organizational pressures have led to test roles being removed or conflated with development roles.
Earlier in this post, I said I hoped to provide a “better understanding of a generalized developer mindset and how, in many respects, it isn’t that far off from a tester mindset.” But not being that far off does not mean the two are equated. This is not an argument for the SDET focus. It is instead an argument for looking at what makes us very similar while also allowing us to remain very distinct.
Development is a flavor of the Scientific Method. I liked it when it was called Software Engineering — at least that made it sound like it was following defined rules.