In the past I have questioned whether our test and development practices have become defocused. As most people in the industry know, there has been a lot of comment and debate about whether “TDD is dead.” So let’s talk about that, particularly since this practice is said to sit at the intersection of testing and development. This is probably going to be a bit of a free-range ramble over a few topics, so bear with me as we brave the dark waters.
Probably one of the more interesting statements of this viewpoint was the one by David Heinemeier Hansson in his article “TDD is Dead, Long Live Testing”. David says some interesting things there:
“Over the years, the test-first rhetoric got louder and angrier, though. More mean-spirited. And at times I got sucked into that fundamentalist vortex, feeling bad about not following the true gospel.”
“TDD has been so successful that it’s interwoven in a lot of programmer identities. TDD is not just what they do, it’s who they are.”
Certainly this resonates with me because TDD adherents (particularly the “purists”) started to sound a little shrill to me and certainly took on the characteristics of zealots. Likewise, at least in my experience, I can agree that many people tied their egos up not just in the creation of “TDD” as a defined practice but in the perpetuation of it to sell books and give presentations. Still, you could argue this might just be the people David was hanging out with. So a few other things he said stuck out at me:
“Maybe it was necessary to use test-first as the counterintuitive ram for breaking down the industry’s sorry lack of automated, regression testing.”
“I think that’s the direction we’re heading. Less emphasis on unit tests, because we’re no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests. (Which btw do not need to be so slow any more, thanks to advances in parallelization and cloud runner infrastructure).”
Interesting. I’ll come back to the focus on system testing in a moment. Along with the “TDD is dead” focus, there has also been much talk about “test-induced design damage.” Again, David has led some of the charge on this with this article titled just that: “Test-Induced Design Damage”. One part that stood out to me:
“Such damage is defined as changes to your code that either facilitates a) easier test-first, b) speedy tests, or c) unit tests, but does so by harming the clarity of the code … usually through needless indirection and conceptual overhead. Code that is warped out of shape solely to accommodate testing objectives.”
So What’s the Issue Here?
When you boil it down to some particulars, David’s premise is really that there is a trade-off between testability and understandability. He argues that in the name of making code “more testable” we have often made it harder to understand by spreading out the understanding among various code constructs, many of which only take on meaning at run-time. Further, we spend more time creating test code constructs than we do simple tests that would do the job just as well.
I’m no expert developer, but I can say that last point really resonated with me when I wrote a Marvel API consumer in Ruby. You can see all the unit tests that I have in my repository but you can also see a trimmed down example of a readable test script that would do the job just as well, with a lot less clutter.
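To make that contrast concrete, here is a minimal sketch of the kind of “readable test script” being described. The `MarvelClient` class and its interface are illustrative stand-ins, not the actual code from my repository, and the client is stubbed with in-memory data so no real API call is involved:

```ruby
# A hypothetical, trimmed-down "readable test script." MarvelClient is a
# stand-in with canned data so this runs without a network call.
class MarvelClient
  def initialize(characters)
    @characters = characters
  end

  def find_character(name)
    @characters[name]
  end
end

client = MarvelClient.new("Hulk" => { "name" => "Hulk", "comics" => 42 })

# The whole "test" reads top to bottom like a description of behavior,
# with no mocks, stubs-of-stubs, or framework constructs in the way.
hulk = client.find_character("Hulk")
raise "expected to find Hulk" unless hulk
raise "wrong name" unless hulk["name"] == "Hulk"
raise "expected some comics" unless hulk["comics"] > 0

puts "all checks passed"
```

The point is not the specific assertions but the shape: a linear script anyone can read, rather than a lattice of test constructs that only take on meaning at run-time.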
But then David said something else interesting to me:
“You do not let your tests drive your design, you let your design drive your tests! The design is going to point you in the right direction of what layer in the MVC cake should get the most test frosting. When you stop driving your design first, and primarily, through your tests, your eyes will open to much more interesting perspectives on the code. The answer to how can I make it better, is how can I make it clearer, not how can I test it faster or more isolated. The design integrity of your system is far more important than being able to test it any particular layer. Stop obsessing about unit tests, embrace backfilling of tests when you’re happy with the design, and strive for overall system clarity as your principle pursuit.”
That was interesting because it aligned entirely with my view that tests make everything clearer, but it went against what I have posted on my blog and, in general, how I have presented testing. Specifically, I do believe tests drive design. But I then realized that, from a code perspective, I completely agreed with David. Further, I realized that to a certain extent I even believed this with non-code-based tests.
Reflecting Design, Not Driving It
When I think back to how tests have driven design at the acceptance, system, or integration level in my career and experience, I realize it was most often a sort of “test concurrently” approach. The tests didn’t so much “drive design” as they did “reflect the collaboration and communication that occurred around design.”
So my key takeaway is that, much like “Agile”, “TDD” has taken on the form of a religion for some people, and thus much groupthink. What has gotten lost is the importance of testing at multiple levels, with a focus on those levels that most impact business value. Along with that, there has been a fading concern for quality as a common perspective: a shared notion of quality that is demonstrable and explicable from artifacts that mirror, as closely as possible, business value requirements with scenario-based tests.
I am coming around to the thinking that “TDD” is most useful when used at the system level, which I suppose is what “BDD” has allegedly been trying to promote, at least according to some. The point here is that the “system” is where your users interact with the software. So, ultimately, that’s where your tests are most important. The first tests I’ve found to be the most useful are those that, as quickly as possible, help us determine if a given feature actually satisfies the business need, as a working, integrated system. When I think about the tests I most don’t want to break, it’s the business-facing, acceptance-driven, system level tests. Various types of refactoring may break many unit tests and I can live with that. But it better not break any of those acceptance tests at the system level.
Focus on Fundamentals
So here are a few fundamentals, at least as I see them:
- The goal of your test suite — of whatever type — is to allow you to use tests to improve the design and for the existing tests to empower you to make feature additions or changes to existing features with less fear of introducing new problems.
- You should first and foremost avoid test suites that are complicated and fragile, rather than putting too much up front emphasis on tests that are fast.
- Any code-based test, including an automated acceptance test, is simply code that does not itself have tests. Think about that, because it is a salient point that is often overlooked.
Focus on Source of Truth
So now here’s a key question: what do you want to be the source of truth? Ask this about both business and code. Well, when you are writing truly test-driven code, the tests are the final source of truth in your application. This means that when there is a discrepancy between the code and the tests, your first assumption is that the test is correct and the code is wrong. On the other hand, so the logic goes, if you’re writing tests after the code, then your assumption must be that the code is the source of truth.
Okay, but go back to the third fundamental. When your tests are code, and further, when your tests are code that does not itself have tests, what does this mean for us? The fact is these ideas apply across unit, integration, and system testing, to use the traditional distinctions. For acceptance, the tests are the source of truth and guide the behavior. For unit and integration, the tests are the source of truth and guide the structure and implementation.
What this does is help us focus on the fact that TDD is a software development technique masquerading as a code-verification tool. BDD is a software development technique that masquerades as an automated testing tool. The fact is that unit and integration tests are a wonderful way of showing that the program does what the developer thinks it does. The corollary, however, is that they are a lousy way of showing that what the developer thinks is what the program actually should do. That’s where the acceptance part comes in.
The problem is that these ideas, which are really simple, tend to get conflated around a sort of semantic shell game. This is touted as a “shift in thinking” but it’s often just equivalent to rearranging the deck chairs on the Titanic. Our “shifts in thinking” have suggested that tests become specs. That behaviors become examples. That assertions, implying already-implemented behavior, must shift to expectations, implying future behavior. But, really, it’s testing by any other name!
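The shell game is easy to demonstrate in miniature. The snippet below hand-rolls both vocabularies (these `assert_equal`, `expect`, and `to_eq` helpers are illustrative, not Minitest or RSpec) to show that the assertion and the expectation perform the identical comparison; only the framing shifts:

```ruby
# "Assertion" style: verifies already-computed behavior.
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# "Expectation" style: the same comparison, reworded to sound like
# future behavior. Hand-rolled here to make the equivalence plain.
Expectation = Struct.new(:actual) do
  def to_eq(expected)
    raise "expected #{expected.inspect}, got #{actual.inspect}" unless actual == expected
  end
end

def expect(actual)
  Expectation.new(actual)
end

total = [1, 2, 3].sum

assert_equal 6, total   # "the code did this"
expect(total).to_eq 6   # "the code should do this" -- same check
```

Both lines raise on exactly the same condition; the difference is vocabulary, which is the point.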
It’s Not an Either-Or
In unit and integration, tests act as a universal client for the entire codebase, guiding all the code to have clean interactions between parts because the tests, acting as a third-party, have to get in between all the parts of the code to work. In acceptance, the tests act as a universal client for the behavior of the code base separate from its specific implementation details. None of this requires “test first” but it does require the idea that tests and design work closely together and, sometimes, are informed by actual implementation. This is no different than how a fiction writer often needs something available to edit before they can actually understand the design of their story.
When the tests are written first, in very close intertwined proximity to the code, those tests can encourage a good structure with low coupling (meaning different parts of the code have minimal dependencies on each other) and high cohesion (meaning code that is in the same unit is all related). That can happen. That’s for unit and integration. For acceptance, the idea is to encourage a separation of behavior from implementation, and to keep functional units of behavior related. That, too, can suggest how the internals of the application are coupled and how much and to what extent they cohere. But, again, none of this requires “test first.”
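A small sketch of what “the test as a third-party client encouraging low coupling” looks like in practice. All names here (`ReportBuilder`, `FakeSource`) are illustrative, not from any real codebase; the point is the seam:

```ruby
# Because a test has to slide in between the parts, ReportBuilder takes
# its data source as a constructor argument instead of instantiating a
# concrete database class internally.
class ReportBuilder
  # Accepting any object that responds to #rows keeps coupling low:
  # the builder depends on an interface, not on a concrete class.
  def initialize(source)
    @source = source
  end

  def build
    @source.rows.map { |r| "#{r[:name]}: #{r[:total]}" }
  end
end

# The test supplies a trivial stand-in, which is only possible
# because the seam exists.
FakeSource = Struct.new(:rows)

report = ReportBuilder.new(FakeSource.new([{ name: "Q1", total: 100 }])).build

raise "unexpected report" unless report == ["Q1: 100"]
```

Whether the seam was created before or after the production code, the structural benefit is the same, which is the article’s point: the decoupling comes from the test acting as a client, not from writing it first.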
So is TDD dead? Well, it’s test-driven development, not test-first development. And I think the conflation of those two terms is what has led to most of the problems. But another conflation that has occurred is that TDD must be at the code level, and that’s simply not true. What that thinking has done, however, is get us BDD (thus suggesting that “behavior” is different from “test”) and then ATDD (thus suggesting that unit tests do not serve an acceptance purpose).
This, tester folk, is what happens when you let well-intentioned developers define and dictate the testing process as it relates to software development. As testers, it’s this kind of thinking that we have to deal with, mitigate, and ultimately re-orient.