For those of you who work in agile environments, maybe nothing I say here will be new. Even if you don’t work in an agile environment, you may have found yourself thinking along these lines without being quite sure how to articulate it. I’ve faced that challenge myself when I had to explain to people that the process gates you typically see in a “waterfall process” can be accommodated in an “agile process.” So let’s talk about that.
People reading the agile manifesto often tend, for somewhat obvious reasons, to see it as an either-or document that leads to fracture points. There is no indication of how those fracture points can be healed, or even whether the fracturing is necessary or effective. Like most such manifestos, the agile one is simply a string of ideas, stated in mostly absolutist terms, that are somewhat divorced from their context. Yet critics rightly point out that agile, as a concept, was promoted to deal with some of the “failings” of environments that adhered to waterfall and still ended up with bugs, ran over schedule, and went over budget.
Yet those same issues can plague so-called “agile projects” as well. I’ve worked on many of them. Some might argue, “Well, then that was a case of agile not being done right.” Okay, but the same defense works in reverse: “Well, then that was a case of waterfall not being done right.” And so nothing is really decided. My viewpoint is that I don’t think just having a different set of processes makes the difference. What I do believe makes a difference, and a significant one, is how you interleave the various activities of a process, such that you do the following things:
- Catch problems at the earliest responsible time.
- Avoid duplicating actions.
- Build traceability in.
- Build knowledge transfer in.
So let’s consider the basic software development process, whereby you go from an idea for a product to a working implementation of that product that has been shipped out to customers.
When following any such development process, whether that be waterfall, agile, or some hybrid, you will be carrying out certain activities when building software. I’ve seen a few dangers here. The agile crowd can defocus its efforts by treating all of these traditionally separate activities as one undifferentiated thing. The waterfall crowd can mono-focus its efforts by forcing all aspects of the process to remain separate even when some conflation would help.
Here are some of the initial activities you will often hear about:
- Analysis. You will determine what you need to build. This will involve some form of gathering requirements and refining those requirements. How you gather them, how they are stored, and when you decide to refine them are all details to be decided. Regardless of those details, you will do some form of analysis.
- Planning. You will at some point work to figure out how long you think it will take to build what you believe is required. Even if you don’t have planning meetings or written plans, the act of planning will be done.
- Design. You will probably spend some amount of time determining how to put together what you are building. The amount of time you spend may be highly variable but you will do this to some extent.
I’ll grant that, to many people, laying things out like this smacks of waterfall. To them, this is “big design up front.” Of course, in reality it doesn’t have to be. You can spend very little time on the above tasks. And do quite poorly. You can spend a whole lot of time on the above tasks. And do quite poorly.
So the question is not whether you do analysis, planning, and design, and the success factor is not necessarily the time you spend on them. I believe that what matters is how you interleave those activities, not only with each other but also with the other work you will be doing, such as coding and testing.
Before we consider those things, let’s look at reducing the number of artifacts by seeing what, if anything, can be interleaved.
Analysis and design can be the same thing if you iterate through them. As you perform analysis, you can encode that analysis as a type of design. Start treating the requirements and the design as the same thing. They are interleaved elements. While you are doing this, you can do some rudimentary planning to at least figure out how long such designs would take to implement. This immediately argues for smaller chunks of design, since smaller chunks should be easier to estimate in terms of time and effort.
Doesn’t that sort of sound like what agile folks promote, even if not in those exact words?
Okay, so now let’s look at two other aspects that “traditionally” will “follow on” from the above tasks.
- Coding. Here you will construct the software using some combination of programming language, database language, markup language, and so forth. This is where you actually build the design.
- Testing. Here you will ensure that what you built actually works.
The further the coding is from the analysis+design, the more likely you are to have bugs creep in, because you will have to trace between the two artifacts. Likewise, the further testing is from the coding activity, the more likely you are to have a whole different set of bugs creep in.
The separation between these activities is like cracks in the foundation of a house that allow bugs to get into your home. Same thing for your projects, really.
I said earlier that as you perform analysis “you can encode that analysis as a type of design.” So now how about this continuation of that idea: what if your design was encoded as a set of tests? That would mean testing is a design activity. Further, those tests can be written in such a way that they drive development tasks. Modern developers often use a practice known as test-driven development. This is a technique where you specify the application you are writing in terms of tests. You write tests prior to writing the production code. These are very technology-focused or code-based tests, however. The tests I’m talking about are very business-focused or acceptance-based tests.
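To make that distinction a bit more concrete, here is a minimal sketch in Python. The feature (a simple cart discount) and every class, function, and test name in it are invented purely for illustration; the point is only the difference in vocabulary between a technology-focused test and a business-focused one.

```python
class Cart:
    """Tiny stand-in production class, included only so the tests below run."""

    def __init__(self, prices):
        self.prices = list(prices)

    def total(self, discount_percent=0):
        subtotal = sum(self.prices)
        return round(subtotal * (1 - discount_percent / 100), 2)


# Technology-focused (code-based) test: written in the vocabulary of the code.
def test_total_applies_percentage_discount():
    cart = Cart([20.00, 5.00])
    assert cart.total(discount_percent=10) == 22.50


# Business-focused (acceptance-based) test: the same mechanism, but the name
# and setup describe intent ("a returning customer gets 10% off"), not the API.
def test_returning_customer_gets_ten_percent_off_their_order():
    returning_customer_discount = 10  # the business rule under test
    order = Cart([20.00, 5.00])
    assert order.total(discount_percent=returning_customer_discount) == 22.50
```

Both tests exercise the same code; what changes is whose language the specification is written in.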
This is where the distinction between TDD and BDD tends to come in. But let’s not let that derail us right now. (It’s really easy to go off on defocusing tangents once the acronym soup starts.) There is a question here of how close the tests and the code should be. This is similar to asking how close the analysis and the design should be. I’ll hold off on that for a bit. Let’s consider a few other tasks in the traditional development life cycle.
- Documentation. You have to describe the software you built for various audiences. You might have to describe specific features at a high level for sales and marketing. You might have to describe how to use specific features for training. You might have to describe alternative uses of features for support teams. You have to describe how to use specific features for users.
- Review. You have to review the various products that you are creating, such as tests, code, documents and so on to make sure that they are useful, accurate, and relevant.
Reviewing comes in all through these activities. However, if you say analysis+design+test, then you have potentially made your review a bit easier in that it’s focused on one set of interleaved deliverables. Expanding that idea a bit, what if those tests could serve as a form of documentation as well? Can that happen? Can we have analysis+design+test+documentation? Sure, if you structure your tests in such a way that they can be used as documentation.
Doing so, however, requires a certain mental shift in how to write tests. Acceptance-based tests should be at an intent level. These tests, by their wording, describe what features are implemented and why they would be used. The breakdown of the acceptance tests into functional or behavioral aspects then gets more into the implementation level. The expansion of those acceptance tests then speaks to the how of a feature. It’s that very interleaved nature that allows tests to be a form of documentation. I don’t think anyone would claim they could be a user guide, but they could certainly be used as a means of understanding what features exist, the context in which each feature is used, and the details of how to actually use it.
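As a rough sketch of what that layering can look like, consider how a group of tests might be named so that reading them answers the what, why, and how of a feature. The feature (“password reset”) and all names here are hypothetical, and the test bodies are elided because only the structure matters.

```python
class TestPasswordResetFeature:
    """Why: a user who forgets their password can regain access
    without contacting support."""

    def test_user_can_request_a_reset_link_by_email(self):
        # Intent level: what the feature does.
        ...

    def test_reset_link_expires_after_one_hour(self):
        # Functional/behavioral level: a rule the implementation enforces.
        ...

    def test_expired_link_offers_to_send_a_fresh_one(self):
        # Expansion: how the user actually experiences that rule.
        ...
```

Read top to bottom, that is not a user guide, but it does tell a reader what exists, when it applies, and how it behaves.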
Hmmm. Can this all really work? It sounds fine to write out these activities as “interleaved” and talk about them as if they are “done together” and “serve the same purpose.” But … seriously? Isn’t this really just wishful thinking?
It’s a fair question and I still don’t know the most effective way to respond to it. So let’s focus on one area for a second and consider a specific practice that some people use: diagramming out what they discuss in terms of design. A common diagramming language used is that of the Unified Modeling Language (UML). The UML representation is not the design — it’s a model of the design. The design is the code itself. To the extent that UML — or any modeling artifact — helps you understand the design of the system, it’s valuable and worthwhile. However, once it begins duplicating information that is more readily understood by reading code, UML becomes a burdensome, expensive artifact to maintain.
That very same thought can apply to what I said above. Specifically:
- To the extent that analysis states material that would be better encoded as a design, that analysis becomes burdensome as a separate activity.
- To the extent that design duplicates information that would be more readily understood, or be more actionable, as a test, that design becomes burdensome as a separate activity.
- To the extent that documentation restates what the tests could convey if those tests were modified to better state intent and implementation, that documentation becomes inefficient as a separate activity.
I think those notions are at least a bit more readily understandable to people, even if they disagree with the conclusions I’ve reached. But what about the tests and the code? The code is ultimately very important, of course, because it is what actually makes everything real. Without the code giving life to the application, everything else becomes an exercise in theory. So I’ll go back to the question I posed earlier but held off discussing: how close should the tests and the code be? Well, here are my thoughts:
- For unit-based code tests, obviously the tests and the code are tightly interleaved.
- For integration-based tests, the tests and the code will be interleaved, but not quite as tightly.
- For system-based tests, the tests and the code will clearly not be interleaved.
So you have varying levels of the tests and the code being interleaved. What’s the common ground? I think that the code and the tests can use the same artifacts for their construction: namely, specifications. When you look at it that way, you see the concepts across these categories are the same. For example, consider how unit testing tends to work (a short sketch in code follows the list):
- Write a specification, in the form of a code-based unit test.
- Demonstrate test failure. (It fails because you haven’t implemented the specification yet.)
- Write production code to execute the specification.
- Demonstrate test success.
- Refactor, or rework the code, to ensure that the system still has an optimally clean code base.
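Here is a minimal sketch of that cycle in Python, using an invented example (turning a title into a URL slug); the function and test names are not from any real project. The numbered comments map to the steps above.

```python
# Step 1: the specification, written as a code-based unit test.
def test_slugify_lowercases_words_and_joins_them_with_hyphens():
    assert slugify("Agile Is Not A Process") == "agile-is-not-a-process"


# Step 2: before slugify() below existed, running this test failed with a NameError.

# Step 3: write just enough production code to satisfy the specification.
def slugify(title):
    return "-".join(title.lower().split())


# Step 4: running the test again now passes.

# Step 5: refactor (rename, simplify, remove duplication) while the test
# continues to guard the specified behavior.
```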
Now consider how acceptance testing can work (again, a sketch follows the list):
- Write a specification, in the form of an intent-based acceptance test.
- Demonstrate test failure. (It fails because you haven’t implemented the test specification yet.)
- Write implementation-based functional tests to execute the specification.
- Demonstrate test success.
- Refactor, or refine the tests, to ensure that the system still has an optimally clean test base.
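Under the same caveats (all names invented, the helpers deliberately unimplemented), here is what the acceptance-level version of that cycle might look like in the same style. The xfail marker stands in for step 2: the intent-level specification exists before anything implements it.

```python
import pytest


# Hypothetical helpers for an order-tracking feature; leaving them
# unimplemented is exactly the state that step 2 expects.
def place_order():
    raise NotImplementedError

def ship(order):
    raise NotImplementedError


# Step 1: the specification, as an intent-based acceptance test.
@pytest.mark.xfail(reason="order tracking not implemented yet")
def test_customer_can_track_an_order_after_it_ships():
    order = place_order()
    ship(order)
    assert order.tracking_status == "shipped"

# Steps 3 and 4: implementation-based functional tests (and the production
# code behind them) are written until this test passes, at which point the
# xfail marker comes off.

# Step 5: refactor the tests themselves, extracting helpers like place_order()
# and ship() so the acceptance test stays readable at the intent level.
```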
Wow, doesn’t that sound like we just aligned development and test activities? Think about it:
- Each unit test specifies the appropriate use of a production class.
- Each acceptance test specifies the appropriate use of a production feature.
So … now what?
My point with all this is that if all of these various activities are seen as aspects of a development life cycle prism, wherein different activities come into view depending on how the prism is turned, then you can begin to focus on the similarities of the activities rather than the differences. To some people this may be nothing more than “common sense” and what they are used to. However, I’ve found that for many people these ideas are new or, at the very least, have not been articulated in a way that others can buy into.
As for how you can encode these similarities into a process that remains agile (i.e., flexible and adaptive), I have come to believe that specification workshops are one of the more effective and efficient means of doing so. In a previous post I talked about tests as specifications, and I had already started to talk a little about this when I referred to the brave new world of testing. This post is essentially a continuation of those thoughts.
I plan on exploring these ideas — particularly the specification workshops — more thoroughly in some upcoming posts. Incidentally, all of this is me finding my light bulb moment in the agile world.