In my previous post on intersections of testing, I set the stage for how testing is an activity that takes place at various points of intersection. Here I want to conceptualize that idea a bit more and provide some focus on what it means from an operational standpoint.
By the way, I should note that I’m talking about the “intersections of testing” and not “testing at the intersections.” If that distinction is lost on you, don’t worry. But if the distinction does mean something to you, hopefully that statement clarifies my intentions in this post and the previous one.
My previous post started off by indicating a discussion around naming things “integration tests” versus naming them “integrated tests.” The reason I like the discussion that tends to occur with very opinionated points of view — and what to name things is pretty much always highly opinionated — is that it forces us to figure out the fundamentals of how we conceptualize our activities.
Testers and developers often seem to make some of the mechanics of testing harder than they are. There’s a very simple fact that everyone should keep in mind: testing is a human activity. Not only is it an activity done by humans (as opposed to tools), but it is an activity that is largely innate.
Human beings have been doing some form of testing since we evolved as a species. Because testing, as a methodological and operational approach, is something that evolved with us, it had to be fundamentally simple. Anything too complicated (or too slow!) would have assured our destruction as part of an evolutionary process.
Going with the intersection theme, testing requires an intersection between the tester, their sources of knowledge, and whatever it is they are testing. This particular type of intersection is important because testing is a cognitive engagement process that is predicated upon curiosity and experimentation backed up by incremental knowledge and iterative experience.
At its most fundamental, testing is about applying inputs, performing actions, and evaluating outputs. That is what we have been doing as a species for quite literally millions of years. Yes, there’s lots of thinking that goes on and, as our context has become more complex than that of our evolutionary ancestors, that thinking has by necessity become much more abstract. But, even with that being recognized and said, when you apply a reductionist stance, what you get is really: input --> action --> output.
So how does this apply to the idea of a tester evaluating Joe’s claim — mentioned in the last post — that “integrated tests are a scam”? How does this help testers, whether working at the code level or not, evaluate their own activities?
A Conceptual Example
Let’s keep things fairly abstract for the moment and say I have a system and that this system has two components. Those components are G and A. If it helps you to have something more concrete in mind, G is a Game Engine and A is the Asset Engine that the Game Engine uses, where “assets” are graphics and sound files.
Going with the earlier discussion, and with the distinctions that got Joe into trouble, I can test these things via integration tests — in which I isolate both but mock and/or stub one or the other — or integrated tests — in which I test the two components working together, communicating and collaborating with each other.
Keep in mind our fundamentals. In the input --> action --> output dynamic, however it may manifest, you are often testing one of two things: you are testing state or you are testing interactions.
This applies in both the code and non-code context. In fact, check out my post on basis path testing for how this particular testing technique applies the same way, regardless of whether being considered at the code level or the application interface level. In this context, testing state means you’re verifying that the workflow of some given operation returns the right results. Testing interactions means you’re verifying that the workflow of some given operation handles certain actions (calls certain methods, if looking at code) properly.
By way of example, you might want to consider my Stardate example, which I talked about in my post on the BDD lure and trap. In that example, I could test what the page displays (the output), but I can also test what led to that output (the calculation). I could test the latter without the former, but I could not (reliably) test the former without the latter.
So let’s go back to my abstract components. With state-based testing, we can verify that G keeps track of A or at least what A provides. The more interesting question, though, is whether a message sent by G is received by A. So let’s consider our paths. Our paths are simple, assuming only these two entities:
G --> A
G <-- A
Here G and A are collaborators. They also have a contract with each other. More on that in a bit. Now, I can put an interface -- let's call it I -- in between the components to make the discussion easier. This interface may even make the testing easier. (Maybe.) Don't worry so much about what the "interface" is or even means for a moment. Let's first consider this going one way:
G --> I --> A
If you read Ben's thoughts on why interaction testing is needed, you'll see that with interaction testing, testing becomes about the behavior of objects rather than a function of the system. Interaction testing is thus where tests assert how an object interacts with its collaborators rather than how it changes state. I worded all that very much as a developer likely would. But you can consider this without considering code-level constructs. The "behavior of objects" can be replaced with the "behavior of components" or "behavior of features" and you can think about things the same way.
As a somewhat important point, at least to me, I believe it's the facility to switch between these modes of thought while recognizing the inherent similarities that gives power to the intersections of testing I've been talking about.
There are two key questions for something that is the subject of an interaction test:
- Do I ask my collaborators the right questions?
- Can I handle all their responses?
So in this abstract case what I'm asking with my interaction test is this:
- Does G ask I the right questions?
From a code perspective, this is really asking:
- Does G invoke the right method on I for the right reason with the right parameters?
Again, that's a very code-specific way of putting it. In a non-code context, however, this is really just saying that a particular workflow exists and is followed. Consider again my Stardate example: if someone puts in a valid stardate number and clicks the "Convert" button, the output they expect is some calendar date. So the right method (convert stardate) is called for the right reason (convert button clicked) with the right parameters (the stardate value provided).
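To make that concrete at the code level, here's a minimal sketch of an interaction test. All the names here (ConvertPage, the convert method, the specific stardate value) are hypothetical stand-ins rather than anything from a real codebase; the Mock plays the role of the interface I, and the assertion checks only that G asked the right question.

```python
from unittest.mock import Mock

# Hypothetical sketch: ConvertPage stands in for the page logic (G),
# and the Mock stands in for the interface (I).
class ConvertPage:
    def __init__(self, converter):
        self.converter = converter
        self.output = None

    def click_convert(self, stardate):
        # The "right question": invoke convert for the right reason
        # (the button was clicked) with the right parameter.
        self.output = self.converter.convert(stardate)

converter = Mock()                       # stand-in for I
converter.convert.return_value = "2364-02-26"
page = ConvertPage(converter)
page.click_convert(41153.7)

# Interaction assertion: G invoked the right method on I, exactly
# once, with the right parameter.
converter.convert.assert_called_once_with(41153.7)
print("interaction test passed")
```

Note that nothing here verifies the conversion itself; the test asserts only on the collaboration, which is exactly what makes it an interaction test.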
In the current context, this interaction test answers whether G does the right thing.
Now let's consider something different: contract tests. There are two key questions for the subject of a contract test:
- Do I even try to answer the question you are asking?
- Can I give you the answer you might think I can?
This is really just turning the arrows the other way around:
G <-- I <-- A
So what a contract test is asking is this:
- Can A handle the question in the first place?
This is really asking:
- Does A have a corresponding implementation for the interface on I that G is depending on?
And yet once again, that's somewhat code-specific. In a non-code context, however, this is really just saying that a particular workflow exists and is followed, as with the interaction test. Consider my Stardate example again: when a stardate is converted, there is a very specific mechanism -- a calculation, in this case -- that is being utilized to convert the stardate. The visibility of the output calendar date is entirely dependent on some code outputting a calendar date to a form field. But the accuracy of the output calendar date is entirely dependent upon this particular calculation being called.
In the current context, this contract test answers whether A does the right thing.
So let's just recap for a second here. As I've framed all this, the interaction test answered if G was doing the right thing; the contract test answered if A was doing the right thing. But in a non-code context, what I'm concerned about is whether the correct workflow was followed such that a particular feature is able to provide business value as intended.
So consider the abstraction layer of the tests in this context. Integration tests would be testing G and A, possibly with the use of an interface (I) between them. That interface may in fact be a mock. Integrated tests, on the other hand, would be making sure that G and A were talking to each other as they would.
Focused object tests -- i.e., unit tests -- could tell us if G and A are working as the programmer intended them to. But the integrated test is what's ultimately going to give us the confidence that the combined integrated "system" of G + A is working as the business user needs them to.
Contracts and Interactions: Integration Problems?
So given this, why do we have integrated test problems, such that Joe would call them a "scam"? The problems usually crop up because there's a disconnect between how G thinks the collaboration should go and how the collaborators themselves (A, in this case) view it. This manifests in three ways:
- G asks a question that A can't answer.
- G misses checking for a response that A might give.
- G checks for a response that A can't give.
So maybe this is a good time to finally talk about what is meant by a contract test. A contract test is a test that verifies whether an implementation respects the contract of the interface it implements. So, when you consider an interface like I've been talking about between G and A, this kind of test is answering questions like this:
- Does any implementation of this interface behave the way the contract says it should?
- Does it answer the questions we try to ask it?
- And does it respond the way we think it might?
What does this mean, however? Well, let's consider if I is a mock. In that case, it means that if I remove I, then the answers to those questions should be the same as with I in place. This is where the code-based and non-code-based tests intersect in terms of how they consider implementation.
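One way to sketch that "remove I and the answers stay the same" idea is to run a single contract suite against both the stand-in and the real implementation. All the class names and the conversion formula below are hypothetical.

```python
import datetime

# Sketch of "remove I and the answers should be the same": one contract
# suite run against both the fake (playing the role of I) and the real
# implementation. Names and formula are illustrative only.
class RealConverter:
    def convert(self, stardate):
        days = int((stardate % 1000) / 1000 * 365)
        return (datetime.date(2323 + int(stardate // 1000), 1, 1)
                + datetime.timedelta(days=days))

class FakeConverter:
    """The stand-in I: a canned answer used by faster, isolated tests."""
    def convert(self, stardate):
        return datetime.date(2364, 2, 26)

def contract_suite(impl):
    # The same questions, whichever implementation answers them.
    result = impl.convert(41153.7)
    assert isinstance(result, datetime.date)
    return result

# If the fake honors the contract, swapping in the real implementation
# should leave every answer unchanged.
fake_answer = contract_suite(FakeConverter())
real_answer = contract_suite(RealConverter())
assert fake_answer == real_answer
print("fake and real agree:", fake_answer)
```

When the two disagree, the fake has drifted from the contract, which is precisely the failure mode the next section describes.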
This is a bit of a side point, albeit peripherally related, but you could check out my discussion of a model for a particular feature. If you check that out, don't focus too much on the tool but rather the context in which the tool was being used.
Specifically, in that context, I basically created an interface (I) that essentially replicates a way that planetary mass is calculated. But I could remove that interface and have the tests run against the actual application as well. In both cases, I should get the test to pass, right? You could argue there that I have an integration test and an integrated test. The integration is more looking at one element (the calculation) in isolation. The integrated test is more looking at how that calculation is ultimately being used as part of a working application (web site and all).
Now this is where one could make the argument that "unit testing is a scam" much more so than integrated tests are a scam. After all, regardless of how good my unit tests are, I still always need that interface. The interface will either be some real interface between real components or it will be a mocked interface that those components utilize. Either way, there is a level of integrated tests that must be performed.
So how does any of this discussion help us frame our testing? Well, going back to the abstract example, consider this breakdown from a code perspective:
Considering A
For every test of the type "do you even try to answer my question", there must be a contract test that calls that method with those parameters. For every test of the type "do you return the correct response", there must be a contract test that expects that response.
Considering G
For every test of the type "do I ask this question correctly", there must be a contract test that calls that method with those parameters. For every test of the type "do I handle the response correctly", there must be a contract test for the implementation that returns that response.
Let's apply that to the stardate example as well.
Considering Integration
For every test of the type "do you even try to answer my question", there must be a contract test that uses the actual stardate calculation. For every test of the type "do you return the correct response", there must be a contract test that uses the results of the actual stardate calculation.
Considering Integrated
For every test of the type "do I ask this question correctly", there must be a contract test that operates the convert button with a valid stardate. For every test of the type "do I handle the response correctly", there must be a contract test for the implementation that displays the output.
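That pairing rule can be sketched in miniature: every response a test double is programmed to give on the interaction side must be matched by a contract test showing the real implementation can actually give that response. The names and the canned answer below are hypothetical.

```python
from unittest.mock import Mock

# Hypothetical sketch of the pairing rule between interaction tests
# and contract tests. CANNED_ANSWER is the response the double is
# programmed with; the contract test must back it up.
CANNED_ANSWER = "2364-02-26"

class RealConverter:
    def convert(self, stardate):
        # Illustrative stand-in for the actual calculation.
        return "2364-02-26"

def handle_click(converter, stardate):
    # G's side: ask the question, then handle the response.
    return f"Calendar date: {converter.convert(stardate)}"

# Interaction test: G handles the response correctly, using a double
# programmed with CANNED_ANSWER.
double = Mock()
double.convert.return_value = CANNED_ANSWER
assert handle_click(double, 41153.7) == f"Calendar date: {CANNED_ANSWER}"

# Matching contract test: the real implementation really can return
# the response the double was programmed with.
assert RealConverter().convert(41153.7) == CANNED_ANSWER
print("pairing holds")
```

If the second assertion has no counterpart in your suite, the interaction test is asserting on an answer nothing real can give, which is exactly the disconnect described above.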
And this gets to another fundamental of testing that works in the input --> action --> output context. For any piece of functionality, you are asking it to ask the following question of itself: Do I ask my collaborators the right questions and can I handle their responses? Abstract it up enough and essentially you are talking about features collaborating.
Clive Barker, in "The Book of Blood", had this to say:
The dead have highways ... They have turnpikes and intersections. It is at these intersections, where the crowds of dead mingle and cross, that this forbidden highway is most likely to spill through into our world. Here the barriers that separate one reality from the next are worn thin with the passage of innumerable feet.
Could I pick a more inappropriate quote for a testing post? Knowing me, probably ... but when I read that, what I immediately hooked into was "the barriers that separate one reality from the next." Here in this post, and the previous, what I'm trying to do is look at the "traditional barriers" that spring up between code-based testing and non-code-based testing. I feel these barriers sometimes preclude developers and testers from having discussions at the intersections, from one "reality" to another. When this happens, testing is not the most effective and efficient it could be.
I said previously that "testing is a means of communication at various intersections." Unit testing, via the practice of TDD, is said to be an intersection of design, coding, and debugging. Behavioral testing, via the practice of BDD, has become an intersection of requirements and design.
So what intersection am I talking about here in this post? Beyond the communication divide that sometimes happens with testers and developers, I'm also talking about the different realities of test abstraction. I talked about this a bit with abstraction levels for tests as well as how these levels fit into the craft of testing. This is often the "end-to-end" and "edge-to-edge" intersection, which I see as being some sort of intersection between TDD and BDD.
One of the goals of TDD is to learn about the quality of our design. If our design has problems, the tests will be hard to write and hard to understand. So the idea here is that unit tests can serve as a measure of technical design, but they are isolated (unit and/or integration) tests. They won't speak to the larger business design.
One of the goals of BDD is also to learn about the quality of our design. If it's cumbersome to express the behavior of our applications or services, then our design may have issues. So the idea here is that acceptance tests can serve as a measure of business design. These are inclusive (integrated) tests. These won't speak to the underlying code design.
But there's a commonality there: you want tests that put pressure on design. But the design at what level? Well ... both, right? We're always in the business of doing at least these two things:
- Checking the correctness of the system.
- Checking the health of the design.
That, right there, is the intersection that matters most to me. The only testing that is a scam is the testing that does not let me do that at the level of abstraction most appropriate for what I am concerned about.
So I'll close this post with a thought of where my head is at right now ...
The intersection of testing with the other disciplines -- primarily business analysis and development -- is communicating at the correct level of abstraction in service of putting pressure on the appropriate level of design.