Integration and Integrated, Part 1

Here I’ll talk about the difference between the terms integration and integrated when applied to testing. You may read that and say “Um, there is no difference, is there?” Well, let’s talk about it.

There are a lot of distinctions that the test industry seems to get wrong. Putting on my opinionated hat, one of the most pernicious is positive vs negative testing. I’ve talked about this before and in the years since I’ve written that article, I’m more convinced than ever that testers who use this terminology are not doing themselves or anyone else any favors. Another example would be the notion of “non-functional.” Again, I’ve talked about this before as well.

But these differences are not always just an argument over semantics. They are, instead, differences upon which we conceptualize our practice.

Much as Occam’s Razor warns against needless assumptions, it’s true that a discipline does not necessarily want to come up with distinctions just for the sake of doing so. It’s important that terms have some operational distinction. (See my post on how testers should think operationally.) I’ve certainly found that the distinction between “integrated” and “integration” is one that has an operational difference that can guide behavior.

I should note that I’m particularly grateful to J.B. Rainsberger for providing a lot of thoughtful material upon which I have based my thoughts. That being said, I want to make it clear that I’m not attempting to fully articulate his thoughts. I say that because I don’t want to misrepresent his thinking on this subject.

Argue from the Opposite Side

Now, having said all that, in the first part of this series, let’s actually consider the counter-argument to what I’m saying. Let’s simply treat “integration” and “integrated” as the same thing. Since the term “integration testing” has been around for quite some time, let’s just use that term.

One side of the integration testing problem is to imagine specific dependencies and failure modes. The integration risk is investigating the specific nature of the dependencies. This leads us to ask the following question: What sorts of things might work fine in isolation but get disrupted or fail when forced to work with other components?

All of this presumes some communication between the components, of course. In other words, it presumes there is in fact some integration to be dealt with.

But, really, isn’t there pretty much always?

If you have some components that are totally isolated and that neither send nor receive any information, then it would be interesting to hear what those components actually do in the first place. So to give this bird some flight, let’s assume some communication is possible. These components may just receive, but never send. They may send, but never receive. Or they may do a bit of both.

Integration Means Communicating

For code we generally say that there are dependencies between bits of code. Given two objects, X and Y, then X and Y must communicate and share information with each other. And they do so by sending messages to each other. But let’s break this down.

  • These components can communicate well. Meaning, they send the correct messages, accept the correct messages, do not accept (or gracefully handle) incorrect messages.
  • These components can communicate poorly. Meaning, they send the wrong message, accept the wrong message, don’t send any message at all (when they should), do send a message (when they shouldn’t).
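To make that breakdown concrete, here’s a minimal sketch in Python. The `Sender` and `Receiver` names and the message set are purely illustrative, not from any real system; the point is just that “communicating well” includes gracefully handling an incorrect message rather than failing silently:

```python
class Receiver:
    """A component that accepts correct messages and gracefully
    rejects incorrect ones, rather than failing silently."""

    KNOWN_MESSAGES = {"ping", "status"}

    def handle(self, message):
        if message in self.KNOWN_MESSAGES:
            return f"ok: {message}"
        # Graceful handling of an incorrect message: reject it explicitly.
        return f"rejected: {message}"


class Sender:
    """A component that communicates by sending messages to a receiver."""

    def __init__(self, receiver):
        self.receiver = receiver

    def send(self, message):
        return self.receiver.handle(message)


sender = Sender(Receiver())
print(sender.send("ping"))    # a correct message
print(sender.send("reboot"))  # an incorrect message, handled gracefully
```

Communicating poorly would be any deviation from this: sending `"reboot"` when the contract says `"ping"`, or sending nothing at all when a message was expected.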

But it’s not even that simple, right? After all, there can be performance factors that affect the communication. The intervening medium may not allow X and Y to communicate effectively. For example, perhaps messages get truncated or mangled; or perhaps messages get rerouted and take more time.

Also, some intervening component may alter communication between X and Y. For example, a cache Z sitting between them provides a response to X that Y would otherwise have had to send; in this case, Y was never communicated with at all. Unless, of course, Z had to communicate with Y to determine whether what it had in cache was up to date.

There is also potential integration. X and Y may have numerous ways to communicate information. But perhaps in a given context, only one bit of that is being used. Say, for example, [UsersDisplayed]. X has that information on hand. But if Y asks X about [UsersDisplayed,TotalUsers], well then some of that information must come from a database. That database may or may not be in our control. Either way, the database is getting X the information so that X can send it to Y.
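Here’s one way to sketch that potential integration in Python. All the names and values here are hypothetical, chosen only to mirror the [UsersDisplayed] and [TotalUsers] example: which fields Y asks for determines how much integration actually gets exercised.

```python
class Database:
    """Stands in for a data store that may or may not be under our control."""
    def total_users(self):
        return 1250


class X:
    """X has UsersDisplayed on hand; TotalUsers must come from the database."""
    def __init__(self, database):
        self.users_displayed = 25
        self.database = database

    def query(self, fields):
        result = {}
        if "UsersDisplayed" in fields:
            result["UsersDisplayed"] = self.users_displayed     # local data only
        if "TotalUsers" in fields:
            result["TotalUsers"] = self.database.total_users()  # requires the database
        return result


class Y:
    """Y asks X for information by naming the fields it wants."""
    def __init__(self, x):
        self.x = x

    def report(self, fields):
        return self.x.query(fields)


y = Y(X(Database()))
print(y.report(["UsersDisplayed"]))                # database never touched
print(y.report(["UsersDisplayed", "TotalUsers"]))  # database integration exercised
```

The first call exercises none of the database integration; the second does. Both go through the same X-to-Y communication path, which is exactly why “it worked in this context” tells you less than it seems to.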

Wait! Can’t I just do unit testing?

Could unit testing have handled all this?

Well, how could it? More specifically, if “unit” testing is relegated to so-called “micro-tests”, meaning at the level of methods within an object? Certainly with that kind of distinction, unit testing might have worked in the case of Y asking X for [UsersDisplayed] but not for [UsersDisplayed,TotalUsers].

What about if we broaden “unit” to mean interacting objects? Well, okay — but interacting objects are using a communication mechanism. Does the fact that the objects communicate mean they are integrated? If so, am I really doing a “unit test” at this point?

Okay, fine, so what if I mock/stub/double one of the objects? Then I don’t really have communication. Except that you do — it just happens to be with the mock/stub/double. So are you “integration testing” X with Mock-Y? To me, “mocking” and “integrating” would be two different things. So I would argue any checks that use a mock could be considered “unit” testing but not “integration testing.”
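To illustrate the distinction, here’s a small sketch using Python’s `unittest.mock`. The class and method names are hypothetical; the point is that the communication genuinely happens, but with the double, so nothing here demonstrates that the real Y actually sends or accepts these messages:

```python
from unittest.mock import Mock

class X:
    """X depends on a collaborator that answers total_users()."""
    def __init__(self, collaborator):
        self.collaborator = collaborator

    def summary(self):
        return f"total users: {self.collaborator.total_users()}"


# A "unit" check: X talks to Mock-Y, not to the real Y.
mock_y = Mock()
mock_y.total_users.return_value = 42

x = X(mock_y)
print(x.summary())  # X communicated, but only with the double

# We can verify the message was sent, but only to Mock-Y.
mock_y.total_users.assert_called_once()
```

This check tells us X sends the right message and handles the response. It tells us nothing about whether the real Y honors that contract, which is precisely the gap the mock leaves open.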

So the reason I can’t just rely on unit testing is because I need the actual entities in the system talking to each other, sending and receiving messages.

Do Components Equate to the System?

But let’s not give up just yet on that notion of unit testing.

Let’s be simplistic and say the unit tests provide a test — or set of tests — for each of the components. But not for the “final” product. Yet obviously we want to test the final product, right?

Okay, then let’s ask ourselves this: why, if we have tested the final components, would that not be the same thing as testing the final product?

This is a question I find many testers think they have the answers to but, more often than not, don’t.

Let me put it this way: when you test the final integrated product, the individual units don’t “know” that any other unit is anywhere nearby, right? Each line of code does what it does without checking whether other lines of code are on the same system, right? Or am I wrong? If the units are being units, why can’t we rely on unit testing to sort everything out?

The Units of Construction

Going with my particle physics discussion, let’s say you were to build a particle collider. You need a lot of things for this. You need storage rings, which store the particle beams. You need detectors, that look for radiation and other particle debris from collisions. You need electromagnets to keep the particles confined to a narrow beam so that you can guide the particles into collisions. You need vacuum systems, which remove air and dust from the tube of the collider. You need monitoring systems that can determine when collisions happen and what happened as a result.

Now, each of those components can be of high quality. They can be precision made and tested. But, together, they could make a very poor accelerator. Let’s say my magnets are powerful, but not quite powerful enough for my experiments. Let’s say my vacuum system removes particulate matter, but not often enough. Let’s say my monitoring system cannot work quickly enough to process the interactions from a collision.

Yet each component, by itself, is perfectly fine. The collider as a whole, however, is not so great. How much of that is due to the fact that the units are in fact not performing the functions that the other units need them to perform? For example, if the vacuum system cannot clear out the collider tube, then you could say that each is not performing the “function” for the other that the other was expecting. Because one function was “keep me clean of dust and particulate matter” (from the collider perspective) and the other was “only generate dust and particulate matter that I can handle” (from the vacuum system perspective).

When you put component X with component Y in some form of relationship, you’ve created a system. The behavior of the overall system comes down to the behavior of X, the behavior of Y, but also how they interact with each other. The overall system behavior is more complex than the components taken individually.
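A tiny sketch can show that emergent complexity. The `Producer` and `Consumer` below, with their made-up rates, are hypothetical: each one meets its own spec perfectly in isolation, yet the system formed by connecting them degrades over time.

```python
class Producer:
    """Fine in isolation: emits exactly 100 items per tick, as designed."""
    RATE = 100
    def emit(self):
        return [object()] * self.RATE


class Consumer:
    """Also fine in isolation: processes up to 60 items per tick, as designed."""
    CAPACITY = 60
    def __init__(self):
        self.backlog = 0
    def process(self, items):
        pending = len(items) + self.backlog
        handled = min(pending, self.CAPACITY)
        self.backlog = pending - handled
        return handled


# Each component meets its own spec, but the *system* falls behind:
# the backlog grows every tick, a failure neither component exhibits alone.
producer, consumer = Producer(), Consumer()
for tick in range(3):
    consumer.process(producer.emit())
print(consumer.backlog)  # 120 after three ticks
```

Unit checks on `Producer` and `Consumer` would both pass. The growing backlog only exists as a property of the interaction.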

So the focus should be on inquiring about the interactions between components, with some emphasis on exposing pathological edge cases where components behave sensibly in isolation but don’t work well together. Think of an electromagnet that is supposed to guide particle beams of a certain luminosity but is placed in a system that has a higher luminosity.

But Do They Truly Work In Isolation?

Particle colliders are a bit out of many people’s frame of reference. So let’s just go with the more common example: you have a mountain bike with gears from a child’s bicycle.

The problem is that the child’s gears work for a child’s bike and the mountain bike gears work for a mountain bike. We might agree they don’t work together. But do we agree that it’s reasonable to say that they work well in isolation?

In fact, they do not work fine in isolation — if they were designed to work with the other mismatched components. But it’s important to note that it would be incorrect to say that the problems between them could not be detected until they were put together (i.e., integrated). In fact, we know very clearly before they were put together that they could not work together in the same bicycle.

The same applies to the particle collider. If I know the luminosities that can be handled, then a particular magnet working as a magnet in isolation is really telling me nothing if I know — even before “integrating” — that this magnet could not handle luminosities in the particle collider.
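One way to picture that “known before integrating” point is as a check against declared specifications. The spec fields and luminosity numbers below are entirely made up; what matters is that the mismatch is detectable from the specifications alone, before anything is assembled:

```python
# Hypothetical specs: a magnet rated for a given luminosity, and a
# collider that operates above that rating.
magnet_spec = {"max_luminosity": 1.0e34}
collider_spec = {"operating_luminosity": 2.0e34}

def compatible(component, system):
    """The mismatch is analyzable from the declared specs alone."""
    return component["max_luminosity"] >= system["operating_luminosity"]

# Pre-integration check: no collider had to be assembled to learn this.
print(compatible(magnet_spec, collider_spec))  # False
```

The magnet still “works as a magnet” in isolation. But if its spec already rules out the system it’s destined for, no integration step was needed to discover the problem.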

Thus the Integration Risk

The spirit of integration risk is this:

  • There is some particular risk in putting things together: a risk that could not reasonably have been analyzed, when evaluating the units that were designed to be integrated, until after the integration actually occurred.

So the question we’re all asking is this: is there in fact such a risk? Or can all the problems that will happen in integration be found prior to integration?

Where Does This Leave Us?

The answers you have to those last questions are indicative of how you frame testing at one of the key points of software construction: where things meet. But there’s a wider focus here that leads us to how developers conceptualize testing and how testers conceptualize testing.

This context was necessary to set up the second part of this discussion, where I’ll try to determine if there is a usefully operational distinction between “integrated” and “integration.”


This article was written by Jeff Nyman

