In this two-part post, I want to cover the distinctions between integration testing and integrated testing as well as the distinction between edge-to-edge and end-to-end testing. I also want to show how this thinking should lead testers to think more in terms of contract testing. And, finally, so all of this isn’t entirely theoretical, I’ll provide a React-based code example. That’s a lot to cover so let’s get to it.
Just to set the stage a bit further: there is a programmatic notion of “integration” and a business notion of “integrated” and this has led to a potentially confusing distinction between integration testing and integrated testing. I talked a little about this regarding an integration pact. Here I’ll talk about how this leads us to distinctions of edge-to-edge testing as opposed to end-to-end testing and how this allows testers to start working with developers below the UI, treating most things as a service that has a contract.
I’ll note that I’m going to avoid going into strict contract testing tools like Pact. Instead I just want to show what modern testing looks like and what one aspect of “being a technical tester” means. My focus here is on test specialists who should have a technical enough background that they can test below the UI.
First let’s talk about integration risks. Let’s state an integration risk like this:
There is some particular risk involved with putting things together, and that risk cannot reasonably be analyzed when evaluating units that were designed to be integrated until after the integration actually occurs.
Read that a few times. It can be a lot to unpack. But essentially the risk statement is pretty simple, right? It’s saying that testing units that were designed to be integrated may not find problems until the integration actually happens. Let’s frame a question rather than a statement:
Can all the problems that will happen in integration be found prior to integration?
Well, let’s consider a video that’s been making the rounds:
If, for some reason, the video doesn’t play in the post directly, you can get the file, which is called integrated.mp4
This is a good example of where the “integration / integrated” divide happens.
The sink, for example, has handles and when they are turned, water comes out. There’s a lot of integration going on there, including a distinction between hot and cold water. There is also a drain that keeps water from building up and overflowing the sink while still allowing water to pool to a certain level. Lots of integration, each piece of which could be unit tested. But note a key point:
That integration has nothing to do with the integrated nature of the towel dispenser with the sink.
Likewise, the towel dispenser has a motion sensor and when it’s triggered, a roller is spun that dispenses towels. That’s integration (because multiple components), each of which could have been unit tested. But, again, note a key point:
That integration has nothing to do with the integrated nature of the towel dispenser with the sink.
What’s also interesting to note is that if you turned the right sink handle, the motion sensor on the towel dispenser might not be triggered. Which again points the way to a distinction between integration (sink components as distinct from dispenser components) and integrated (sink and dispenser in proximity).
The Problem of Being Integrated
My point with showing this is that what we have here isn’t an integration problem. It’s an integrated problem.
Integrated (as per current parlance, anyway) means something that involves several layers of abstraction, or subsystems, in combination. The environment is one of those abstractions, just as the browser is the environment for, say, a web app. You’re only building the web app but you do have to care about the wider environment as well.
Thus an integrated view would, in fact, consider the relationship of things, such as their proximity in the case shown in the video or their embedding as in the case of a web app being hosted by a browser. And, of course, that browser is embedded in its own context: a platform such as a mobile device, an operating system, etc.
The (Integrated) Design Risk
There is an important point to note about the example with the sink and towel dispenser and it has to do with the issue of design. Testers often have to approach something with different levels of thinking: analytical, critical, logical, and — most importantly in many cases — system.
So let’s say, in the case of the sink and dispenser above, that all we have is the working implementation. We’re not given any of the requirements that were previously used as the basis for this implementation. Even assuming there were any. So this means we often have to reason about the history that came before us, thinking about intent and motivation.
That’s what systems thinking essentially is: understanding the relations between different ideas and actions and then perceiving the impact of one of them on all the others. This means consideration of a holistic point of view; studying relations and interactions and then making value judgments upon them.
But what I’m describing is after-the-fact: after we have a towel dispenser and sink wherein the proximity is causing an issue. We find the problem as shown in the video via testing as an execution activity. But this very nicely shows why testing has to be considered a design activity and not just an execution activity. Testing putting pressure on design — before implementation — would suggest a proximity heuristic: meaning, is the proximity of these two items something to be considered? The answer hopefully being “yes” in this case.
From Integrations Thinking to Contract Thinking
I’m not necessarily going to ask you to agree with my terms here as I use them. But I at least need you to see where my thinking is at. My view of the world:
- An integration test is a test that checks several classes or functions together.
- An integrated test is a test that checks several layers of abstraction, or subsystems, in combination.
Another way to word that second point is to say that an integrated test potentially checks several problem domains at the same time. This is as opposed to checking a single problem domain in isolation. As an example of what I mean, perhaps for some functionality I’m testing, I have to log in as a campaign admin and create an event. Then I can log in as, say, a campaign manager who assigns people to the event, which sends out emails to them.
A few different domains intersect there. But that’s ultimately what we test: the system has to deliver value. And just being able to log in as an admin is one part of that value. Creating an event is another part. Someone being able to use that event is yet another. Each of those aspects solves a particular problem in the domain.
With that example given, let’s consider a few other points:
- Integration is about the basic correctness of the system (feedback about code design).
- Integrated is about the value of the system (feedback about business design).
In my view, this leads to an interesting dynamic. Let’s see if you agree. Consider:
- Test simplicity is best focused on in unit tests.
- Production faithfulness is best sought with integrated tests.
I say that’s an interesting dynamic because it suggests the following:
- You should try to increase production faithfulness in unit tests.
- You should try to increase test simplicity in integrated tests.
This leads us to yet another interesting dynamic that gets us beyond the traditional testing pyramid (which I’ve talked about as dogma in testing) and gets us more into a diamond-style shape.
You have to imagine that diamond being able to morph and flex a bit, where the middle layer can expand or contract based on need. What the testing industry is slowly coming to realize is that we can utilize micro-test (unit) principles but scale them for macro-test (integrated) concerns. The middle of the diamond — along with its ability to flex — is a way we can calibrate our scale.
But there is a limiting function to both approaches. That’s where integration meets integrated. This is where contracts start to happen. Or, at least, where contract-style thinking starts to happen.
A React Example
Regarding that application I mentioned, you can check out the thanos-react code repo. This is a very simple React application, probably not written all that well on my part. You can clone the repo or just download it as an archive. Once you have it on your machine, go into the thanos-react directory and type the following:
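Assuming the repo follows a standard npm-based setup, as Create React App projects do, that command would be:

```shell
npm install
```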
That will get you all the dependencies. Again, this does require you to have Node and npm available on your operating system of choice.
You can start up the application by typing the following:
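Again assuming the standard npm scripts for a Create React App project, that would be:

```shell
npm start
```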
Then you can navigate to http://localhost:3000/thanos in your browser. You should see the minimalist app come up. There’s not much to the app, so, as a tester, explore it a bit. And start thinking about the tests you would want to run.
Here is what you should see upon going to that URL:
Brief Aside on React
I don’t want you to worry if you don’t know React all that well. So let me provide just a bit of the basis for how React essentially works. If you look in my components directory you’ll see a bunch of files. These files reflect what React calls components. One of those components is a DisplayScreen. Another is a DisplayTimer. There’s a Thanos component that coordinates the application.
This is essentially how React works. You have a series of components. Those components will have properties (called props) and state. Those components can be mounted (meaning, be made ready for use) and unmounted (meaning, be taken out of use). Components can send props to each other.
To put a little meat on just one example, the DisplayScreen component can receive props: displayImage, displayMessage, and onActivate. The DisplayScreen component pulls in other components: DisplayTimer, DisplayContent, and DisplayActivate.
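To make that concrete, here’s a framework-free sketch of how those components might relate: each component is just a function from props to a description of what it renders. The component and prop names mirror the app; the internals are entirely my assumptions, not code from the repo.

```javascript
// Framework-free model of the component relationships described above.
// Names mirror the app; the internals are assumptions for illustration.
function DisplayTimer() {
  // Renders the current time (takes no props in this sketch).
  return { type: 'time', value: new Date().toLocaleTimeString() };
}

function DisplayContent({ displayMessage }) {
  return { type: 'message', value: displayMessage };
}

function DisplayActivate({ onActivate }) {
  return { type: 'slider', activate: onActivate };
}

function DisplayScreen({ displayImage, displayMessage, onActivate }) {
  // DisplayScreen receives props and forwards the relevant ones down to
  // the child components it pulls in, mirroring how props flow in React.
  return {
    image: displayImage,
    children: [
      DisplayTimer(),
      DisplayContent({ displayMessage }),
      DisplayActivate({ onActivate }),
    ],
  };
}
```

The point isn’t the implementation; it’s that each function signature already hints at a contract: what comes in, what goes out, and what gets delegated.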
The Thanos React Conditions
As any test specialist should know, testing is often about determining the conditions that need to be exercised. In the case of the Thanos React app, the following conditions (at minimum) exist:
- Shows the current time (varying unary condition)
- Can show a user-defined display message (varying binary condition: present/absent; accurate text)
- Can show a user-defined display image (varying unary condition: default/custom)
- Has a slide-to-activate (condition)
Now, obviously as a developer, I could test each aspect individually at a unit level. That’s one type of edge-to-edge, of course. Here, though, I want to focus on integration and, as per the focus of this article, I want to start thinking in terms of contracts. A contract defines the expected behavior of your component and what assumptions are reasonable to have about its usage.
Consider that every React component contributes one aspect to the contract:
- Content it renders. (It may render nothing or what it renders may be context-dependent.)
There are other parts of the contract, however, such as:
- Props received
- State held (it may be stateless)
- Behavior (what it does when interacted with)
- Collaboration (does it pass props to another component; does it host another component)
- Side Effects (mounting, unmounting)
So here’s an important point. You, as a tester, can look to basis paths for signs of contracts. (I previously talked a bit about basis paths and test coverage, if you’re interested.)
Basis paths show variation possibilities. These are based on constructs like switches, ifs, loops, and so on. This is a way that developers and testers can have fruitful discussions by speaking in what is approximately, if not exactly, the same language. You, as a tester, can also frame all of this around constraints. Contracts are all about constraints. So developers and testers can imagine a component contract by asking questions of it, like:
- Do I receive any props?
- Do I do anything with the props I receive?
- What components do I render?
- Do I pass anything to those components?
- Do I ever keep anything in state?
- Do I ever invalidate my state when receiving new props?
- When do I update state?
- If I’m interacted with, what do I do?
- If a child component calls a callback that I passed to it, what do I do?
- Side Effects
- Does anything happen when I’m mounted?
- Does anything happen when I’m unmounted?
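To make those last side-effect questions concrete, here’s a framework-free sketch of a timer-style component’s mount/unmount contract. The shape and names are my own for illustration, not taken from the repo.

```javascript
// Sketch of the mount/unmount side of a component contract, modeled
// without React so the contract itself can be exercised directly.
function createTimer(tick) {
  let intervalId = null;
  return {
    // Contract: mounting starts an interval that drives state updates.
    mount() {
      intervalId = setInterval(tick, 1000);
    },
    // Contract: unmounting cleans up the interval so nothing leaks.
    unmount() {
      clearInterval(intervalId);
      intervalId = null;
    },
    isRunning() {
      return intervalId !== null;
    },
  };
}
```

A contract-style test would then assert exactly these promises: the timer runs after mount and stops after unmount, with no interval left behind.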
The Abstraction Level for Testing
Earlier I asked you, as a tester, to start thinking about testing the Thanos React application. And you were probably thinking about these tests at the graphical UI level, right? Meaning the end-to-end tests. And that makes sense. Obviously we want to test as much as we can like a user would.
But there’s also the reality that we want to test often. Very often. Ideally with each and every build and even more ideally with each and every commit. But, as we know, end-to-end tests can be expensive in that context.
What we would like are tests that operate more edge-to-edge. Those are quicker and don’t require the full graphical UI. So you might ask: “Well, isn’t that what unit tests are for?” Sure. Unit tests are a form of isolation testing at the edges; but the edges are very small. In the context of something like React, we deal with components. These components are bigger than units but … what are they?
This is where we get into integration testing. And if we think about integration as components — or services — talking to each other, we realize this takes us into the world of contracts. This is a type of testing that can tell us a whole lot and can do so very quickly. Even better, it can do so closer to the time we make our mistakes rather than waiting for the end-to-end testing to take place.
A powerful question I like to ask teams is this:
If we only did edge-to-edge testing, would we even need to do end-to-end testing?
The answer is yes, we would. But this leads us into thinking about how much and to what extent. I won’t cover all those aspects in this article because I want to keep this focused on how testers should be working with developers when we want to make better decisions sooner by catching our mistakes quicker. But I will frame what I just said in the following way:
- If we are dealing with systems that have isolatable components with well-defined responsibilities …
- … and we have a series of comprehensive component and/or service tests …
- … does this allow us to reduce the need for end-to-end or system tests?
Test specialists should be working with the development team to ask this question and come up with good answers to it.
That should be enough for this post. In the next post, we’ll dig into writing some tests for the Thanos React application. My goal there will be to introduce contract-style thinking along with the expression of that thinking in contract-style tests.