In the United States we are currently going through one of our normal rounds of political craziness as we move towards a new election. This is not a political blog and I don’t want to add to the crazy. Thus this post will not discuss current political viewpoints, whether for or against, and will have nothing to do with current candidates. Rather, this post will discuss one specific aspect of politics whose historical context relates to how our testing industry has evolved and continues to evolve.
As many automation engineers know, we’ve been dealing with Marionette, the successor to the Firefox Selenium driver, for some time now. We’re starting to see a light at the end of the tunnel. However, I’m finding a lot of teams are still struggling with what all of this means. Here I’ll talk briefly about what it means in practice.
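To make the switch concrete, here is a minimal sketch of how a team might opt into Marionette from the Selenium Python bindings; the capability flag and geckodriver setup shown are assumptions based on the Selenium 3 transition period, not code taken from this post.

```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Copy the default Firefox capabilities and explicitly request the
# Marionette-based driver (geckodriver) instead of the legacy FirefoxDriver.
capabilities = DesiredCapabilities.FIREFOX.copy()
capabilities["marionette"] = True

# Assumes geckodriver is on the PATH; Selenium proxies WebDriver commands
# through it to Firefox's built-in Marionette automation protocol.
driver = webdriver.Firefox(capabilities=capabilities)
try:
    driver.get("https://example.com")
    print(driver.title)
finally:
    driver.quit()
```

The practical point for teams is that the test code itself barely changes; what changes is which driver sits between Selenium and the browser.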
A while back I talked about what makes testing complicated. To be honest, that post is embarrassingly written as I look back on it. That said, I think there is some value in what it says. But to show how my thinking has been refined, as well as become a bit more operational, let’s piggy-back on my previous “code as specification” posts and look at why testing is challenging.
This post follows on from my “code is a specification” post. I highly recommend reading it first for context, because here I’m going to add a bit to the sample code from that post to illustrate the idea of test code and production code working together to act as an executable specification. I’m also going to focus a bit on how this has relevance to the business.
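Since the sample code from the earlier post isn’t reproduced here, the following is a hypothetical sketch (not that post’s actual sample, and all names are made up) of what an executable specification can look like: the production function states what the system does, and the tests name and check the business-facing behavior.

```python
# Production code: the behavior the business asked for, stated directly.
def discounted_total(subtotal: float, is_member: bool) -> float:
    """Members receive a 10% discount on their order subtotal."""
    return round(subtotal * 0.90, 2) if is_member else subtotal


# Test code: the same rule expressed as a checkable, readable specification.
def test_members_receive_a_ten_percent_discount():
    assert discounted_total(100.00, is_member=True) == 90.00


def test_non_members_pay_the_full_subtotal():
    assert discounted_total(100.00, is_member=False) == 100.00
```

Read together, the function and its tests describe the discount rule in a form that both a developer and a business stakeholder can verify.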
Early on I talked about business needs becoming specs that become code. More recently, in my modern testing posts, I talked about the idea of production code being the specification of behavior. I wasn’t necessarily very descriptive in all of that, however. Let’s see if I can do better here.
It’s become tradition, with a bit of dogma, to point to triangles and quadrants to “explain” things about testing and development. A good case in point is presented in the article Agile Testing Automation. My goal is not to critique the article but rather to use it to highlight what I see as part of the problem. So let’s subject tradition to some rational inquiry and let’s subject dogma to a bit of scrutiny.
In previous posts about the integration / integrated distinction (see part 1 and part 2 of that series), I talked about how there is in fact a distinction and provided a little rationale for why that distinction currently matters. So now let’s talk a little “around” the concept of integration (not integrated) and see where this takes us.
In the previous post in this series, I talked about the counterargument of there being no distinction between integration and integrated. That post ended on a question. In this post, I will start from the presumption that there is a distinction between the terms and explore that a bit.
Here I’ll talk about the difference between the terms integration and integrated when applied to testing. You may read that and say “Um, there is no difference, is there?” Well, let’s talk about it.
A while back I talked about being cross-discipline associative. I did something similar when I asked what time travel could teach us about testing. Let’s see how this works with another domain entirely.