We’re continuing to build out our Benchmarker application, putting pressure on design as we go and keeping testability front-and-center as the quality attribute we want to provide, enhance, and maintain. Keep in mind we’re still slogging toward value, assuming that, for the most part, we’re handling correctness as we go along.
In the previous post we ended up creating tests with a context. And that context was allowing us to bridge the gap between correctness and value while also continuing to put focus on testability. We saw some warning signs along the way but, overall, made progress. Here we’ll continue that progress and also start to see that while testability is something to strive for, striving for it by itself guarantees us very little.
Here we’ll continue on from the previous posts, getting more and more into aligning correctness of implementation with value for the business. We’re also going to look a bit at that line where the “unit test” starts to shade into an “integration test” and thus where a “programmer test” might start becoming a “customer test.”
Here we’ll continue on from the first and second posts in this series. We made a good start by looking at the idea of correctness and attempting to encode our assumptions about that correctness in the form of code that was driven by tests. So let’s keep evolving our Benchmarker application.
We’re going to continue on from the first post in this series by starting to build our Benchmarker application. In the first post we considered design pressure on value. Now we’re going to get into correctness. Value and correctness are two sides of the testability coin. So let’s get started.
I talked before about whether testers should be developers and I also talked about the intersection of development and testing with a focus on testability. Here I want to make that intersection a little more actionable by considering an example over a series of posts.
Part of achieving quality in software means treating testability as a primary quality attribute. Once you do that, you can then adapt your requirements and development styles from that point of view. Whether you call that “agile”, “lean”, “scrappy” or whatever else is largely beside the point. The focus is on testability. But let’s talk about what that means.
Like most disciplines, testing has some so-called truisms that get passed around, many of which are blindly accepted. The problem is that our discipline requires a bit of nuance that the truisms, even when accurate, tend to gloss over. So let’s dig into a few of these.
I personally think the answer to my title question is an obvious “yes.” But I do see a lot of discussions about how the DevOps movement and/or culture has “killed” testing or has removed the need for testers. Let’s talk about that.
There are groups of testers out there right now denying that the term “manual testing,” or the reality behind it, exists or has ever existed. To me this is a bit of historical revisionism. Let’s talk about this.
I talked before about tradition and dogma in the testing field. It’s often interesting to see how the ideas that get passed off as wisdom in the testing world come about. Let’s take one example and break it down.
I see a lot of vocal pundits in the testing world saying that artificial intelligence, however they choose to define it, is no threat to future testing. I’ll ignore for a moment that most of these pundits get AI wrong, equating it as they do with “better algorithms,” but I won’t ignore that they seem to forget a key point: will there be a perception that AI can handle future testing?
There is a distinction between “being a programmer” and “being a developer.” Yet those two terms get conflated in our industry quite a bit. The idea of a tester also gets conflated with … which? Programmer or Developer? That very much depends. Combine what I just said with the idea that many testers feel that DevOps has caused a decline in testing. Is there a correlation there? And should testers become developers? Let’s talk about this.
There is still so much wrong with how testers — even those who will write automation — are interviewed. I talked about this already regarding how technical test interviews are broken and about interviewing technical testers more broadly. Is there really more to say? I think so; let’s see if you agree.