I’ve talked about interviewing testers before and I’ve talked specifically about hiring test specialists. Here I’m going to try to be a bit more concise, yet also a bit more expansive, about exactly what I think it means to look for specialist testers.
I periodically find myself questioning the extent to which it makes sense to blog. I find it’s healthy to go through these periods of reflection and introspection. I often find it’s even healthier to expose these thoughts to others.
I recently talked about a focus on being able to test an AI before you trust an AI to test for you. Here I want to focus a bit more on how worthwhile this idea might be. My goal is not to dampen the spirits of those who want to build such tools; rather, I want to suggest some of the challenges and provide some of the vocabulary. I want to give you a way to frame the current situation with AI and its value as a test-supporting technology.
A lot of testers are talking about how artificial intelligence (AI) or machine learning (ML) will be the next big thing in the test tooling industry. Many are in what seems to be a lemming-like hurry to abdicate their responsibilities to algorithms that will do their thinking for them. Those same testers, however, often have absolutely no idea how to actually test such systems in the first place. So let’s dive into this a bit.
The general idea of a prison of representation is that you are locked into some means or method of being understood. This lock-in can come from the past and, interestingly enough, from the future. I believe testing, as a discipline, is in danger of being in such a prison. Let’s talk about this.
As a specialist tester who has been doing this since the early 1990s, I find it interesting to follow the contours of a notoriously fractious discipline, one often populated by articulate but frustratingly argumentative practitioners. I say “frustratingly” not because argumentation is bad (it isn’t) but because that argumentation often turns practitioners into instinctive contrarians and ruthless, rather than pragmatic, skeptics.
We made it! The final post in the testability series. Here we bring the Benchmarker application to a reasonable close. Then we’ll briefly recap the journey we’ve taken together.
We’re continuing to build out our Benchmarker application, putting pressure on design as we go and keeping testability front-and-center as the quality attribute we want to provide, enhance, and maintain. Keep in mind we’re still slogging toward value, assuming that, for the most part, we’re handling correctness as we go along.
In the previous post we ended up creating tests with a context. That context allowed us to bridge the gap between correctness and value while also continuing to put focus on testability. We saw some warning signs along the way but, overall, made progress. Here we’ll continue that progress and also start to see that while testability is something to strive for, striving for it by itself guarantees us very little.
Here we’ll continue on from the previous posts, getting more and more into aligning correctness of implementation with value for the business. We’re also going to look a bit at that line where the “unit test” starts to shade into an “integration test” and thus where a “programmer test” might start becoming a “customer test.”
Here we’ll continue on from the first and second posts in this series. We made a good start by looking at the idea of correctness and attempting to encode our assumptions about that correctness in the form of code that was driven by tests. So let’s keep evolving our Benchmarker application.
We’re going to continue on from the first post in this series by starting to build our Benchmarker application. In the first post we considered design pressure on value. Now we’re going to get into correctness. Value and correctness are two sides of the testability coin. So let’s get started.
I talked before about whether testers should be developers and I also talked about the intersection of development and testing with a focus on testability. Here I want to make that intersection a little more actionable by considering an example over a series of posts.
Part of achieving quality in software means treating testability as a primary quality attribute. Once you do that, you can then adapt your requirements and development styles from that point of view. Whether you call that “agile”, “lean”, “scrappy” or whatever else is largely beside the point. The focus is on testability. But let’s talk about what that means.