I believe that semantics matter. I do realize not all semantics matter equally. But, still: semantics matter. It’s disappointing when otherwise intelligent people seem to dismiss something simply because they feel it’s just semantics. Let’s talk about this.
If you are going to have an AI that “does testing” — as opposed to some other activity like analysis or pattern recognition — you are going to have to move from a focus solely on perception and add the dimension of actions. I’m finding that a lot of folks promising “AI-based testing tools,” as well as those eagerly hoping for them, are very much confusing this distinction. So let’s talk about it and, as we do, let’s create an illustrative example.
In this post I want to explore how a theory of constraints can be combined with cost-of-mistake curves to consider how testing operates, first and foremost, around the concept of design. Keeping design cheap at all times is a value of testing that I rarely see articulated. So here I’ll give that articulation a shot.
I get asked this a lot: why do I stay in testing? I’ve been doing some form of testing since the early 1990s and while my initial opportunities were provided by chance, my career was one of choice. Rather than answer the question directly, I’ll frame this around some questions and answers that may give some insight into how testing has allowed me to answer certain questions in a career-relevant way.
My contention is that specialist testers know enough to not use the term “non-functional.” And if they are in environments where this term is used, they seek to shift people away from this vocabulary. This is one of the ways that I spot specialist testers. Let’s talk about my rationale for this and why I think it’s important.
I’ve talked about interviewing testers before and I’ve talked specifically about hiring test specialists. Here I’m going to try to be a bit more concise, yet also a bit more expansive, about exactly what I think it means to look for specialist testers.
I periodically find myself questioning the extent to which it makes sense to blog. I find it’s healthy to go through these periods of reflection and introspection. I often find it’s even healthier to expose these thoughts to others.
I recently talked about a focus on being able to test an AI before you trust an AI to test for you. Here I want to provide a bit more focus on how worthwhile this idea might be. My goal here is not to dampen the spirits of those who want to build such tools; rather, I want to suggest some of the challenges and provide a bit of the vocabulary. I want to give you a way to frame the current situation with AI and its value as a test-supporting technology.
A lot of testers are talking about using artificial intelligence (AI) or machine learning (ML) to build the next big thing in the test tooling industry. Many are in what seems to be a lemming-like hurry to abdicate their responsibilities to algorithms that will do their thinking for them. Those same testers, however, often have absolutely no idea how to actually test such systems in the first place. So let’s dive into this a bit.
The general idea of a prison of representation is that you are locked into some means or method of being understood. This sense of being “locked in” can come from the past and, interestingly enough, from the future. I believe testing, as a discipline, is in danger of being in such a prison. Let’s talk about this.
As a specialist tester who has been doing this since the early 1990s, I find it interesting to follow the contours of a notoriously fractious discipline, one that is often populated by articulate but frustratingly argumentative practitioners. I say “frustratingly” not because argumentation is bad (it isn’t) but because that argumentation often turns its practitioners into instinctive contrarians and ruthless, rather than pragmatic, skeptics.
We made it! The final post in the testability series. Here we bring the Benchmarker application to a reasonable close. Then we’ll briefly cover the journey we’ve taken together.
We’re continuing to build out our Benchmarker application, putting pressure on design as we go and keeping testability front-and-center as the quality attribute we want to provide, enhance, and maintain. Keep in mind we’re still slogging toward value, assuming that, for the most part, we’re handling correctness as we go along.
In the previous post we ended up creating tests with a context. And that context was allowing us to bridge the gap between correctness and value while also continuing to put focus on testability. We saw some warning signs along the way but, overall, made progress. Here we’ll continue that progress and also start to see how testability, while something to strive for, guarantees us very little by itself.