I find that many testers still like to “talk in quotes.” Meaning, they like to throw out quotes or sentences and then act as if they’ve said something profound. And maybe they have. But I’m seeing more of this lately without, it often seems, the necessary ability to think beyond the quote. Let’s dig in and see if I have a point to make.
So, not surprisingly, the AI test tooling community didn’t want to engage with my AI test challenge. They saw it as being inherently unfair. And, to a certain extent, it could be. But what this really showcased was that people are talking about AI and test tooling with way too much hyperbole in an attempt to gain traction. So was my test challenge unfair? Is there too much hyperbole? Let’s dig in a bit.
This is not a challenge for testers to test an AI, although that is a worthy challenge and one I’ve tackled a bit. For right now, I want to propose a challenge for those promoting tools that claim to perform testing, particularly when the claim is that such tooling stands a chance of replacing human testers.
A while back I talked about a possible test apocalypse and how you might respond to it. In honor of the current film Avengers: Endgame, I started thinking about this topic again. Here I’ll use the well-known “snap” of Thanos as my starting point, trusting that this makes the article title a little more clear.
The title of this article is actually a little too simplistic. It’s more about asking: “Will AI Truly Perform Testing?” Or perhaps: “Will AI Perform Actual Testing?”
I’m asking here: do testers understand testing? By which I mean: do testers truly understand testing? By which I mean … okay, you know what, let’s just dig into the basis of testing for a moment.
In a previous post I talked about why I’ve stayed in testing. Here I want to be a little more concise about how you might frame testing beyond how it is normally described, particularly if you want to get someone excited about testing as a career choice.
I’ve focused before on the danger of the technocracy, which is where we turn testing into a programming problem. This has, in many cases, infused the interview process for test specialists as well. And yet automation is important! There’s no doubt that automation is a scaling factor and a way to better leverage humans. So that does bring up an interesting dynamic when you interview test specialists while hoping to maintain some sort of programmatic focus.
We all know the situation, right? We find an “issue,” but in many cases it comes down to determining whether that issue is something for the developers to fix or something for product to clarify. It’s often a question framed as “Is this a bug?” Hidden within that question is the protean nature of a “bug” in current software development.
This post continues directly from the first post, where I introduced you to the concepts as well as an application that can serve as a guide to some of the ideas. I highly recommend reading that first post for context if you haven’t already.
In this two-part post, I want to cover the distinction between integration testing and integrated testing, as well as the distinction between edge-to-edge and end-to-end testing. I also want to show how this thinking should lead testers to think more in terms of contract testing. And, finally, so all of this isn’t entirely theoretical, I’ll provide a React-based code example. That’s a lot to cover, so let’s get to it.
I see a lot of companies who have trouble getting started with quality assurance and test positions. They do a lot of interviewing but make a lot of mistakes when bringing in those crucial first people who will let them scale for the future. These companies look for things like “ability to write test cases” or “knowledge of automation.” They don’t look for people who have specialized in testing. But what does that even mean?
I was recently asked my thoughts about questions scrum masters could be asked during an interview process, particularly with regard to their thinking around testing and quality. This got me thinking about the role of a scrum master, which in turn got me thinking about how I would do as a scrum master.
I still see many testers talking about the number of bugs found as some sort of barometer of success in terms of effective testing. But lately I’ve seen this framed around the “quality” of bugs found, rather than just their quantity. Still, you have to be a bit careful here. Let’s talk about this.