Many are debating the efficacy of artificial intelligence as it relates to the practice and craft of testing. Perhaps not surprisingly, the loudest voices tend to be the ones with the least experience with the technology beyond playing around with ChatGPT here and there, yet they make broad pronouncements both for and against. We need to start thinking about AI critically, not just reacting to it, if we want those with a quality and test specialty to have relevance in this context. Continue reading Thinking About AI
Author: Jeff Nyman
Keeping People in Computing
Following on from computing eras, but before getting to my "Thinking About AI" series, there's one intersection I'd like to bring up: the notion of "people's computing." This idea of people being front-and-center in computation, and thus technology, once held sway but has often been in danger from a wider technocracy. Continue reading Keeping People in Computing
Computing and Crucible Eras
This post will be a bit of a divergence from my normal posting style, although it's very much in line with the idea of stories from my testing career. Continue reading Computing and Crucible Eras
AI Testing – Generating and Transforming, Part 3
We come to the third post of this particular series (see the first and second), where I'll focus on an extended example that brings together much of what I've been talking about but also shows the difficulty of "getting it right" when it comes to AI systems and why testing is so crucial. Continue reading AI Testing – Generating and Transforming, Part 3
AI Testing – Generating and Transforming, Part 2
This post continues on from the first one. Here I'm going to break down the question-answering model that we looked at so that we can understand what it's actually doing. What I show, while decidedly simplified, is essentially what tools like ChatGPT are doing. This will set us up for a larger example. So let's dig in! Continue reading AI Testing – Generating and Transforming, Part 2
AI Testing – Generating and Transforming, Part 1
The idea of "Generative AI" is very much in the air as I write this post. What's often lacking is the ground-level understanding of how all of this works. That understanding is particularly important because "generative" approaches are really focused more on the idea of transformations. So let's dig in! Continue reading AI Testing – Generating and Transforming, Part 1
The Abstract Battle for Irrelevancy
An interesting discussion came up on LinkedIn recently regarding whether automated tools "find bugs," and I found the discussion around this to be exactly what is wrong with a lot of our testing industry these days. I find testers are fighting ever more abstract battles and becoming less relevant as they do so. But maybe I'm the one who's wrong on that? Possibly! Let's dig in. Continue reading The Abstract Battle for Irrelevancy
AI Testing – Measures and Scores, Part 2
AI Testing – Measures and Scores, Part 1
There are various evaluation measures and scores used to assess the performance of AI systems. For someone adopting a testing mindset in this context, those measures and scores are very important. Beyond simply understanding them as concepts, it's important to see how they play out with working examples. That's what I'll attempt in this post. Continue reading AI Testing – Measures and Scores, Part 1
Human and AI Learning, Part 2
In part 1 of this post we talked about a human learning to play a game like Elden Ring in order to overcome its challenges. We looked at some AI concepts in that particular context. One thing we didn't do, though, is talk about assessing quality risks with testing based on that learning. So let's do that here. Continue reading Human and AI Learning, Part 2
Human and AI Learning, Part 1
Humans and machines both learn. But the way they do so is very different. Those differences provide interesting insights into quality and thus into the idea of testing for risks to quality. I've found that one way to conceptualize this is through the context of games. Even if you're not a gamer, I think this context has a lot to teach. So let's dig in! Continue reading Human and AI Learning, Part 1
The Spectrum of AI Testing: Case Study
The Spectrum of AI Testing: Testability
It's definitely time to talk seriously about testing artificial intelligence, particularly in contexts where people might be working in organizations that want to have an AI-enabled product. We need more balanced guidance on how to do this rather than the alarmist statements I'm seeing more of. So let's dig in! Continue reading The Spectrum of AI Testing: Testability
Creating Explanations: The Ethos of Testing
A couple of years ago I talked about what I considered to be the basis of testing. I very confidently asserted things. Maybe I sounded like an authority. Maybe I sounded like I was presenting a consensus. But did I really talk about the basis or just a basis? And shouldn’t an ethos have been part of that basis? Continue reading Creating Explanations: The Ethos of Testing
The Economics, Value and Service of Testing
Among the many debates testers have, one is whether it makes sense to write tests down. Sometimes this is framed, simplistically, as just writing down "test cases" and, even more simplistically, as a bit of orthodoxy around how you don't write tests, you perform tests. So let's dig into this idea a little bit because I think this seemingly simple starting point for discussion leads into some interesting ideas about what the title of this post indicates. Continue reading The Economics, Value and Service of Testing
Exploring, Bug Chaining and Causal Models
Here I'll go back to a game I talked about previously and show some interesting game bugs, all of which came out of exploration, where finding one bug guided the exploration that found others, which in turn led to some causal mapping. Of course, the ideas of "bug chaining" and "causal mapping" are valid in any context, not just games. But games can make them a bit more fun! Continue reading Exploring, Bug Chaining and Causal Models
The Emic and Etic in Testing
There's an interesting cultural effect happening within the broader testing community. I've written about this before, where my thesis, if such it can be called, has been that a broad swath of testers are using ill-formed arguments and counter-productive narratives in an attempt to shift the thinking of an industry that they perceive devalues testers above all else. This has led to a needlessly combative approach to many discussions. In this post I want to approach this through a few parallel lenses: those of game studies, linguistics, and anthropology. That will lead us to insider (emic) and outsider (etic) polarities. It's those polarities that I believe many testers are not adequately shifting between. Continue reading The Emic and Etic in Testing
A History of Automated Testing
What I want to show in this post is a history where "teaching" and "tutoring" became linked with "testing," which became linked with "programmed instruction," which became linked with "automatic teaching" and thus "automatic testing." The goal is to show the origins of the idea of "automating testing" in a broad context. Fair warning: this is going to be a bit of a deep dive. Continue reading A History of Automated Testing
The Breadth of the Game Testing Specialty
I’ve posted quite a bit on game testing here, from being exploratory with Star Wars: The Old Republic, to bumping the lamp with Grand Theft Auto V, to ludonarrative in Elden Ring. I’ve also shown how comparative performance testing is done with games like Horizon Zero Dawn. These articles offered a bit of depth. What I want to do here is show the breadth of game testing and some of the dynamics involved since it’s quite a specialized sub-discipline within testing. Continue reading The Breadth of the Game Testing Specialty
Testing: From Aristotelian to Galilean, Part 2
In this post I'll continue directly from part 1, where we ended up with a lot of models, a recognition of competing interpretations of quality, and a need for testability. Continue reading Testing: From Aristotelian to Galilean, Part 2