Author: Jeff Nyman
AI Testing – Measures and Scores, Part 1
There are various evaluation measures and scores used to assess the performance of AI systems. For anyone adopting a testing mindset in this context, those measures and scores are very important. Beyond understanding them conceptually, it’s important to see how they play out with working examples. That’s what I’ll attempt in this post. Continue reading AI Testing – Measures and Scores, Part 1
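As a taste of the kinds of measures the post covers, here is a minimal sketch that computes a few standard classification scores (accuracy, precision, recall, F1) by hand so the definitions stay explicit. The labels and predictions are made-up illustrative data, not taken from the post itself.

```python
def classification_scores(y_true, y_pred):
    """Return accuracy, precision, recall, and F1 for binary labels (0/1)."""
    # Tally the four outcome categories from the confusion matrix.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted/present.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical ground truth vs. model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(classification_scores(y_true, y_pred))
```

Nothing here is specific to AI systems as such; the same arithmetic applies to any binary classifier, which is part of why these scores are such a common starting point for evaluation.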
Human and AI Learning, Part 2
In part 1 of this post we talked about a human learning to play a game like Elden Ring to overcome its challenges. We looked at some AI concepts in that particular context. One thing we didn’t do, though, was talk about assessing any quality risks with testing based on that learning. So let’s do that here. Continue reading Human and AI Learning, Part 2
Human and AI Learning, Part 1
Humans and machines both learn. But the way they do so is very different. Those differences provide interesting insights into quality and thus the idea of testing for risks to quality. I’ve found that one way to conceptualize this is through the context of games. Even if you’re not a gamer, I think this context has a lot to teach. So let’s dig in! Continue reading Human and AI Learning, Part 1
The Spectrum of AI Testing: Case Study
The Spectrum of AI Testing: Testability
It’s definitely time to talk seriously about testing artificial intelligence, particularly in contexts where people might be working in organizations that want to have an AI-enabled product. We need more balanced accounts of how to do this rather than the increasingly alarmist statements I’m seeing. So let’s dig in! Continue reading The Spectrum of AI Testing: Testability
Creating Explanations: The Ethos of Testing
A couple of years ago I talked about what I considered to be the basis of testing. I very confidently asserted things. Maybe I sounded like an authority. Maybe I sounded like I was presenting a consensus. But did I really talk about the basis or just a basis? And shouldn’t an ethos have been part of that basis? Continue reading Creating Explanations: The Ethos of Testing
The Economics, Value and Service of Testing
Among the many debates testers have, one is whether it makes sense to write tests down. Sometimes this is framed, simplistically, as just writing down “test cases” and, even more simplistically, as a bit of orthodoxy around how you don’t write tests, you perform tests. So let’s dig into this idea a little bit because I think this seemingly simple starting point for discussion leads into some interesting ideas about what the title of this post indicates. Continue reading The Economics, Value and Service of Testing
Exploring, Bug Chaining and Causal Models
Here I’ll go back to a game I talked about previously and show some interesting game bugs, all of which came out of exploration, where finding one bug guided the exploration that found others, which in turn led to some causal mapping. Of course, the ideas of “bug chaining” and “causal mapping” are certainly valid in any context, not just games. But games can certainly make it a bit more fun! Continue reading Exploring, Bug Chaining and Causal Models
The Emic and Etic in Testing
There’s an interesting cultural effect happening within the broader testing community. I’ve written about this before, where my thesis, if such it can be called, has been that a broad swath of testers are using ill-formed arguments and counter-productive narratives in an attempt to shift the thinking of an industry that they perceive as devaluing testers. This has led to a needlessly combative approach to many discussions. In this post I want to approach this through three parallel lenses: game studies, linguistics, and anthropology. That will lead us to insider (emic) and outsider (etic) polarities. It’s those polarities that I believe many testers are not adequately shifting between. Continue reading The Emic and Etic in Testing
A History of Automated Testing
What I want to show in this post is a history where “teaching” and “tutoring” became linked with “testing” which became linked with “programmed instruction” which became linked with “automatic teaching” and thus “automatic testing.” The goal is to show the origins of the idea of “automating testing” in a broad context. Fair warning: this is going to be a bit of a deep dive. Continue reading A History of Automated Testing
The Breadth of the Game Testing Specialty
I’ve posted quite a bit on game testing here, from being exploratory with Star Wars: The Old Republic, to bumping the lamp with Grand Theft Auto V, to ludonarrative in Elden Ring. I’ve also shown how comparative performance testing is done with games like Horizon Zero Dawn. These articles offered a bit of depth. What I want to do here is show the breadth of game testing and some of the dynamics involved since it’s quite a specialized sub-discipline within testing. Continue reading The Breadth of the Game Testing Specialty
Testing: From Aristotelian to Galilean, Part 2
In this post I’ll continue on directly from part 1 where we ended up with a lot of models and a recognition of competing interpretations of quality along with a need for testability. Continue reading Testing: From Aristotelian to Galilean, Part 2
Testing: From Aristotelian to Galilean, Part 1
Any discipline can focus along a spectrum of thinking. That’s no less true of testing, of course. The spectrum I want to introduce from history is that of moving from an Aristotelian to a Galilean way of thinking and “doing science” which, in many ways, is synonymous with “doing testing.” Continue reading Testing: From Aristotelian to Galilean, Part 1
When Testing Questioned Orthodoxy
Continuing on from the first and second posts in this series, let’s look at how testing, as it came to be in a scientific context, challenged a bit of orthodoxy. Continue reading When Testing Questioned Orthodoxy
When Testing Questioned Philosophy
In the first post in this series, I ended by focusing a bit on Galileo, who started to shape the idea of testing into what it would eventually be recognized as today. That’s the same as saying Galileo effectively produced one of the first attempts at science as it is known today. Let’s continue this path of investigation. Continue reading When Testing Questioned Philosophy
When Testing Became Scientific
As I’ve been teaching the history of science and religion recently, some interesting ideas have formed in my head around how to present certain topics as they relate to testing. This is crucial since testing is the basis of effective experimentation. So here I’ll talk very briefly about how testing truly became testing. Continue reading When Testing Became Scientific
Testers, We Need a Narrative
I was recently re-reading Houston, We Have a Narrative by Randy Olson and I was struck by certain concepts there that reminded me how poorly framed testing often is, particularly by its own practitioners. Clearly an opinionated statement, of course, but I very much believe that many testers in the industry currently lack a narrative or are using a malformed narrative. And this is hurting the industry more broadly as we see quality problems get worse and worse. Continue reading Testers, We Need a Narrative
Ludonarrative Testing, Part 3
In the second post of this series I looked at a couple of games to drill in the idea of ludonarrative and what it means. Here I want to go back to a game I started with in the first post, Elden Ring, and take a much deeper look at the mechanics and the narratives from a ludonarrative testing standpoint. Continue reading Ludonarrative Testing, Part 3
Ludonarrative Testing, Part 2
Continuing on from the first part, I want to give testers a further look into a very specific, and often undocumented, form of testing in the context of games, which is the idea of ludonarrative. This has the benefit of also showing how quality can be very much a function of viewpoint. Continue reading Ludonarrative Testing, Part 2