We all know the situation, right? We find an “issue,” but in many cases it comes down to determining whether that issue is something for the developers to fix or something for product to clarify. The question is often framed as “Is this a bug?” Hidden within that question is the protean nature of a “bug” in current software development.
For the sake of argument, let’s say we all agree that a bug is anything that threatens value. That one statement alone, if accepted, can change some perceptions about bugs. Value, of course, is based on perception, and while some perceptions will be objective (“It crashes!”), others will be subjective (“It performs slowly” or “It looks ugly” or “The workflow is complicated”). Sometimes the subjective can become objective based on shared agreements.
Consider one of the areas that is often problematic: so-called “edge cases.” There are actually three things to consider that usually get lumped under one term: edge cases, corner cases, and pathological cases. It can be helpful to know which one we’re dealing with. Sometimes an “edge case” can be easy to dismiss but, it turns out, it’s really a boundary case, and those should be less easy to dismiss.
For example, if an agreed-upon boundary is being violated, that’s likely a bug for developers. If there is a question as to whether the boundary even makes sense (perhaps it should be higher, perhaps lower), that’s probably a bug for product. It needs to become a decision that is captured as part of our acceptance criteria.
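To make the distinction concrete, here is a minimal sketch using an invented example: suppose our acceptance criteria cap usernames at 30 characters. The limit and the function name are assumptions for illustration, not anything from a real system.

```python
MAX_USERNAME_LENGTH = 30  # assumed boundary from hypothetical acceptance criteria

def validate_username(name: str) -> bool:
    """Return True only if the name sits within the agreed-upon boundary."""
    return 1 <= len(name) <= MAX_USERNAME_LENGTH

# If the code accepts 31 characters, that's a boundary violation: a developer bug.
assert validate_username("a" * 30) is True   # exactly at the boundary: allowed
assert validate_username("a" * 31) is False  # past the boundary: rejected
```

Whether 30 is the *right* limit is the product question; whether the code honors 30 is the developer question.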
As another example, if a true edge case manifests a problem, then it becomes a joint decision of product and development: priority (how important to business, given the likelihood) and severity (impact if the situation happens). Let’s say something is very unlikely. But … if it happens, we could end up with a corrupted database. Priority may be low but severity may be high.
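That split between priority and severity can be sketched as two independent fields on an issue record. The class and field names here are illustrative, not a real tracker’s schema.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    priority: str  # importance to the business, given the likelihood
    severity: str  # impact if the situation actually happens

# An unlikely failure with catastrophic impact: low priority, high severity.
db_corruption = Issue(
    description="Rare race condition corrupts the database on concurrent writes",
    priority="low",   # very unlikely to occur in normal usage
    severity="high",  # corrupted data if it does occur
)
```

The point of modeling them separately is that neither field can be derived from the other; product and development have to weigh them jointly.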
So we have a few issues to unpack:
- What do we consider bugs?
- What are our heuristics for recognizing bugs?
Consider these various possibilities:
- “I know this is a bug because it causes a 404 to display.”
- “I know this is a bug because it’s not what our acceptance criteria say.”
- “I know this is a bug; I agree it matches the acceptance criteria, but it’s compromising one of our qualities.”
- “I think this is a bug because while it matches our acceptance criteria, I think the criteria are wrong.”
- “I think this is a bug but we don’t have acceptance criteria one way or the other.”
- “I think this is a bug because it just seems like the workflow is really complicated.”
There are obviously other variations there. The regression one is usually obvious:
- “I know this is a bug because it was working before and now it’s not.”
I’ll leave aside, for now, the complication where the “bug” as it works now is actually the correct functionality, and how it worked before was the mistake. That said, it is worth noting that this can happen: a bug persists for so long that we just start to treat it as “how it all works.”
The point being there are times where we “know” it and “think” it. Sometimes what we consider a bug (value threatener) serves as the impetus for a feature request. That can come from us or from our users.
What we do want (I hope) is this: we want the time between when a bug is introduced and when a bug is found to be as short as possible.
Many of our bugs will creep in before we even code, much less deploy. That can be for many reasons: perhaps we were short-sighted in our design thinking (intentionally or otherwise); perhaps we didn’t know about technical constraints at the time decisions were made; perhaps we had a faulty view of a user model and thus a usage pattern.
Once we have implemented (but prior to deploy), what we’re looking at is deviations from our acceptance criteria. Questioning whether the acceptance criteria are valid in the first place is totally viable — but if we’re finding a lot of issues like that, we need to put more of the test focus where those issues are being created.
Whether pre-deploy or post-deploy, it’s important to determine the kinds of bugs we’re dealing with. For example, what about transient or intermittent bugs? Those are often the hardest to reproduce, much less isolate. That can be because the bug is dependent on time or specific data or other variables.
We also have internal qualities we might want to consider. What if we are, during implementation, knowingly degrading one of our internal qualities? Perhaps, for example, we are doing something that will raise the number of Postgres connections, which might eventually lead to a degradation in performance. Clearly a developer bug, right? Okay, but what if we’re doing that because a feature desired by product has a technical constraint that will require these connections? Now we have a technical constraint that needs to inform a product decision.
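The connection-budget tension above can be modeled with a toy pool, not a real database driver. The cap of 100 reflects PostgreSQL’s default `max_connections`; the class and the load figure of 120 are invented for illustration.

```python
class ConnectionPool:
    """Toy model of a capped connection pool (not a real Postgres client)."""

    def __init__(self, max_connections: int):
        self.max_connections = max_connections
        self.in_use = 0

    def acquire(self) -> bool:
        """Grant a connection if under the cap; refuse otherwise."""
        if self.in_use < self.max_connections:
            self.in_use += 1
            return True
        return False  # callers must now wait or fail: degraded performance

# Postgres defaults to max_connections = 100.
pool = ConnectionPool(max_connections=100)

# A feature that holds long-lived connections eats into that shared budget:
granted = sum(pool.acquire() for _ in range(120))
print(granted)  # only 100 succeed; the other 20 requests stall
```

The code makes the constraint visible: the “bug” isn’t in the pool logic, it’s in a product decision that demands more connections than the configuration allows.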
“But is that a bug?” I hear someone asking.
Well, if we treat a bug as something that is threatening value, then yes, quite possibly it is.