I think the question of what makes testing a complicated discipline is a good one. It’s one all testers should be able to answer about their profession and, further, answer in a way that couldn’t apply to just about any other profession. The problem I’ve found is that the answers you get are usually simplistic. You’ll hear people say “miscommunication” or “unrealistic schedules.” Well, that’s great, but miscommunication can make any profession difficult, as can an unrealistic schedule or poor estimating. I bet day traders, doctors, police officers, lawyers, military strategists, nuclear power plant operators and so forth could make the same claim.
Surely we can do better than that, right?
I want to make it clear here that I’m not asking why the job of a tester is hard. I don’t want this to be about the role of a tester. I want this to be about the discipline of testing. So let’s start off with what I think is an important point to keep in mind.
Consider This: It’s a truism that what testers do requires a problem-solving mindset within a technical discipline. What testers present requires a focus on evidence that is delivered in a persuasively diplomatic way.
To a certain extent, you could argue that this still applies to a lot of professions, such as software development or business analysis. Where it comes down to, I think, is tangibility. A developer can present a logical code construct and can show evidence that this code construct does what it should. A tester can also test that the code works as the developer says. But a tester also has to show different ways that it could work and, more importantly, different ways it could not work. While a tester can write up those decisions, the thinking behind those decisions is largely intangible. That’s because there are many ways to test, and because testing is a broad term. After all, you might not just be testing code, but also design specifications or requirements statements.
What I’m saying here implies a selection process; it’s one that the developer had to go through to come up with the solution, but it’s also one that the tester has to permute and combine in ways that the developer did not. This is part of why you have testers who work alongside developers, rather than developers doing their own testing. Likewise, developers are often not taken to task for confusing code (at least assuming the code works), while testers can be taken to task for tests that appear to be confusing. While developers don’t have to be diplomatically persuasive about explaining the bugs they create, testers often have to be diplomatically persuasive about the bugs they find.
Now let’s take this a little more high-level. To come up with this selection process, a tester usually has an approach or a strategy that they have in mind. This “construct” of theirs often needs to be three things: specific in nature, practical in implementation, and justified in basis. Doesn’t sound so tough, right? But consider:
- “Specific in nature” is not something business analysts are often required to do. A tester often has to ferret out if something is specific enough to be testable and thus implementable.
- “Practical in implementation” is not something that a designer necessarily has to worry about. A developer, along with a tester, does.
- “Justified in basis” is often something that everyone has to deal with but that a tester has to prove to a greater extent.
Now let’s consider something else that’s really important.
Consider This: Tests help people discover information.
This has some implications.
- This means a tester has to be aware of the different kinds of information that are possible to ferret out and how to most effectively look for that information.
- Different types of tests are more effective for different classes of information. This means a tester has to be effective at the selection process, usually under time constraints.
- This means testers need to create tests based on the approach they are taking or the strategy they believe is effective. That approach or strategy is usually based on certain types of testing (like configuration tests, performance tests, etc.) as well as certain styles of testing (like domain testing, scenario testing, or risk-based testing).
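To make one of those styles concrete, here is a minimal sketch of domain (boundary-value) testing. The function `validate_age` and its 18-to-120 rule are hypothetical stand-ins I've invented for illustration; the point is the *selection* of test values, not the function itself:

```python
# A minimal sketch of domain (boundary-value) testing, assuming a
# hypothetical product rule: ages 18 through 120 inclusive are valid.
def validate_age(age):
    """Hypothetical function under test."""
    return 18 <= age <= 120

# Domain testing doesn't pick arbitrary values; it picks the boundaries
# and representatives of each equivalence class of the input domain.
cases = [
    (17, False),   # just below the lower boundary
    (18, True),    # the lower boundary itself
    (65, True),    # an interior representative
    (120, True),   # the upper boundary itself
    (121, False),  # just above the upper boundary
]

for age, expected in cases:
    assert validate_age(age) == expected, f"unexpected result for age={age}"
print("all domain cases pass")
```

The selection of those five values, rather than a random handful, is exactly the kind of intangible reasoning described above: a reader sees the list of cases, but not the equivalence-class thinking that produced it.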
So a tester has to have a fairly large toolbox of techniques and the expertise to know when to apply them. Testers often don’t have an API to reference or specification template to plug thoughts into. The nature of how to test some idea is often inherently more difficult than coming up with the idea in the first place or implementing the idea.
Here’s yet another important thought.
Consider This: Testing is effective when decision makers have enough information to make an informed decision about the benefits and risks associated with the product or service being tested.
This means testers must understand what kind of information should be presented and in what manner. They have to have an understanding of benefits vs. risks, particularly when it comes to market-driven realities. They also have to understand how various people — like project managers, business analysts, support personnel, sales, and customers — have different vested interests in the outcome of tests and thus in the efficacy of the test process.
So let’s consider a few truisms.
This is true! Any effort by a tester is devoted to evaluating a product or service.
An activity that puts the product through its paces does not actually become a “test” unless and until the tester applies some principle or process that will identify a problem if one exists.
This is true! All tests are experiments that are performed to answer a question about the relationship between what a product is and what it should be.
In any test activity, a good tester should ask themselves what questions should drive their evaluation strategy, all the while realizing that “what a product should be” is often contextual at best and subject to competing values and concerns, all of which can change at any point during a project.
This is true! Testing is an activity that involves skillful technical work (searching for bugs, determining usability, finding security holes, etc) and accurate, persuasive communication about the results of that technical work.
In other words, you don’t just have to gather the findings. You have to be diplomatic and persuasive in your communications about those findings.
So what I think makes testing complicated is a gestalt of the above. In testing there are certain approaches (which amount to philosophy and/or theory) and there are methods (which amount to the philosophy in action or the applied aspects of the theory). You can’t separate the two, particularly in testing. That, to me, is what makes testing complicated.
The main issue that makes testing more complicated, in my view, is that quality is contextual and situational. Quality is a malleable, dynamic concept viewed through the twin lenses of needs and expectations. Quality is a perception based on need, desire, and experience. Quality is subjective to a large extent. Products and services (both of which are engineered) are objective to a certain extent.
This is true! Testing is a quality-related process within the context of an engineering discipline.
What the above point means is that testing must not only straddle theory and practice but also subjective and objective perceptions, all the while ensuring that what it does is, as I said earlier, specific in nature, practical in implementation, and justified in basis.