Here I argue against people, particularly certain companies, who seem to think it matters a great deal whether their testers hold a certification, such as the CSTE.
My title here is perhaps a bit provocative and that’s by design. But I do think there’s some truth in it. To be fair, I’m not saying those certifications are bad in and of themselves. But I am saying that people should not be making too many decisions based upon whether someone does or does not have one. Here I’ll talk a little about why I feel the way I do.
As far as speaking out against certifications, I can’t say it any better than James Bach already has in Conscientious Uncertification or Against Certification. I also like the thoughts presented in Certifications Are Creating Lazy Hiring Managers.
Let’s recognize at least one reality: many companies, particularly consulting companies, use certification programs to filter candidates for an interview. Whether rightly or wrongly, since certification is used as a filter by some, an argument could be made for getting the certification regardless of how much or little you believe in it. In fact, another reality is that there are companies — again, particularly consulting firms — that force their employees to get these certifications so they can showcase them to clients, under the assumption that the client is getting a much higher caliber of talent because of the certification.
I say all this because I do think it’s important not to simply be unreflectively pessimistic about certifications. The process of taking a certification does (potentially, anyway) show a willingness to improve, even if it was forced on the person by a company. And the process of preparing for one certainly does force a person to engage with the material. So there is that.
All that being said, here’s another reality to consider: for as long as those certifications have been around, there has been no empirical evidence showing any relation between a tester’s skills and being certified. And the reason for that is fairly obvious: certification in and of itself is no guarantee of the flexible, adaptive thinking required of most testers on many projects.
The Certification Miss
So what do I think certifications miss? Well, the best I can say is my own anecdotal evidence.
As just one example, I would argue that the skills and aptitude that I talk about — and test for — in the Quest for Testers is absolutely not covered by certifications. Likewise everything I talked about in testers being more than their skills is only peripherally touched in certifications. When I talk about a nuanced view of test planning, that kind of thinking can be anathema to many who have been certified.
I have applied my thinking in those posts to interviews with candidates as well as casual conversations on forums with testers, and I have been able to correlate performance in those challenges and discussions against who did and did not have a certification. Certainly I’m not saying that those who had certifications uniformly did poorly with the challenges or discussions, nor am I saying that those who had no such certifications uniformly did well. But what I can say, without equivocation, is that certification played little to no demonstrable part for those who did well and, further, those who did poorly were more often than not those who tended to rely upon their certifications as a barometer of their skill.
What Testers Need; What Certifications Don’t (Seem) To Test For
In my experience, beyond the many other skills a tester may have, there are a few key qualities of really good testers.
- Testers must be good observers of what is going on around them.
- Testers must make the fewest possible assumptions and be evidential in their thinking.
- Testers must be good explainers, both of inference and observation.
- Testers must have the ability to use rational justification and diplomatic persuasion.
Beyond those points, testers must inherently realize that making an observation and determining what that observation means are two different things. This means testers recognize that there is a chance for mistakes in our observations, and a chance for mistakes in our reasoning about our observations, and that recognition informs what they do and how they think.
Certifications do have “essay” style components that are supposed to check for all this but, again, there is absolutely zero empirical evidence that testers who have gone through certification programs can do the above.
There’s another important aspect that gets overlooked in the tester mindset. Testers should have an instinctive view that measurement is a way to reduce error and good testers leverage that view to build up an intuitive feel for quantitative investigation.
In other words, testers must be able to size up a measurement problem and identify quick and simple observations that have revealing results. This goes hand-in-hand with being able to estimate unknowns, reasonably quickly, by using simple observations. Thus testers must have the ability to coax information out of the few facts that they can confirm. Further, testers must be aware of a rule (or set of rules) relating one simple observation to a quantity that they want to measure.
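As one concrete illustration of such a rule (my own example here, not one the discussion above names), consider the capture-recapture heuristic sometimes used to estimate a defect population: when two testers hunt for bugs independently, the overlap in their findings lets you coax an estimate of the total defect count out of two simple observations.

```python
def estimate_total_defects(found_by_a, found_by_b):
    """Lincoln-Petersen capture-recapture estimate of a defect population.

    found_by_a / found_by_b: collections of defect IDs logged by two
    testers working independently. The more their findings overlap, the
    fewer defects are presumed to remain unseen.
    """
    a, b = set(found_by_a), set(found_by_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no overlap between findings; estimate is undefined")
    return round(len(a) * len(b) / overlap)

# Hypothetical data: tester A logged 12 defects, tester B logged 8,
# and 6 of those were found by both.
found_a = {f"BUG-{i}" for i in range(1, 13)}   # BUG-1 .. BUG-12
found_b = {f"BUG-{i}" for i in range(7, 15)}   # BUG-7 .. BUG-14
print(estimate_total_defects(found_a, found_b))  # 16
```

The point isn’t this particular formula; it’s that a tester who can spot a relationship like this can turn two cheap observations into a defensible estimate of something that can’t be observed directly.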
The better testers also realize that story priorities and time are basically like a living calculus. Think back to your calculus: df(x)/dt. Do we integrate or differentiate? Well, that depends on the priority of a given story, the time we have, and so forth.
My Own “Certification” Scenario
In my own career I’ve presented particular challenges to testers (both in interviews and in less formal settings) that I wish were in some way part of a certification process. In doing this, I’ve tracked how those testers did and also whether or not they had a certification. One of these was my Test Quest game.
What’s the point of presenting this here? Mainly to show that, in the times I have given this challenge, it has more often than not been the tester with the certification who failed to engage with the challenge in a meaningful way. This isn’t to say that everyone without a certification solved the challenge. I’m not even necessarily looking for a solution; I’m looking for how the problem is engaged with.
Am I Tilting At Windmills?
So what’s the “danger” that I mention in the title of this post?
I mention the danger of the certified tester and that’s clearly emotive language. Here I’m speaking more to the danger to the discipline of testing as a whole than to a danger coming from any particular tester. Let’s consider some things that testers need to do.
- Be willing and able to acquire a broad range of skills.
- Know how and when you can be effectively involved at all phases.
- Be able to represent different interests to different groups.
- Be able to think tactically and strategically.
- Be able to innovate by shedding ego and not being afraid to fail.
- Foster working relationships with people inside and outside the project.
- Invent by taking a creative idea and developing it to the point that it’s practical.
Beyond those points, testers have to have a certain element of fearlessness in that they may have to go up against stronger personalities, or disagree with people in positions of authority, or have to tackle something entirely new without any real road map to go on.
That being said, some testers have very strong personalities as well, and that has to be balanced with a certain degree of humility. That may not seem tricky but, in my view, the best testers have a definite and distinct bias towards things being done not just “right” but efficiently, effectively, and elegantly. They care when things are wrong, and they want them to be better. Professional opinions — even very strong opinions — are certainly fine, of course, but testers can sometimes get into a habit of thinking they have a monopoly on “truth” or on what it means to “speak for the user.” Good testers make sure that what it means to assure quality is taken with a broad focus.
One last point on that distinction I brought up in the last paragraph. Testing can be carried out effectively, efficiently, and elegantly. Effectiveness means satisfying objectives and expectations. Efficiency means satisfying objectives and expectations in a way that maximizes the value received for the resources invested. Testers have to find a way to maximize value within their (often multiple) resource constraints. Elegance means achieving effectiveness and efficiency in a graceful, well-executed fashion. This means the work of testers should resonate as professional, experienced, and competent and testers have to prove themselves elegant over time in results, behavior, and demeanor.
That — as well as most everything else I talked about here — is the stuff that I don’t see certifications even beginning to cover. To the extent that this is true and to the extent that certifications keep being treated as more than what they are, that is the danger I perceive.