Understanding the Foundational Principles on which IT Certification Rests
OK, I admit it: I'm of the generation that devoured Robert Pirsig's thought-provoking road journal-slash-metaphysical discourse Zen and the Art of Motorcycle Maintenance with both gusto and relish. I sometimes find myself thinking of his riff on what he calls the "full-scale formal scientific method" that's sometimes necessary when solving serious problems.
As Pirsig himself observes, this all-out, no-holds-barred form of investigation is seldom truly necessary. But sometimes, it is. And today, having also just read my old friend and former collaborator Emmett Dulaney's Certification Magazine article Is Your Certification Certified?, I found that his story, and my own recent musings, brought me back to Pirsig's soliloquy on the scientific method.
What Makes a Certification Rigorous?
I'll refer you to Emmett's story for the details, including relevant standards (and standards bodies). For my own part, let me lay out a case for rigorous certification as a way of tackling subject matter meant to be of real import and use in the workplace.
Among other things, this requires diligent research into a number of topics to figure out what needs to go into a certification, as well as a set of measurements and evaluations that determines which exam questions work, and which ones don't. The scientific method plays an important role across the board here.
All along the way, we're looking at what's out there in the world, and trying our best to make sure that the understanding we create to model and serve that world has definite, if not empirically measurable, relevance to what's going on.
First, There Must Be an Established Need
Vendor-specific and vendor-neutral certifications start from opposite ends of the needs spectrum. Vendors need to provide training, support, and useful transfer of skills and knowledge to third-party professionals so they can sell more stuff. Vendor-neutral organizations usually coalesce around special or general interests, and wish to serve their target communities with useful ways to understand and extract value from the tools and technologies they stand behind.
Either way, this quickly gets down to a detailed investigation into specific job roles, and the tasks that the people who fill them must know how to handle. That's why this part of the cert exercise is often called "job task analysis."
JTA involves recruiting so-called SMEs (subject matter experts): people who excel at the job or jobs in focus, and at the tasks that such job holders must know, understand, and be able to execute quickly and well. It also involves observing people who do those jobs and creating an inventory of the tasks they are asked to fulfill while doing them.
This involves a certain amount of "reverse knowledge engineering," as in, "Let's figure out what a person should know and be able to do in order to fulfill task X." Unpacking this can be an interesting and time-consuming proposition. Ultimately, however, it's what drives the selection of tasks, topics, background concepts, best practices, and everything else that goes into completing some task, and completing it well.
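Here's a toy sketch, in Python, of what that unpacking might produce. Every task and skill name below is invented for illustration, not drawn from any real JTA:

    # A toy illustration of "reverse knowledge engineering": unpacking each
    # observed job task into the knowledge and skills needed to perform it.
    # All task and skill names here are hypothetical.
    from collections import Counter

    # A task inventory of the kind that SME interviews and job observation yield.
    task_inventory = {
        "configure VLANs on a switch": ["IP subnetting", "switch CLI", "VLAN concepts"],
        "troubleshoot a failed uplink": ["OSI model", "switch CLI", "cable testing"],
        "document network topology": ["diagramming tools", "VLAN concepts"],
    }

    # Knowledge that supports many tasks is a strong candidate for an exam
    # objective; tallying the task-to-skill mapping makes those candidates visible.
    demand = Counter(skill for skills in task_inventory.values() for skill in skills)
    for skill, count in demand.most_common():
        print(f"{skill}: needed by {count} task(s)")

Skills that turn up across many tasks become obvious candidates for exam objectives, which is exactly where the next step picks up.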
Setting the Domain, and Its Range
From the job task analysis, a body of knowledge emerges: a hierarchical collection of the topics, concepts, skills, and abilities that job holders must possess to handle the tasks that come with their positions. This is where the exam objectives come from, but before they get finalized, a huge list of candidate objectives is assembled and elaborated.
Then it's subjected to analysis and evaluation from SMEs, and also from other stakeholders: professional educators who must teach these materials; businesses that must employ those who earn such certifications; members of professional societies, associations, or unions who represent the interests of those employed in such jobs; and more.
What emerges is a set of topics and areas of skill and knowledge, often with simulated tasks or exercises that show and teach the kinds of understanding, problem-solving, and specific responses such situations demand. This material is usually created in vast profusion and variation, because it must now undergo another round of rigorous evaluation to see how it plays in actual use.
Psychometrics Separates the Wheat from the Chaff
Training materials, learning assessments, quizzes, tests, exercises, labs, and ultimately certification exam question banks must all be created. Then they are set loose in a beta-test population that's carefully chosen to match the desired characteristics of the certification's target audience. Elements are tested under controlled conditions, then carefully evaluated to see if they work as they should.
This means soliciting and acting on feedback from participants, instructors, trained observers, and analysts, all of whom provide input to decide what works and what doesn't. Consider these extremes as an example: A test question that everybody gets wrong is as useless as a test question that everybody gets right. Neither one tells you anything useful about the target population.
Ideally, you want questions that separate those who merely know something from those who know something relevant, useful, and good. That's where psychometrics comes into play: it helps content creators recognize the items that work, the ones that truly separate those who know the job and its tasks from those who don't.
That's the distinction employers really want to be able to draw, so that those who get certified can reasonably be expected to know, and be able to do, the jobs to which such certifications apply.
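For the curious, here's a minimal sketch of the classical item analysis that underlies this screening. It assumes simple right/wrong scoring and uses made-up beta-test data; real certification programs rely on far more sophisticated machinery (item response theory, for one), but the basic idea is the same:

    # A minimal sketch of classical item analysis, assuming simple 0/1
    # (wrong/right) scoring. The response data below is entirely made up.
    from statistics import mean, stdev

    def item_difficulty(responses):
        """Proportion of examinees who answered the item correctly."""
        return mean(responses)

    def item_discrimination(responses, totals):
        """Point-biserial correlation between item scores and total scores.
        Higher values mean the item separates strong from weak examinees."""
        if stdev(responses) == 0 or stdev(totals) == 0:
            return 0.0  # no variance: the item cannot discriminate at all
        mr, mt = mean(responses), mean(totals)
        n = len(responses)
        cov = sum((r - mr) * (t - mt) for r, t in zip(responses, totals)) / (n - 1)
        return cov / (stdev(responses) * stdev(totals))

    # Rows are examinees, columns are items. Item 0 (everyone right) and
    # item 3 (everyone wrong) are the useless extremes described above.
    answers = [
        [1, 1, 1, 0],
        [1, 1, 0, 0],
        [1, 0, 1, 0],
        [1, 0, 0, 0],
    ]
    totals = [sum(row) for row in answers]

    for i in range(4):
        col = [row[i] for row in answers]
        print(f"item {i}: difficulty={item_difficulty(col):.2f}, "
              f"discrimination={item_discrimination(col, totals):.2f}")

In this toy run, the two extreme items show zero discrimination, which is exactly the kind of result that gets a question culled from the bank. (In practice, the item under review is usually excluded from the total score before correlating, to avoid inflating the result.)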
Read Emmett's story for more details and another perspective on this process. It will help you understand where the value and meaning of a "real certification" comes from. In large part, that value comes from the significant data-gathering, thought, analysis, observation, and feedback needed to bring such certifications out in the first place, and to keep them current thereafter.