Kirkpatrick’s model for measuring training effectiveness is popular because it is elegant: both easy to explain and conceptually useful.
I generally explain the four levels of the model this way:
1. Were learners satisfied?
2. Did they learn anything?
3. Did they apply what they learned in the real world?
4. Did the training ultimately decrease costs or increase revenues to the business enough to at least pay for all the associated costs of the training?
That’s not every question you might ask about training effectiveness (for instance, the model says nothing about whether an organization’s curriculum addresses its most pressing needs), but that’s not really the point. Kirkpatrick’s model measures a particular course against its stated learning objectives and then asks whether the course had positive economic value.
Based on conversations I’ve had recently that were framed by this model, I think there is a step missing at the beginning: a step zero, if you will.
0. Did a critical mass of the target population take the class?
If only a fraction of the people the course was designed for actually find and take it, that calls the overall success of the course into question.
This obviously wouldn’t be an issue everywhere. When the curriculum is preset and required, there’s no problem. But when learners are expected to use professional judgment to navigate a deep course catalog, helping the right learners connect with your course is wholly germane to questions of its effectiveness.