A couple of weeks ago I wrote about my firm’s efforts to monitor learning program effectiveness in a comprehensive way. This question came up: even if we have great courses teaching the right things and taught by good instructors, some participants are still going to disengage (choosing instead e-mail, IM, etc.). What is our responsibility to do something about that?
Part of me wants the answer to be, “nothing.” Professionals are empowered to make choices. If a professional decides that a particular module is not relevant to him or her, or that a particular client issue has to take precedence over coursework, then he or she is making a choice. As long as that choice is made in a way that is not disruptive in the classroom, using judgment to make choices is what makes someone a professional.
And accountability comes with a cost. Achieving accountability requires creating a means of measurement, typically a test. Test creation carries hard costs (our SME developers report that it can take an hour to create a good test question, between thinking, writing, and responding to reviewers), but also soft costs. Learning happens best and goes deepest when motivation is intrinsic, when learners work hard because they know the course will help them achieve something they desire outside of the course.
Learning is less effective when motivation is extrinsic, that is, when someone feels they have to pay attention because they need to pass a test or else face remediation or worse. Learners approach materials differently when their goal is to pass a test rather than to apply what they learned on the job. Learning is shallower, more focused on memorization than application.
We already conclude some of our courses with tests, but since the focus is on evaluating the effectiveness of the courses in the aggregate, not on individual accountability, we can make the testing anonymous, removing any incentive to cheat. As the stakes get higher, test security becomes an issue, complicating logistics and increasing overhead.
So, requiring learners to pass a test at the end of a course carries costs. Are the costs worth it? One of the problems with choice (i.e., the choice to engage) is that even adult learners often make poor choices. We had a module in one of our courses that was a lightning rod for criticism from learners: “We already know that material,” they said, and saw that as license to do other things. “Fine,” we said, “prove you know the material as well as you say you do by passing a short pre-test, and we’ll skip the module.” No class ever passed the pre-test, and the complaints about the need for the module dried up.
Certainly, good instructional design helps learners make appropriate choices, but learners who lack self-discipline will still find ways to make poor choices. If the specter of having to remediate if someone fails the post-test helps participants make better choices about how they direct their energies during class, maybe that’s a net positive. One could certainly argue that particularly in industries where the stakes are high (e.g., defending the public trust), organizations should do everything reasonable in their power to help learners make appropriate choices.
What is clear from an instructional design perspective is that, at a minimum, we have a responsibility to build meaningful assessment into courses so that, whether or not learners are being held accountable to achieve a particular score, they at least have a fairly accurate sense of how well they are achieving the learning objectives, and so that, if they aren’t achieving the objectives, they have the strategies or resources available to do something about it.