The vast majority of internal training at the firm where I work has no attached exam. For many reasons, credit depends on time spent in the class, not on mastery of the learning objectives. This has been a subject of lively debate at the firm for the past couple of years, with one side arguing that exams create accountability, which drives engagement, and the other side arguing that forcing learners to engage produces low-quality engagement and doesn't fix the underlying problems.
The debate stalemated for lack of a good way to resolve the disagreement. Last spring we decided to run an experiment. We took a course that was offered 18 separate times, and in half the classes we required that participants pass a test to receive credit (participants were told this at the beginning of class, of course). In the other nine classes, we had no exam. We measured the differences across four scales:
- Engagement, as measured by something we invented called the Distracted Learning Index: two separate head counts of learners showing evidence of multitasking, averaged together and expressed as a share of the class.
- Learning, as measured by performance on application-based, in-class, anonymous polling questions.
- Satisfaction, as measured by our normal end-of-course surveys.
- Application, as measured by interviews administered after participants have been away from class long enough to apply the skills.
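To make the engagement metric concrete, here is a minimal sketch of how the Distracted Learning Index might be computed. The function name, the sample counts, and the class size are all illustrative assumptions; the post only specifies that two separate multitasking counts are averaged.

```python
# Sketch of the Distracted Learning Index as described above: two separate
# head counts of multitaskers, each converted to a share of the class,
# then averaged. All names and numbers here are hypothetical.

def distracted_learning_index(count_a: int, count_b: int, class_size: int) -> float:
    """Average of two sampled multitasking rates, as a fraction of the class."""
    return ((count_a / class_size) + (count_b / class_size)) / 2

# Example: 5 multitaskers in the first sample, 7 in the second, class of 30.
rate = distracted_learning_index(5, 7, 30)
print(f"{rate:.0%}")  # 20% of the class showed evidence of multitasking
```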
The application piece is to be determined, as we won’t have the data until next spring. But what did we find on the other metrics?
Perhaps the most important metric was learning. On this metric, the exam group scored identically to the control group: 75.4%.
Even though there is no evidence that more learning happened, engagement is still an important metric: visibly multitasking learners are frustrating for instructors and a source of potential long-term harm, because they set a norm of disengagement. On this metric, overall disengagement was higher in the control condition, but the difference was not statistically significant.
One thing to note about this study is that the courses were split across two conferences, one aimed at senior staff (manager through partner) and one at junior staff (experienced associates). The gap in the Distracted Learning Index between conditions was larger for less experienced learners: less experienced staff in the control condition were more likely to show evidence of multitasking (19% of learners in the exam condition versus 26% in the control condition), a marginally statistically significant finding. However, when the instructors were asked afterwards whether they could perceive a difference in multitasking between the two conditions, they could not, so the difference does not appear to be large enough to affect classroom dynamics as a whole, nor, as noted above, did it affect learning scores.
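For readers curious how a 19% versus 26% difference could be checked for significance, a standard approach is a two-proportion z-test. The sketch below assumes 100 learners per condition purely for illustration; the post does not report the actual group sizes, and with different sample sizes the p-value would differ.

```python
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """Two-sided z-test for a difference in proportions, using a pooled estimate."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 26% multitasking in control vs 19% in the exam condition,
# with a hypothetical 100 learners in each group.
z, p = two_proportion_z(0.26, 100, 0.19, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these assumed group sizes the difference would not reach conventional significance, which illustrates why a result like this is best described, as above, as only marginal.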
Satisfaction was higher in the control condition (4.34 versus 4.23 on a scale of 5), but the difference was not statistically significant.
On the basis of this experiment, there is some evidence that the presence of an exam may help less experienced staff exhibit somewhat greater self-control, but this study found no evidence that this difference translates into greater learning or a better instructor experience. Given that exams come with significant costs (development, enforcement, time spent taking the exam, etc.), the results of this study do not provide justification for that investment. Of course, the issue is complex and this is only one limited data point, but it is nice to have some local data to help guide our decision making.