Do Exams in Corporate Training Increase Engagement, Learning, or Satisfaction?

The vast majority of internal training at the firm where I work has no attached exam. For a variety of reasons, credit depends on time spent in the class, not on mastery of the learning objectives. This has been a subject of lively debate at the firm for the past couple of years, with one side arguing that exams create accountability, which drives engagement, and the other side arguing that forcing learners to engage produces poor-quality engagement without fixing the underlying problems.

The debate stalemated for lack of a good way to resolve the disagreement, so last spring we decided to run an experiment. We took a course that was offered 18 separate times; in nine of the classes we required that participants pass a test to receive credit (participants were, of course, told this at the beginning of class), and in the other nine classes we had no exam. We measured the differences across four scales:

  • Engagement, as measured by something we invented called the Distracted Learning Index: the proportion of learners exhibiting evidence of multitasking, counted in two separate samples during class and averaged together (a minimal sketch of the calculation follows this list).
  • Learning, as measured by performance on application-based, in-class, anonymous polling questions.
  • Satisfaction, as measured by our normal end-of-course surveys.
  • Application, as measured by interviews administered after participants have been away from class long enough to apply the skills.
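
For concreteness, here is a minimal sketch (in Python) of how an index like this could be computed. The function name, the class-size parameter, and the example numbers are illustrative assumptions, not our actual tooling:

    # Hypothetical sketch of the Distracted Learning Index (DLI):
    # two headcounts of visible multitaskers taken at different
    # points in class, converted to proportions and averaged.
    def distracted_learning_index(sample_counts, class_size):
        assert len(sample_counts) == 2, "the index averages two samples"
        proportions = [count / class_size for count in sample_counts]
        return sum(proportions) / len(proportions)

    # Example: 5 and 8 multitaskers observed in a class of 40 gives
    # (0.125 + 0.200) / 2 = 0.1625, i.e. roughly 16% distracted.
    print(distracted_learning_index([5, 8], class_size=40))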

The application piece is to be determined, as we won’t have the data until next spring. But what did we find on the other metrics?

Perhaps the most important metric was learning. On this metric, the exam group scored identically to the control group: 75.4%.

Even if there is no evidence that more learning happened, engagement is still an important metric, because visibly multitasking learners are frustrating for instructors and a source of potential long-term harm, since they set a norm of disengagement. On this metric, overall disengagement was higher in the control condition, but the difference was not statistically significant.

One thing to note about this study is that the courses were split across two conferences, one aimed at senior staff (manager through partner) and one at junior staff (experienced associates). The difference in the Distracted Learning Index was larger for less experienced learners: less experienced staff in the control condition were more likely to show evidence of multitasking (19% of learners in the exam condition versus 26% in the control condition), a marginally statistically significant finding. However, when the instructors were asked afterwards whether they could perceive a difference in multitasking between the two conditions, they could not, so the difference does not appear to have been large enough to affect classroom dynamics as a whole, nor, as noted above, did it affect learning scores.
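
To give a feel for why a 19% versus 26% split can land in "marginally significant" territory, here is a standard two-proportion z-test sketch in Python. The rates are from our data, but the group sizes (250 learners per condition) are assumed purely for illustration:

    # Two-proportion z-test sketch. The 19% vs. 26% multitasking
    # rates are from the study; n = 250 per condition is an assumed
    # figure, not the actual headcount.
    from math import sqrt, erf

    n1 = n2 = 250                                  # assumed group sizes
    x1, x2 = round(0.19 * n1), round(0.26 * n2)    # multitasker counts
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p
    print(f"z = {z:.2f}, p = {p_value:.3f}")       # z ~ -1.82, p ~ 0.069

With smaller groups the same gap would fall well short of even marginal significance, so a borderline result like this is quite sensitive to sample size.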

Satisfaction was higher in the control condition (4.34 versus 4.23 on a 5-point scale), but the difference was not statistically significant.
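
The same kind of back-of-the-envelope check applies to the satisfaction gap. Here is a t-test computed from summary statistics; the means are from our surveys, but the standard deviations (0.8) and group sizes (250) are assumed, since only the means are reported above:

    # t-test from summary statistics for the satisfaction means.
    # The 4.34 vs. 4.23 means are from the study; the standard
    # deviations and group sizes are assumed values.
    from scipy.stats import ttest_ind_from_stats

    result = ttest_ind_from_stats(
        mean1=4.34, std1=0.8, nobs1=250,   # control condition
        mean2=4.23, std2=0.8, nobs2=250,   # exam condition
    )
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
    # Under these assumptions: t ~ 1.54, p ~ 0.12 (not significant).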

On the basis of this experiment, there is some evidence that the presence of an exam may help less experienced staff exhibit somewhat greater self-control, but this study found no evidence that this difference translates into greater learning or a better instructor experience. Given that exams come with significant costs (development, enforcement, time spent taking the exam, etc.), the results of this study do not justify that investment. Of course, the issue is complex and this is only one limited data point, but it is nice to have some local data to help guide our decision making.


2 thoughts on “Do Exams in Corporate Training Increase Engagement, Learning, or Satisfaction?”

  1. Rob Foshay

    Interesting point, Bob: the learning effects of testing are well documented, and presumably come from two things: opportunity for practice, and feedback. But we also know that the kind of feedback tests provide is among the weaker forms (according to Hattie’s meta-analysis). A well-designed course, such as the ones your group builds (right?), will have lots of opportunity for practice and high-quality, timely feedback. Those features probably do a good job of wiping out any test effect.

    I also wonder if the distraction measure has an age interaction. I suspect that in your organization, smartphone use of all kinds is probably more common among younger employees, in all contexts of work. So this may have nothing to do with the learning environment design in your course.

    Bravo for doing this kind of data-driven decision making! It should be standard practice for trainers and educators!

    Rob

  2. robertmulcahy Post author

    Hi, Rob! Alas, the course catalog at my firm far exceeds our instructional design capacity, so this course was not particularly interactive (not completely lecture, but heavily biased that way). We tend to invest more heavily in the design of courses based on shelf life, audience size, etc. The debate, at some level, was about whether the presence of an exam would create an incentive that might help learners in these more lecture-y courses who are tempted to multitask find the resolve to focus on the presenter instead. I presented the results to leadership this week; they were supportive of the findings (or non-findings) of the experiment, and I believe it will influence our instructional strategies going forward.
