McGladrey doesn’t have a huge library of computer-based training (CBT) courses, but it is growing fast. The courses we do have have been well received, but one recurring criticism from users is that they don’t like having to click forward to advance to the next screen.
The algorithm I’ve been using to decide whether a given screen should auto-advance once the audio narration is complete goes like this: if the screen includes a significant amount of information, keep the advancement of the slide under the learner’s control. This encourages learners to reflect on the content and prevents us from advancing before they have had a chance to read or absorb the written or graphical information.
On the other hand, if the screen itself has minimal information, or the information builds on the next screen, it makes sense to auto-advance as a courtesy.
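The decision rule above can be sketched in a few lines of code. This is only an illustration: the `Screen` class, the field names, and the word-count cutoff separating “minimal” from “significant” on-screen text are all hypothetical, not part of any real authoring tool.

```python
from dataclasses import dataclass

@dataclass
class Screen:
    on_screen_words: int      # words visible on the slide (hypothetical measure of density)
    builds_onto_next: bool    # True if the content continues on the following screen

# Hypothetical cutoff separating low-density from high-density screens.
LOW_DENSITY_WORDS = 25

def should_auto_advance(screen: Screen) -> bool:
    """Auto-advance only when the screen is low-density or its
    information builds onto the next screen; otherwise leave the
    learner in control so they can read and reflect."""
    if screen.builds_onto_next:
        return True
    return screen.on_screen_words <= LOW_DENSITY_WORDS

# A dense summary slide stays under learner control:
print(should_auto_advance(Screen(on_screen_words=120, builds_onto_next=False)))  # False
```

The point of writing it down this way is that the rule has exactly one judgment call in it (the density threshold), which is where inconsistency creeps in when different designers apply it by eye.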
And, indeed, when I am the designer of the course, I design with this algorithm squarely in mind. If I feel that for a particular screen I don’t have any visual information that would meaningfully augment the narration, I keep the screen clean, with low information density, perhaps just a few words in a few bullet points to create an organizational structure. Following the redundancy principle of multimedia learning, I want to augment spoken information only with complementary information, not redundant information.
If, on the other hand, I can populate the screen with something meaningful that has high information density, I will. At the end of these screens I’ll add “click Next to continue” or similar to the audio.
The problem is that it’s not always an experienced instructional designer doing the development. Technical SMEs tend to design around bullet points with variable amounts of information on the screen: some slides with low information density, some with high, and everything in between. If, when building these activities, I simply use my judgment to decide which screens are low enough in information density to warrant auto-advance, the result will feel arbitrary and distracting to learners. Better in those cases to pick one method or the other, either all auto-advance or all manual advance, and stick with it. But which one? Manual advance is probably more instructionally effective, but learners seem to prefer auto-advance. I’m OK with annoying learners, to a point, if I think they will learn more, but I don’t know that manually advancing slides carries enough of an instructional advantage to be worth it.
Given that my audience is professionals with a lot of background knowledge, and given the presence of pause and replay buttons, I’ve started to lean more toward giving learners the convenience of auto-advance. I’ve been talking to my LMS team, though, about the possibility of launching, at random, one of two versions of a given self-study (say, one with auto-advance and one without) and tracking time on task, test results, and satisfaction. This would be a really interesting way of determining whether specific design decisions have measurable effects.
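The experiment described above amounts to a simple randomized launch: each enrollment gets one of two course versions, and the LMS logs the outcome metrics for later comparison. Here is a minimal sketch of that idea; the `Enrollment` fields, version names, and assignment function are invented for illustration, since the actual mechanism would depend on what the LMS supports.

```python
import random
from dataclasses import dataclass

# The two hypothetical versions of the same self-study course.
VERSIONS = ("auto-advance", "manual-advance")

@dataclass
class Enrollment:
    learner_id: str
    version: str
    time_on_task_min: float = 0.0  # filled in by the LMS after completion
    test_score: float = 0.0
    satisfaction: int = 0          # e.g., a 1-5 survey rating

def assign_version(learner_id: str, rng: random.Random) -> Enrollment:
    """Randomly assign a learner to one of the two course versions."""
    return Enrollment(learner_id=learner_id, version=rng.choice(VERSIONS))

# Simulated assignment of a small cohort; a real LMS would then record
# time on task, test results, and satisfaction for each enrollment.
rng = random.Random(42)
cohort = [assign_version(f"learner-{i}", rng) for i in range(10)]
counts = {v: sum(e.version == v for e in cohort) for v in VERSIONS}
print(counts)
```

With assignments logged this way, comparing the two groups on each metric is a straightforward aggregation, which is what would make this a genuinely measurable test of a single design decision.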