Are You Awake?

The MOOC I’m engaged with has a curious practice question strategy. Each video lecture is accompanied by one or two multiple choice questions. That’s good; practice questions help you gauge how well you understand the material and they promote deeper processing.

What’s curious is that the questions are purely at the fact level. To illustrate the difference, here’s an example of a fact-level question from the MOOC: “According to David Bordwell, the ‘plot’… (a) is just another word for story, (b) is the causal-chronological series of events, (c) does not have anything to do with the story, (d) is the order and duration of events as they are shown to us.” The chapter itself was trying in part to draw a distinction between story and plot, but to answer the question, all I had to do was identify the definition of one of the terms. Even then, all I really had to do was recognize which string of words I had heard verbatim in the video, whether or not I understood them.

The classic way to test understanding of concepts is through discrimination tasks. Here, perhaps, that would mean asking questions like, “The movie Apollo 13 depicts the 1970 flight of three astronauts and is based on real events. Of course, in order to condense a week-long flight into a two-hour movie, many of the actual events were left out. The events that were left out were part of the: (a) plot, (b) story.” Or, “As part of the process of creating Citizen Kane, Orson Welles had to decide which events to show as flashbacks and which events to leave out. When he was done, he had crafted a (a) plot, (b) story.”

The critical difference is that the question the designers used only asks me to recall or recognize a definition, which I may or may not understand. A better question strategy would be to ask me to apply my new knowledge. The practice question strategy in this class is weak throughout. Why?

The most straightforward explanation is that the designers just didn’t know better (or didn’t place value on the multiple choice questions, perhaps assuming that the real learning would happen in the longer assignments). But that got me to thinking. What if the designers were intentionally using low cognitive load questions not for purposes of practice, but merely for purposes of engagement?

In other words, what if they intentionally included softball questions just as a gentle way to help participants remain focused on the lecture? Learners can answer questions as they go, and the questions are so easy they can answer them without even pausing the video. Is that a good strategy instructionally?

Let’s think about this. The cost of asking these questions is cognitive cycles. If I’m answering questions while watching the video, that uses up cognitive resources that could otherwise be devoted to the video itself. In other words, if I’m thinking about the question while the video is playing, I can’t be thinking as hard about the video as I otherwise could.

But since the questions are easy, the cognitive burden is small. And if finding the answer to a germane, if shallow, question keeps my mind from wandering to unrelated thoughts (daydreaming), then that would be a net positive.

So, in MythBusters parlance, I’d rate this hypothesis as “plausible.” That said, I think I’d only resort to this as an instructional technique if I had to present a video that I knew was long and dry and there was nothing I could do about it. The videos for this MOOC are neither, so a better technique here would be more application-based review questions at the end of the videos.

I will say that the use of fact-level but at least on-topic practice questions is superior to the approach enabled by some webcasting tools. Some tools, for attendance and compliance purposes, by default pop up the equivalent of “Are you awake?” questions at random times: almost literally, “Click to prove you are still here” or “Write down the following code to receive credit at the end of this webcast.” This approach drives me crazy because it is genuinely destructive to learning; it diverts cognitive resources to irrelevant processing. (Similarly, I’ve seen developers, when forced to include interactivity to establish attendance, fall back on questions like “Who is going to win the Super Bowl this year?” rather than think of germane ones.) The interactive elements of instruction are where the most learning happens. Designers, make them meaningful.
