Category Archives: Instructional design

Controls

I direct learning for a CPA firm. I’m not a CPA, but I feel like I learn a lot from them.

One concept that auditors talk a lot about is controls. Controls are processes, tools, and checkpoints that businesses have in place to guard against error and fraud. For instance, if a large transaction requires the signature of the CFO, that’s a control. Password-protecting critical financial systems is a control.

In short, controls are a concept auditors understand well, because they know that businesses with poor controls in place are going to be a lot harder to audit.

I’ve used controls as a way to explain the importance of measuring mastery of learning objectives. When an auditor (indeed, almost any professional) is asked to design a course for less experienced professionals, the default is typically to treat it more like a presentation than a course: little interactivity, and no means for instructors to assess how well learners grasp the material before moving on to the next topic.

One could argue in good faith that it is the learner’s responsibility to learn: that if a professional is struggling, it is on them to recognize that reality and take steps to ameliorate it. In reality, though, that stance puts the firm at risk.

So when I talk about introducing checkpoints and polling questions and case studies, I sometimes talk about them in terms of controls. Without those elements built into the course, we have no way of knowing whether the course was effective (and, more formatively, instructors have no way of knowing whether what they are doing is working or whether they need to do something else).

Auditors know what separates a strong control from a weak one, so this becomes a powerful way to make the case for investing in classroom activities that provide evidence of learning.

Designing Classroom Instruction with a CBT Background

I learned the craft of instructional design by building computer-based training. Elearning has limitations not present in the classroom, and vice versa, so you learn to design toward the strengths of your medium. I sometimes wonder how designing exclusively for one medium early in one’s career affects one’s ability to design for other media later on (a crystallized design sense, if you will).

I recently observed a course I didn’t design, and the last section of the class was devoted to student presentations. I was unsure about this; the presentations took up a significant portion of the class, and I always worry about how much learners get out of watching their classmates present. For those reasons, I’ve never really incorporated student presentations into my ID toolkit.

I think it worked, though. The learners were interns at the firm, and creating presentations in teams helped them get to know each other, creating potentially valuable connections, which fit with the larger goals of the program. It allowed them to go a little deeper in a topic while positively impacting the classroom dynamic.

Anyway, I’ve certainly designed experiences for classrooms that capitalized on the strengths of the medium and would not be easy to replicate in elearning, but I sometimes wonder what blind spots I (or any designer) have when designing for media outside the core of my experience.

Compression: Live Training, Elearning, and Time on Task

I recently learned a term for something I’ve thought about in the past but didn’t know was a thing: compression. Compression refers to the fact that it takes learners longer to complete a live class than an equivalent elearning. In other words, if a four-hour class is offered in both a live classroom version and a self-paced elearning version, participants will complete the self-paced version more quickly with the same level of achievement.

Apparently (and I didn’t know this either) the rule of thumb for compression is 50%: a four-hour live class can reasonably be turned into a two-hour self-study. The persuasive utility of this, of course, is huge. What leader wouldn’t want their people spending only half as much time in training?

There are some enormous caveats. One is that time spent practicing cannot be compressed, so the more the training focuses on practicing skills, the less compression is possible. It also appears that the elearning has to be text-based in order to allow significant compression: people read faster than they talk. The elearning we produce at my firm is audio-based, so one wouldn’t expect very much compression.
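To make those two caveats concrete, here is a rough back-of-the-envelope sketch of how I think about them together. This is my own illustration, not anything from the compression literature; the function and the sample numbers are hypothetical, treating practice time as incompressible and applying the 50% rule of thumb only to the rest.

```python
def estimated_self_paced_hours(live_hours, practice_hours, compression_factor=0.5):
    """Rough estimate of self-paced seat time for a live class.

    Assumes, per the rule of thumb above, that only the non-practice
    portion of the class compresses, and that it compresses by the given
    factor. Illustrative only; not research-backed.
    """
    lecture_hours = live_hours - practice_hours
    return practice_hours + compression_factor * lecture_hours


# A four-hour class with one hour of hands-on practice:
# 1 + 0.5 * 3 = 2.5 estimated self-paced hours.
print(estimated_self_paced_hours(4, 1))
```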

That raises the question: should our elearning be audio-based? We use audio for a number of reasons. The most important is that, from a cognition perspective, using the audio channel for narration and the visual channel for complementary information (charts, organizational bullet points, tables, etc.) leads to better learning.

The question then morphs into: is the increase in the quality of the learning worth the investment? Experienced professionals can take in new information very efficiently. If training is focused on them, the increase in learning may be immaterial compared to a time savings of 50%.

It would be interesting to offer the same elearning two ways (audio or text), randomize which one people get, and measure time on task, satisfaction, and exam performance across the two conditions.
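If we ever ran that, the mechanics of assignment and comparison would be simple enough. Here is a minimal sketch under made-up assumptions: the participant IDs, completion times, and exam scores are fabricated placeholders, and a real analysis would add a satisfaction measure and a proper significance test on top.

```python
import random
import statistics

# Hypothetical sketch of the audio-vs-text comparison described above.
# Participant IDs, completion times, and exam scores are all made up.

def assign_conditions(participant_ids, seed=2016):
    """Randomly split participants evenly between the two versions."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("audio" if i < half else "text") for i, pid in enumerate(ids)}

def mean_by_condition(results, assignments, metric):
    """Average a metric (e.g. 'minutes' or 'exam_score') per condition."""
    buckets = {"audio": [], "text": []}
    for pid, record in results.items():
        buckets[assignments[pid]].append(record[metric])
    return {condition: statistics.mean(values) for condition, values in buckets.items()}

assignments = assign_conditions(["p1", "p2", "p3", "p4"])
results = {
    "p1": {"minutes": 95, "exam_score": 0.88},
    "p2": {"minutes": 120, "exam_score": 0.90},
    "p3": {"minutes": 70, "exam_score": 0.84},
    "p4": {"minutes": 110, "exam_score": 0.91},
}
print(mean_by_condition(results, assignments, "minutes"))
print(mean_by_condition(results, assignments, "exam_score"))
```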

When Learner Perceptions Are at Odds with Instructional Design Principles

A group inside our firm started coming to me about a year ago asking for help designing their webcasts to make them more compelling. They chiefly wanted help with visual appeal, so I started there, taking their text-heavy slides and trying to create visual analogs: replacing, for instance, a written description of a process with a flowchart. I aimed for slides that complemented the speakers rather than competing with them, which meant slides that were much lighter.

I tried to argue for packaging all of that wonderful elaboration into a dedicated participant guide, but I couldn’t convince them that the value would be worth the development effort.

I didn’t notice right away that, after a few webcasts, they stopped asking for help. I was dismayed to hear recently through back channels that they believed I’d dumbed the content down too much, and that they had gotten that feedback from participants.

That hurt! There were a couple of things to unpack there. One: what had I done or not done that caused the group to just stop coming to me rather than having a conversation? How had I failed to build trust?

Two: I’m confident that my prescriptions were instructionally sound, but I have to take seriously the charge that what I provided didn’t match expectations. There are three possibilities here. The first is that one or more of the team believed what I was doing was not in their interests, so any negative feedback they got fed a confirmation bias. The second is that I’m wrong and really did make their instruction less effective for their target population. The third is that we were both right: I made the instruction theoretically better, but because it didn’t match the learners’ expectations, they found it distracting and thus learned less. All three possibilities are interesting to think about.

Bibliography for My “Storytelling and Instructional Design” Presentation at ATD

I presented a GTC Talk at the ATD Learning that Counts conference this week. I was thrilled by the turnout, and honored by the comments people shared with me afterwards.

I committed to posting my bibliography to this site afterwards; here it is.

If you were at the talk and have feedback, or if you’ve read any of these books and have opinions, please leave a comment or otherwise contact me. I’d love to hear from you.

Writing Exam Questions Changes Your Perspective

It’s interesting to think about and debate the effects of exams on learners, but what are the effects on course developers?

I was helping a couple of SMEs create exam questions recently, and the dialog between the SMEs was really interesting. “Do you think we do enough in the course to really help learners understand that concept? Maybe we’ll need another example. Do you think we make that point clearly enough?” And so on. Crafting exam questions really made them think hard about what they were teaching and how they were teaching it. I don’t always see this kind of introspection sparked, but it is a lot of fun when I do.

Back when I was first learning instructional design, I was taught to write the objectives first, then the exam questions, and only then start designing the course. The idea is that if you have difficulty writing exam questions, you may lack clarity around your objectives. It was great advice, and it has saved me a ton of design time over the years.

Research of Indeterminate Origin

I saw a line graph the other day that showed learning decay over time. The research was credited to something called the Research Institute of America, not a research body I was familiar with, but, hey, interesting research is interesting.

The graph, which looked like this:

[Figure: line graph showing learning retention dropping steeply over time]

raised all sorts of questions in my head. What was the learning event? How was it designed? What was the expertise level of the participants? What was the nature of the skills taught? What age?

So I did some digging. And, as far as I can tell, the original research doesn’t exist. The figure is often cited in infographics and books, but never in research journals. I imagine that at some point someone either overgeneralized from a real study (perhaps casually credited to “an American research institution”) or made the numbers up to make a directionally correct point in a particular context.

My guess is that everything can be traced back to this:

[Figure: the Ebbinghaus forgetting curve]

Looks eerily similar, doesn’t it? The numbers have drifted a little, but that’s what you’d expect from a few rounds of internet telephone.

In any case, the original study was about people trying to memorize lists of nonsense syllables, which is hardly a good analog (I hope) of what goes on in most classrooms. So: interesting, and important, but a big leap to conclude that classroom knowledge will reliably decay at a certain rate. (For what it’s worth, I believe the most compelling work on learning decay has been done in the context of spaced learning.)

This sort of thing–propagating research of indeterminate or suspect origin–really bugs me. That said, if I’m wrong and the Research Institute of America really has done and published this research, please comment below to help set the record straight.