Category Archives: Instructional design

Reflection Is a Form of Practice

The authors of Make it Stick highlight the principle that reflection is itself a form of practice: thinking back through a problem is a useful kind of rehearsal.

The recently revised learning standards for CPAs put out by the National Association of State Boards of Accountancy (NASBA) stipulate for the first time that classroom learning must be active. For now, the rules for minimum interactivity are, indeed, minimal. To grant formal credit, a class must include at least one interaction per hour, and that interaction can be nearly anything, including asking participants to reflect silently for a few seconds on a given question.

Lecturing for 45 minutes and then asking participants to reflect silently on a question asked by the lecturer is not necessarily effective design. But I applaud NASBA both for requiring interactivity and for keeping the requirement open. The more prescriptive the targets, the more pro forma the execution. Keeping it open will result in many developers trying to do the minimum, for sure, but it may also help developers take ownership of active learning and try to understand NASBA’s intent.

Quiz Before Teaching

One of the core principles in Make it Stick is “trying to solve a problem before being taught the solution leads to better learning, even when errors are made in the attempt.” (p. 4)

I get it; finding out you don’t know something as well as you thought can make for a powerful learning moment.

On the other hand, I’ve tended to advise course developers to avoid asking learners right/wrong multiple choice questions before actually teaching the content, arguing that they are unfairly setting learners up for failure. (Intentional and actionable diagnostic pre-testing is an exception.) Philosophically and temperamentally, I much prefer trying to set learners up for as much success as possible. But I do understand that there are proponents of failure-based learning and am open to the possibility that there could be elements here that I should add to my playbook.

Controls

I direct learning for a CPA firm. I’m not a CPA, but I feel like I learn a lot from them.

One concept that auditors talk a lot about is controls. Controls are processes, tools, and checkpoints that businesses have in place to guard against error and fraud. For instance, if a large transaction requires the signature of the CFO, that’s a control. Password-protecting critical financial systems is a control.

In short, controls are a concept auditors understand well, because they know that a business with weak controls in place is going to be a lot harder to audit.

I’ve used controls as a way to explain the importance of measuring mastery of learning objectives. When an auditor–indeed, most any professional–is asked to design a course for less experienced professionals, their default is typically to treat it more like a presentation than a course, including little interactivity and no means for the instructor to assess how well learners grasp the material before moving on to the next topic.

One could argue in good faith that it is the learner’s responsibility to learn: that if a professional is struggling, it is on them to recognize that reality and take steps to ameliorate it. In reality, though, that stance puts the firm at risk.

So when I talk about introducing checkpoints and polling questions and case studies, I sometimes talk about them in terms of controls. Without those elements built into the course, we have no way of knowing if a course was effective (and more formatively, instructors will have no way of knowing whether what they are doing is working or whether they need to do something else).

Auditors know what separates a strong control from a weak one, so this becomes a powerful way to make the case for investing in classroom activities that provide evidence of learning.

Designing Classroom Instruction with a CBT Background

I learned the craft of instructional design by building computer-based training. Elearning has limitations not present in the classroom, and vice versa. You learn to design toward the strengths of your medium. I sometimes wonder how designing exclusively for one medium early in one’s career affects one’s ability to design for other media–a crystallized design sense, if you will.

I was recently observing a course I didn’t design, and the last section of the class was devoted to student presentations. I was unsure about this; it took up a significant portion of the class, and I always worry whether learners get much out of watching their classmates present. For those reasons, I’ve never really incorporated student presentations into my ID toolkit.

I think it worked, though. The learners were interns at the firm, and creating presentations in teams helped them get to know each other, creating potentially valuable connections, which fit with the larger goals of the program. It allowed them to go a little deeper in a topic while positively impacting the classroom dynamic.

Anyway, I’ve certainly designed experiences for classrooms that capitalized on the strengths of the medium and would not be easy to replicate in elearning, but I sometimes wonder what my blind spots are (not just me–any designer) when designing for media outside of the core of my experience.

Compression: Live Training, Elearning, and Time on Task

I learned a term recently for something I’ve thought about in the past but didn’t know was a thing: compression. Compression refers to the fact that a live class takes learners longer to complete than an equivalent elearning. In other words, if a four-hour class is offered in both a live classroom version and a self-paced elearning version, participants will complete the self-paced version more quickly with the same level of achievement.

Apparently–and I didn’t know this either–the rule of thumb for compression is 50%. A four-hour live class can reasonably be turned into a two-hour self-study. The persuasive utility of this, of course, is huge. What leader wouldn’t want their people to spend only half as much time in training?

There are some enormous caveats. One is that time spent practicing cannot be compressed, so the more focused the training is on practicing skills, the less compression is possible. It also appears that the elearning has to be text-based in order to allow significant compression: people read faster than they talk. The elearning we produce at my firm is audio-based, so one wouldn’t expect very much compression.

That raises the question: should our elearning be audio-based? We use audio for a number of reasons. The most important is that, from a cognition perspective, using the audio channel for narration and the visual channel for complementary information (charts, organizational bullet points, tables, etc.) leads to better learning.

The question then morphs to: Is the increase in the quality of the learning worth the investment? Experienced professionals can take in new information very efficiently. If training is focused on them, it’s possible that the increase in learning may be immaterial compared to a time savings of 50%.

It would be interesting to offer the same elearning two ways (audio or text), randomize which one people get, and measure time on task, satisfaction, and exam performance across the two conditions.
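If I ever ran that experiment, the mechanics would be simple enough. Here is a minimal sketch (the names and data are hypothetical, and it assumes the LMS can export a per-learner record of condition and minutes spent) of the random assignment plus a basic summary of time on task:

```python
import random
from statistics import mean, stdev

def assign_conditions(learner_ids, seed=2024):
    """Randomly assign each learner to the audio or text version of the course."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {learner: rng.choice(["audio", "text"]) for learner in learner_ids}

def summarize_time_on_task(records):
    """records: iterable of (condition, minutes) pairs pulled from an LMS export."""
    by_condition = {"audio": [], "text": []}
    for condition, minutes in records:
        by_condition[condition].append(minutes)
    return {
        condition: {"n": len(times), "mean": mean(times), "sd": stdev(times)}
        for condition, times in by_condition.items()
        if len(times) > 1  # need at least two observations for a standard deviation
    }

# Example usage (made-up IDs and numbers):
# conditions = assign_conditions(["L001", "L002", "L003", "L004"])
# summarize_time_on_task([("audio", 120), ("audio", 110), ("text", 65), ("text", 70)])
```

Satisfaction ratings and exam scores could be summarized the same way; whether any difference in learning would be worth the extra seat time is still a judgment call.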

When Learner Perceptions Are at Odds with Instructional Design Principles

A group inside our firm started coming to me about a year ago asking for help designing their webcasts to make them more compelling. They chiefly wanted help with visual appeal, so I started there, taking their text-heavy slides and trying to create visual analogs–substituting, for instance, a flowchart for a written description of a process. I focused on slides that complemented the speakers rather than competing with them, which meant slides that were much lighter.

I tried to argue in favor of packaging all of that wonderful elaboration they had put on the slides into a dedicated participant guide, though I couldn’t convince them that the value would be worth the development effort.

I didn’t notice right away that, after a few webcasts, they stopped asking for help. I was dismayed to hear recently through back channels that they believed I’d dumbed down the content too much, and had gotten that feedback from participants.

That hurt! There were a couple of things to unpack there. One: what had I done or not done that caused the group to just stop coming to me rather than having a conversation? How had I failed to build trust?

Two: I’m confident that my prescriptions were instructionally sound, but I have to take seriously the charge that what I provided didn’t match expectations. There are three possibilities here. One is that one or more of the team believed what I was doing was not in their interests, so any negative feedback they got fed a confirmation bias. Two is that I’m wrong and really did make their instruction less effective for their target population. The third is that we were both right, that while I made the instruction theoretically better, because it didn’t match the learners’ expectations, they found it distracting and thus learned less. All three possibilities are interesting to think about.

Bibliography for My “Storytelling and Instructional Design” Presentation at ATD

I presented a GTC Talk at the ATD Learning that Counts conference this week. I was thrilled to see the turnout, and was honored by the comments I received afterwards.

I committed to posting my bibliography to this site afterwards; here it is.

If you were at the talk and have feedback, or if you were there and read any of these books and have opinions, please leave a comment or otherwise contact me. I’d love to hear from you.