Author Archives: robertmulcahy

About robertmulcahy

I am the director of learning for the assurance practice at RSM US, LLP. I have a doctorate from the University of Minnesota in Curriculum and Instruction (Learning Technologies).

Reflection Is a Form of Practice

The authors of Make It Stick highlight the principle that reflection is itself a form of practice: thinking back through a problem is useful rehearsal.

The recently revised learning standards for CPAs issued by the National Association of State Boards of Accountancy (NASBA) stipulate for the first time that classroom learning must be active. For now, the rules for minimum interactivity are, indeed, minimal. To award formal credit, a class must include at least one interaction per hour, and that interaction can be nearly anything, including asking participants to reflect silently for a few seconds on a given question.

Lecturing for 45 minutes and then asking participants to reflect silently on a question posed by the lecturer is not necessarily effective design. But I applaud NASBA both for requiring interactivity and for keeping the requirement open. The more prescriptive the targets, the more pro forma the execution. Keeping it open will certainly lead many developers to do the bare minimum, but it may also help developers take ownership of active learning and try to understand NASBA’s intent.


Quiz Before Teaching

One of the core principles in Make It Stick is that “trying to solve a problem before being taught the solution leads to better learning, even when errors are made in the attempt.” (p. 4)

I get it; finding out you don’t know something as well as you thought can make for a powerful learning moment.

On the other hand, I’ve tended to advise course developers to avoid asking learners right/wrong multiple-choice questions before actually teaching the content, arguing that doing so unfairly sets learners up for failure. (Intentional and actionable diagnostic pre-testing is an exception.) Philosophically and temperamentally, I much prefer setting learners up for as much success as possible. But I understand that there are proponents of failure-based learning, and I am open to the possibility that there are elements here I should add to my playbook.

Everybody’s Fool and Star Trek

Yesterday I finished Everybody’s Fool, a novel by the very talented Richard Russo.

One of the main characters experiences a sort of split personality due to a lightning strike; he splits into a good version and a bad version of himself, not unlike Kirk in The Enemy Within, who is split in two by a transporter malfunction.

That parallel to Star Trek didn’t occur to me until late in the book when, within a few pages of each other, Russo sprinkled in a couple of Star Trek references: the split character turns the air conditioning to stun shortly before giving the order to “make it so.” (The latter, of course, is a reference to the wrong era of Star Trek, but never mind.)

Anyway, I’m commenting here because I was too lazy to go back and try to find other Star Trek references, and a Google search came up empty. If someone in the future tries the same search, maybe they’ll find this page and can add additional references in the comments.

Learning Is Effortful

I just finished Make It Stick: The Science of Successful Learning by Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel. It’s interesting how much my frame has shifted over the years as I get farther from college and graduate school and deeper into helping professionals learn and develop. Make It Stick, while not exclusively about academic settings, is clearly geared toward academic learning, which inevitably raises questions about how it all applies in professional settings.

For instance, the very first principle in the book is that deep learning takes effort. If it feels easy, then it probably won’t stick.

I’m sure that’s generally true, but does a rich schema have a mitigating effect? Does it make learning feel easy (in the narrow area of expertise, anyway) while still allowing it to stick?

Case Studies and Confirmation Bias

I liked The End of Average, but I did find one stretch difficult: Todd Rose spends a chunk of the book describing inspiring case studies of major corporations that abandoned, to great commercial effect, HR and performance management systems that assume a single path to success.

My issue isn’t with the examples but with the apparent lack of a critical search for disconfirming cases. Finding cases that support your point is great and instructive. But many companies that embrace traditional methods also succeed wildly, so how do we know it was abandoning average that made the difference? What about companies that tried similar approaches but failed? Is there anything we can learn from them?

Benjamin Bloom and Mastery-Based Learning

I’ve long been aware of Benjamin Bloom (of Bloom’s Taxonomy fame) and his work on mastery-based learning. The argument goes something like this: schools try to move everyone at the same pace, but since most students can move either faster or slower than average on any given topic, a single pace guarantees that most of the class is either bored or falling behind.

But I’ll admit that I don’t think I’ve ever actually read Bloom (not even in graduate school; how is that possible?). So it was interesting when Todd Rose delved a little into Bloom’s work. According to Rose, Bloom found that if you create two otherwise identical classrooms and let one move at a traditional pace while the other allows individuals to move at their own pace, the percentage of learners who master the material by the end of the course rises from 20% to 90%.

Bloom also found that learners were jagged in the pace at which they moved. All learners had content they breezed through, and all had content they struggled with, even within the same subject. As Rose puts it, “equating learning speed with learning ability is irrefutably wrong.” (p. 133)

For me, Bloom’s work raises questions. Chief among them: Bloom was doing this work forty years ago; has it been replicated?