Stakes

At least in a professional setting, my default position on assessment is that lower-stakes assessment should be the first option (where “low stakes” means there are no consequences for getting questions wrong, as with polling). Professionals should not need external incentives to take assessments seriously, and high-stakes assessments carry a lot of overhead costs. The point of assessment should be to let participants and instructors gauge the current level of understanding, and that can usually be done with low-stakes assessments. (High-stakes testing is still useful or even necessary sometimes; it just shouldn’t be the default.)

In Make It Stick, this gave me pause:

Make quizzing and practice exercises count toward the course grade, even if for very low stakes. Students in classes where practice exercises carry consequences for the course grade learn better than those in classes where the exercises are the same but carry no consequences. (p. 227)

Courses in professional environments don’t necessarily have grades, but that still raises the question of whether attaching some kind of consequence to practice questions in professional training would increase learning, as the authors assert.

To the extent that professionals who participate in training should understand the direct relationship between what they are learning and their ability to grow in their careers, I’d imagine that external incentives or accountability shouldn’t be necessary. The exception is professionals who are only in the training because they have been required to attend, or because they need credits for their licensure; in that case external incentives might make sense, but there are also bigger issues to solve.


Intelligence and Picking Winners at the Horse Track

In a nice coincidence, two books I’m reading–Make It Stick and Everybody Lies–both talk about intelligence and practical problem solving in the context of identifying horses that will be winners at the track.

In Make It Stick, the authors speak of successful horse handicapping as a demonstration of practical intelligence–as opposed to analytical or creative intelligence, the other two types in Robert Sternberg’s model. The success of handicappers is unrelated to IQ, they point out. Handicappers develop “complex mental models involving as many as seven variables” (p. 150), and one doesn’t have to be intelligent in the IQ sense to develop this ability.

In Everybody Lies, Seth Stephens-Davidowitz also discusses individuals who are able to successfully identify gifted horses, but his discussion is geared toward the use of data. For him, the users of new data and new analysis techniques are likely to have the biggest impact in situations where old ways of using data aren’t very successful–such as, he contends, identifying likely winners among horses.

Successful handicapper Jeff Seder’s innovation, Stephens-Davidowitz points out, was to systematically measure every element of a horse he could, both physical and familial, and to use data analysis to determine which measurements correlate with winning. Using x-rays, he finally discovered that the one variable that makes a meaningful difference is aorta size.
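Something like the following toy sketch captures the shape of that approach (the data, column names, and the choice of simple correlation as the measure are my hypothetical stand-ins for illustration, not Seder’s actual dataset or method): measure everything you can about each horse, then let the data tell you which measurements track with winning.

```python
import pandas as pd

# Hypothetical measurements for a handful of horses; none of this is Seder's real data.
horses = pd.DataFrame({
    "height_hands":   [15.2, 16.0, 15.3, 16.1, 15.1, 16.3],
    "weight_lbs":     [990, 1100, 1010, 1120, 980, 1150],
    "sire_win_pct":   [0.12, 0.18, 0.10, 0.22, 0.09, 0.25],
    "aorta_size_cm":  [2.1, 2.9, 2.2, 3.0, 2.0, 3.2],
    "career_win_pct": [0.05, 0.30, 0.08, 0.35, 0.04, 0.40],
})

# Correlate each measured variable with the outcome and see which one stands out.
correlations = horses.drop(columns="career_win_pct").corrwith(horses["career_win_pct"])
print(correlations.sort_values(ascending=False))
```

The real analysis would obviously involve far more horses, variables, and statistical care, but the core move is the same: let the data, rather than a handicapper’s intuition, nominate the variable that matters.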

If Stephens-Davidowitz is right, it demonstrates that Sternberg’s practical intelligence is vulnerable to significant disruption from analytical intelligence. Practical intelligence is also likely more prone to biases: if Stephens-Davidowitz is correct that horse evaluation was largely ineffective before Jeff Seder, then identifying successful horse handicappers would be prone to survivorship bias; those who got lucky and picked winners would attribute their success to practical intelligence rather than luck.

Self-Fulfilling Learning Styles

The authors of Make It Stick make an interesting point about learning styles that had never occurred to me. If someone believes that he or she is a visual learner, for instance, that belief can become self-defeating when he or she is faced with something to read. He or she may decide it’s pointless to even try.

A good friend and colleague once pointed out to me that while, sure, Gardnerian learning styles probably don’t really exist, a positive side effect of the model is to help teachers think about instruction in new ways. So he felt it was a net positive. But what if associating yourself with a particular style undermines your willingness to take in information in different ways? Do you end up opting out of a lot of learning that would otherwise have taken place? Fascinating.

Desirable vs Undesirable Difficulties

I like how the authors of Make It Stick use the terms desirable and undesirable difficulty to describe positive and negative traits of learning situations. Undesirable difficulty, I think, is a more inclusive idea than extraneous cognitive load because it includes internally generated emotions like anxiety.

The authors indicate that there is some experimental evidence that anxiety can be ameliorated by acknowledging that the material is difficult and talking with learners about how struggle is healthy and desirable because it is a sign that learning is happening.

Addendum: The authors give credit later in the chapter to Elizabeth and Robert Bjork for coining the term desirable difficulties. They quote the Bjorks: “[Desirable difficulties] trigger encoding and retrieval processes that support learning, comprehension, and remembering. If, however, the learner does not have the background knowledge or skills to respond to them successfully, they become undesirable difficulties.”

Varied Practice

Make It Stick starts the chapter on massed practice with an interesting research anecdote about a group of eight-year-olds learning to toss beanbags into buckets. Half the group practiced with a bucket three feet away, and half practiced by throwing the beanbag into buckets two and four feet away.

They did this for 12 weeks and were then tested on throwing a beanbag into a bucket three feet away. The subjects who had practiced on the two- and four-foot buckets did much better than the ones who had practiced the actual task. Interesting!

And counterintuitive. I would have anticipated the opposite, on the strength of the principle that the closer the practice is to the real world task (or the exam), the better the preparation.

If I wanted to learn a song in a piano book, clearly I’d be better off practicing that song than the song before and after it in the book. But, thinking broadly, maybe I would learn the song better if I practiced it in varied keys. And in the long run perhaps learning a variety of styles of music would help me become stronger in my preferred genre.

In terms of cognitive skills, it’s hard to know how far to take this, and the authors acknowledge that more research is needed. However, it seems reasonable to me that if you were an aspiring CPA learning how to audit cash at financial institutions (banks), you’d benefit from practicing auditing cash at other types of businesses. The risk is that you waste time learning and applying principles and facts that don’t apply to any of your actual clients; the upside, maybe, is that the varied practice makes you a stronger auditor of financial institutions.

Actually, the bigger risk I see is that if you specialize in auditing financial institutions, taking time to practice auditing other kinds of institutions may force you to deal with concepts that are foreign to you, raising the cognitive load higher than it needs to be. Cognitive load is not an issue when you are throwing beanbags into buckets, but it is with complex cognitive problem solving.

Thus, a model for using varied practice to learn cognitive skills would have to include guidance on which cognitive variations are useful and which are harmful.

Memorization vs Complex Problem Solving

In my last post I commented that the emphasis in Make It Stick on recall as opposed to application made the book feel a little academic. The authors do address this point, though they are a little snarky.

Hmmm. If memorization is irrelevant to complex problem solving, don’t tell your neurosurgeon….Pitting the learning of basic knowledge against the development of creative thinking is a false choice. Both need to be cultivated. The stronger one’s knowledge about the subject at hand, the more nuanced one’s creativity can be in addressing a new problem. (p. 30)

It’s certainly true that knowledge within a problem domain is essential, and that greater depth of understanding will likely produce better solutions. But that understanding has to be built on coordinating facts, concepts, principles, and well- to ill-structured procedures–distinctions I wish the authors of Make It Stick had addressed more carefully. The type of knowledge needed to solve problems matters, and I can see how critics might read Make It Stick as focused primarily on fact-level learning.

The Testing Effect

I’ve written previously about the Posttest Paradox. Make It Stick, in contrast, speaks of a “testing effect,” which is the idea that retrieving information from memory–say, for an exam–increases your ability to retrieve that knowledge later.

I don’t like the term “testing effect” because it implies formal exams. I’ve spent the last few years cautioning my firm that exams have hidden costs and may not be the best way to achieve its objectives, so calling this the “testing effect” undermines that message. In reality, the effect sounds like it has more to do with practice and application than with exams per se.

Their alternative name, the “retrieval-practice effect,” is a little better, but not exactly memorable.

There’s also a lot of emphasis in the book on retrieval. While the ability to access important knowledge matters for problem solving, in the real world people can also look stuff up. I’d have liked to see more focus on conceptual understanding and the ability to generalize to related problems.