Do you know people who are just really great at solving complex problems, seemingly no matter the subject? The notion of intelligence is ill-defined, but it’s fair to say that natural variation and talent play a role in problem solving ability.
It’s also pretty clear that problem solving acuity is affected by more than raw talent. Fatigue and health can play a role, as do motivation, determination, and creativity. I’ve lumped all these factors together under the acuity umbrella.
Why lump all these things into one category? Largely because, of the three categories, it’s the one that usually matters least. If I, say, need help figuring out why my bread loaves are collapsing, and I know a food scientist who doesn’t do much baking, a frequent baker who learned by watching his or her parents but doesn’t really know why it works, and a brilliant friend who knows nothing about baking, my brilliant friend would not be the first one I call for help.
In fact, my impulse is to make the acuity circle smaller than the others, but really the relative sizes of the circles are probably variable. For instance, the more open-ended and gnarly the problem, and the more it demands a creative, novel solution, the more acuity helps. In other words, acuity is, I think, relatively more important for the person charged with coming up with strategies for reducing the federal debt than it is for the person trying to figure out how to start a stubborn car on a bitterly cold day.
From a training perspective, that means when designing instruction I focus on knowledge and experience. Trying to make people generically better problem solvers is a poor investment compared to conveying useful knowledge and experience. The one exception here is motivation. It’s important for instruction to help learners understand why they should care about the problem enough to invest themselves in solving it.
I have actually been involved in a couple of projects that were intended to make learners better overall problem solvers. A long time ago I worked on a computer-based project that tried to teach problem solving generically by dropping learners into a number of disparate problems and providing resources and coaching. It tried to build experience and confidence by showing the value of strategic thinking regardless of the situation. To the market’s credit, it recognized that problem solving ability can’t be taught generically, and the product failed. (While the product was a commercial and instructional failure, I’ll note that I learned a great deal. I interacted with high-level thinkers regarding problem solving and learned a lot about collaborating across cultures.)
At McGladrey, I currently have a small role in an effort to help auditors exercise greater professional judgment and become more critical thinkers. This project is being done in partnership with several professors at BYU. At first, I was worried that this project would repeat the mistakes of the failed product I was involved with all those years ago, but I think the project team has made some really smart decisions. While there is a generic professional judgment framework at the center of the model, the heart of the instruction focuses on several common biases that tend to cloud our judgment (such as anchoring) and makes a concerted effort to apply bias-clearing kung-fu to specific common and important auditing areas. This way, even if the auditors don’t generalize the problem solving strategies to all domains, they will still become better auditors, at least in the specific areas where they practice strategies for overcoming the bias. Smart.