It’s difficult in these political times not to picture a White House Press Secretary when Haidt writes:
If you want to see post hoc reasoning in action, just watch the press secretary of a president or prime minister take questions from reporters. No matter how bad the policy, the secretary will find some way to praise or defend it. Reporters then challenge assertions and bring up contradictory quotes from the politician, or even quotes straight from the press secretary on previous days. Sometimes you’ll hear an awkward pause as the secretary searches for the right words, but what you’ll never hear is: “Hey, that’s a great point! Maybe we should rethink this policy.”
The role of our conscious brain is to be our personal Baghdad Bob.
Haidt contends that being naturally groupish is a big part of the acrimonious political and cultural divisions we see today. To be an “us,” there needs to be an “other.” But he also points out that without groupishness, human culture as we know it probably would never have had the chance to evolve.
“Our tribal minds make it easy to divide, but without our long period of tribal living there’d be nothing to divide in the first place. There’d be only small families of foragers, eking out a living and losing most of their members to starvation during every prolonged drought.” (p. 212)
Haidt, following the lead of self-consciousness researcher Mark Leary, calls the function in our brain that constantly monitors our value as a relationship partner our “sociometer.”
I think part of the reason I’m interested in the role of reputation in human motivation is that at work I tend to advocate that learning is best and deepest when it is intrinsically rather than extrinsically motivated.
But if people are inherently prone to making good choices only when they’re being watched…
In a professional context, part of the answer might be to ensure there is a clear link between what you can learn in training and your job performance, and therefore your professional reputation.
“The most important principle for designing an ethical society is to make sure that everyone’s reputation is on the line all the time, so that bad behavior will always bring bad consequences.” (Haidt, p. 86)
That’s horrifying. But that alone doesn’t make it wrong. Of course, people clearly do perform selfless anonymous acts. Last week my wife, having found some cash on the ground, decided to pay it forward at Target and buy the cashier a gift card. (This actually turned out to be harder than it should have been: accepting gifts is against Target’s policy, for sound reasons, I’m sure, though I’m having trouble envisioning a scenario where it is materially advantageous to curry the favor of a Target cashier. But I digress.) Our kids saw her do it, but I think she would have done it anyway. She did tell me about it, but I don’t think she would have if the hassle of involving a manager hadn’t turned it into an interesting story.
Anyway, people sometimes do good things that hold no reasonable possibility of reputational reward. But that’s not Haidt’s point, I think; his question is whether a community populated with people worried about their reputations would out-compete one populated with people who aren’t. That’s a fascinating question, because if it would, evolutionary pressure would favor groups concerned with reputation.
“Why do we have this weird mental architecture? As hominid brains tripled in size over the last 5 million years, developing language and a vastly improved ability to reason, why did we evolve an inner lawyer, rather than an inner judge or scientist? Wouldn’t it have been most adaptive for our ancestors to figure out the truth, the real truth about who did what and why, rather than using all that brainpower just to find evidence in support of what they wanted to believe?”
(Haidt, p. 83)
If the role of the conscious mind is to be the press secretary for the subconscious mind, does reason play any role at all in our moral reasoning?
Haidt points to an experiment suggesting it can. Subjects were exposed to a scenario meant to elicit a disgust reflex, but one in which no one is hurt, emotionally or physically. They were then asked whether the protagonist in the scenario was wrong to act as they did. This is similar to experiments that Haidt himself has run.
The twist is that some subjects were asked to give an opinion immediately, while others had to wait two minutes before giving one. Subjects in the latter condition were more likely to say the actions were OK than those who gave an opinion right away, suggesting the rational mind can sway the moral mind.
It would be interesting to know how much of an experimental effect is going on here (i.e., waiting gives people time to reflect that they are part of an experiment, which could change their thought patterns).
Still, I have to believe that the conscious mind has influence over our moral reasoning centers. The question is how much, and how do we maximize that control?