If the role of the conscious mind is to be the press secretary for the subconscious mind, does reason play any role at all in our moral reasoning?
Haidt points to an experiment that suggests it can. Subjects were exposed to a scenario meant to elicit a disgust reflex, but in this particular scenario no one is hurt, emotionally or physically. Subjects were asked whether the protagonist in the scenario was wrong to act as they did. This is similar to experiments that Haidt himself has run.
The twist is that some subjects were asked to give an opinion immediately, while others had to wait two minutes before giving one. Subjects in the latter condition were more likely to say the actions were OK than those who gave an opinion right away, suggesting the rational mind can sway the moral mind.
It would be interesting to know how much of an experimental effect is going on here (i.e., people have time to reflect that they are part of an experiment, which could change their thought patterns).
Still, I have to believe that the conscious mind has influence over our moral reasoning centers. The question is how much, and how do we maximize that control?
Haidt, while exploring how people make judgments, notes research from Alex Todorov indicating that if you show a person pictures of two candidates they’ve never seen before (in this case, from national Senate and Congressional races), they’ll pick the winner correctly two-thirds of the time, even if shown the pictures for only a tenth of a second.
First, a word about the presentation of statistics for this. When I first read that statistic, my mind was blown. Two-thirds! It was only after thinking about it for a bit that I realized that, with only two candidates, people would get it right half the time on average just by flipping a coin. That made two-thirds seem less impressive, which is too bad, because it really is impressive.
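To make the base-rate point concrete, here is a quick back-of-the-envelope calculation (my own framing, not from Haidt or Todorov): with two candidates, chance is 50%, so the interesting numbers are the lift above chance and how much of the chance-to-perfect gap that lift closes.

```python
# Base rate for a two-candidate guess: a coin flip is right 50% of the time.
chance = 0.5
observed = 2 / 3  # Todorov's reported accuracy

# Absolute improvement over random guessing.
lift = observed - chance          # 1/6, about 0.167

# Fraction of the gap between chance (0.5) and perfection (1.0) that is closed.
relative = lift / (1 - chance)    # 1/3, about 0.333

print(f"absolute lift over chance: {lift:.3f}")
print(f"share of chance-to-perfect gap closed: {relative:.3f}")
```

Seen this way, a tenth-of-a-second glance closes a third of the gap between coin-flipping and perfect prediction, which is why the statistic still deserves to be impressive.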
Anyway, one can think of lots of confounds for this (age, gender, physical attractiveness), but it appears Todorov controlled for all of them. The one factor that did correlate with success in picking out the winning candidate was the perception of competence.
It would be great to know more about that. What are the specific visual markers that make one appear competent? (From the Todorov article linked above, it sure appears that the eyes have a lot to do with it.) And how well does the appearance of competence correlate with actual competence (which would obviously be hard to define in a political setting, but presumably this type of judgment is going on in lots of settings)?
“The brain tags familiar things as good things. Zajonc called this the ‘mere exposure effect,’ and it is a basic principle in advertising.” (Haidt, p. 65)
This is something I need to understand better. When does it work, and when does it not?
Jonathan Haidt refers back to famous philosophers at a number of points in his book to show the evolution of ideas.
- Plato believed that moral intuition was subordinate to reason.
- David Hume believed that reason was subordinate to moral intuition.
- Thomas Jefferson believed that moral intuition and reason were co-leaders.
Haidt’s position is that Hume was right.
Haidt offers another demonstration that morality is not necessarily a rational process: his own experiments, in which people were given scenarios that trigger taboos but in which no one is hurt and no one outside of the main protagonists in the stories even knows what happened.
Subjects were asked if the individuals in the stories were wrong to do what they did, and then pressed for reasons why. It was clear that most people were inventing reasons to justify their reactions. Even if they couldn’t give a good reason, they still stuck with their initial judgment.
It’s a pretty good demonstration that we’ve evolved to condemn certain behaviors, even if we can’t explain why, and even when those behaviors aren’t hurting anyone. Judgment and justification are separate processes.
Further, Haidt goes on to assert that moral reasoning is not about reconstructing our own conclusions, but rather “we reason to find the best possible reasons why somebody else ought to join us in our judgment.” (p. 52)
So, then, if our moral decision making is subconscious and automatic, why does our rational, conscious mind bother crafting post hoc justifications for what we believe? Haidt refers to the rational mind as a press secretary, but why bother with the press releases if we aren’t going to change anyone else’s mind?
Partly, it serves a social function. Reason itself isn’t very effective for changing minds, but it can reinforce our beliefs and the beliefs of other, like-minded individuals. It feels good when someone comes up with a killer argument in favor of what we believe, and that creates social bonds.
It’s also true that we do sometimes change our positions, particularly when people we like or admire take a contrary position. Our rational brains then help us rationalize the change, when really we changed our views due to social pressure.
To be clear, it is also possible to change your own position based on logical reasoning, and we clearly sometimes do. As Haidt notes, it just doesn’t happen very often.
Haidt did some research early in his career to ascertain whether increased cognitive load would interfere with making decisions in situations that involve applying morality or ethics. The idea here is that if morality is controlled by automatic processes instead of rational ones, then cognitive load shouldn’t matter.
And, indeed, that’s what he found. Moral judgment is fast and automated. It doesn’t depend on the rational mind.