Morality and Our Inner Chimp

Most people tend to believe that our sense of right and wrong comes from on high, whether written on stone tablets or guided by some transcendent source behind our intuitions; in other words, we tend to think of morality as somehow divine in its inspiration.

Current research suggests, however, that the source of our moral feelings may lie not up above but down below. When people confront moral dilemmas, such as the one used in the audio below, they consistently give inconsistent answers to logically indistinguishable scenarios, which raises the question: why the inconsistency?

Here are two scenarios, different in content but identical in logical structure, that I use when I teach ethics: partly to show highly opinionated students that they are not the moral experts they think they are, and that they ought to open their minds to new possibilities, and partly to pose a question we try to answer when we cover evolutionary ethics:

A train is chugging along a track, headed toward 5 people who are completely unaware of their impending death and who cannot be warned. You, however, are in a unique position: you are standing next to a lever that you can pull to divert the train onto another track, thereby saving the 5 innocent people. The catch is that there is one person on the new track who is also unaware of what's happening and cannot be warned. The question is what the morally right thing to do would be: do nothing and let 5 people die, or pull the lever and sacrifice 1 person in order to save 5. The answer is virtually unanimous and utilitarian: do the math, save the greater number.

You are a doctor, and you have 5 patients who need organ transplants today or they'll die. In the waiting room there happens to be a perfectly healthy man who, miraculously, is a perfect match for all 5 patients. Again you have a choice: do nothing and let the 5 innocent people die, or gut the 1 man who is a perfect match for everyone and save them.

The logic of the two cases is exactly the same, so if we assumed that people are rational, we would predict the same unanimous, utilitarian answer we got in the first scenario. In truth, however, although the answer is again virtually unanimous, it ceases to be utilitarian and sounds rather more Kantian and absolutist: you can't sacrifice innocent human beings for the sake of others.

The explanation for this inconsistency seems to be, as discussed in the audio below, that different parts of our brain produce the two different responses, and these findings are consistent with evolutionary predictions: the more primitive part of the brain should come up with the more absolutist answers and justify itself with phrases like "that's just wrong," while the more rational and calculating part of the brain, which evolved much more recently, should come up with the more 'rational' and calculated answers.

And that's exactly what happens: many, if not most, of our moral intuitions are based on our evolutionary history, on what Josh Greene rightly calls "our inner chimp": the moral beliefs and feelings that our evolutionary ancestors shared as a social species.

The implications of this are tremendous, I think. For one, it supports bundle theory and seriously challenges the belief in a unitary self that is the command center of thought and action. Two, it seriously challenges the belief in free will: if we are not conscious of why we do what we do or think what we think, then our choices are not actually free. Three, morality doesn't come from God; it comes from our inner chimp. Four, our moral judgments are ultimately expressions of our own subjective biases. And on and on and on...

But for a more interesting and amusing investigation of these ideas, listen to the segment of the Radio Lab episode below. Whether you agree or not, I think you'll find it fascinating.

I met Josh Greene a few years ago at a seminar where he discussed his research. We talked about the role of 'priming' subjects prior to testing them: it turns out that if you prime subjects, their responses are much more consistent with the priming than with what they would otherwise recognize as their own normal moral attitudes.

This points to the brain's wonderful flexibility in adapting to different situations, but it also opens up the dark prospect of manipulating people into expressing certain moral 'beliefs' to further others' ends, such as political gain; something the Bush administration seems to have perfected almost into a science: scare people to death and they'll support measures that limit freedom for the sake of protection...

I'll be posting a more thorough interview with Marc Hauser on this same topic soon, so look out for that.