Saturday 1 April 2017

Can Moral Psychology Explain Our Political Convictions?

In these times of hostile political discussion, with the rise of far-right movements on one side of the spectrum and increasingly intolerant, privilege-obsessed SJWs on the other, bridging ideological gaps between groups has become incredibly important: Trump and Brexit shocked more than one poll, pundit or political group, not for lack of trying to get different messages across, but for lack of receiving them. In the field of politics and philosophy (and, of course, economics) I sometimes rant about those who believe different things from me: ignoring, overlooking, strawmanning, insulting, patronising, which perhaps contributes to the problem. In my happiest moments, this is all done with a friendly smile on my face; when I'm upset or particularly annoyed about something, I wish to go much further in my attacks. Let me spend the next few paragraphs on the content of a book that so radically changed my perception and understanding of moral opinions themselves, answering the basic question of why "it's so hard for us to get along" (p. xviii).

The moral psychologist Jonathan Haidt, now at NYU, has risen to fame doing precisely this. In his 2012 book The Righteous Mind: Why Good People Are Divided by Politics and Religion he set out to explain why smart, well-meaning people disagree so fundamentally and, more importantly, why they sometimes can't even comprehend what their opponents are thinking. He does this in a particularly lively and approachable way, without getting too bogged down in the intricate language of scientific psychology, and he makes a very compelling case for what's known as the social intuitionist model. With chapter titles like "Where Does Morality Come From?" (Ch. 1) and "The Moral Foundations of Politics" (Ch. 7), Haidt basically had me at hello.

I'm gonna start this review at the end, so we're all clear where Haidt is going with this. Part 3 of his book is called 'Morality Binds and Blinds', and for good reason:
This is not just something that happens to people on the other side. We all get sucked into tribal moral communities. We circle around sacred values and then share post hoc arguments about why we are so right and they are so wrong. We think the other side is blind to truth, reason, science and common sense, but in fact everyone goes blind when talking about their sacred objects. (p. 364)
But Haidt's argument is not primarily a story preaching tolerance between constantly opposing and stubborn political camps, although Haidt does that too. Rather, it's an investigation into how our minds form morality and, by extension, political conviction, how this morality broadly functions, and the misunderstandings created as a result between, say, liberals and conservatives. And Haidt's result may be surprising: we're not blank slates. We don't favour conservative agendas because we're indoctrinated by wealthy, racist families or selectively exposed to only one perspective. Instead, drawing on research on identical twins (pp. 322-325), Haidt summarises psychological research and experiments pointing to another conclusion: there is a genetic component to our moral convictions. People whose genes produce brains that receive pleasure from "novelty, variety, and diversity, while simultaneously being less sensitive to signs of threat" (p. 365) tend to lean liberal; those whose brains have the opposite "settings" tend to lean conservative. Now, Haidt stresses over and over again that people are predisposed to these ideas, not predestined, a very important distinction to make. Let me explain the central metaphor before diving into the Moral Foundations Theory Haidt is advancing here.

The Elephant and the Rider
["Intuitions come first, strategic reasoning second ... The mind is divided, like a rider on an elephant and the rider's job is to serve the elephant.", (pp. xx-xxi)]


Imagine our minds as composed of two entities: an intuitive elephant and a rational, justifying rider. The rider has very little control over where the elephant moves, but is there to support its relations with other elephants (a kind of portable PR firm). The elephant leans towards something, makes a split-second decision, and the rider comes up with the justifications and rationales for doing so:
The rider acts as a spokesman for the elephant, even though it doesn't necessarily know what the elephant is really thinking. The rider is skilled at post hoc explanations for whatever the elephant has just done and it is good at finding reasons to justify whatever the elephant wants to do next. (p. 54)
It took me about a year and a half before I understood the brilliance of this metaphor; at first I thought it simply meant that all our opinions come from emotions, a position I, in my righteous Enlightenment-reasoning style, refused to accept. In my stubbornness I said: "no, I believe what I believe because it is right, because it has the most scientific backing, because it makes the most coherent sense." It was not until I read Haidt that I realised that this is precisely my rider talking. I was clearly not receptive to his message back then.

This isn't just some random dude talking: Haidt has rigorous psychological experiments backing up his story (see Chs. 2 and 3 for more details). Neither is it a complete indictment against arguing with people, but it is a strong indication that arguments alone are unlikely to sway your opponent's elephant one way or the other, as depicted in the diagram:

Moral Foundations Theory
["There is more to morality than harm and fairness ... the righteous mind is like a tongue with six taste receptors." (p. xxi)]

Part of the problem Haidt is trying to address is the narrow understanding of morality as dealing only with harm or equity: if nobody is hurt, or there is no injustice involved, who are we to judge? He spent a lot of time conjuring up stories of moral transgression that did not involve those two: burning flags, sex between consenting siblings or eating meat from your neighbour's already-dead dog. He then asked participants what they thought about those actions, whether they were morally acceptable, and to state their reasons. What he found, interestingly enough, were instances of moral dumbfounding, where participants strongly opposed certain actions even after having all their justifications disproven and stripped away from the examples. The elephant stubbornly opposed the examples, even though the rider could no longer give reasons.

He distinguishes several categories of morality, exploring five foundations (later splitting Fairness into two separate ones, see below) on which our elephants make intuitive choices:

  • Care/Harm: the most basic of evolutionary motherly instincts; since societies where women and men had "an automatic reaction to signs of need or suffering [...] from children in their midst" (p. 154) had evolutionary advantages over groups that didn't, it is easy to see how this foundation is deeply rooted in our minds (see attachment theory).
  • Fairness: essentially a story of social trust and reciprocal altruism; selfish genes "can give rise to generous creatures, as long as those creatures are selective in their generosity" (p. 158), and groups "whose moral emotions compelled them to play 'tit for tat' reaped more of these benefits" than other groups. The Fairness foundation takes different shapes on the left and the right: for liberals it often implies equality, but for conservatives it means proportionality.
  • Loyalty: group cohesion, tribal conflict, hatred of traitors and so on; "our ancestors faced the adaptive challenge of forming and maintaining coalitions that could fend off challenges and attacks from rival groups. We are the descendants of successful tribalists, not their more individualistic cousins." (p. 163)
  • Authority/Subversion: duty, respecting figures of authority, and hierarchical relationships; some examples are addressing people by titles or last names and, more generally, maintaining order and justice, people fulfilling their roles or (social) responsibilities (see Fiske's Authority Ranking).
  • Sanctity/Degradation: purification, pollution, disgust, a sense of divinity. The prime example Haidt uses is the case of Armin Meiwes, who killed, cooked and ate the body of Bernd Brandes, who had completely consented to the treatment: Meiwes and Brandes "caused no harm to anyone in a direct, material or utilitarian way. But they desecrated several of the bedrock moral principles of Western society, such as our shared beliefs that human life is supremely valuable, and that the human body is more than just a walking slab of meat." (p. 174) In less extreme versions this moral foundation shows up in most Big Questions of marriage, abortion and cloning; it's about virtues to be guarded on the right, or repugnance towards toxins and environmental degradation on the left.

What does Haidt do with this? Gathering hundreds of thousands of responses, he constructs the following chart, showing that he is indeed on to something. People registering at YourMorals.org self-reported their ideological convictions, and the results neatly show how liberals tend to draw much more on certain moral foundations ('Care' and 'Fairness'), whereas the more conservative you are, the more evenly you draw on all of them:
From this, Haidt constructs the various "Moral Matrices" of Conservatives, Liberals and Libertarians:

He then spends fifteen pages or so analysing the various matrices and where they, in his opinion, go wrong. If anything, the last chapter ("Can't We All Disagree More Constructively?") is well worth a read, and he points to several important insights in these varying frameworks that are often lost on the others:
  • Libertarians are right that markets are miraculous, especially when issues of externalities/public goods and monopolies are addressed.
  • Liberals are right that some problems can be solved by regulation; moreover, corporate superorganisms must be restrained.
  • Conservatives are right that you can't help the bees by destroying the hive: some institutions, habits and cultural norms exist for the benefit of all, and dismantling them on the basis of Care or Fairness alone may be detrimental for everyone. Or, as Haidt neatly puts it:
Care and compassion sometimes motivate liberals to interfere in the workings of markets, but the result can be extraordinary harm on a vast scale. (p.356)
Conclusion and Final Piece of Advice
Haidt's conclusion is beautiful enough to reproduce in full:
I've tried to show how our complicated moral psychology coevolved with our religions and our other cultural inventions (such as tribes and agriculture) to get us where we are today. I have argued that we are products of multilevel selection, including group selection, and that our 'parochial altruism' is part of what makes us such great team players. We need groups, we love groups and we develop our virtues in groups, even though these groups necessarily exclude nonmembers. If you destroy all groups and dissolve all internal structure, you destroy your moral capital. Conservatives understand this point. Edmund Burke said it in 1790. (pp. 358-9)
His book is quite a masterpiece, and I can highly recommend it, especially as we enter an era in which simply understanding one's political opponents is a rare quality indeed. His final piece of advice is to follow the sacredness: Care/Harm for liberals; Institutions that support Moral Capital for conservatives; and Individual Liberty for libertarians. And, more specifically for liberals, follow what helped him overcome his own early and mistaken understanding of conservative morality: "Stop dismissing conservatism as a pathology and start thinking about morality beyond Care and Fairness." (p. 194)

See also this WSJ interview from today with Jonathan Haidt discussing the roots of campus illiberalism.
