Being a Good Person by Deceit?

By Nadira Faulmüller and Lucius Caviola

Recently, Peter Singer, Paul Bloom and Dan Ariely discussed topics surrounding the psychology of morality. Peter emphasized the importance of helping people in need by donating money to poverty-fighting charities. That’s easier said than done: humans don’t seem to have a strong innate desire to help distant strangers. So the question arises of how we can motivate people to donate considerable amounts to charity. Peter suggested that appropriate social norms could be established: as Dan pointed out, to make people more moral, their behaviour needs to be observable by others – only then will they be motivated to help strangers on the other side of the world. Is this true? Do people behave prosocially only because they feel socially pressured into doing so?

First, research confirms that people do tend to behave prosocially, even towards strangers. In numerous experiments, behavioural economists and psychologists have demonstrated the high degree of cooperativeness people show. Often, this has been studied using the “Dictator Game” (DG), in which two strangers interact anonymously – typically via computers, so that they never see each other. Usually the game is played with complete information: both players are informed about the rules and the possible actions players may take. One person is randomly assigned the role of the dictator and the other the role of the recipient. Dictators receive an initial $10 and decide how much of it, if anything, to hand over to the recipient. Recipients, in contrast, start the game with nothing and cannot influence their final payoff. Evidently, a purely selfish dictator would keep everything for herself. However, people typically don’t. A meta-analysis covering 41,433 individual DGs shows that on average people give about 28.35% of their endowment to the other person, and 17% of dictators even give half. Behavioural economists have tried to explain this kind of prosocial behaviour by assuming that people have an intrinsic motivation to behave fairly towards others, i.e. that they have social preferences. However, there are good reasons to question this assumption.
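
For readers who like to think in code, the payoff structure of the classic DG can be sketched in a few lines of Python. This is a purely illustrative snippet, not material from any of the studies cited; the dollar amounts follow the description above and the function name is hypothetical.

```python
ENDOWMENT = 10.0  # the dictator's initial allocation, as described above

def dictator_game(transfer: float) -> tuple[float, float]:
    """Return (dictator_payoff, recipient_payoff) for a chosen transfer."""
    if not 0 <= transfer <= ENDOWMENT:
        raise ValueError("Transfer must lie between 0 and the endowment.")
    return ENDOWMENT - transfer, transfer

# The meta-analytic average transfer of 28.35% of the endowment:
print(dictator_game(0.2835 * ENDOWMENT))  # dictator keeps ~$7.17, recipient gets ~$2.84
print(dictator_game(ENDOWMENT / 2))       # the 50/50 split chosen by about 17% of dictators
```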

Consider, for example, this modified version of the DG: as in the classic DG, dictators decide how to split their $10. After they have made their decision, but right before it is implemented, the experimenter intervenes and offers the following option: dictators can either go ahead and divide the money as intended, or they can exit the game quietly, without the recipient ever learning that the game took place. So in fact the game was not played under complete information, as the dictator had assumed. Pretty sneaky! The quiet exit costs $1: if dictators exit, they get $9 and the recipient gets nothing. How do people react? If people were truly motivated to ensure fair distributive outcomes, they should of course decline the exit option and go ahead with a fair split. But as it turns out, between 28% and 43% of participants chose the exit option. These participants had initially decided to split fairly, but as soon as they knew the recipient was not expecting anything, they behaved selfishly. Worse still, they were even willing to pay money to make sure the recipient would never find out.
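
Again purely as an illustration, the exit-option variant can be captured like this; the function and parameter names are hypothetical, and the payoffs are those from the description above.

```python
ENDOWMENT = 10.0
EXIT_FEE = 1.0

def exit_option_game(intended_transfer: float, take_exit: bool) -> tuple[float, float]:
    """Return (dictator_payoff, recipient_payoff) in the exit-option variant."""
    if take_exit:
        # The game quietly ends: the dictator pays the $1 fee and the recipient,
        # never knowing the game existed, receives nothing.
        return ENDOWMENT - EXIT_FEE, 0.0
    return ENDOWMENT - intended_transfer, intended_transfer

print(exit_option_game(5.0, take_exit=False))  # (5.0, 5.0): the intended fair split
print(exit_option_game(5.0, take_exit=True))   # (9.0, 0.0): paying $1 so the recipient never finds out
```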

It has been shown that even eight-year-old children already display such strategic behaviour when it comes to fairness.

Prosocial behaviour is also reduced when the connection between choices and outcomes is obscured. In another modified version of the DG, dictators choose between two options: they get $5 for choosing option A and $6 for choosing option B. At this point, however, they don’t know how much the recipient gets – it is either $5 or $1. If they wish, dictators can learn, at no cost, exactly what the recipient’s payoffs are. But as it turns out, only about 50% of participants decided to do so, while the other half preferred to remain ignorant and chose the option that best served their self-interest. By leaving the relationship between their decision and the recipient’s payoff uncertain, people can preserve the illusion of behaving fairly.
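
To make the structure of this variant concrete, here is another illustrative sketch. It assumes, consistent with the description above, that an unseen state determines which option leaves the recipient with $5 and which with $1, and that the dictator may look at that state for free before choosing; the names are hypothetical.

```python
import random

def hidden_payoff_game(reveal: bool, choose, rng=random):
    """Play one round; `choose` maps the revealed state (or None) to 'A' or 'B'."""
    # An unseen state fixes which option leaves the recipient with $5 and which with $1.
    state = rng.choice([{"A": 5.0, "B": 1.0}, {"A": 1.0, "B": 5.0}])
    choice = choose(state if reveal else None)
    dictator_payoff = {"A": 5.0, "B": 6.0}[choice]
    return dictator_payoff, state[choice]

# A dictator who declines the free look and simply takes the self-interested $6 option:
print(hidden_payoff_game(reveal=False, choose=lambda seen: "B"))
```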

What these studies seem to show is that many people are not really interested in fair outcomes per se. Instead, they seem to be interested in being perceived as fair, while primarily caring about their own payoff. Nobody wants to be seen as unfair and unpleasant, even by strangers. Merely knowing that someone might think you are unfair can be enough to elicit a feeling of guilt that one would rather avoid – even at a cost.

Thus, as Peter, Paul and Dan discussed, if we want people to behave more prosocially, we ought to give them a reason to do so. But should such socially pressured prosocial behaviour still count as moral behaviour? After all, it implies that people’s intentions were not necessarily altruistic. From a consequentialist perspective, however, one might argue that it doesn’t really matter why people help, as long as they do help. Those who are being helped hardly care whether the help sprang from a purely altruistic intention or from a cleverly arranged decision situation that motivates people to help more.

 

Lucius Caviola received his Bachelor’s degree in Psychology from the University of Basel and will continue his studies with an MSc in Psychology at the University of Oxford. He is currently doing a full-time research internship at the University of Oxford (Department of Experimental Psychology). His research interests focus on questions at the intersection of psychology, ethics and rationality.
