If you look at the ‘output’ of the University of Oxford’s politics department, you will find that in the last decade, it gave us Boris Johnson, Liz Truss, and Jacob Rees-Mogg. What you may not know is that the effective altruism movement so beloved by tech-bros and epitomised by convicted crypto-scammer Sam Bankman-Fried also started at Oxford University, thanks to the efforts of William MacAskill and the people around him. It’s a good thing Oxford University also developed the Covid vaccine and now the malaria vaccine to balance out the damage and destruction its humanities departments inflict on the world.
Ever since I came across the effective altruism movement several years ago, I have felt the need to comment on why I think it is philanthropy gone mad, but John Lanchester, in his excellent article on Sam Bankman-Fried and the effective altruism movement in the London Review of Books, made the case better than I ever could. To quote from his article:
“An unlikely bet with a huge pay-off could well offer a higher [expected value] than something which looked smaller and more sensible – and this applies to both ethics and finance. Say, a charitable exercise has only a one in a billion chance of succeeding, but if it does succeed, will save humanity […] You have $25,000 to give. Should you give it to a political party, to help fund your sick neighbour’s cancer care, to build wells in Africa, or to this one-in-a-billion long shot? The [effective altruism] calculation is that it costs $5000 to save a life, so your long shot represents good value. The [expected value] of your bet is $35,000: 7 billion lives x $5000 each, divided by a billion for probability. Good deal! Though bad luck for your political party, your neighbour and those Africans.”
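The arithmetic in the quote is easy to check. A minimal sketch, using only the figures Lanchester gives (7 billion lives, $5,000 per life, one-in-a-billion odds):

```python
# Expected-value arithmetic from the Lanchester quote. All figures are
# as quoted: one-in-a-billion odds of saving 7 billion lives, with the
# EA movement's assumed cost of $5,000 to save one life.
lives_saved = 7_000_000_000
value_per_life = 5_000        # dollars per life saved, per the quote
odds = 1_000_000_000          # one-in-a-billion chance of success

expected_value = lives_saved * value_per_life / odds
print(expected_value)  # 35000.0 -- versus the $25,000 you actually gave
```

On this logic the long shot beats the $25,000 donation, which is exactly the trap: multiply a vanishingly small probability by an astronomically large payoff and almost any scheme can be made to look like good value.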
This is the calculation that Sam Bankman-Fried made with his business ventures. He thought he could invest a relatively small sum into crypto trading. This venture had a small chance of making him billions that he could then give to charitable causes. So what if he had to resort to financial fraud? Bad luck for the investors who got stiffed by him on his way to making the world a better place.
In my view, the problem with effective altruism is that it is rooted in utilitarianism. In other words, the ends of maximising utility justify the means of getting there. This is a philosophy that many people trained in economics and finance find appealing because classical economics is based on it (why do you think economists talk about utility functions all the time?).
But we need to remind ourselves that utilitarianism is a philosophical choice and does not necessarily describe the actual behaviour of people. You can contrast utilitarian approaches, or the broader category of consequentialist approaches (where actions are judged by their consequences), with deontological approaches (where actions are guided by commonly accepted rules). The prime example is Kant’s categorical imperative, which tells us to act only in ways that would be acceptable to us if everyone else acted the same way.
Deontological approaches treat humans as part of a society and insist that we follow shared rules so that our group as a whole can survive and thrive. This is, admittedly oversimplified, the position behavioural economics takes when it focuses on the behaviour of people in groups and the often non-utilitarian actions we take.
By now, most of my readers probably wonder where I am going with all this, so let me point you to a new paper full of experiments on how humans make real-life economic decisions when they are torn between a consequentialist and a deontological approach. In a battery of games, the researchers checked whether people acted based on the consequences of their choices or based on some behavioural rule.
To test the utilitarian approach to economics, they tried to figure out how well the standard trolley problem predicted behaviour in other situations. If you don’t know the trolley problem, in its most basic form it is a thought experiment where you are confronted with a trolley running down a track toward three people who are unaware of the danger. If the trolley hits them, it will kill all three. You cannot warn them, but you can flip a switch that redirects the trolley onto another track with one person on it. Do you flip the switch, saving the lives of three people but killing an innocent bystander?
In a utilitarian world, everyone would flip the switch because killing one person is better than killing three, but in practice, only about three in four people flip the switch. This already tells you that there is a substantial minority of people who do not act in a utilitarian way even in thought experiments when nothing is at stake.
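The utilitarian calculus here is nothing more than a body count, which makes the gap between prediction and behaviour easy to state. A sketch, using the numbers from the text:

```python
# The trolley problem as a utilitarian would score it: pick the option
# with the fewest deaths (three on the main track, one on the side track,
# as in the version described above).
deaths = {"do_nothing": 3, "flip_switch": 1}
utilitarian_choice = min(deaths, key=deaths.get)
print(utilitarian_choice)  # flip_switch

# Yet only about three in four people actually flip (figure from the text),
# so a quarter decline the utilitarian answer even in a costless thought
# experiment.
switch_rate = 0.75
print(1 - switch_rate)  # 0.25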
The study designed six real-life experiments where real money was at stake and people faced a dilemma: choose between utility-maximising behaviour and rules-based or moral/ethical behaviour.
Several interesting things happen in these experiments, but I want to emphasise three in particular:
The trolley problem has no correlation with how people behave in real life. This classic utilitarian view is unable to explain the behaviour of real people in real-world situations.
Whether people choose a utilitarian/consequentialist course of action or a deontological/rules-based course of action depends on the situation at hand. People are not consistent in their actions at all, sometimes choosing one path, sometimes the other.
The dictator game, which measures prosocial behaviour and social preferences, reliably predicts the behaviour of real people in real-world situations.
And for the uninitiated: in the dictator game, you are given a certain amount of money and decide unilaterally how much of it, if anything, to share with another person. The other person has no say; you keep whatever you claimed for yourself. In the closely related ultimatum game, which produces the behaviour described next, the other person can respond to your proposed split. If she accepts it, you both get the agreed amounts. If she rejects it, you both get nothing.
Utilitarians would argue that the first person should offer as little money to the other person as possible; this way, the first person maximises her utility. The second person would rationally accept any offer, no matter how small, because getting even a little money is better than getting nothing. Yet, in real life, two things happen.
First, extremely lopsided splits, where the first person keeps 80%, 90% or even more of the money, routinely get rejected by the other person. People will incur personal costs (here, forgoing the chance of even a small gain) to punish unfair and asocial behaviour by others.
Second, anticipating such rejections, most people in the first player’s position offer an equal or near-equal split.
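The payoff structure that drives both behaviours can be sketched in a few lines. The $100 pot and the specific offers below are illustrative assumptions, not figures from the study:

```python
# Payoffs in the ultimatum game described above. Pot size and offers
# are hypothetical; the mechanics follow the text.
def ultimatum(pot: int, offer: int, accepted: bool) -> tuple[int, int]:
    """Return (proposer payoff, responder payoff) for a proposed split."""
    if accepted:
        return pot - offer, offer
    return 0, 0  # a rejection wipes out both players' money

# The utilitarian prediction: the responder accepts any positive offer.
print(ultimatum(100, 1, accepted=True))   # (99, 1)

# What people actually do with lopsided splits: reject, paying a small
# personal cost (the forgone 1) to impose a large cost (99) on the proposer.
print(ultimatum(100, 1, accepted=False))  # (0, 0)
```

The asymmetry is the whole point: rejecting costs the responder almost nothing and costs the greedy proposer almost everything, which is why near-equal offers dominate in practice.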
And this is the problem with classical economics and effective altruism alike. In their utilitarian or consequentialist view of the world, they ignore something important about people. To conclude with another quote from John Lanchester’s LRB article: “Effective altruism has no place for empathy.” And I would argue neither does classical economics, which makes it fail so often.