About a year ago, when I was making my first serious inroads into the technical underbelly of my field, following my utilitarian opponents into a futurist landscape of backwards Es and upside-down As, Robert Paul Wolff (who seems to be entirely incapable of stopping writing, bless his soul) ran a tutorial on formal methods in political philosophy which I found very useful, especially its introduction to Arrow's Impossibility Theorem concerning social choice theory. (The tutorial is archived along with his autobiography, which I heartily recommend, and some other bits and pieces here.) However, I thought that Wolff was rather harsh on the prospects of the prisoner's dilemma as a topic for serious philosophy; his dismissal rests largely on the claim that the suppositions behind the case are sharply disconnected from the conditions of the actual world. Contrary to that evaluation, I think there is something to be said for the prisoner's dilemma as an analytic tool. Heaven knows a lot of people make ridiculous claims regarding it (for instance, I was told once that it shows that ethics is impossible), but there are at least two reasons to take it seriously, if only as an analytical device. The prisoner's dilemma might be a good tool for cutting to the heart of various hypotheses, even if we agree with what Wolff has said about its limitations (as we should).
If I may remind the reader of what the prisoner's dilemma (hereafter, PD) is, I'll quote the description from the Stanford Encyclopedia of Philosophy entry:
Tanya and Cinque have been arrested for robbing the Hibernia Savings Bank and placed in separate isolation cells. Both care much more about their personal freedom than about the welfare of their accomplice. A clever prosecutor makes the following offer to each. “You may choose to confess or remain silent. If you confess and your accomplice remains silent I will drop all charges against you and use your testimony to ensure that your accomplice does serious time. Likewise, if your accomplice confesses while you remain silent, they will go free while you do the time. If you both confess I get two convictions, but I'll see to it that you both get early parole. If you both remain silent, I'll have to settle for token sentences on firearms possession charges. If you wish to confess, you must leave a note with the jailer before my return tomorrow morning.”
We can represent the options in a little pay-off matrix, mapping out all the possibilities:
|  | Cinque stays silent | Cinque confesses |
| --- | --- | --- |
| Tanya stays silent | Tanya is jailed very briefly / Cinque is jailed very briefly | Tanya is jailed for a long time / Cinque goes free |
| Tanya confesses | Tanya goes free / Cinque is jailed for a long time | Tanya is jailed for a short time / Cinque is jailed for a short time |
Each of the pair can see that, whatever the other does, they get less time in jail if they confess and hang the other prisoner out to dry. But that would lead to both of them confessing, producing a situation that is worse for both than if they had stayed silent. That is the prisoner's dilemma.
Wolff complains that discussing the PD in terms of this story distorts our understanding of the picture, because there are a number of assumptions about how players in such a game would act which map very badly indeed onto how actual people in real situations act. For instance, it is very hard indeed to imagine someone caring only about how little time they spend in jail, with no regard for the other's welfare or for any other concern. Wolff's point is well-taken, but I want to say that even if we accept what he says, we can mine some interesting results from using this scenario as an analytic tool: in particular, in seeing what it tells us about the types of reasoning decision-theorists and the like would have us engage in. (Perhaps I am better disposed towards the PD than Wolff is because I didn't need to wade through the thousand-odd journal articles written on this subject in the 60s and 70s, when nobody could shut up about this thing.) To that end, it is useful to consider the PD in its most general and useful form, as a game described by the following pay-off matrix (one agent choosing a row, the other choosing a column, as Tanya and Cinque did above), where the options are either to co-operate with the other agent (staying silent, in the prisoners' case) or to defect (confessing to the crime and selling the other prisoner up the river):
|  | Co-operate | Defect |
| --- | --- | --- |
| Co-operate | Good / Good | Worst / Best |
| Defect | Best / Worst | Bad / Bad |
Any situation with a pay-off matrix like this can be analysed in terms of the prisoner's dilemma.
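To make that structure concrete, here is a minimal sketch in Python. The numerical pay-offs are my own illustrative stand-ins (only the ordering Best > Good > Bad > Worst matters), and the code simply checks the point made above: whatever the other player does, defecting is strictly better for you.

```python
# Illustrative pay-offs; only the ordering BEST > GOOD > BAD > WORST matters.
BEST, GOOD, BAD, WORST = 5, 3, 1, 0

# (row player's move, column player's move) -> (row pay-off, column pay-off)
payoffs = {
    ("cooperate", "cooperate"): (GOOD, GOOD),
    ("cooperate", "defect"):    (WORST, BEST),
    ("defect",    "cooperate"): (BEST, WORST),
    ("defect",    "defect"):    (BAD, BAD),
}

# Whatever the column player does, the row player does strictly better by defecting...
row_defects = all(
    payoffs[("defect", col)][0] > payoffs[("cooperate", col)][0]
    for col in ("cooperate", "defect")
)

# ...and, by symmetry, the column player does strictly better by defecting too.
col_defects = all(
    payoffs[(row, "defect")][1] > payoffs[(row, "cooperate")][1]
    for row in ("cooperate", "defect")
)

print(row_defects, col_defects)  # True True: defection dominates for both players
```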
Having done the throat-clearing, let me now present the two reasons why I think we should pay attention to the PD. The second is far and away the most important, but the first helps to lead us there.
The first reason is that there are simply so many theoretically interesting cases which can be modelled as some variation of the PD, that is, situations where the pay-off matrix I described above arises. There is the traveller's dilemma, the centipede game, the ultimatum game, and so on. I'll leave it up to the reader to investigate these cases, and their link to the PD, on their own. But note that understanding any situation which can be modelled in this way is going to necessitate understanding the implications of the PD (which includes, as Wolff stresses, knowing what it doesn't entail).
Secondly, the most important reason to look at the PD (which I was surprised to see get no mention at all in the tutorial) is that it gives a very embarrassing and problematic result for the mass of people who believe that decision theory, etc., provide the gold standard for human reasoning. That is, the PD shows that utility maximisation doesn't lead to Pareto-optimal situations (which was a bit of a surprise, since under similar suppositions the free market, which is driven entirely by utility-maximisation, does lead to Pareto-optimal distributions of resources – a bit more on that later in this paragraph). Utility maximisation is the procedure whereby, at each point where you need to make a decision, you take whatever course of action has the best prospects for getting you what you want (after taking into consideration all the likely future effects of your actions), and Pareto-optimality builds on the idea that one situation is preferable to another if every person involved finds the first one to be at least as good as the latter: a situation is Pareto-optimal when no alternative is preferable to it in this sense. In non-wonk terms, the PD demonstrates that if everybody tries at every step to take the action with the consequences they'd most prefer, they are quite likely to end up in a situation they find less preferable than one they would have reached had they acted differently. In fact it shows even more: the situation the two prisoners end up in if both defect is worse for both of them, whereas it would already be Pareto-suboptimal if just one of them ended up in a situation they don't prefer while the other was left no worse off. This is embarrassing and problematic for the decision theorist, because Pareto-optimality is a very low bar indeed. There are a range of terrible situations that are Pareto-optimal – for instance, a fiefdom with its range of landlords and impoverished serfs is a Pareto-optimal distribution of land, since to give any land to a serf you need to take it away from a landlord, which means that changing the distribution of land would always be against the preferences of at least one person. If utility-maximisation can't even ensure reaching situations with that low level of goodness, then the decision theorist has reason to worry.
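To see the clash in miniature, here is a similarly hedged sketch, again with made-up numbers carrying only the PD ordering: it checks that the outcome two utility-maximisers are driven to (mutual defection) is Pareto-dominated by another outcome, while the mutual co-operation they passed up is itself Pareto-optimal.

```python
# Same illustrative pay-offs as in the sketch above; only the ordering matters.
BEST, GOOD, BAD, WORST = 5, 3, 1, 0
payoffs = {
    ("cooperate", "cooperate"): (GOOD, GOOD),
    ("cooperate", "defect"):    (WORST, BEST),
    ("defect",    "cooperate"): (BEST, WORST),
    ("defect",    "defect"):    (BAD, BAD),
}

def pareto_dominated(outcome):
    """True if some other outcome is at least as good for both players and
    strictly better for at least one, i.e. the given outcome is not Pareto-optimal."""
    a = payoffs[outcome]
    return any(
        b[0] >= a[0] and b[1] >= a[1] and b != a
        for other, b in payoffs.items()
        if other != outcome
    )

# Two utility-maximisers each play their dominant strategy and land on (defect, defect)...
print(pareto_dominated(("defect", "defect")))        # True: both would prefer mutual silence
# ...whereas the mutual co-operation they passed up is Pareto-optimal.
print(pareto_dominated(("cooperate", "cooperate")))  # False
```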
It's this feature of the PD which gives force to the tragedy of the commons (as Garrett Hardin described it in 1968, though only later was this analysed as a PD). Each member of a community who tends sheep and has access to the common pasture always has an incentive to put one more sheep in the field: though this lowers the total productivity of the overloaded commons, the individual's gain from having the extra sheep outweighs the marginal loss this imposes on each of their sheep. But if everybody follows this incentive (as utility-maximisation demands) then the commons will soon be exhausted and every farmer will be worse off in the end. The lesson to be learned here isn't that co-operation in such situations is impossible (as some people bizarrely claim, showing off a staggering confusion about the structure of human purposive action) but that utility-maximisation – the hard-nosed pragmatism which makes the prisoner defect every time – is untenable as a general guide to action. In scenarios with PD pay-offs (and the insights of the countless writers on this topic indicate just how many there might be) utility-maximisation turns out to lead us by the nose to our downfall. And that is what we should learn from the prisoner's dilemma.
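A small coda, since the commons carries so much of the weight here: the following toy simulation is entirely my own construction (the yield function, the capacity of 100, and the ten herders are illustrative assumptions, not Hardin's figures), but it shows the shape of the result. Herders who each add a sheep whenever doing so raises their own pay-off crowd the pasture and end up with far less apiece than herders who all show restraint.

```python
# A toy commons: per-sheep yield falls as the pasture gets more crowded.
# The functional form and the numbers are made up purely for illustration.
CAPACITY = 100
HERDERS = 10

def yield_per_sheep(total_sheep):
    return max(0.0, 1.0 - total_sheep / CAPACITY)

def payoff(own_sheep, total_sheep):
    return own_sheep * yield_per_sheep(total_sheep)

# Each herder keeps adding a sheep whenever that raises *their own* payoff.
flock = [0] * HERDERS
changed = True
while changed:
    changed = False
    for i in range(HERDERS):
        total = sum(flock)
        if payoff(flock[i] + 1, total + 1) > payoff(flock[i], total):
            flock[i] += 1
            changed = True

selfish_each = payoff(flock[0], sum(flock))

# Compare with mutual restraint: everyone agrees to graze 5 sheep,
# the split that maximises the total yield of the commons.
restrained = [5] * HERDERS
restrained_each = payoff(5, sum(restrained))

print(sum(flock), round(selfish_each, 2))          # ~90 sheep grazing, ~0.9 per herder
print(sum(restrained), round(restrained_each, 2))  # 50 sheep grazing, 2.5 per herder
```

The exact figures will differ under different assumptions, but any commons with PD-style pay-offs shows the same pattern: individual maximisation drives the group past the point that is best for everyone, which is exactly the lesson drawn above.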
Boring conclusion after reading a giant wall of text.
Also typo in the table.
Also a minimax strategy would lead to both confessing.
Also both confessing is a Nash equilibrium while staying silent is not.
Also if actors give a stuff about the other person, then the utilities can easily be modified so that they both opt not to confess.
1) Given that Pareto-optimality and utility-maximisation are both taken by many people to be bedrock features of rationality, the fact that they conflict in an interesting range of cases isn't boring. Though this certainly isn't a new or exciting result, it is one I typed up in contrast to the view of Wolff, which is that everybody should just shut up about the damn thing already.
2) Thanks, fixed.
3) Yes, and?
4) I'm aware that both defecting is a Nash equilibrium and both co-operating isn't (and that playing Nash in the traveller's dilemma is to claim $1, in the centipede game it is to cash out as soon as possible, etc.). That's part of the problem, isn't it?
5) Yes, and?
Thanks for the post!
Looks like the main problem attributed to the decision theorist is this: if one is guided by the goal of utility maximization, then it is possible to thereby hinder one's attainment of utility maximization.
However, this would not be a problem if the goal of decision theory is to assess choices rather than guide them (which seems to be what many decision theorists think, given that many hold to the claim that it is sometimes irrational to guide one's decisions by rational choice theory). Similarly, utilitarians might think that happiness is maximized only if the majority of people believe that utilitarianism is false. This spells doom for a theory that aims at helping us make choices but not for a theory that aims at assessing choices from an external p.o.v.
Stating that PD shows that utility maximisation is not always an ideal strategy is a boring point which I happen to agree with. But it's boring and does not necessarily follow. Plus I think most decision theorists realise their techniques are just tools to analyse situations, and are not meant to be followed blindly in real life situations.
Counter arguments:
"In other PD situations where the problem description doesn't have such a moral bent to it, people would not necessarily see the confess decision to be the wrong one. People take an antagonistic stance to the problem because the well has been poisoned. The confess decision is a legitimate one, given that the strategy dominates not confessing."
"If we are worried that this approach does not take into account moral considerations, then moral considerations should simply be built into the utilities."
Stag Hunt is a more interesting example.
I guess there are at least two different anonymous commentators (and one of them is Andrew) but I think I can answer both of you at once. That is because I think my response to both of you is to insist on keeping Pareto-optimality and utility-maximisation separate (and both separate from whatever is supposed to count as moral). Perhaps my choice of an example and how I concluded the piece muddied the waters, but I want to concentrate on the contrast between Pareto-optimality and utility-maximisation, which is a valuable contrast.
First, the reason why neither of these is automatically the same thing as morality is simply that the problem I highlighted remains even if we disregard any moral issue. Even the most calloused opportunist in a PD case should be upset about the fact that utility-maximisation leads to the result it does, not merely because it fails to be Pareto-optimal but because there is another outcome which is Pareto-superior to theirs (meaning, at the very least, that they themselves would prefer the double-co-operation outcome to the double-defection one they arrive at). The question of the moral worth of the outcomes is simply a different issue than any I was trying to get at. While we're on the topic, I should add that a reply like 'if morality is so important, add it into the utilities' is not available to the utilitarian (and perhaps not to any consequentialist) because surveying the utilities was supposed to be how you determine what the moral things are in the first place. In any case, it is trivial to construct PD cases for whatever way of counting utilities you might think of.
Secondly, I am not claiming that 'utility-maximisation is not how you maximise utilities' (the split between decision procedures and value theories which has become popular among consequentialists, despite its bizarre result that suddenly consequentialism doesn't offer action guidance). That's a separate issue. I'm saying that the PD shows a tension between two different ways to measure good outcomes, a very surprising tension since the two standards seem very close indeed (though Pareto-optimality is a tremendously weak measure, and utility-maximisation a painfully strong one). Pareto-optimality suggests we look at which situations are preferable to everybody, and utility-maximisation says each of us should go for whichever option leaves us best off (this can be a measure from any perspective, internal or external, idealised or actual, so the decision-procedure/value-theory split doesn't enter the picture). The PD shows us that, in a range of important cases, we can't have both. The tragedy of the commons makes this point, and what it entails, in a very powerful way, which is why I brought it up.
Neither commentator was Andrew, who enjoyed the piece, several months too late.
Non-Andrew commenters on my blog?! My oh my!