The Prisoner’s Dilemma: We’re All in This Together

Rev. Josh Pawelek

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don’t have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there’s a catch. If both prisoners testify against each other, both will be sentenced to two years in jail.

The prisoners are given little time to think this over, but in no case may either learn what the other has decided until he has irrevocably made his decision. Each is informed that the other prisoner is being offered the same deal. Each prisoner is concerned only with his own welfare, the minimizing of his own prison sentence.

The prisoners can reason as follows: “Suppose I testify and the other prisoner doesn’t. Then I get off scot-free (rather than spending a year in jail). Suppose I testify and the other prisoner does too. Then I get two years (rather than three). Either way I’m better off turning state’s evidence. Testifying takes a year off my sentence, no matter what the other guy does.”

The trouble is, the other prisoner can and will come to the very same conclusion. If both parties are rational, both will testify and both will get two years in jail. If only they had both refused to testify, they would have got just a year each![1]
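
To see the dilemma at a glance, here are the four possible outcomes laid out as a simple table of sentences. The numbers are exactly the ones in the story above; nothing new is added.

                          Partner stays silent           Partner testifies
    You stay silent       1 year each                    you: 3 years, partner: goes free
    You testify           you: go free, partner: 3 years 2 years each

Read across either row: whatever your partner decides, testifying leaves you one year better off. That is the reasoning each prisoner follows, and it is also the trap.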

This is the classic formulation of the “prisoner’s dilemma,” first articulated in the early 1950s by mathematician Albert Tucker. He was developing the work of mathematicians Merrill Flood and Melvin Dresher, who created—some say discovered—this game. They were developing the work of mathematician John Nash. All of them were working in the new field of game theory, originated largely by mathematician John Von Neumann. And all of them, including Von Neumann, worked in the early 1950s for the RAND Corporation, an organization founded after World War II to provide research and analysis for the US military. According to Von Neumann biographer William Poundstone, “in the public mind, RAND is best known for ‘thinking about the unthinkable,’ about the waging and consequences of nuclear war.”[2] Game theory was one resource RAND scientists brought to bear in their efforts to determine US nuclear strategy. According to Poundstone, “no example of a prisoner’s dilemma has been more popular, both in technical articles and in the popular press, than a nuclear arms rivalry. This is so much the case that the term ‘prisoner’s dilemma’ is sometimes taken to be part of the jargon of nuclear strategy, along with ‘mutual assured destruction.’”[3]

Having said that, I’m not going to talk about the Cold War or nuclear strategy. This sermon was purchased by our beloved Fred Sawyer at last year’s goods and services auction. Fred said, quite clearly, “I don’t want to hear about nuclear weapons or the Cold War. What I want to know is whether or not the prisoner’s dilemma tells us anything useful about morality.” I’m grateful to Fred because the prisoner’s dilemma does say something useful about morality, and I’d much rather explore that than give a history of its use in predicting Cold War Soviet behavior. I’ll first explain the prisoner’s dilemma and what it tells us about morality. Then I’ll reflect on Unitarian Universalist moral impulses in light of the prisoner’s dilemma.

Two words game theorists use to describe what’s happening in a prisoner’s dilemma, and which also help us discern the moral implications of the game’s results, are cooperate and defect. There are two players. They each face a choice: to work together—cooperate—or to work against each other—defect. A player cooperates when they make the decision that best supports the other player. A player defects when they make the decision that least supports the other player. There are four possible outcomes: both players cooperate; both defect; the first cooperates while the second defects; or the second cooperates while the first defects. There are consequences for each choice, and each player bases their choice on what they think will best serve their interests. In the classic formulation of the prisoner’s dilemma, a player cooperates when they choose not to testify against the other player. A player defects when they choose to testify. In essence, do you sustain your relationship or break it?

This classic formulation is one of many ways to imagine the prisoner’s dilemma. In fact, there are unlimited formulations, both hypothetical and real. Earlier we watched a clip from the British game show, “Golden Balls.”[4] Although this isn’t a true prisoner’s dilemma because the players negotiate before choosing, the game follows the basic prisoner’s dilemma model. There’s a £100,000 jackpot. The players can choose to split it—cooperation—or to steal it—defection. If they both choose to cooperate, they split the money. If one chooses to defect and the other chooses to cooperate, the defector gets all the money. If they both choose to defect, neither gets the money. Do you cooperate or defect?

Game theorists are not necessarily looking for the most moral way to play. They’re looking to see how players understand their self-interest in relation to the other player. They assume players who attempt to maximize their self-interest are behaving rationally. This is, of course, a somewhat loaded assumption, but stay with it for now and I’ll name some objections to it later. For now, since morality has to do with how we treat others—how kind, compassionate, sensitive and fair we are towards others; how generous we are in balancing our needs with the needs of others—we can make a general claim that the most moral way to play the game is to cooperate—to make the choice that best supports the other player. The problem with behaving morally is that if you cooperate but the other player defects, you receive the harshest penalty, often referred to as the “sucker payoff.” The more moral choice always comes with a degree of vulnerability and, at least in the context of the game, it can appear to be the less rational choice. On its face, defection is more selfish—or at least self-interested. While I hesitate to call it the immoral choice (whistle blowers exposing corruption are often defectors), we can make the general claim that it is the less moral way to play in relation to the other player: it sacrifices the other player for the sake of personal gain. If the point is to maximize self-interest, the less moral choice appears to be the more rational choice.

This is especially true if you only play the game once. If you only have one opportunity to cooperate or defect, it is always statistically more advantageous—and thus more rational—to defect. Poundstone calls it “common sense.”[5] If your partner cooperates and you defect, you go free. If your partner defects, you’re much better off having defected as well. So it’s best to defect. There’s a paradox here. Mutual cooperation is a better outcome for both players than mutual defection. But to arrive at that better outcome, both must independently choose to act against their own best self-interest. We might say both must behave less rationally. It appears the more moral choice is not the more rational choice. The mathematicians who created/discovered the prisoner’s dilemma had always hoped there was some way to resolve this paradox. In 1992 Poundstone wrote that “Flood and Dresher now believe that the prisoner’s dilemma will never be ‘solved,’ and nearly all game theorists agree with them. The prisoner’s dilemma remains a negative result—a demonstration of what’s wrong with theory, and indeed, what’s wrong with the world.”[6] It reveals the egoism at the heart of human nature.

But there’s a lot to object to here. What if I know the other player? What if I trust they’d never testify against me? What if we had a pact? What if my own moral code won’t let me testify against them? What about the fact that cooperation among criminals isn’t necessarily moral?[7] What about people who act against their self-interest—people who, for example, vote for candidates who favor policies that hurt them economically? What about the fact that people don’t always behave rationally, or that rationality does not necessarily equate to following self-interest, or that rationality in the absence of emotion, compassion, love, etc., may not be the most reliable guide to effective decision-making? All these factors can and do come into play in real-life prisoner’s dilemmas, but there’s no good way to account for them theoretically if you only play the game once. However, it turns out that when we play the game repeatedly, players can introduce a variety of strategies that do account for some of these factors. For example, if you play with the same person over time, unless they play completely randomly, you can get to know how they play; you can start to anticipate what they’re going to do and adjust your play in response. It’s more like a real relationship: the players share a history. Or, if your moral code prevents you from defecting, you can play a strategy of only cooperating. You’ll end up in jail, but you’ll have a clean conscience. Or, if you want to play as a pure egoist and defect every time, that’s a viable strategy, in part because it exploits the kindness of others, but over time others stop trusting you and you spend more time in jail.

There’s a strategy known as Tit for Tat that tends to produce the best overall results in competition with other strategies—that is, over time, it yields the least amount of prison time. Tit for Tat is known for being nice. It always begins with cooperation. That is, it starts the game by trusting that the other player will cooperate. It gives the other player the benefit of the doubt and risks being vulnerable. From there it simply copies what the other player does. If the other player defects, Tit for Tat defects on the next round—a punishment. If the other player cooperates, Tit for Tat cooperates on the next round—a reward. It’s a punishment and reward strategy, but it always begins with cooperation, and it is by and large the most successful strategy. This was the conclusion of political scientist Robert Axelrod after extensive research in the 1970s and 80s.[8] Even though it is always in our immediate self-interest to defect, if we’re playing repeatedly—which is more akin to real life—we maximize our self-interest by cooperating. The ethicist John Robinson says, “Axelrod and others … have [successfully shown] how cooperation arises from self-interest, and is a stable strategy in many contexts. They have discovered a reason to be good, an evolutionary explanation for morality that works even though, underneath it all, people are egoists.”[9]
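
For readers who like to see the mechanics, here is a minimal sketch of Tit for Tat in a repeated version of the game, written in Python. It is an illustration only, not Axelrod’s actual tournament code; the strategy names, the function names, and the ten-round match length are my own illustrative choices, and the payoffs are simply the prison sentences from the story above (lower is better).

# A minimal sketch of Tit for Tat in a repeated prisoner's dilemma.
# Illustration only: the ten-round match length and these function names
# are illustrative choices, not Axelrod's actual tournament setup.
SENTENCE = {  # (my move, other player's move) -> years I serve this round
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    3,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    2,
}

def tit_for_tat(other_history):
    # Be nice first; afterwards simply copy the other player's last move.
    return "cooperate" if not other_history else other_history[-1]

def always_defect(other_history):
    # The pure egoist: defect every round regardless of history.
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    # Return the total years each player serves over the whole match.
    moves_a, moves_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)   # each strategy sees only the other's past moves
        b = strategy_b(moves_a)
        years_a += SENTENCE[(a, b)]
        years_b += SENTENCE[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))    # (10, 10): steady mutual cooperation
print(play(tit_for_tat, always_defect))  # (21, 18): one sucker payoff, then mutual punishment

Played this way, two Tit for Tat players settle into steady cooperation, while a Tit for Tat player facing a constant defector absorbs one “sucker payoff” and then punishes every round thereafter.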

This can be tested even further by having larger groups of players play one another simultaneously, rotating from partner to partner. Not only does Tit for Tat continue to perform well, but even a small group of Tit for Tat players in the midst of a larger group of more egoistic players can move the whole group towards adopting their strategy and thus orient the whole group—the whole society—towards cooperation. This conclusion affirms the wisdom of the late cultural anthropologist Margaret Mead: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.” It also suggests, again, that the more moral choice to cooperate ultimately serves our self-interest better than the less moral choice to defect.
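
That cluster effect can be sketched the same way. Using the play, tit_for_tat, and always_defect functions from the sketch above, and population sizes that are purely illustrative, a small round-robin makes the point:

# A rough sketch of the "cluster" idea: a few Tit for Tat players
# inside a larger population of pure defectors, everyone playing everyone.
# Population sizes are illustrative, not taken from Axelrod's tournaments.
population = [tit_for_tat] * 3 + [always_defect] * 9
totals = [0] * len(population)

for i in range(len(population)):
    for j in range(i + 1, len(population)):
        years_i, years_j = play(population[i], population[j], rounds=10)
        totals[i] += years_i
        totals[j] += years_j

avg_tft = sum(totals[:3]) / 3     # average sentence inside the Tit for Tat cluster
avg_egoist = sum(totals[3:]) / 9  # average sentence among the constant defectors
print(avg_tft, avg_egoist)        # with these numbers: 209.0 vs. 214.0 years

Because the Tit for Tat players cooperate with one another, they come out of the round-robin with less total prison time than the defectors, even though every defector “beats” every Tit for Tat player head to head. In Axelrod’s evolutionary versions of the tournament, strategies that do well are imitated or reproduce, which is how a small cooperative cluster can pull a whole population toward cooperation.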

This conclusion certainly resonates with Unitarian Universalist moral impulses—and perhaps the moral impulses at the heart of many religions—although Tit for Tat is not language I would use to describe those moral impulses. Our morality begins in and grows out of relationships. Ours is a covenantal religion. We’re all in this together. As Unitarian Universalists we covenant to affirm and promote seven principles.[10] And as a congregation we have crafted a unique covenant to guide our interactions with one another.[11] We come here to be part of a community. We recognize at a deep level that we benefit from being part of a community, that in community we find grounding to counter all those trends in the larger world that drive people apart, that erode social bonds, that thrive on and exploit our isolation. We know our principles are hard to make real in the world, and even harder to make real in the absence of community. Thus, our first move is cooperation. We’re all in this together.

But it’s not our moral impulse to play a Tit for Tat strategy. It’s not our impulse to defect as soon as the other player defects. It’s not our impulse to punish. Our moral impulse is to sustain relationships, to continue cooperating with the defector, to continue articulating a message—through word and deed—that those who participate in our community, and indeed all those with whom we come into contact, have inherent worth and dignity, are part of the same interdependent web, are deserving of our love and care, deserving of the benefit of the doubt. Our UUS:E covenant even says that if we fail to uphold it we will strive for forgiveness. In the terms of the game, we strive to meet defection with cooperation, again and again and again.

Can this impulse be exploited? Yes. This impulse would likely land us in prison frequently. Should we tolerate ongoing behaviors that weaken our community? No, of course not. There are times when any faith community needs to draw lines, set boundaries, defect. But we have faith in the power of community. We have faith in the power of relationship. We’re all in this together. And it’s good to know what the data say: over time, self-interest is best attained through cooperation. What’s good for the whole is ultimately good for the individuals who make up the whole. And that’s how we strive to play.

Amen and blessed be.

 

[1] Poundstone, William, Prisoner’s Dilemma: John Von Neumann, Game Theory, and the Puzzle of the Bomb (New York: Anchor Books, 1992), pp. 118-119.

[2] Poundstone, Prisoner’s Dilemma, p. 90.

[3] Poundstone, Prisoner’s Dilemma, p. 129.

[4] See the clip at http://www.youtube.com/watch?v=p3Uos2fzIJ0. Also, this Radiolab story, “What’s Left When You’re Right?” incorporates the “Golden Balls” clip and is very entertaining: http://www.radiolab.org/story/whats-left-when-youre-right/.

[5] Poundstone, Prisoner’s Dilemma, p. 121.

[6] Poundstone, Prisoner’s Dilemma, p. 123.

[7] Hayden, Ben, “Rethinking the Morality of the Prisoner’s Dilemma,” in “The Decision Tree,” Psychology Today, July 28, 2013. See: http://www.psychologytoday.com/blog/the-decision-tree/201307/rethinking-the-morality-the-prisoners-dilemma.

[8] Poundstone, Prisoner’s Dilemma, pp. 236-248. For more information, see Axelrod, Robert, The Evolution of Cooperation (New York: Basic Books, 1984).

[9] Robinson, John, “The Moral Prisoner’s Dilemma” is at http://intuac.com/userport/john/writing/prisdilemma.html.

[10] For the language which names the seven Unitarian Universalist principles as a covenant, see: http://www.uua.org/uuagovernance/bylaws/articleii/6906.shtml.

[11] See the UUS:E covenant at https://uuse.org/ministries/principles-and-mission/#covenant.