Regulars to this blog know I am partial to game theory. The very idea that mathematical reasoning can teach us a thing or two about the strategies we deploy in social interactions is most intriguing. Game theory recognizes that humans possess rational and selfish characteristics, and builds models describing human decisions based on no more than these two characteristics. This minimalistic approach teaches us a lot about the character of economic behavior and the emergence of strategies built on cooperation, retaliation, etc.

At the same time, game theory forces us to think deeper about what it means to be rational. I have written about this matter before (here and here). There is much more to be said about the subject (and I will return to it in one of my next blog posts), but today I want to focus on a game that, in all honesty, has little to add to the concept of rationality.

I am talking here about the Prisoner's dilemma (PD).* It is a game that is boring from a game theory perspective, yet over the years it has attracted an impressive amount of attention, particularly so in the pop science media. The reason for all the attention is that the predicted outcome for this game surprises most people. Game theory predicts that rational players who focus on optimizing their return will knowingly avoid the win-win situation for this game and settle for a smaller return.

How can this be?

The simple answer is that in any game rational players will end up in an equilibrium of individual choices, and not necessarily in an optimum, and that PD is a game designed to render manifest the difference between the equilibrium outcome and the optimal outcome.
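To make the equilibrium-versus-optimum distinction concrete, here is a minimal Python sketch. The payoff numbers are illustrative choices of mine (2 each for mutual cooperation, 0 each for mutual defection, 4 and -2 for the mixed outcomes), not yet the game described below:

```python
# Illustrative PD payoffs (my choice of numbers, for demonstration only):
# PAYOFF[my_choice][opponent_choice], "C" = cooperate, "D" = defect.
PAYOFF = {"C": {"C": 2, "D": -2},
          "D": {"C": 4, "D": 0}}

def best_response(opponent_choice):
    """The choice that maximizes my payoff, given the opponent's choice."""
    return max("CD", key=lambda my: PAYOFF[my][opponent_choice])

# Defecting is the better response whatever the opponent does...
print(best_response("C"), best_response("D"))  # D D
# ...so (D, D) is the equilibrium of individual choices, earning 0 each,
# even though (C, C) would earn both players 2: equilibrium, not optimum.
```

Neither player regrets defecting given what the other did, yet both sit below the win-win.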

Fixing What Ain't Broken

Yet, although there is no paradox, most people are stunned by PD's sub-optimal outcome. Many of them reason that they would play the game differently. While they admit to being selfish, they see themselves as capable of transcending the choice game theory marks as 'rational', and of making the 'superrational' choice that leads to the win-win outcome. For a group of selfish individuals this is the preferred strategy, so they reason, simply because when the game is played amongst people who all follow the 'superrational' approach, only win-win outcomes will result. These outcomes, when compared with the game-theoretical outcome, will be rationally preferred by all participants.

Note that these 'superrationalists' do not argue against the participants being driven by selfish motives. Rather, they declare that the game-theoretical definition of 'rational behavior' does not do justice to the fact that humans are capable of seeking win-win situations. They argue that selfishness in no way blocks this drive to land on a win-win. On the contrary, as a win-win in PD represents the optimal feasible result also from an individual perspective, selfishness should drive you towards such a win-win. That is: provided you act 'superrationally'.

Douglas Hofstadter, the cognitive scientist who managed to sell 'superrationality' to the masses, happily admits he is intrigued by the PD game:

"The emotions churned up by the Prisoner’s Dilemma are among the strongest I have ever encountered, and for good reason. Not only is it a wonderful intellectual puzzle, akin to some of the most famous paradoxes of all time, but also it captures in a powerful and pithy way the essence of a myriad deep and disturbing situations that we are familiar with from life. Some are choices we make every day; others are the kind of agonizing choices that we all occasionally muse about but hope the world will never make us face."

Quite a different view than that expressed in my statement above: "from a game theory perspective PD is boring". Let's dive into PD, and let's see if I can convince you that 'superrationalists' read way too much into the game, and that the concept of 'superrationality', which is tightly linked to this specific game, represents a bunch of patent nonsense resulting from a doomed attempt to fix what ain't broken.

What Do You Do?

Imagine you are given a unique chance to earn some money. You will be playing against one other person whom you don't know, whom you don't get to see, and with whom you can't communicate. Both of you are simultaneously presented with an envelope, and both of you have the choice of leaving the envelope empty or of putting $ 2,000 in it. The money put in the envelope gets doubled by the organizer of the game and goes to the other participant, who will receive the doubled amount the next morning.

What do you do?

You are tasked with maximizing your return. You can trust the organizers of the game. You are not concerned about the earnings of the other individual. If it helps, imagine you are poor and you have been informed the other player is a wealthy millionaire. You have a very sick child at home. Without any medicine your child will die within two weeks. Every $ 1,000 you spend on medicine will reduce her chances of dying by 20%. The two grand in your hand is your last money.

Again: what do you do? Do you cooperate and put two grand in the envelope, or do you defect and leave the envelope empty?

The simple answer is that the money you receive from the other participant is independent of the amount of money you send his direction. No matter what the other participant is up to, if you leave the envelope empty, your end result will be $ 2,000 higher. So you would be a plain fool to cooperate and put your money in the envelope. You would in fact be justified to argue you are morally obliged to leave the envelope empty. Just think about the conversation you would have with your partner who is mourning your dead child: "Sorry honey, we ran out of money: I expected a stranger to ensure $ 4,000 would come our way, and in anticipation of this gift I figured I'd make this stranger happy and send him our last money".
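A quick sketch, assuming the rules as stated (sending costs you $ 2,000; the doubled $ 4,000 arrives only if the other person sends), makes the independence argument explicit:

```python
# My reading of the envelope game's payoffs, in dollars: sending costs
# $2,000; a cooperating opponent delivers a doubled $4,000.
def my_result(i_send, other_sends):
    return (-2000 if i_send else 0) + (4000 if other_sends else 0)

# Whatever the other participant does, keeping the envelope empty
# leaves you $2,000 better off:
for other_sends in (True, False):
    advantage = my_result(False, other_sends) - my_result(True, other_sends)
    print(advantage)  # 2000 in both cases
```

What you receive is entirely out of your hands; what you send is pure loss to you.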

And how would you respond to a Mr. Hofstadter who insists the 'superrational' choice is to cooperate and put your last money in that envelope? Will that be a polite "sorry, not interested!"? Or will your response use blunter terms?

Can you imagine yourself in the shoes of a 'superrationalist'? Would you give 'superrational' steers to a poor guy with a dying child in his arms? Would such advice even be morally defensible?

Playing The Wrong Game

The whole 'superrationality' thing is a 'fix' for a non-issue. Sure, the game-theoretical outcome (both participants defecting, that is, both sending in empty envelopes) is not the optimum outcome for you. But neither is the outcome 'both cooperating'. When your opponent happens to cooperate, the optimum outcome for you is realized when you defect. Unfortunately, this optimum outcome for you cannot be realized when the other participant has plans other than cooperating. All that is under your control is to optimize your play by avoiding putting any money in the envelope. There literally is nothing more you can do.

Yet, many people continue to be unhappy with the PD outcome. They reason: "Under the guidance of game theory, you both end up with the money you started with. Had both of you put $ 2,000 in the envelope, you would both have become $ 2,000 richer. Surely, the latter is better for each of you individually and therefore rationally to be preferred by both of you."

The problem with these 'superrationalists' is that they are playing a game that is different from PD. They wrongly assume that choices can be coordinated, or that individual players driven by selfish motives can be made to forfeit a sure gain. In PD neither is the case. Both players are independent, they cannot communicate, and there is no punishment or retaliation mechanism available to steer the other towards a behavior more rewarding for you.

A two-player game with such characteristics is quite uncommon. As humans we are used to interactions with individuals in which we can discuss matters and negotiate upfront, and in which we can exercise influence via threats of retaliation. Such threats can linger in the background and don't need to be made explicit. Yet, these threats are essential in directing participants in a game to win-win outcomes.

PD behaviors (rational individuals knowingly avoiding a win-win situation) disappear when retaliation options get added to the PD game. A direct way of introducing options for retaliation in a game is to allow for repeats of the game between the same participants. Many people are familiar with the iterated PD computer tournament conducted by Robert Axelrod and reported in his 1984 book The Evolution of Cooperation. However, few people realize that standard game theory is perfectly capable of predicting the outcomes of such iterated games.

Let's see how that works.

Iterating PD

To keep things simple and manageable, we assume the iterated PD game is between robot players equipped with a one bit memory.** This one bit memory allows them to remember the opponent's last move. Under these circumstances a total of six pure strategies are feasible. That is, two zero-bit strategies:

"Cooperate": cooperate (send two grand to the other party) in each round

"Defect": defect (leave the envelope empty) in each round

and four one-bit strategies:

"Gullible Tit-for-Tat": start by pretending the other party has last cooperated, and in each round select the choice made by the other in the last round

"Suspicious Tit-for-Tat": start by pretending the other party has last defected, and in each round select the choice made by the other in the last round

"Suspicious Rob-me-Blind": start by pretending the other party has last cooperated, and in each round select the choice not made by the other in the last round

"Gullible Rob-me-Blind": start by pretending the other party has last defected, and in each round select the choice not made by the other in the last round

With single-bit players playing against each other, the outcomes will turn into a repetitive pattern. Depending on which strategies are pitted against each other, six possible cycles of game outcomes can emerge. These cycles are listed below (3 cycles of length 1, 2 of length 2, and 1 of length 4; "C" indicates cooperation, and "D" stands for defection) together with the earnings per round (averaged over a full cycle) for both players:

You can work out for yourself which cycle will occur depending on your strategy and your opponent's strategy. The full table of 6x6 possible strategy encounters with the cell values showing your earnings per round (in thousands of $) is listed below:

The color coding of the cells refers to the colors of the six game cycles.
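For readers who want to reproduce the earnings table, here is a sketch in Python. The encoding is mine: each one-bit strategy is an initial assumption about the opponent's last move plus a copy-or-invert rule, "RmB" abbreviates Rob-me-Blind, and payoffs are in thousands of $ under the envelope rules above:

```python
# Simulate the six one-bit strategies and print each row of the 6x6
# table of per-round earnings (in thousands of $, cycle averages).
C, D = "C", "D"
flip = {C: D, D: C}

STRATEGIES = {                # name: (assumed last move of opponent, rule)
    "Cooperate":      (C, lambda last: C),
    "Defect":         (D, lambda last: D),
    "Gullible T4T":   (C, lambda last: last),
    "Suspicious T4T": (D, lambda last: last),
    "Suspicious RmB": (C, lambda last: flip[last]),
    "Gullible RmB":   (D, lambda last: flip[last]),
}

def payoff(mine, theirs):
    """Sending costs 2; a cooperating opponent delivers a doubled 4."""
    return (-2 if mine == C else 0) + (4 if theirs == C else 0)

def earnings_per_round(me, opp, rounds=1000, tail=400):
    """My earnings per round, averaged once play has settled into its cycle."""
    (mem_a, rule_a), (mem_b, rule_b) = STRATEGIES[me], STRATEGIES[opp]
    history = []
    for _ in range(rounds):
        move_a, move_b = rule_a(mem_a), rule_b(mem_b)
        history.append(payoff(move_a, move_b))
        mem_a, mem_b = move_b, move_a      # each remembers the other's move
    return sum(history[-tail:]) / tail     # every cycle length divides 400

for me in STRATEGIES:
    row = [earnings_per_round(me, opp) for opp in STRATEGIES]
    print(f"{me:15s}", row)
```

Averaging over the last 400 of 1,000 rounds discards the opening transient, so each cell is the earnings averaged over a full repeating cycle.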

Game theory tells us that rational players will select strategies that form a Nash equilibrium. In other words, a rational player will act on the assumption that all players individually aim for a choice such that, after the fact, no one will regret his/her choice. For this game, which is symmetric in both players,*** this means that you as a rational player will select a strategy corresponding to a diagonal entry in the table that has no element in the same column exceeding the earnings listed in the diagonal cell.**** For the game under consideration, there are two such Nash strategies: "Always Defect" and "Gullible T4T" (see below).

Choices other than the equilibrium choices "Always Defect" and "Gullible T4T" will not be considered by a rational player. If all players limit their strategies to these two rational choices, "Gullible T4T" will never do worse than "Always Defect", while doing better when it meets the same strategy. So in choosing between both Nash equilibria, the rational choice is to select "Gullible T4T": start by cooperating, and thereafter copy your opponent's last move.
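The diagonal test takes only a few lines to spell out. The matrix values below are my own reconstruction, obtained by simulating the six one-bit strategies under the envelope payoffs (thousands of $ per round, cycle averages); treat them as an illustration of the check rather than an authoritative copy of the table:

```python
# Check which diagonal entries satisfy the Nash condition: strategy s is
# a symmetric Nash equilibrium if no strategy earns more against s than
# s earns against itself (no column element exceeds the diagonal cell).
NAMES = ["Cooperate", "Defect", "Gullible T4T",
         "Suspicious T4T", "Suspicious RmB", "Gullible RmB"]

# EARNINGS[row][col] = row player's earnings per round against col player
# (my reconstruction by simulation; thousands of $, cycle averages).
EARNINGS = [
    [2, -2, 2, 2, -2, -2],   # Cooperate
    [4,  0, 0, 0,  4,  4],   # Defect
    [2,  0, 2, 1,  1,  1],   # Gullible T4T
    [2,  0, 1, 0,  1,  1],   # Suspicious T4T
    [4, -2, 1, 1,  1,  4],   # Suspicious RmB
    [4, -2, 1, 1, -2,  1],   # Gullible RmB
]

nash = [NAMES[s] for s in range(6)
        if all(EARNINGS[r][s] <= EARNINGS[s][s] for r in range(6))]
print(nash)  # ['Defect', 'Gullible T4T']
```

Only "Defect" (Always Defect) and "Gullible T4T" survive the check, matching the two equilibria named above.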

Irrational Superrationality

Gullible T4T emerging as winner is a result all 'superrationalists' can be happy with: "cooperate, and retaliate when the other doesn't". Note, however, that there was no need for any 'superrationality' to reach this cooperative outcome. All that was needed was standard game theory, with retaliation sneaking in via repeat plays against the same player. In fact, also when retaliation is built directly into a single-shot PD game, a cooperate-and-retaliate strategy emerges as the game-theoretical outcome. In other words, add the possibility of retaliation to rational play and the whole need for 'superrational' coordination evaporates.

Where does this leave us with 'single shot PD'?

Basic 'single shot PD' without any retaliation opportunity represents a degenerate situation that many find difficult to imagine. In real life an encounter with an individual is seldom anonymous, and hardly ever guaranteed not to reoccur. So, retaliation is always there, at least as a threat.

Anyone not fully appreciating the weird 'no retaliation possible' aspect of single shot PD will fall victim to the urge to cooperate. This is an urge built by many years of experience with real-life games inundated with opportunities for retaliation. Superrational play of PD is nothing more than playing PD based on habits that evolved in a retaliative environment. Such habits make perfect sense in many day-to-day social interactions, but the same habits going unchallenged in the degenerate game 'single shot PD' translate into irrational behavior.


* The 'single shot PD' game, to be precise.

** Participants with larger memory capacity can be handled at the expense of rendering the analysis exponentially more cumbersome.

*** The game is symmetric in the sense that if both players would swap their roles, their situations would not change (both players simultaneously have to make a move based on the same choices with the same pay-offs).

**** In this case we are allowed to ignore mixed strategies. Note that if the memory of the players gets extended to two bits, a number of interesting mixed strategies do come to the fore.