What's Wrong With Superrationality?
    By Johannes Koelman | January 12th 2013
    Regulars to this blog know I am partial to game theory. The very idea that mathematical reasoning can teach us a thing or two about the strategies we deploy in social interactions is most intriguing. Game theory recognizes that humans possess rational and selfish characteristics, and builds models describing human decisions based on no more than these two characteristics. This minimalistic approach teaches us a lot about the character of economic behavior and the emergence of strategies built on cooperation, retaliation, etc.

    At the same time, game theory forces us to think deeper about what it means to be rational. I have written about this matter before (here and here). There is much more to be said about the subject (and I will return to it in one of my next blog posts), but today I want to focus on a game that, in all honesty, has little to add to the concept of rationality.

    I am talking here about the Prisoner's dilemma (PD).* It is a game that is boring from a game theory perspective, yet over the years it has attracted an impressive amount of attention, particularly so in the pop science media. The reason for all the attention is that the predicted outcome for this game surprises most people. Game theory predicts that rational players who focus on optimizing their return will knowingly avoid the win-win situation for this game and settle for a smaller return.

    How can this be?

    The simple answer is that in any game rational players will end up in an equilibrium of individual choices, and not necessarily in an optimum, and that PD is a game designed to render manifest the difference between the equilibrium outcome and the optimal outcome.

    Fixing What Ain't Broken

    Yet, although there is no paradox, most people are stunned by PD's sub-optimal outcome. Many of them reason they would play the game differently. While they admit to being selfish, they see themselves as capable of transcending the choice game theory marks as 'rational' and of making the 'superrational' choice that leads to the win-win outcome. For a group of selfish individuals this is the preferred strategy, so they reason, simply because when the game is played amongst people who all follow the 'superrational' approach, only win-win outcomes will result. These outcomes, when compared with the game-theoretical outcome, will be rationally preferred by all participants.

    Note that these 'superrationalists' do not argue against the participants being driven by selfish motives. Rather they declare that the game-theoretical definition of 'rational behavior' does not do justice to the fact that humans are capable of seeking win-win situations. They argue that selfishness in no way blocks this drive to land on a win-win. On the contrary, as a win-win in PD represents the optimal feasible result also from an individual perspective, selfishness should drive you towards such a win-win. That is: provided you act 'superrationally'.

    Douglas Hofstadter, the cognitive scientist who managed to sell 'superrationality' to the masses, happily admits he is intrigued by the PD game:

    "The emotions churned up by the Prisoner’s Dilemma are among the strongest I have ever encountered, and for good reason. Not only is it a wonderful intellectual puzzle, akin to some of the most famous paradoxes of all time, but also it captures in a powerful and pithy way the essence of a myriad deep and disturbing situations that we are familiar with from life. Some are choices we make every day; others are the kind of agonizing choices that we all occasionally muse about but hope the world will never make us face."

    Quite a different view from the one expressed in my statement above: "from a game theory perspective PD is boring". Let's dive into PD, and let's see if I can convince you that 'superrationalists' read way too much into the game PD, and that the concept of 'superrationality', which is tightly linked to this specific game, represents patent nonsense resulting from a doomed attempt to fix what ain't broken.

    What Do You Do?

    Imagine you are given a unique chance to earn some money. You will be playing against one other person who you don't know, who you don't get to see, and who you can't communicate with. Both of you are simultaneously presented with an envelope, and both of you have the choice of leaving the envelope empty or of putting $ 2,000 in it. The money put in the envelope gets doubled by the organizer of the game and goes to the other participant, who will receive the doubled amount the next morning.

    What do you do?

    You are tasked to maximize your return. You can trust the organizers of the game. You are not concerned about the earnings of the other individual. If it helps, imagine you are poor and you have been informed the other player is a wealthy millionaire. You have a very sick child at home. Without any medicine your child will die within two weeks. Every $ 1,000 you spend on medicine will reduce her chances of dying by 20%. The two grand in your hand is your last money.

    Again: what do you do? Do you cooperate and put two grand in the envelope, or do you defect and leave the envelope empty?

    The simple answer is that the money you receive from the other participant will be independent of the amount of money you send his direction. No matter what the other participant is up to, if you leave the envelope empty, your end result will be $ 2,000 higher. So you would be a plain fool to cooperate and put your money in the envelope. You would in fact be justified in arguing you are morally obliged to leave the envelope empty. Just think about what conversation you would have with your partner who is mourning your dead child: "Sorry honey, we ran out of money: I expected a stranger to ensure $ 4,000 would come our way, and in anticipation of this gift I figured I'd make this stranger happy and send him our last money".
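    The dominance argument can be spelled out in a few lines of Python. This is an illustrative sketch (the function name and bookkeeping are mine, not from the game description); it uses the amounts from the envelope game above, in thousands of dollars:

```python
# Single-shot PD, amounts in thousands of dollars.
# You start with $2,000. Cooperating ("C") means mailing it away
# (the organizer doubles it for the other player); defecting ("D")
# means keeping it.
def end_result(my_choice, other_choice):
    kept = 0 if my_choice == "C" else 2         # your own $2,000
    received = 4 if other_choice == "C" else 0  # the doubled envelope
    return kept + received

# Whatever the other player does, defecting leaves you exactly
# $2,000 better off: "Defect" strictly dominates "Cooperate".
for other in ("C", "D"):
    assert end_result("D", other) - end_result("C", other) == 2
```

The loop makes the point mechanically: the gap between defecting and cooperating is the same $ 2,000 in both columns, so no belief about the other player can make cooperation pay.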

    And how would you respond to a Mr. Hofstadter who insists the 'superrational' choice is to cooperate and put your last money in that envelope? Will that be a polite "sorry, not interested!" Or will your response use more blunt terms?

    Can you imagine yourself in the shoes of a 'superrationalist'? Would you give 'superrational' advice to a poor guy with a dying child in his arms? Would such advice even be morally defensible?

    Playing The Wrong Game

    The whole 'superrationality' thing is a 'fix' for a non-issue. Sure, the game-theoretical outcome (both participants defecting, that is, both sending in empty envelopes) is not the optimum outcome for you. But neither is the outcome 'both cooperating'. When your opponent happens to cooperate, the optimum outcome for you is realized when you defect. Unfortunately, this optimum outcome for you cannot be realized when the other participant has other plans than cooperating. All that is under your control is to optimize your play by avoiding putting any money in the envelope. There literally is nothing more you can do.

    Yet, many people continue to be unhappy with the PD-outcome. They reason: "Under the guidance of game theory, you both end up with the money you started with. When both of you would have put $ 2,000 in the envelope, you would both have become $ 2,000 richer. Surely, the latter is better for each of you individually and therefore rationally to be preferred by both of you."

    The problem with these 'superrationalists' is that they are playing a game that is different from PD. They wrongly assume that choices can be coordinated, or that individual players driven by selfish motives can be made to forfeit a sure gain. In PD neither is the case. Both players are independent, they cannot communicate, and there is no punishment or retaliation mechanism available to steer the other towards a behavior more rewarding for you.

    A two-player game with such characteristics is quite uncommon. As humans we are used to interactions with individuals in which we can discuss matters and negotiate upfront, and in which we can exercise influence via threats of retaliation. Such threats can linger in the background and don't need to be made explicit. Yet, these threats are essential in directing participants in a game to win-win outcomes.

    PD behaviors (rational individuals knowingly avoiding a win-win situation) disappear when retaliation options get added to the PD game. A direct way of introducing options for retaliation in a game is to allow for repeats of the game between the same participants. Many people are familiar with the iterated PD computer tournament conducted by Robert Axelrod and reported in his 1984 book The Evolution of Cooperation. However, few people realize that standard game theory is perfectly capable of predicting the outcomes of such iterated games.

    Let's see how that works.

    Iterating PD

    To keep things simple and manageable, we assume the iterated PD game is between robot players equipped with a one bit memory.** This one bit memory allows them to remember the opponent's last move. Under these circumstances a total of six pure strategies are feasible. That is, two zero-bit strategies:

    "Cooperate": cooperate (send two grand to the other party) in each round

    "Defect": defect (leave the envelope empty) in each round

    and four one-bit strategies:

    "Gullible Tit-for-Tat": start by pretending the other party has last cooperated, and in each round select the choice made by the other in the last round

    "Suspicious Tit-for-Tat": start by pretending the other party has last defected, and in each round select the choice made by the other in the last round

    "Suspicious Rob-me-Blind": start by pretending the other party has last cooperated, and in each round select the choice not made by the other in the last round

    "Gullible Rob-me-Blind": start by pretending the other party has last defected, and in each round select the choice not made by the other in the last round

    With single-bit players playing against each other, the outcomes will turn into a repetitive pattern. Depending on which strategies are pitted against each other, six possible cycles of game outcomes can emerge. These cycles are listed below (3 cycles of length 1, 2 of length 2, and 1 of length 4; "C" indicates cooperation, and "D" stands for defection) together with the earnings per round (averaged over a full cycle) for both players:

    You can work out for yourself which cycle will occur depending on your strategy and your opponent's strategy. The full table of 6x6 possible strategy encounters with the cell values showing your earnings per round (in thousands of $) is listed below:

    The color coding of the cells refers to the colors of the six game cycles.
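    The cycles and the table can be reproduced by direct simulation. Below is a sketch (the strategy encoding and function names are my own; the payoffs per round, in thousands of dollars, follow from the envelope game: +2 each when both cooperate, +4/-2 for a defector meeting a cooperator, 0 when both defect). Since every cycle length divides 4 and the transient dies out within a few rounds, averaging over a 400-round tail of a long match yields the exact per-cycle averages:

```python
# Iterated PD between one-bit players; earnings per round in $1,000s.
C, D = "C", "D"

def payoff(me, other):
    # Receive double the other's envelope, minus what you put in yourself.
    return (4 if other == C else 0) - (2 if me == C else 0)

# Each strategy: (assumed previous move of the opponent, response rule).
STRATEGIES = {
    "Cooperate":      (C, lambda last: C),
    "Defect":         (D, lambda last: D),
    "Gullible T4T":   (C, lambda last: last),
    "Suspicious T4T": (D, lambda last: last),
    "Suspicious RmB": (C, lambda last: C if last == D else D),
    "Gullible RmB":   (D, lambda last: C if last == D else D),
}

def earnings_per_round(mine, theirs, rounds=1000, window=400):
    """My average per-round earnings over the tail of a long match.

    All cycles have length 1, 2 or 4, so a 400-round tail average
    equals the exact cycle average once the transient has died out.
    """
    my_mem, my_rule = STRATEGIES[mine]
    their_mem, their_rule = STRATEGIES[theirs]
    total = 0
    for t in range(rounds):
        my_move, their_move = my_rule(my_mem), their_rule(their_mem)
        if t >= rounds - window:
            total += payoff(my_move, their_move)
        my_mem, their_mem = their_move, my_move   # each remembers the other
    return total / window

names = list(STRATEGIES)
for row in names:
    print(f"{row:15s}", [earnings_per_round(row, col) for col in names])
```

Running this prints the 6x6 table of your earnings per round for each strategy encounter, with your strategy on the rows and your opponent's on the columns.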

    Game theory tells us that rational players will select strategies that form a Nash equilibrium. In other words, a rational player will act on the assumption that all players individually aim for a choice such that after the act no one will regret his/her choice. For this game, which is symmetric in both players,*** this means that you as a rational player will select a strategy corresponding to a diagonal entry in the table that has no element in the same column exceeding the earnings listed in the diagonal cell.**** For the game under consideration, there are two such Nash strategies: "Always Defect" and "Gullible T4T" (see below).

    Choices other than the equilibrium choices "Always Defect" and "Gullible T4T" will not be considered by a rational player. If all players limit their strategies to these two rational choices, "Gullible T4T" will never do worse than "Always Defect", while doing better when it meets the same strategy. So in choosing between both Nash equilibria, the rational choice is to select "Gullible T4T": start by cooperating, and thereafter copy your opponent's last move.
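    The equilibrium selection can also be checked mechanically. The sketch below is a self-contained re-implementation (my own compact encoding of the six one-bit strategies and the per-round earnings from the envelope game) that tests, for each diagonal entry of the 6x6 table, whether any unilateral deviation in the same column would earn more:

```python
C, D = "C", "D"

# (assumed previous move of the opponent, response rule), one per strategy.
STRATEGIES = {
    "Cooperate":      (C, lambda last: C),
    "Defect":         (D, lambda last: D),
    "Gullible T4T":   (C, lambda last: last),
    "Suspicious T4T": (D, lambda last: last),
    "Suspicious RmB": (C, lambda last: C if last == D else D),
    "Gullible RmB":   (D, lambda last: C if last == D else D),
}

def per_round(mine, theirs, rounds=1000, window=400):
    # Tail average = exact cycle average (all cycle lengths divide 4).
    my_mem, my_rule = STRATEGIES[mine]
    their_mem, their_rule = STRATEGIES[theirs]
    total = 0
    for t in range(rounds):
        a, b = my_rule(my_mem), their_rule(their_mem)
        if t >= rounds - window:
            total += (4 if b == C else 0) - (2 if a == C else 0)
        my_mem, their_mem = b, a
    return total / window

# A strategy s is a symmetric Nash equilibrium if no deviation d
# earns more against s than s earns against itself.
nash = [s for s in STRATEGIES
        if all(per_round(d, s) <= per_round(s, s) for d in STRATEGIES)]
print(nash)  # ['Defect', 'Gullible T4T']
```

"Always Defect" survives because nothing beats 0 per round against a permanent defector; "Gullible T4T" survives because nothing earns more than 2 per round against a tit-for-tat player who starts out cooperating.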

    Irrational Superrationality

    Gullible T4T emerging as the winner is a result all 'superrationalists' can be happy with: "cooperate, and retaliate when the other doesn't". Note, however, that there was no need for any 'superrationality' to reach this cooperative outcome. All that was needed was standard game theory, with retaliation sneaking in via repeat plays against the same player. In fact, even when retaliation is built directly into a single-shot PD game, a cooperate-and-retaliate strategy emerges as the game-theoretical outcome. In other words, add the possibility of retaliation to rational play and the whole need for 'superrational' coordination evaporates.

    Where does this leave us with 'single shot PD'?

    Basic 'single shot PD' without any retaliation opportunity represents a degenerate situation that many find difficult to imagine. In real life an encounter with an individual is seldom anonymous, and hardly ever guaranteed not to reoccur. So retaliation is always there, at least as a threat.

    Anyone not fully appreciating the weird 'no retaliation possible' aspect of single-shot PD will fall victim to the urge to cooperate. An urge built by many years of experience with real-life games inundated with opportunities for retaliation. Superrational play of PD is nothing more than playing PD based on habits that evolved in a retaliative environment. Such habits make perfect sense in many day-to-day social interactions, but the same habits going unchallenged in the degenerate game 'single-shot PD' translate into irrational behavior.


    * The 'single shot PD' game, to be precise.

    ** Participants with larger memory capacity can be handled at the expense of rendering the analysis exponentially more cumbersome.

    *** The game is symmetric in the sense that if both players were to swap their roles, their situations would not change (both players simultaneously have to make a move based on the same choices with the same pay-offs).

    **** In this case we are allowed to ignore mixed strategies. Note that if the memory of the players gets extended to two bits, a number of interesting mixed strategies do come to the fore.


    Gerhard Adam
    While not directly related to PD, it is also important to recognize that in real-world encounters, the cost of retaliation has to be factored in.  The implicit assumption in such models is that retaliation or loss translates into the same cost for both parties, which is part of what drives their behaviors.  However, if there is no retaliation that you can advance against me that matters, then my decision is effectively independent of any threat you may level.

    This is precisely what we find happening in many economic situations where threats against financial monoliths are ignored, because there is a clear recognition that without an unprecedented level of customer organization, such threats are fundamentally meaningless.  Therefore the idea of consumer behavior affecting business practices is only true when there is enough of a role played by the consumer to have an effect on the business.  It's very similar to the role of probability theory in Las Vegas.  Even a large win by a single gambler has no material effect on the "house", so it doesn't affect how such games are managed.

    So, to use your example of the cash in an envelope, if I were super rich, then there is no incentive for me to ever put money in the envelope, under the supposition that even in an iterated game, I will benefit if the other individual elects to cooperate even once.  Whereas if I am poor, I may be inclined to "hope" that I have at least one chance of doubling my money.  So, in the case of GT4T, the "gullible" or "cooperative" individual always incurs the first loss against a defect-only strategy.  While it's not optimum, if I don't really need the gain, then it's a viable strategy [always defect] to at least come out a little bit ahead and certainly ahead of the other player.

    Again, this is often what we see in real-world economic situations, such as health-care insurance or some comparable benefits situation, where an initial claim is always denied.  The insurer may pay the claim on an iterative set of claims, but if the claimant simply goes away it's more beneficial to the insurer.  Since there is no viable retaliation mechanism, the most rational business model is to uniformly reject first time claims regardless of merit.
    Mundus vult decipi
    It is a game that is boring from a game theory perspective, yet over the years it has attracted an impressive amount of attention, particularly so in the pop science media.
    Take a course on game theory: PD pops up in all kinds of games as a sub-game. It is somewhat like an harmonic oscillator in physics.
    And how would you respond to a Mr. Hofstadter who insists the 'superrational' choice is to cooperate and put your last money in that envelope?
    Please - you make no attempt to understand his position, and this is certainly a straw man if I ever saw one.  Look - if you want to deride superrationalism, why don't you do it in a mature, scientific way? I still am puzzled why you could not even answer the simple and very friendly question I asked you the last time.  You seem to be way too emotionally invested in this question to be able to have an objective perspective on it.
    Superrational play of PD is nothing more than playing PD based on habits that evolved in a retaliative environment.
    Great - exactly what I expected.  Superrationalism is not because here is how superrationalism evolves.  Yeah - and temperature is total nonsense, because there are only particles bumping around.
    Thanks. Very instructive to see a standard game-theoretical analysis yielding Tit-for-Tat as the preferred equilibrium outcome. Other science publications did leave me with the wrong impression such iterated games are beyond scope of standard game theory.

    Gerhard Adam
    The problem with these 'superrationalists' is that they are playing a game that is different from PD. They wrongly assume that choices can be coordinated, or that individual players driven by selfish motives can be made to forfeit a sure gain.
    Actually it seems that you're using an unusual definition.  After all, it is a contradiction to argue that there is a rational choice to be made and then proceed to an irrational one because it's the only one that makes sense.  It strikes me that the confusion occurs by inventing the term "superrationalist", instead of simply confining it to rational players.

    If there is a rational choice to be made and it is not made, then one can't argue that the players are behaving rationally.  That's the point in the original articles, by claiming that adding 3 + 5 ALWAYS leads to 8, then one cannot argue that there is a choice or a decision process that is being followed.  Therefore if there is a rational choice to be made, then one cannot argue that more than one option is available and still claim that it is a rational game.

    If there is a rational choice, then, by definition, all players must reach the same rational conclusion.  If they do not, then one cannot claim that the players are rational.  Your examples continue to be fraught with concepts like trust, cooperation, etc.  None of these are traits required by a rational game.  It also doesn't require cooperation or communication to arrive at the conclusion that 3 + 5 = 8, and consequently this is something that is rational and independent of any decision or trust issues.

    That's the point being made by proposing the notion of the "superrationalist".  Such an individual is aware of the game, aware of the possible outcomes, and consequently recognizes that the only rational thing to do would be to cooperate.  To conclude that someone would not cooperate, leads to a cascade of irrationality which is what produces the less than optimum result.  In short, rationality gives way to fear and uncertainty.

    While the argument can be made that game theory reflects actual outcomes based on human behavior, we are also forced to conclude that many of these human interactions are not actually rational, and are based on many other factors.  In fact, it is the fear of retaliation that drives us to the much more reasonable, rational choice.
    Mundus vult decipi
    Johannes Koelman
    You are confusing rationality with optimality. It is perfectly rational for a group of interacting selfish individuals to land on a sub-optimal outcome.

    I agree with one thing you say but it leads me to a conclusion opposite to yours: the very term 'superrationality' causes a lot of confusion, as it describes a group of selfish individuals reaching an irrational decision. After the fact, each of the 'superrational' participants will regret his/her choice (cooperate), as an alternate choice was available (defect) that would have led to a much higher return.

    If your child dies, despite the $ 4,000 of cash you could spend on medicine, would you not regret not having made the choice that would have led to a total cash of $ 6,000, enough to guarantee your child will fully recover?
    Gerhard Adam
    No, it appears that the problem is that you're focusing on selfishness, rather than the suggested meaning of rationality here.  Such a state doesn't change the answer of 3+5, and consequently it's irrelevant.

    The argument for the "super-rationalist" is that if there is a particular logical outcome that is considered "rational", then, by definition, all supposedly rational players should understand the problem and arrive at the same conclusion.  It cannot be otherwise.  When you begin to introduce other behavioral considerations, then your problem may reflect real-world behaviors, but it says nothing about what is rational. 

    I agree that optimum outcomes may not be a necessary condition to being rational, but then you haven't defined what you mean by rational, since your examples focus exclusively on optimal outcomes.  So, if an optimal outcome is not the objective of the game, then what is rational play supposed to consist of?
    Mundus vult decipi
    Johannes Koelman
    A rational strategy is one that avoids choices that can be predicted to lead to regret. Single shot PD is a boring game in the sense that there is no play by your opponent that could - after the act - make you regret having played "Defect" instead of "Cooperate".

    I will discuss a key complicating factor to the definition of rationality (individual rationality vs group rationality) in one of my next blog posts. PD is too obvious a game to be affected by such a complication.

    Gerhard Adam
    A rational strategy is one that avoids choices that can be predicted to lead to regret.
    Unfortunately, we are, again, stuck with a value judgement for something that should preferably be objective.  So, the rationality of any particular action is purely subjective. 

    After all, isn't the judgement of "regret" simply a rephrasing of an optimal outcome as viewed by outside observers?  You might argue that "optimal" is objective, but that simply presumes that all the players share the same values and motivations.  If they don't, then establishing that a player is behaving rationally becomes difficult if not impossible to assess.

    Of course, that doesn't include players that may effectively change the game.
    Mundus vult decipi
    Johannes Koelman
    No, there is no value judgement. In PD the utilities are objectively defined.

    Can I tempt you to play a one-shot PD game with me? We could make this a replica of the PD game described above. Each of us puts an amount of $ 2,000 in deposit (we can negotiate how we arrange things such that we both can fully trust the money is safe). Next, we simultaneously send an envelope to the other. The envelope is either empty or contains $ 2,000. The content of each envelope gets doubled up using the deposit. We can each keep the doubled-up money we receive. If there is money left in the deposit, it gets distributed between us 50:50.

    In case we both make the same choice, I propose you accept this choice as 'rational'. Ok?

    Gerhard Adam
    I would argue that the game itself isn't rational.  If we both put up $2000 in deposit, then there is $4000 available.  If each player sends the other $2000 and the amount gets doubled, they merely get what they started with.  If one of the players sends $2000 while the other sends nothing, then the player sending money loses and if they both send nothing, then they simply divide up the original deposit [which is what they started with].

    This isn't a form of the prisoner's dilemma since there is no means by which an individual can make a choice that is better than simply not playing the game at all.

    However, even if the game were formulated differently, it doesn't come down to rational play, but rather in assessing human character and how willing one is to trust the other player.  As a result, the game is contingent on values, and not rationality aimed at optimizing outcomes.
    In PD the utilities are objectively defined.
    But you cannot define each individual's attitude towards them, nor their sense of how they assess other individuals.  In fact, a rational decision based solely on the utility would invariably result in a loss, which is why most people are surprised at the less than optimal outcomes.

    In fact, the problem can be demonstrated much more readily.  Using a computer program that is written to optimize results [and not use deception or subterfuge], it will invariably always lose in PD, because if it seeks to optimize utility it will "misunderstand" the true nature of the game which is not to be rational.  Two computer programs written to that end, would always optimize their results, because they would represent the "super-rational" players that could only reach one conclusion. 
    Mundus vult decipi
    A computer program that is written to optimize results in a single PD game contains a single statement: "Do Defect". This program will do very well and will certainly not "lose". Whatever choice the other party made, a post-mortem will conclude this simple program managed to get the best result achievable.

    J.K. is correct here with the regret - in fact, that is why suicide is always rational (zero uncertainty about non-regret).  The problem with his writing is that he seems to charge others with holding superrationality as mystic rather than those others already understanding better than he does that superrationality is an evolved meta-strategy that is successful in situations where primitive (a-social, psychopathic) rationality is denied access to the game (a coevolved environment).

    In order for your game to involve a "coevolved environment", you "are playing a game that is different from [single-shot] PD."  (So long as your alternate game involves a decrease in reproduction for the [single-shot] PD "losers", and the variation in reproduction shows a preference for replicating the reproducer, then your game will have the co-evolutionary outcome you refer to.  However, this doesn't change the fact that it is a different game.)


    Correct - the superrational actor believes to play a different game, namely one with superrational opponents.  The rational player assumes rational opponents.  Further down are the mere pick-whatever players who believe nothing and play against other pick-whatever units.  Evolution brings us from one to the next level.  What JK is doing is the equivalent of holding "rationality" for nonsense because all players are fundamentally just do-whatever mechanisms, and if you play against those, then you better play whatever and not at the Nash equilibrium.
    Do you have references to papers that support your claims? Any peer reviewed paper on evolution towards superrational choice will do. Thank you.

    Please. Stop this. Stop educating the ignorant. Let Sascha and Gerhard and countless others indulge in their super-rational delusions. It's not profitable and no fun to play against rational opponents. We need more super-rational doormats!

    Omar Chedda
    If you consider other people as doormats for your selfish interests, it means that your parents did not raise you properly. If perchance you happen to be walking down a lonely street one day, and a robber accosts you with a gun knowing that he will not be caught, you should smile and say, "Now I am the doormat."
    No single peer-reviewed paper on 'super rationality' has ever been published. Pure crackpottery.

    Gerhard Adam
    Perhaps you could provide a link to a peer-reviewed paper on "rationality"?  Of course, I'm not convinced you actually understand what peer-review is for.
    Mundus vult decipi
    I am a peer reviewer myself, so if you can teach me on the subject that would be greatly appreciated. Have you ever read a paper on game theory? The paper by Nobel Laureate Robert Aumann "Rationality and Bounded Rationality" (Games and Economic Behavior vol. 21, p 2-14, 1997) would be a good start. This one I like better though: Robert Aumann and Jacques Dreze: Rational Expectations in Games, Am. Economic Review, vol. 98, p 72-86, 2008.

    Gerhard Adam
    Do you actually read the papers you supposedly post about?

    From Robert Aumann "Rationality and Bounded Rationality"
    The paradox is resolved by noting that in game situations, one player’s irrationality requires another’s super-rationality. You must be super-rational in order to deal with my irrationalities. Since this applies to all players, taking account of possible irrationalities leads to a kind of superrationality for all. To be super-rational, one must leave the equilibrium path. Thus, a more refined concept of rationality cannot feed on itself only; it can only be defined in the context of irrationality.
    Mundus vult decipi
    Exactly what I expected. You scan for "superrationality" and "aha!" You clearly haven't read the paper. Aumann is not discussing Hofstadter's ideas here. Read the paper and you might learn a few things.

    Gerhard Adam
    I never claimed that Aumann was discussing Hofstadter's ideas.  You simply dismissed super-rationality as a crackpot idea.  You then tried to argue from authority as if an anonymous poster claiming to be engaged in peer-review would be intimidating.  Then you flogged a Nobel laureate's papers.

    Sorry, but it simply seems that you don't understand the idea and want to hide behind whatever sheen of authority you can garner.  Argue against the idea or not, but your comments at this point are little more than anonymous trolling.

    However, if you insist.  Aumann is actually arguing in a very similar vein to the ideas put forward by Hofstadter: "It is assumed that with high probability the players are ‘‘rational’’ in the sense of being utility maximizers, but that with a small probability, one or both play some one strategy, or one of a specified set of strategies, that are ‘‘crazy’’, that is, have no a priori relationship to rationality."

    "...who always plays tit-for-tat no matter what the other player does, and it turns out that the rational type must imitate the crazy type: he must also play tit-for-tat, or something quite close to it."
    Mundus vult decipi
    Wow. Who is trolling here? You are freely bullying and venting your insults. That's fine, go ahead, I will not lower myself to your level. Let me instead help you and spell things out. Aumann presents a rational response to irrational opponents, and refers to it as 'superrationality'. Hofstadter describes opponents who 'transcend rationality' and lift each other to a level he refers to as 'superrationality'. Aumann and Hofstadter use the same term for exactly opposite situations. Why does Aumann not avoid using the same term that Hofstadter uses for different purposes? That's simple: no single game theorist has ever taken Hofstadter's ideas seriously, and no single game theorist would ever associate the term 'superrationality' used in a publication with Hofstadter's inconsistent ideas.

    Not a game theorist, but familiar with Aumann's work. Aumann certainly never suggested something silly like cooperating in a single-shot PD being logical or rational. So Anonyrat is correct that Aumann and Hofstadter use the term "super-rational" to denote two widely different concepts. However, it would have helped if Anonyrat had been more precise in his initial remark. It seems he intended to say "No single peer-reviewed paper on Hofstadter's super-rationality has ever been published. Pure crackpottery." People can disagree with this remark, but it seems an indisputable fact that game theorists are pretty unanimous in not assigning any weight to Hofstadter's ideas.

    Gerhard Adam
    ...never suggested something silly like cooperating in a single shot PD being logical or rational.
    Again, why is this "silly"?  The conflict occurs because the game isn't about maximizing results, but rather about second-guessing an opponent that is likely to be seeking personal advantage.  I don't have a quarrel with that assessment, but then we're talking about behavior, not rationality.

    As I said before, this is readily demonstrated if we eliminate human motivation from the game and simply let two "rational systems" [i.e. computer programs like tit-for-tat] engage in the game.  So, the problem is that either these two systems are not rational, or the game isn't representative of rational behavior.

    The notion of super-rationality simply argues that if we eliminate such behavioral elements, then the only rational conclusion one could reach is the cooperative strategy.  While you may criticize it as not representing human behavior, it certainly isn't silly.  In fact, the counter-intuitive result in many game theory scenarios arises because we are forced down certain paths, not because they are rational, but because we can't trust the behavior of others.  So we simply redefine "rational" to represent the condition in which we don't want to be taken advantage of.
    Mundus vult decipi
    You state "the game isn't about maximizing results" and that is not correct. The single-shot PD game is about maximizing your individual result and nothing else. "Second-guessing an opponent that is likely to be seeking personal advantage" does not enter the picture. Firstly, you are supposed to be selfish (that is the standard starting point for game theory, and also Hofstadter's assumption) and not envious, and secondly the single-shot PD game doesn't require second-guessing an opponent. There are only two scenarios you need to take into account: 1) opponent cooperates, 2) opponent defects. In case 1) you maximize your gain by defecting. In case 2) you also maximize your gain by defecting. Assuming you are rational and you want to maximize your gains, you must defect. If you end up in a defect-defect situation: tough luck. You were facing a strong (non-silly) participant and this is the best result you could get.
    Unless you can shoot holes in this logic, there is no need for further discussions. In fact, I don't want to get dragged into lengthy disputes, I reacted to this thread only because I thought I might be able to clarify a misunderstanding. I guess my clarification didn't help. Anyway, you and Hofstadter are entitled to your views, but don't expect others to embrace them as rational or super-rational (whatever that means).
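    As an aside, the dominance argument in the comment above can be checked mechanically. The sketch below is only an illustration (not from either commenter); the payoff values 5, 3, 1, 0 are assumed for the standard PD ordering T > R > P > S, and any payoffs with that ordering give the same result.

    ```python
    # Assumed payoffs for illustration, following the standard PD ordering
    # T > R > P > S (here T=5, R=3, P=1, S=0).
    PAYOFF = {
        ("C", "C"): 3,  # reward for mutual cooperation (R)
        ("C", "D"): 0,  # sucker's payoff (S)
        ("D", "C"): 5,  # temptation to defect (T)
        ("D", "D"): 1,  # punishment for mutual defection (P)
    }

    def best_reply(opponent_move):
        """Return the move that maximizes my payoff against a fixed opponent move."""
        return max(("C", "D"), key=lambda my: PAYOFF[(my, opponent_move)])

    # Defection is the best reply in both of the only two possible cases,
    # i.e. it strictly dominates cooperation.
    print(best_reply("C"))  # -> D (5 beats 3)
    print(best_reply("D"))  # -> D (1 beats 0)
    ```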

    Gerhard Adam
    The single shot PD game is about maximizing your individual result and nothing else.
    This is simply silly, since you cannot consider maximizing your individual results without considering your opponent.  Certainly we expect self-interest to be the driving motivation.

    The problem with your scenario is that case number 1 can never occur unless your opponent screws up.  So, you automatically screw yourself by going for the worst result.  The point of the "super-rational" player is that if both recognize this situation, then they would presumably recognize that they maximize their results in the cooperate-cooperate situation, since this is preferable to the defect-defect result.

    So, the only reason why defect-defect occurs is because the opponent isn't trustworthy.  It certainly isn't the most rational choice; it is a decision based exclusively on attempting to take advantage of an opponent that cooperates.  In other words, it is based on anticipated behavior, not rational thought.

    However, it is readily recognized that the reverse occurs in the iterated PD, so the question becomes: what action do you take if you don't know whether the game will be iterated or not?  In this case you find that even the single-shot PD game is played as a cooperate-cooperate strategy.  Again, this is readily demonstrated by the simple tit-for-tat game.
    ...secondly the single shot PD game doesn't require second guessing an opponent.
    Of course it does; that's precisely why you are compelled to defect, since the only way you can maximize your benefit is if your opponent cooperates.  If your opponent defects [using the same reasoning], then you both fail to maximize your results.  Therefore the "rational" solution is for you both to cooperate, since that will result in a higher return than defect-defect, and not be dependent on your opponent throwing the game.

    I understand the premise of the game: based on behaviors, we are compelled to go for the worst outcome, because we can't trust or control the behavior of our opponent.  The point in considering the "super-rational" position is that, if the problem is well known and clearly understood, then both players should also recognize that the optimum outcome is achieved through cooperation.  In fact, this is precisely what we find largely happens in the real world, because no one can be assured that any such encounter won't be iterative.

    The notion of selfishness being presented is a single-minded pursuit that neglects consideration for anything else.  That's fine if we're analyzing bacteria, but we can expect more rational results from supposedly rational players.
    Mundus vult decipi
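    The tit-for-tat claim in the comment above is easy to demonstrate in a few lines. This is only an illustrative sketch (not from either commenter), again assuming the standard payoffs T=5, R=3, P=1, S=0: two tit-for-tat programs lock into mutual cooperation, while tit-for-tat against an always-defect program collapses into mutual defection after one sucker round.

    ```python
    # Assumed payoff table: (my payoff, opponent payoff) for each move pair.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        """Cooperate first, then copy the opponent's previous move."""
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        """Return the total payoffs of both strategies over repeated rounds."""
        moves_a, moves_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a = strategy_a(moves_b)  # each player sees the opponent's history
            b = strategy_b(moves_a)
            pa, pb = PAYOFF[(a, b)]
            score_a += pa
            score_b += pb
            moves_a.append(a)
            moves_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # (30, 30): locked into cooperation
    print(play(tit_for_tat, always_defect))  # (9, 14): one sucker round, then mutual defection
    ```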

    Considering the state and nature of the present "peer-review" system, I, and I expect many here, would not take this evidence as anything but a very weak argument for the designation of "Pure crackpottery."


    You need to get your logic right. I agree there is a lot of crap out there, even in peer-reviewed journals, but that lends even more support to my claim. Hofstadter's ideas on rational choice are not covered by any publication; not even a single one has managed to slip through the rather leaky peer-review filter.


    My logic is quite sound, based upon the present nature and state of the "peer-review" system.  It is you who is making the mistake, not so much in logic as in having based your logic upon a flawed assessment/understanding.

    One's logic can be flawless and yet come to false conclusions when the premises of such logic are anything but sound.

    Of course, you have already exhibited additional flawed/fallacious thinking/reasoning/"argumentation" in your exchange with Gerhard.  So, why should we bother?