    Homo Stupidus
    By Johannes Koelman | October 30th 2011 07:33 PM | 36 comments
    You are in a game with one hundred other players. They don't know you, you don't know them, and you can not communicate with any of them. The game is called 'even/odd(s)' and is explained to you as follows:

    "You have the choice between two selections: 'even' or 'odd'. The hundred other contestants face the same choice. You all make your choice simultaneously. If the total group of players select an even number of 'evens' and an odd number of 'odds', those who selected 'odd' will receive $3, and those who selected 'even' will receive $4. However, if the result amounts to an odd number of 'evens' and an even number of 'odds', no-one will receive a penny. Now go ahead and make a choice!"

    What is your choice?

    I suggest you give this half a minute of your thoughts, and note down your choice. We will come back to this game later in this post.
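For readers who like to experiment, the scoring rule can be sketched in a few lines of Python. The payoff values ($4 per 'even' and $3 per 'odd' on a winning outcome, $0 otherwise) are taken from the game description above; the helper name `payouts` is mine.

```python
# Sketch of the even/odd(s) scoring rule described above.

def payouts(choices):
    """choices: a list with one entry, 'even' or 'odd', per player.
    Returns the dollar payout of each player, in the same order."""
    n_even = choices.count('even')
    n_odd = choices.count('odd')
    # Winning outcome: an even number of 'evens' AND an odd number of 'odds'.
    if n_even % 2 == 0 and n_odd % 2 == 1:
        return [4 if c == 'even' else 3 for c in choices]
    return [0] * len(choices)

print(payouts(['odd'] * 101)[0])             # all select 'odd': everyone gets $3
print(payouts(['even'] + ['odd'] * 100)[0])  # a single 'even' spoils it: $0 for all
```

Note that with 101 players the two conditions are equivalent: the number of players is odd, so the count of 'evens' is even exactly when the count of 'odds' is odd.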


    Survival Of The Stupidest


    Half a year ago I posted an article on this blog entitled "Survival of the Stupidest". The post got quite a few reactions, and I think it deserves a follow-up. But first a few words about the definition of the term 'stupid'. In the current post I will continue using the same definition of 'stupid' as utilized in my earlier post. This is the definition of stupidity as given by Cipolla in "The Basic Laws of Human Stupidity". So wherever you read 'stupid', keep in mind that I am talking about stupid defined as: 
    A choice is stupid when it causes losses to others, while deriving no gain and even possibly incurring losses to the person making the choice.
    The term 'stupid' thus defined allows us to apply the machinery of game theory, and to analyze in detail the impact of strategic choices that benefit neither the individual nor the group of others. It should be clear that in the following, 'stupid' is not synonymous with qualifications such as 'foolish' or 'brainless'. More to the point, in the present context the classification 'stupid' should not be interpreted as an insult. The term is no more than the label of a quadrant of an impact table*:



    Cipolla's S4 impact table


    Cipolla stresses the importance of the lower-left quadrant in this impact table as follows:
    "Our daily life is mostly made of cases in which we lose money and/or time and/or energy and/or appetite, cheerfulness and good health because of the improbable action of some preposterous creature who has nothing to gain and indeed gains nothing from causing us embarrassment, difficulties or harm. Nobody knows, understands or can possibly explain why that preposterous creature does what he does. In fact there is no explanation - or better there is only one explanation: the person in question is stupid."
    In the previous post on stupidity I made the case that acting stupidly - consistently harming yourself and others - can equate to being a winner. It all depends on the numbers: stupid choices tend to flourish when others make similar stupid choices. In simple terms: stupid choices become winning choices provided enough players make stupid choices, and provided these choices harm smart players more than stupid players. Following my earlier post on this subject, I will refer to this phenomenon as the 'survival of the stupidest' (SotS) effect.

    I received many reactions to this 'survival of the stupidest' claim. Some react with arguments like: "Forget SotS effects. Just pit a stupid person in a chess game against a grandmaster. The stupid person will derive no benefits from his stupid moves. The grandmaster will tear him apart." 

    Others pose the question: "How abundant is the SotS effect really? Is it an exotic effect that pops up in a few carefully constructed example games, or is it much more generic in nature?"

    These are very relevant questions. Answering them allows me to elaborate on the SotS effect, to place it in context, and to better specify its relevance.

    In terms of the above S4 quadrants, we can split games into two broad categories. The first category consists of games that force their participants to act like a saint or to act selfishly. These are competitive games. Zero-sum games like chess have this kill-or-perish character. The second category consists of games that force their participants to act smart or to act stupid. These are cooperation games. 


    Selfish Versus Saint


    An example of the first category arises when a group goes out for dinner and agrees beforehand to split the check equally. When placed in this situation, are you going to order an expensive or an inexpensive dish? The expensive dishes are only marginally better. If you had to pay for your own meal, you would order a cheap dish, but as the costs will be shared by all, you prefer the more expensive dish. Welcome to the diner's dilemma!



    A typical pay-off matrix for a three-person Diner's Dilemma is shown above. It lists the net benefit (the value of the meal to you minus the costs incurred by you) depending on your choice (left column) and the choices of the others at the table (top row). In the example shown, the fact that the Diner's Dilemma belongs to the Selfish-vs-Saint group of games is made explicit by labeling the choices 'Selfish' (expensive dish) and 'Saint' (inexpensive dish). The situation in which you order the cheap dish, and so does everyone else, leads to you paying for and getting the cheap dish. This results in a net benefit to you of 2 units (lower-right cell in the matrix). The situation in which you order the expensive dish, and so does everyone else, leads to you paying for and getting the expensive dish. This results in a reduced benefit (1 unit, top-left cell) to you. Independent of your choice, your net benefit will increase as more people at your table select the cheap dish, as this reduces the price that you have to pay. This is reflected in your benefits increasing in both rows when going from left to right. 

    You prefer the others to act 'saint-like' by selecting the cheap dish. However, you yourself have a clear incentive to act 'Selfish' and select the more expensive option. This is because, regardless of the choice of the others, acting selfish will result in an increase in your net benefit of 1 unit. To see this, just compare the lower row of net benefits with the upper row. As all the guests at the table are in the same situation, you are bound to end up in the top-left corner situation where everybody pays for and gets the expensive dish. You have found yourself in a multiplayer prisoner's dilemma. 

    This Diner's Dilemma represents a clear example of a Selfish-vs-Saint game. The rational outcome is a situation in which everyone realizes a personal benefit that cannot be improved by changing one's own choice. Such a situation is referred to in game theory as a Nash equilibrium. The top-left corner in the above game matrix represents such a Nash equilibrium. If you were to deviate from the strategic choice 'Selfish' and opt for 'Saint', you would hurt yourself, as your benefit would drop from 1 to 0 (downward-pointing red arrow). If, however, one of the other players were to deviate from the Nash equilibrium, this would yield an extra benefit to you (right-pointing arrow). 
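The dominance argument above can be checked mechanically. A minimal sketch, where the matrix values are one filling consistent with the description (1 for 'all expensive', 2 for 'all cheap', and a 1-unit bonus for acting selfish whatever the others do):

```python
# Three-person Diner's Dilemma, as described above.
# payoff[my_choice][k] = my net benefit when k of the two OTHERS pick the cheap dish.
payoff = {
    'Selfish': [1, 2, 3],   # order the expensive dish
    'Saint':   [0, 1, 2],   # order the cheap dish
}

# 'Selfish' strictly dominates: it pays 1 unit more whatever the others do...
assert all(payoff['Selfish'][k] > payoff['Saint'][k] for k in range(3))

# ...so 'all Selfish' (benefit 1 each) is the Nash equilibrium, even though
# 'all Saint' would give everyone a benefit of 2.
print(payoff['Selfish'][0], payoff['Saint'][2])
```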

    Now let's consider how this changes when considering a Smart-vs-Stupid game.


    Smart Versus Stupid


    An example of such a game is one that, as a Dutchman, is close to my heart: the 'Dike Maintenance Dilemma', or 'Dike Dilemma' for short. Imagine a community of people living on the lowlands next to the sea. Dikes have been built to keep the water out. However, the dikes run over private properties, and the owners of these properties have to work hard and invest in maintaining their part of the dike system. Fortunately for all involved, the landowners have an incentive to do so: if they invest in their dikes and so do all the others, they enjoy the benefit of protection against the water, a situation to which we assign a net value of '5'. If, despite his efforts to maintain his part of the dike system, a landowner does not obtain protection against the water (because at least one other landowner does not put in the same effort), we assign a zero net value to his situation. In that circumstance the landowner who failed to maintain his part of the dike is better off, as he is in the same situation (no protection against the water) but has avoided any investments. To this situation we assign the value '1'. What would you do if you were a landowner? Would you incur the expense of maintaining your part of the dike? Would it make a difference if you were in this game with hundreds of landowners, or with only a few?



    The pay-off matrix for a three person Dike Dilemma is shown above. The fact that the Dike Dilemma belongs to the Smart-vs-Stupid group of games is made explicit by labeling the choices 'Smart' (maintain dike) and 'Stupid' (neglect dike). 

    The preferred outcome is the situation in which all realize the maximum personal net benefit of '5'. This happens when all act smart (top-left cell in the pay-off matrix). 'All smart' represents a Nash equilibrium, as a single individual deviating from this situation will harm his own benefits (red arrow pointing down). However, in doing so, he will also hurt the others (red arrow pointing to the right). This is the key difference with Selfish-vs-Saint games: in Selfish-vs-Saint games anyone deviating from the expected outcome will hurt himself and help the others, while in Smart-vs-Stupid games anyone deviating from the expected outcome will hurt himself and also hurt the others. Saints help, while Stupids harm.

    In the above, we focused on the effect of one person acting stupid. However, the Dike Dilemma features a second Nash equilibrium in the form of 'all stupid' (lower-right cell).** Although this equilibrium yields a smaller net value and is therefore less preferred than the 'all smart' equilibrium, it is an equilibrium nevertheless, and therefore a feasible outcome of the game. If this low-value equilibrium is the outcome, deviating from the strategic choice 'Stupid' and opting for 'Smart' would cause you to hurt yourself, as your benefit would drop from 1 to 0. When you wake up in the morning and, despite your dike maintenance efforts, find your property flooded, you will regret not having acted stupidly. This is the strength of stupidity, and key to the SotS effect: stupid choices act as an attractor for smart people who have fallen victim to stupidity.
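Both equilibria can be verified in a few lines, using the pay-off values quoted above (5 for protection, 0 for wasted maintenance, 1 for neglect). A sketch, with a function name of my own choosing:

```python
def dike_payoff(i_maintain, all_others_maintain):
    """Net value to one landowner in the Dike Dilemma described above."""
    if not i_maintain:
        return 1  # neglect: flooded, but no maintenance costs incurred
    return 5 if all_others_maintain else 0  # maintained: protected only if ALL maintain

# 'All smart' is a Nash equilibrium: a lone deviator drops from 5 to 1.
assert dike_payoff(True, True) > dike_payoff(False, True)

# 'All stupid' is ALSO a Nash equilibrium: a lone deviator drops from 1 to 0.
assert dike_payoff(False, False) > dike_payoff(True, False)
```

The second assertion is the SotS mechanism in miniature: once 'all stupid' prevails, the smart choice strictly loses.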

    Where are we now in answering the two challenges? 


    If You Can't Beat Them, Join Them!


    We have already seen the answer to the first challenge: the SotS effect being absent in games like chess. This is to be expected, as the game of chess, like any other zero-sum two-player game, is a prime example of a Selfish-vs-Saint game.

    The second challenge asked about the abundance of the SotS effect. We can phrase this question more precisely: in what games can it be advantageous for a rational participant to select a stupid strategy? The cross-over of rational participants from a smart choice to a stupid choice is the hallmark of the SotS effect. So how abundant is such an effect? We have already seen a clear hint of an answer: SotS effects occur in virtually every cooperation game. In fact, any cooperation game that features more than one Nash equilibrium is affected by SotS effects, and so are many cooperation games characterized by a single Nash equilibrium. This means that SotS effects can be excluded in no more than a few trivial cooperation games. 

    Not only the diner's dilemma, but also other well-known multiplayer cooperation games, such as the volunteer's dilemma, are affected by SotS effects.

    Let's go back to the 'Even/Odd(s)' game I confronted you with at the start of this post. Did you select 'even'? Good, you understood that with many people involved, a winning outcome (an even number of evens and an odd number of odds) is basically a 50:50 proposition. So it is best to select 'even', as this guarantees you the better payoff in case of a winning outcome. 

    Right. 

    You have just firmly placed yourself in the group labeled 'stupid'. You screwed all of your smart co-readers who didn't fail to select 'odd', knowing that if all do so, everyone is a guaranteed winner. Unfortunately, it takes only one stupid person like yourself to render the whole group of smart selectors a bunch of losers. 

    The winning answer is therefore indeed to make the stupid choice. Park your smartness for a while and select 'even', knowing that you can't trust the hundred fellow participants in the game, who each belong to the species Homo Stupidus. They won't all act smart. And if you can't beat them, join them! 

    One last point: 

    If you feel uncomfortable about some (or most) of the above, you should check whether this discomfort was induced by the terminology rather than by the factual content. To check whether the former is the case, simply swap the labels of the S4 matrix everywhere in the above. For instance: replace all occurrences of 'Stupid' with 'Streetwise', all occurrences of 'Selfish' with 'Smart', all 'Saints' with 'Stupids', and all 'Smarts' with 'Saints'. Now read the whole text again. Does this make you feel better? 

     

    Notes

    * In using the terms 'smart' vs 'stupid', and 'saint' vs 'selfish', I purposely deviate from the terminology selected by Cipolla.
    ** Where the Diner's Dilemma can be seen as a multiplayer extension to the Prisoner's Dilemma, the Dike Dilemma represents a multiplayer extension to the game Stag Hunt. Just like Stag Hunt, the Dike Dilemma features two equilibria: a high yield (preferred) equilibrium, and a lower yield low-risk equilibrium.

    Comments

    vongehr
    A choice is stupid when it causes losses to others, while deriving no gain and even possibly incurring losses to the person making the choice. ...
    The winning answer is therefore indeed to make the stupid choice.
    You assume a gamer base assuring close to 50/50 outcome (Science2.0 audience perhaps), thus it is understood that the choice "even" does not cause losses to others and does instead derive gain, therefore it is in fact not stupid according to your own definition.
    Your 'smart' choice is pushing the inevitable (there being some dissent) closer to
    [(0*4+101*3)+(1*0+100*0)]/2 = 303/2
    Tossing a coin, this expectation is raised to
    [(50*4+51*3)+(0)]/2 = 353/2
    Smart saints use a biased coin that favors "even" but not too much (degree of bias depends on knowledge about the 100 others), thus approaching
    [(100*4+1*3)+(0)]/2 = 403/2.
    Johannes Koelman
    Wow... Where to start? It seems you have completely misunderstood the game. Please check for yourself: everyone making the smart choice ('odd') leads to a $3 gain for all. As soon as players start to deviate from this strategy (as soon as they start selecting 'even'), there is no longer any control over whether there will be a payout or not. If 50% of the time there is a payout, the expected payout is 0.5*$4 = $2 for those who selected 'even', and 0.5*$3 = $1.50 for those who selected 'odd'.

    The choice 'odd' is the smart choice, but it is vulnerable to any stupidity and misinterpretations such as the ones highlighted in your comment. It takes only one stupid person to spoil it for all...

    vongehr
    "The choice 'odd' is the smart choice, but it is vulnerable"
    Therefore, since you explicitly assume a certain gambler basis where the vulnerability is the main problem, it is not the "smart choice". The third one I gave maximizes the expectation value.
    303/1 > 403/2, stupid.

    vongehr
    It is 303/2 smartipants, either that or the assumptions have been stated incorrectly.
    Not exactly. If one assumes a single outcome rather than random chance, there's only the 303 payout, with no 50% chance. Of course, I see no reason to assume certainty in the specific case where everyone chooses odd and use probabilities in all other cases. You can't

    The argument that could be made for using probabilities is that any no-communication strategy that can be shared by all, and that aims at getting paid more than $3 each, needs each participant to have a random (roughly 1/101, this might not be exact) chance of choosing odd. Then one needs to calculate the average payout given the odds of 0, 1, 2, etc. participants choosing odd.

    I would postulate that the latter strategy is more robust when you don't know exactly what odds the others have of choosing odd vs even. Strategies, shared by all, with very high chances of choosing odd only become more profitable when the odds of even a single participant choosing even are fairly low.

    Correct. The proposal to work with dice must be based on a misunderstanding or misinterpretation of the game. Sascha is proposing to aim for getting 100 participants to select 'even' and 1 participant to select 'odd'. The problem is: this cannot be realized, as communication between the participants is not possible. Using dice to realize this 100:1 split gets you in deep trouble. In order to beat the $303 payout for 'all odd', you have to hit exactly 100 'even' + 1 'odd' some 75% of the time. That is not possible.

    So the smart choice is 'odd'. And indeed, if all participants are smart (and realize all others are smart) this will be the outcome. Unfortunately, there will always be people amongst the 100 others who don't understand the game. We only have to look at the reactions to this thread to become aware of this sobering fact.

    vongehr
    Unfortunately, there will always be people amongst the 100 others who don't understand the game.
    Unfortunately, gravity is always there and pulls me down. Or could it be that those who know about gravity are not stupid and actually "understand the game" (especially given that gravity was explicitly assumed)?!?
    I am not proposing 'to get 100 even and one odd', which would be only marginally less ridiculous than claiming 101 odd. (100*4+1*3) is one of two terms in an expression of a limit.
    Generally: One needs to distinguish the way a game is actually played (no dice at all perhaps) from the way expectation is calculated (assuming dice or a many world interpretation). There is always the possibility to do a numerical simulation (Mathematica or Excel perhaps). You will get a feeling for what happens that way. Humans are not evolved for statistics, so do not be surprised if expectation works out unexpectedly.

    You can do a simulation plotting the EV for each value of "chance of choosing odd" under the assumption that every player has the same chance. To make it more generic, give every player the same initial chance but with a standard deviation of x, and make a 3-d plot. As in, one point in the plot would be that players have a 0.5 ± 0.2 chance of choosing odd.

    Thanks. I made the same error as Sascha, but this clears things up. I did the simulation you suggested. Starting from a zero EV associated with a zero chance of selecting odd (Podd = 0), the EV increases and goes through a local maximum when increasing Podd. However, this maximum never reaches any value above $2, let alone the $3 obtained at Podd = 1.
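For the record, a sketch of that calculation (exact rather than Monte Carlo; the rule and payoffs are those of the game above, the function name is mine). With 101 players the winning condition is equivalent to the number of 'odds' being odd:

```python
from math import comb

def expected_payout(p_odd, n=101):
    """Expected payout to one player when each of n players independently
    selects 'odd' with probability p_odd (even/odd(s) rules as above)."""
    ev = 0.0
    for k in range(n + 1):      # k = total number of players selecting 'odd'
        if k % 2 == 1:          # n is odd, so k odd <=> winning outcome
            p_k = comb(n, k) * p_odd**k * (1 - p_odd)**(n - k)
            # By symmetry, a given player is among the k 'odds' with probability k/n.
            ev += p_k * (k / n * 3 + (n - k) / n * 4)
    return ev

print(round(expected_payout(1.0), 2))   # all 'odd': $3.00 guaranteed
print(expected_payout(0.5) < 2.0)       # coin tossing stays below $2
```

Scanning p_odd over a grid reproduces the behaviour described: in this sketch the interior local maximum stays below $2, and the $3 value is reached only at p_odd = 1.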

    Gerhard Adam
    The winning answer is therefore indeed to make the stupid choice.
    I'm sorry, but your definitions aren't even consistent within the same article.
    A choice is stupid when it causes losses to others, while deriving no gain and even possibly incurring losses to the person making the choice.
    Your terminology isn't a matter of feeling better or worse, as much as it strikes me as being almost completely irrelevant.  You're using provocative phrases like "survival of the stupidest" where you clearly mean "stupid" in the normal sense, and then use all manner of examples that illustrate that the decisions aren't stupid according to your own definition.

    Neither in this post nor in the other have you actually provided an example of someone engaging in a self-destructive decision (i.e. incurring a loss) simply to deprive someone else of a benefit. 

    More to the point, you are also aware that most of these games produce strikingly different results when they are iterative (as are actual human relationships), and that any conclusions one draws from such singular games are invariably counter-intuitive, because they don't reflect reality.
    Mundus vult decipi
    Johannes Koelman
    There is no contradiction. I didn't want to spell things out in the article, but maybe I should have.

    Do we agree that (according to the definitions used) selecting 'odd' is the smart choice? Anyone who deviates from this smart choice (by selecting 'even') causes a loss to himself (an expected loss of $1.00, see response to Sascha) and an even bigger loss to the ones who selected 'odd' (an expected loss of $1.50).

    Facing this reality, and knowing (or expecting) that not all 100 other players will make the smart choice, to limit your losses you have to play stupid and select 'even'. That's what I mean by 'winning move'. Insisting on playing it smart will make you a loser.

    Note that there is an interesting meta level to this: for a smart outcome to emerge, not only do you need all 101 players to be capable of understanding the game and its strategic implications, but you also need all 101 players to be confident that all 100 other players have that capability. A stupid outcome results even from a single person suspecting others to suspect others to ... be incapable of fully understanding the game.





    Gerhard Adam
    Sorry, but it seems to me that you're using unusually loaded terms to describe what has traditionally been called cooperator or defector behavior.  The net result is that you're equating "smart" with cooperative and "stupid" with selfish. 

    In fact, this is precisely the point raised in the Prisoner's Dilemma unless it becomes iterative.  At that point, cooperative behavior has a better chance of emerging and suddenly, voila!, your choices become "smart" (according to your definition).

    You can dress up your definitions however you like, but in the end, everyone reading this post is going to impose the colloquial definitions on "smart" and "stupid" and so your point becomes emotional and value-laden.  Instead, the lesson is that singular games tend to result in self-interested behavior without repercussions, while iterative games have to consider future encounters and will tend to cause cooperative behavior to arise (even potentially altruistic). 

    So, to follow that train of thought, according to your definitions, those that are the most cooperative are the "smartest" while those that are "self-interested" are "stupid".  This also clearly overlooks other demonstrated aspects of game theory where others may elect to punish someone that isn't being fair, so is that behavior also to be considered "stupid"?
    Mundus vult decipi
    Johannes Koelman
    "you're equating "smart" with cooperative and "stupid" with selfish."

    Where did you get that from? I have spent at least half of the article explaining the difference between stupid and selfish, I have elaborated on the precise mathematical definition in terms of differences observed in deviations from Nash equilibria, and now I get this remark? Please read and digest the article before you comment.

    "In fact, this is precisely the point raised in the Prisoner's Dilemma unless it become iterative.  At this point, cooperative behavior has a better chance of emerging and suddenly; voila!; your choices become "smart" (according to your definition)."

    You are over-rating PD. This game gets a lot of attention in popular texts on game theory, but is seen by very few game theorists as relevant in describing competition in social interaction. Stag hunt is widely recognized as the prototypical game that captures cooperative behavior.

    "in the end, everyone reading this post is going to impose the colloquial definitions on "smart" and "stupid" and so your point becomes emotional and value-laden."

    Everyone? I think you underestimate the capabilities of my audience. And frankly: I can't be bothered if it happens.

    "the lesson is that singular games tend to result in self-interested behavior without repercussions, while iterative games have to consider future encounters and will tend to cause cooperative behavior to arise (even potentially altruistic)."

    No. You will not read any such remark in game theoretical literature. You don't need game iteration to render cooperative outcomes to a PD game. See for instance my post 'You Retaliator!'.

    "according to your definitions, those that are the most cooperative are the "smartest" while those that are "self-interested" are "stupid".

    Absolutely not. Please read the article.

    "others may elect to punish someone that isn't being fair, so is that behavior also to be considered "stupid"?"

    What kind of simplifying question is this? That of course depends entirely on the game being considered. Write down the payoff matrix of your retaliative game and you can answer it yourself. That is: once you've read the above article. Good luck.





















    Interesting article. However, it seems many readers misinterpret the term 'stupid'. Single-handedly deciding to deviate from the Nash equilibrium = stupid, if it causes the others a reduced gain. It is stupid according to the definition, because this deviation also doesn't give the deviator a gain (by definition, a single-handed deviation from a Nash equilibrium will yield no gain). However, this all assumes everyone else stays with the Nash equilibrium strategy (a key assumption in game theory). As soon as multiple persons start acting stupidly (deviating from the Nash equilibrium), it is very well possible that stupid becomes the winning strategy. There is no inconsistency in the definitions, but one has to keep in mind that the choice of terminology is based on the effects resulting from single persons deviating from the rational (Nash) strategy.

    "A choice is stupid when it causes losses to others, while deriving no gain and even possibly incurring losses to the person making the choice."

    "The winning answer is therefore indeed to make the stupid choice."

    I'm sorry, but your definitions aren't even consistent within the same article.

    Stupidity = you incur losses and so does everyone else

    Winning by Stupidity = you incur losses, but everyone else loses more

    So far the reactions here certainly support the case for a SotS effect. :)

    I can not help but imagine multiple groups of 101 Vulcans playing the even/odd game. They all go home $3 richer, asking themselves "what was the point?". Then they witness in utter confusion groups of 101 individuals belonging to the species Homo Stupidus attempting to succeed at the same game, but going home empty-handed.

    Einstein was right when he made the remark "Two things are infinite: the universe and human stupidity, but I am less certain about the first".

    Heh - 101 odds and 0 evens would produce an odd number of odds and an even number of evens. The Vulcans go home with 7 bucks!

    You are adding up the gain of a person who selected 'odd' and that of a person who selected 'even'. But each person gets only the amount ($4 or $3) corresponding to his own selection. So there is no basis for an addition. And besides: in the smart outcome you are referring to, there is no single player who selected 'even'. So the amount $4 doesn't even enter the picture.

    Wouldn't a better strategy be this:
    1: play odd
    2: if loss, goto 3, else goto 1
    3: play even
    4: goto 3
    ?

    Over a million games, the net loss compared to always playing 'even' would be negligible. But if a Nash equilibrium is reached in step 1, the net gain is possibly substantial.

    ..... if you could play the lottery a million times for free, I'm quite sure the net gain would also be substantial. The problem is of course that you cannot play it for free, and in this game you (the group) can only play it once.

    I love reading all the butt-hurt by the people who chose "even" at the beginning.

    You're idiots. Face it.

    Of course, the optimal solution is to kill every other player and then choose 'odd', thus guaranteeing success.

    Gerhard Adam
    ... and you're precisely one of the reasons why I objected to the characterization of this "game" as being "smart" or "stupid".  Of course, if you were actually smart, then you'd understand how incredibly stupid your statement is.

    Despite claims to the contrary, the failure to understand this game lies in characterizing the pay-out as being exclusively "smart" while risk is being arbitrarily characterized as "stupid".  However, if two individuals were to choose even while 99 others chose odd, then they would achieve a higher pay-out and everyone else would be stuck with the "sucker's payout".  The only difference is that the two players were willing to take a risk. 

    Similarly, while everyone can pat themselves on the back for being the geniuses that played it safe to get their guaranteed $3, if they were honest, they would recognize that they can't truly guarantee that this would occur.  So while it may be gratifying to consider all those that don't play it safe as being "stupid", it is also unrealistic and naive. 

    So, yeah ... I get it.  The $3 payout is a "sure thing", assuming that it is of the same utility value and no one is willing to risk anything.  However, the conclusion that it is the only "smart" thing to do .... sorry ... that's precisely the same reason why people always have a problem understanding why the ultimatum game doesn't work the way they think it should.  The "smart" answer there would simply ensure that you will be perpetually abused in future deals. 
    Mundus vult decipi
    You got this backwards. 'Odd' is the risky choice. The $3 payout is not a "sure thing". It is put at risk by anyone who selects 'even'. Those who select 'even' go for the zero-risk option: it is a guarantee that no one will beat them.

    Gerhard Adam
    'Odd' is the risky choice.
    If that's true, then it would be foolish to not opt for "even".  The ONLY value that going "odd" has is that if everyone does it, then it is the only method by which a guaranteed payout can occur.  However, that's only if everyone does it.

    All other possibilities create a risk of a zero payout.  Since one can't be guaranteed that everyone will say odd, then even that solution carries the risk of a zero payout, so it makes little sense to opt for a lower value payout for the same inherent risk.
    Those who select 'even' go for the zero risk option: it is a guarantee that no one will beat them.
    I think you're misunderstanding the game.  Since the game begins with an odd number of players, then the only "guarantee" is if everyone selects "odd" since that is the default position.  If that possibility doesn't occur, then all other combinations are subject to the same risk and therefore there is no benefit in going "odd" over "even" (especially since even offers a higher payout for the same risk).


    Mundus vult decipi
    Look at it this way: communication is not possible, so there is no way you can individually control whether there will be a payout or not. The relevant risk is not whether there is a payout; the risk is rather that you allow others to go home richer than you. That is a risk you can control. Select 'even' and no one will be better off than you.
    Selecting 'odd' is the high risk, high reward choice. It is the smartest thing to do, but only if you know everyone else is smart as well. In practice, selecting 'odd' is probably not a very clever thing to do. I think that is the whole point of this post: if some others are probably going to make a stupid choice, it is better to join them. This will not give you the highest expected reward, but that high reward is not feasible in the presence of stupid choices.

    Gerhard Adam
    It is the smartest thing to do, but only if you know everyone else is smart as well.
    That's where I have a problem with this use of "smart" versus "stupid".  By placing a condition of ("only if you know everyone else is smart") you've negated the entire premise, because, as you said, you cannot make such a guarantee.

    Why not place the condition that "if I could ensure that one other person would pick 'even'", or any other combination?  Each condition is equally valid in terms of its likely occurrence, so it is only reasonable to try and maximize my payout, knowing the attendant risk.  It is completely misleading to argue that one choice is "smarter" than another.  The real comparison is whether someone wants to opt for a safer likelihood of a return versus a higher risk.  However, since the safety of the decision can't be guaranteed, then the "smart" decision is to take your risk and maximize your return if you succeed.  After all, you can't do any worse than zero.  So, which is "smarter"?  To risk getting $4 or a zero payout, or to risk getting $3 or a zero payout? 

    As I've said before, the point of indicating a "smart" decision is based solely on the notion that if everyone sits around and thinks solely in terms of guaranteeing a payout, then obviously everyone needs to say "odd" and be content with their $3 return. 

    Another mistake that often occurs in these scenarios is in assuming that zero represents a loss.  I start with zero, so there's absolutely no reason for me to assume that I must leave the game with more than that.  This is why people get hung up on the ultimatum game, because they fail to appreciate that I'm not investing anything to lose in playing.  Therefore, the only objective in this game is to risk a gain, and consequently it makes sense to take the risk for the largest gain possible. 

    One can argue about "smart" or "stupid" strategies all day long, but only one thing is unequivocally true for the individual that selects even versus the one that selects odd: if you got $3 paid out, then they got $4.  So who was the "stupid" one?
    Mundus vult decipi
    "So who was the "stupid" one."
    That is of course (according to Cipolla) all those who selected 'even'. They limited everyone's expected earnings to $2 when $3 was up for grabs. (This time all participants got lucky, but that is in itself an irrelevant statistical event, the outcome could equally well have been $0 for all.)

    Gerhard Adam
    They limited everyone's expected earnings to $2 when $3 was up for grabs.
    That's not true.  It's a one time game, and there is no $2 return.  If you're going to do it that way, then the choice (assuming 50/50) is a $2.00 or $1.50 expected return. 

    However, the reality is that it will either be $4.00 or $0.00, or $3.00 or $0.00.  The point remains, that since there is no guarantee that everyone will say odd, then the odds are no worse in opting for the higher return.

    Yes, I know all about the ... "if everyone says odd, then it's guaranteed", but that's only IF everyone goes along.  Since you have no assurance that this will happen, then you're either going to end up opting for a lower pay-out, or whining about how stupid people are (while you still risk ending up with zero). 

    Yes, I also know about all the people that think that getting $3.00 is worth it if everyone agrees, but that's a question of utility and not the game strategy.  If the $3.00 isn't enough utility, then the risk is warranted, and calling it stupid doesn't change the legitimacy of the strategy.  Now, if the amount were $3 million vs. $4 million, then I suspect you'd have a different set of strategies that focus more on the "safe" payout.
    Mundus vult decipi
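    [Editor's note: the $2.00 and $1.50 figures quoted in this exchange can be checked with a quick Monte Carlo sketch. This is a hypothetical Python snippet, not part of the original post, and it assumes each of the 101 players flips a fair coin between 'even' and 'odd'.]

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def simulate(n_players=101, p_even=0.5, trials=20_000):
    """Estimate the expected payout per 'even' and per 'odd' pick.

    Rules from the post: with 101 players, if the number of 'even'
    picks is even (and hence the number of 'odd' picks is odd),
    evens get $4 and odds get $3; otherwise everyone gets $0.
    """
    even_paid = odd_paid = 0.0
    even_picks = odd_picks = 0
    for _ in range(trials):
        n_even = sum(random.random() < p_even for _ in range(n_players))
        n_odd = n_players - n_even
        even_picks += n_even
        odd_picks += n_odd
        if n_even % 2 == 0:  # even count of 'evens': payout round
            even_paid += 4 * n_even
            odd_paid += 3 * n_odd
    return even_paid / even_picks, odd_paid / odd_picks

ev_even, ev_odd = simulate()
print(f"per 'even' pick: ${ev_even:.2f}")  # close to $2.00
print(f"per 'odd'  pick: ${ev_odd:.2f}")   # close to $1.50
```

    Under random 50/50 play an 'even' pick is worth about $2.00 and an 'odd' pick about $1.50, while unanimous 'odd' play pays a guaranteed $3.00, which is the comparison both commenters are arguing over.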
    "Yes, I know all about the ... 'if everyone says odd, then it's guaranteed', but that's only IF everyone goes along."

    In my interpretation of this, that is precisely the point that Johannes (I would like to give proper prefix to him but I do not know what it is and I am not going to take the additional time to look it up because I am a dumb college student and should be working on a regenerative biology paper, and yes, this is a long parenthesis) is trying to make. The only guarantee to walk away with the winnings (this is to highlight that risk SHOULD NOT apply) is to pick odd, being that both you and the others will all benefit if they all pick odd. The stupid choice of even (for clarification I am sticking with the definitions outlined in the post) results in not knowing whether or not you will get the winnings.

    For those who are smart, there is no application of the IF situation. If there are 101 intelligent, non-entirely self-serving individuals, the obvious choice is to pick odd so that everyone benefits. For emphasis, the point is that for intelligent people, the smart choice is obvious. For those who are not as intelligent, they introduce the IF situation and screw the whole game up.

    Please elaborate more clearly if you disagree.

    "If there are 101 intelligent, non-entirely self-serving individuals, the obvious choice is to pick odd so that everyone benefits."
    Right. But even if they are entirely self-serving individuals, as long as they are intelligent and have reasons to believe all others to be equally intelligent, and so on... the obvious choice is to pick odd.
    People who are not used to (or don't want to) think in terms of expectation values, will have difficulty with the even/odd game. For those it is much better to consider the dike dilemma: You are in a game with 100 others. If you select 'A' you get a payout of $1. If you select 'B' you get a payout of $5 provided all others also select 'B'. If not, you get nothing. What do you do?
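    [Editor's note: a small hypothetical Python sketch, not part of the original comment, showing how quickly the value of 'B' in the dike dilemma collapses when the other 100 players are merely *almost* reliable. The reliability probability p is an assumption introduced for illustration.]

```python
# Dike dilemma as stated above: 'A' pays a guaranteed $1; 'B' pays $5,
# but only if all 100 other players also pick 'B'. If each other player
# independently picks 'B' with probability p, the expected value of
# picking 'B' is 5 * p**100.

def expected_b(p: float, others: int = 100) -> float:
    """Expected payoff of choosing 'B' given per-player reliability p."""
    return 5 * p ** others

for p in (1.00, 0.99, 0.95):
    print(f"p = {p:.2f}  ->  E[B] = ${expected_b(p):.2f}")
```

    Even with 99% reliable neighbours, 'B' is worth only about $1.83 in expectation; at 95% it drops to a few cents, well below the safe $1 of 'A'. A handful of dike-neglecters is enough to make the cooperative choice a losing bet.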

    Thank you for pointing out my mistake.

    And as a response to the question you posed, that is exactly the point!!!

    I guess I'm worse than stupid; I'm dumb. I did assume the 50:50 nature of the game, and figured that if I were the 101st person, I should pick 'odd', so that there'd be a good chance of 50 evens and 51 odds. (I never saw this as a 'try for $4' game - I saw it as a 'try for > $0' game.)

    Then I realized that if the game were described to everyone the same way, they might all think of themselves as the 101st person, and choose 'odd'.

    What keeps me dumb is that I find mathematicians who call 'zero' an even number to be a bit pedantic. I know - I'm inventing my own rules - I already said I was dumb.

    I think the game 'Dike Dilemma' is the real interesting game being presented here. This game is being played in a big way right now. Just look at the European Union. One country has completely neglected their dikes (Greece) and some other countries have also been sloppy. Will the Greek dikes soon fail and let the water in? Or will the countries who did maintain their dikes help rebuild the Greek dikes? Who is stupid? Greece? Or Germany? These are fascinating times.

    Gerhard Adam
    It's also a variation on the "tragedy of the commons" scenarios.  Unfortunately there's too little information to make much of an assessment.  For example, how much damage (cost of recovery) is involved if flooding occurs?  Is there a recovery?  While it is easy to make it appear that the dikes must obviously be maintained, this would only be true if the cost of maintenance is ultimately less than the cost of damage and recovery.  In other words, if it costs $5 to maintain the dike, but only $2 to recover from a flood, then it would be foolish to invest in the maintenance.

    This is the same problem in the "tragedy of the commons" where exploitation and ultimately ruin can occur in a commonly shared resource, and the "free-rider" problems.
    Who is stupid?
    That's why such comments aren't particularly helpful.  While it may make you feel better if you're up to your ass in a flooded field because your neighbor didn't maintain his dikes, it doesn't actually solve much of a problem.  In fact, it will simply be more infuriating if it costs you both the same to recover and it turns out your neighbor comes out ahead because he never invested in the maintenance to begin with.  You can rant all you like, but financially ... who's the stupid one?
    Mundus vult decipi
    Just imagine we witness a group of 101 aliens playing the even/odd game. Suppose they all come out of this game $3 richer. Each of them. Unbelievable, huh? Would we not all agree these aliens are, well... somewhat smarter than us?