Last week I posed the following problem from game theory called “war of attrition.” It is a simple yet famous game that explains some strange real world behavior. More on that later; first the problem and analysis:

You and a competitor will battle in rounds for a prize worth $5. Each round you may choose to either fight or fold. So may your competitor. The first one to fold wins $0. If the other player doesn’t also fold he wins the $5. In the case you both fold in the same round you each win $0. If you both choose to fight you both go on to the next round to face a fight or fold choice again. Moving on to each round after round 1 costs $0.75 per round per player (that is, both players pay $0.75 per round in which they both choose to fight onward). How many rounds of fighting would you be willing to go? How would your answer change with the size of the prize? With the size of the per-round fee?

I will analyze this problem with a mostly intuitive approach, sidestepping some slightly more advanced game theoretic ideas. I claim that if this game is played under conditions of common knowledge of rationality there is a rational strategy to fight in each round with probability 0.87. That’s quite a specific claim. Let’s see if I can back it up.

First, let’s notice a few things. If you know with certainty that your opponent will fold in any of rounds 1–7, then it is rational for you to fight, because you will win $5 and pay less than $5 in fight fees: at $0.75 per round after round 1, a fold in round 7 means at most 6 × $0.75 = $4.50 in fees. However, if your opponent intends to fold in any round then it is only sensible for him to do so in round 1. Why pay fight fees only to fold later? One can make the same argument with the roles of the players reversed.
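As a quick sanity check on that round count, here is a small Python sketch (mine, not from the original post) that tallies the cumulative fees for each round in which the opponent might fold, using the $5 prize and $0.75 fee from the problem:

```python
PRIZE = 5.00
FEE = 0.75  # paid per round after round 1

# If your opponent folds in round r, you have both fought rounds 1..r-1,
# so you have made (r - 1) fee payments, and you collect the prize.
for r in range(1, 9):
    fees = FEE * (r - 1)
    profit = PRIZE - fees
    print(f"opponent folds in round {r}: fees ${fees:.2f}, net ${profit:+.2f}")
```

Running this shows the net gain stays positive through round 7 ($4.50 in fees against the $5 prize) and first turns negative at round 8 ($5.25 in fees), which is why the claim stops at round 7.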

Hence, if either player is not willing to fight forever he should fold in round 1. If the other player knows this to be the case, the other player should fight. It turns out these are the two pure strategy Nash equilibria (game theory jargon) in this game: (1) you fight, your opponent folds in round 1 and (2) you fold, your opponent fights in round 1. (You can think of a Nash equilibrium as a pair of strategies–one for each player–from which neither can profitably deviate. As I argued above, if one is to adopt a “fold” strategy one should do so in round 1. It is clear that the “fighter” cannot gain more by also folding.)

In real life (as well as in the game) neither player knows for certain that the other player is going to fold. So the above analysis isn’t complete. There is another Nash equilibrium, one that mixes the two pure strategies–fight and fold–probabilistically. To find it, we can use the argument I made in The Game (Theory) within the Game: a mixed strategy Nash equilibrium is one in which the pure strategies are mixed in such a way to make the pure strategy payoffs to each player equivalent. Put another way, one mixes precisely in a way to make one’s opponent indifferent to his options.

So, you fight with probability *p* such that your opponent receives the same expected payoff (expected dollar winnings) under each of his options. This is a symmetric game, so your opponent is also going to fight with probability *p*. How do we find the value of *p*? It isn’t so hard, but it is worth doing for a more general case than the one described in the game.

To make things more general, let’s call the prize *V* (=$5 in the game) and the per-round fight fee *C* (=$0.75 in the game). We can find *p* in terms of *V* and *C* by using the property of a mixed strategy Nash equilibrium given above: *p* makes one’s opponent indifferent to his options. So, the solution approach is to figure out your opponent’s payoffs under each option and set them equal to each other.

If your opponent fights in round 1 he expects to earn –*C* with probability *p* if you also fight, and he expects to earn *V* with probability 1-*p* if you do not fight. Therefore, his expected payoff if he fights is –*pC*+(1-*p*)*V*. On the other hand, if your opponent does not fight in round 1 he expects to earn $0 no matter what you do. Setting these two expected payoffs equal gives an equation with one unknown, *p*:

–*pC*+(1-*p*)*V* = 0

The solution is *p*=*V*/(*V*+*C*). Plugging in the values from the game we get *p*=5/(5+0.75)=0.87 (rounded). Notice how *p* changes with *V* and *C*: it increases with increasing *V* and decreases with increasing *C*. This should be consistent with one’s intuition. The greater the prize, the more likely one is to fight for it. The higher the fight fee, the less likely one is to fight. Notice also that there is a chance the fight could go on for a long time, even to the point that the cumulative sum of the fight fees is higher than the prize. In such a fight, players are exhausting resources they can never recoup. In real life the player who runs out of resources (money to pay the fight fee) will have to fold, hence the name “war of attrition.”
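These claims are easy to check numerically. The following Python sketch (my own illustration, not part of the original analysis) computes *p* = *V*/(*V*+*C*), verifies the indifference condition, checks the comparative statics, and simulates many plays of the game with both players mixing at *p* each round, showing that the average payoff sits near zero while a meaningful share of fights costs a player more than the prize:

```python
import random

def mix_prob(V, C):
    """Equilibrium probability of fighting: p = V / (V + C)."""
    return V / (V + C)

V, C = 5.00, 0.75
p = mix_prob(V, C)
print(f"p = {p:.2f}")  # 0.87, as in the post

# Indifference check: fighting and folding give the same expected payoff (0).
fight_payoff = -p * C + (1 - p) * V
assert abs(fight_payoff) < 1e-9

# Comparative statics: p rises with the prize V, falls with the fee C.
assert mix_prob(10.00, C) > p
assert mix_prob(V, 1.50) < p

def play(rng):
    """One play of the game, both players fighting with probability p each round."""
    payoff = [0.0, 0.0]
    while True:
        a, b = rng.random() < p, rng.random() < p
        if a and b:        # both fight: both pay the fee and move on
            payoff[0] -= C
            payoff[1] -= C
        elif a:            # only player 0 fights: he wins the prize
            payoff[0] += V
            return payoff
        elif b:            # only player 1 fights: he wins the prize
            payoff[1] += V
            return payoff
        else:              # both fold: nobody wins
            return payoff

rng = random.Random(0)
results = [play(rng) for _ in range(200_000)]
mean0 = sum(r[0] for r in results) / len(results)
ruinous = sum(1 for r in results if min(r) < -V) / len(results)
print(f"average payoff per game: {mean0:+.3f}")
print(f"share of games costing a player more than the prize: {ruinous:.1%}")
```

The simulated average payoff lands near $0, consistent with the indifference argument, and roughly one game in seven burns more than $5 in fees for the losing player: the attrition the post describes.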

I’ll provide some interpretation of this solution and discuss applications in my next post about the war of attrition game.

*Later: Those really interested in the nitty-gritty mathematical details might want to read the comments to my original post of this problem.*

by SJ on September 14th, 2009 at 15:15

This was really nice. I enjoy game theory, but I’ve yet to try to formally educate myself upon those. Any suggested readings?

Keep these coming =)

by Austin Frakt on September 14th, 2009 at 18:45

@SJ – Please check back next Monday for a review of a very good source on game theory.