Last week I posed the war of attrition game, and earlier this week I analyzed it. Building on that analysis, in this post I provide some interpretations and applications for the mixed strategy Nash equilibrium solution we found. As a reminder, here’s a short summary of the game in more general notation than originally posed:

You and a competitor will battle in rounds for a prize worth *V* dollars. In each round each of you may choose to either fight or fold. The first one to fold wins $0. If the other player doesn’t also fold, he wins the *V*-dollar prize. In the case you both fold in the same round, you each win $0. If you both choose to fight, you both go on to the next round to face a fight-or-fold choice again. Moving on to each round after round 1 costs *C* dollars per round per player. Assume *V* > *C*.

Recall that what we found in the analysis was that there was a mixed strategy Nash equilibrium to fight with probability *p* = *V*/(*V* + *C*). In the case *V* = $5 and *C* = $0.75, *p* = 0.87. What does this mean?
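The equilibrium value is trivial to compute; here is a minimal Python sketch (the function name is my own, not from the original analysis):

```python
def equilibrium_fight_probability(V, C):
    """Mixed-strategy Nash equilibrium probability of fighting
    in the war of attrition: p = V / (V + C)."""
    return V / (V + C)

# The post's example: a $5 prize and a $0.75 per-round fee.
p = equilibrium_fight_probability(5.0, 0.75)
print(round(p, 2))  # 0.87
```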

There are multiple ways to interpret mixed strategy Nash equilibria. One way is to interpret the probability as a statement about a population. Applied to the war of attrition, this interpretation says that proportion *p* of the population are fighters and the rest are folders. That’s certainly plausible. I bet that upon reading the statement of this problem last week some folks immediately thought, “I will not fight even one round,” while other folks immediately thought, “I would fight forever.” Even if nobody actually thought the latter, experiments show that people really will fight a very long time, even to the point that the cumulative fight fees exceed the prize. There really are “fighter” and “folder” personality types in the population.

A second interpretation is that each individual will play a mixed strategy. That is, you yourself will “roll the dice” in your head and fight with probability *p* and fold otherwise. Notice that each round is an independent “roll of the dice.” Past fight fees have no bearing on your probability of fighting in the current round. They are sunk costs. With probability *p* you will fight on, and on, and on…

What is the probability that this fight will go to round 2? It is the probability that both you and your opponent fight in round 1, or *p*^2. What is the probability the fight will enter round 3? It is the probability that you and your opponent both fight in round 1 and both fight in round 2. Those decisions are independent, so the probability of entering round 3 is *p*^4. In general, the probability of fighting to round *n*+1 is *p*^(2*n*). When *p* is large (i.e., *V* is large relative to *C*) some very long fights can occur. With each round there is hope of earning some money (if you win), so it is rational for you to continue precisely when your expected winnings from doing so are equivalent to those from stopping. That’s exactly what *p* promises.
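To make the round-survival probabilities concrete, this short sketch (function name mine) tabulates *p*^(2(*n*−1)), the chance the fight reaches round *n*, for the post's example values:

```python
def prob_reach_round(n, p):
    """Probability the fight reaches round n: both players must
    independently choose 'fight' in each of the first n-1 rounds,
    so the probability is p**(2*(n-1))."""
    return p ** (2 * (n - 1))

p = 5.0 / (5.0 + 0.75)  # p = 0.87 for V = $5, C = $0.75
for n in (2, 3, 5, 10):
    print(n, round(prob_reach_round(n, p), 3))
```

Even round 10 is reached with probability around 8%, and by then each player has paid $6.75 in fees, more than the $5 prize itself.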

In fact, long wars of attrition have occurred in history: in warfare, in competition between firms, and in politics. Wars of attrition also occur in auctions. Each side is rational to continue the war, but not because they wish to recoup past fight fees. Those are sunk costs; they cannot be recovered, and therefore they are irrelevant to current play. Each stage is independent of the next, so players fight on because the expected benefit is equivalent to not fighting (in mixed strategy Nash equilibrium play). Eventually one side’s resources are exhausted and the war of attrition comes to an end.

My set of things to say about this game has also been exhausted so this series on the war of attrition game also ends here, at least for now.

by Ian Crosby on September 16th, 2009 at 16:57

Thanks so much for the clear exposition of the mathematics as well as the illuminating discussion of its implications. Yet even knowing quantitatively that a pure fold strategy fails to maximize my average expectation over an appropriately calibrated mixed strategy, I remain inclined not to play. And I don’t think this is a psychologistic reaction. The reason is that zero is qualitatively different than a lot. If I have X, the pain I experience from having zero (starving in the street) is far more to be avoided than the pleasure of having 2X (an extra house, perhaps) is to be sought. So I will avoid a game that has a significant nonzero possibility of consuming all my resources even if I play optimally, even though my average expectation from optimal play exceeds my certain expectation from not playing at all.

by Austin Frakt on September 16th, 2009 at 17:28

@Ian Crosby – I understand your inclination not to play. I wouldn’t either. But I think it is psychological. I’m not sure what else it can be other than psychological that we prefer X to zero more than we prefer 2X to X.

While neither you nor I would fight in this game, isn’t it an abstraction of many other situations in which we would? It is in the trivial sense that almost any model is a (poor) abstraction of almost anything. But I’ll bet many of us really do fight in repeated-game-like circumstances where there is a loss if our partner fights and a gain if she does not. If not at home, perhaps we do so at work, on the highway, etc. They may not be proper games of attrition in that resources may be practically infinite. But there are similarities to more realistic settings in which we may behave differently.

by Ian Crosby on September 16th, 2009 at 20:03

If one accepts that the marginal utility of money has an inverse relationship to the amount of money one has, then would it not be economically rational to prefer X to zero more than one prefers 2X to X? And since the marginal utility of money under this theory approaches infinity as the amount of money one has approaches zero, it seems intuitive to me that p for the mixed strategy Nash equilibrium for this problem would also approach zero if you normalized the equations for the marginal utility of money. Assuming my total utility after winning V starting with X is V=1/(X+V) and my total utility after losing C starting with X is C=1/(X-C), then it seems the normalized mixed-strategy Nash equilibrium probability of fighting for two players having the same amount X would be p = (1/(X+V))/((1/(X+V))+(1/(X-C))). Assuming both players have ten dollars, for V=$5 and C=$0.75, normalized p=0.38 instead of 0.87, and approaches zero as C approaches X, consistent with my hypothesis. Does this make sense?

by Austin Frakt on September 16th, 2009 at 20:34

@Ian Crosby – Oh dear. A few things are confusing me in what you wrote. Either some words and notation are mixed up or I’m just not getting it. Can we take it to a more intuitive level? I’m pretty sure I agree with you intuitively, and hence probably so formally.

About rationality: I unfairly edited my own comment just after I wrote it to excise my claim about rationality. I realized it was ambiguous as I wasn’t specific about the utility function. But, yes, 2X > X > 0 (for X > 0) and is therefore to be preferred holding all else constant.

I think you’re saying marginal utility of money is decreasing with amount of money (inverse correlation, but not inverse as in reciprocal). This is standard so OK.

I think your expressions V = 1/(X+V) and C = 1/(X-C) [that’s what you meant, right?] are not meant as equations but as redefinitions of V and C. So let’s call the normalized ones V’ and C’ as in V’ = 1/(X+V). So, sure, plugging those into the expression for p you get 0 as C -> X.

Is this the right way to normalize V and C? As V goes up the normalized version goes down (?).

I’m just not sure what we’re trying to do here with the mathematics. Hence my suggestion to keep it intuitive, where I think we’re in agreement anyway. So I’m a bit lost in the weeds here. Sorry if I’m missing something.

by Ian Crosby on September 16th, 2009 at 21:27

Sorry, my notation is confusing. I should have e-mailed you before posting because I am out of my depth here. My expressions were meant to be redefinitions of V and C to account for the marginal utility of money, which decreases the more of it you have. The equations I am looking for have V’ getting larger but at a diminishing rate as V increases, and C’ getting larger at an accelerating rate as C gets larger. Perhaps V’ = V/(X+V) and C’ = C/(X-C), where V’ and C’ are the marginal utility for V and C, not my total utility after the move? The equilibrium would be p = (V/(X+V))/((V/(X+V))+(C/(X-C))), and yields 0.80 instead of 0.87 on your inputs, and approaches zero as C -> X, which is not true of the non-normalized formula.
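[The commenter's proposed normalization is easy to check numerically. The sketch below implements his formula as stated — it is his ad hoc construction, not a standard normalization, and the function name is my own:]

```python
def normalized_p(V, C, X):
    """Fight probability with V and C reweighted by the commenter's
    rough marginal-utility factors: V' = V/(X+V), C' = C/(X-C),
    then p = V' / (V' + C'). Requires X > C."""
    v_prime = V / (X + V)
    c_prime = C / (X - C)
    return v_prime / (v_prime + c_prime)

# Both players hold $10; V = $5, C = $0.75.
print(round(normalized_p(5.0, 0.75, 10.0), 2))  # 0.8, vs. 0.87 unnormalized
```

As C approaches X, the C’ term blows up and p falls toward zero, matching the behavior claimed above.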

by Austin Frakt on September 17th, 2009 at 06:47

@Ian Crosby – Yep, makes sense what you’re going for. I don’t know what, if any, would be the “standard” way to normalize. What is slightly interesting is that the running sum of C approaches X anyway yet p stays constant. That’s an artifact of the set up. In “real life” when the sum of C reaches X you’re out of the game.

by Ian Crosby on September 17th, 2009 at 10:23

The definitions of V’ and C’ are hacks to demonstrate the concept. I haven’t found a drop-in equation for calculating marginal utility, but I have seen some abstracts in databases I don’t subscribe to suggesting there is debate about the constants that should be incorporated into such equations, and therefore I would assume disagreement about how steeply marginal utility diminishes, which would have implications for p in any normalized equilibrium. I’m not sure I understand your comment about p staying constant as the sum of C approaches X. I’ve defined the marginal utility of a win or a loss relative to how much you have, X, in each round, and X will change from round to round as you win or lose. If you lose, then X is lower, and p would be lower too for any given C, and vice versa. Of course, your opponent’s endowment changes as well, and this simplified equation is clearly limited (to the extent it has any validity at all) to the special case where the players’ endowments are equal. It would be interesting to see an equation that could account for changing relative endowments, or, more simply, that defined marginal utility relative to zero. I’m pretty sure I’m not capable of deriving it.

by Ian Crosby on September 17th, 2009 at 10:34

One further thought – this exercise, so far as it goes, does not confirm that it is economically rational not to play at all when the diminishing marginal utility of money is taken into account. p really only starts its dive to zero when C gets quite large relative to X, and I don’t think that fine-tuning V’ or C’ would really change that. The aversion to playing comes from knowing that in successive rounds there is a nontrivial possibility that you could find your rational move pushing you into that territory. And that aversion may well be psychological.

by Austin Frakt on September 17th, 2009 at 12:10

@Ian Crosby – In my last comment where I talked about the running sum I meant that the running sum of C incurred as both players keep fighting approaches your initial endowment. (I didn’t have it straight what your X was until now and misused it before.)

That’s the same thing as the initial endowment less the running sum of C (your X) approaching C.

We’ve both beaten the horse enough on this and are in full agreement.

by Ian Crosby on September 17th, 2009 at 12:21

On further reflection, I think the math (however imperfect) has been helpful for me to conceptualize why the diminishing marginal utility of money cannot provide an economic explanation for the reluctance to play at all. So long as X is not very close to zero, the difference in utility between gain V’ and loss C’ will never be much greater than the absolute difference between V and C. I certainly know that my own unwillingness to play is not calibrated to my available resources. Plugging in an explicitly psychological constant multiplier for C would probably more accurately describe people’s behavior when X is not close to zero. In “Fooled by Randomness,” which I am reading now, Nassim Taleb cites a finding that the negative psychological impact of a loss is on average 2.5 times greater than the positive impact of an equivalent gain. For V=5 and C=0.75, p=V/(V+2.5C)=0.73 instead of 0.87, perhaps closer to real life. Obviously, individual constants may vary. I expect mine is about 50 (p=0.12) if my bewilderment every time we drive past the vast and completely packed parking lot at the Indian casino on our way out of town on the weekends is any measure.
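[The loss-aversion variant in this comment amounts to a one-line change to the equilibrium formula. A sketch, with the function name mine and the multipliers taken from the comment's own numbers:]

```python
def loss_averse_p(V, C, loss_weight=2.5):
    """Fight probability when a loss is felt loss_weight times as
    strongly as an equivalent gain: p = V / (V + loss_weight * C)."""
    return V / (V + loss_weight * C)

print(round(loss_averse_p(5.0, 0.75), 2))      # 0.73 with the cited 2.5 multiplier
print(round(loss_averse_p(5.0, 0.75, 50), 2))  # 0.12 with the commenter's 50
```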

by Ian Crosby on September 17th, 2009 at 12:26

(Sorry, I beat the horse further before I saw your last post. No more flogging.)

by Austin Frakt on September 17th, 2009 at 20:11

Horse. Dead. Beaten.