This is another game theory problem called “war of attrition”:

You and a competitor will battle in rounds for a prize worth $5. Each round you may choose to either fight or fold, and so may your competitor. The first one to fold wins $0; if the other player doesn’t also fold, he wins the $5. If you both fold in the same round you each win $0. If you both choose to fight you both go on to the next round to face the fight-or-fold choice again. Moving on to each round after round 1 costs $0.75 per round per player (that is, both players pay $0.75 for each round in which they both choose to fight onward).

Example 1: In round 1 you and your competitor both fold. The game is over. Both you and your competitor incur no cost and win no money.

Example 2: In round 1 you fold but your competitor fights. The game is over. You lose nothing and gain nothing. Your competitor incurs no cost and wins the $5.

Example 3: In round 1 both you and your competitor fight. Therefore you move on to round 2 and each pay $0.75. In round 2 you both fight again. Therefore you move on to round 3 and each pay another $0.75. In round 3 you fight and your competitor folds. The game is over. You have spent $1.50 and win $5 for a net gain of $3.50. Your competitor has lost $1.50.
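The payoff arithmetic in these examples can be sketched in a few lines of code (a sketch; the `None`-means-never-fold convention and the function name are my own, for illustration):

```python
def payoffs(my_fold_round, opp_fold_round, prize=5.0, cost=0.75):
    """Net payoffs (mine, opponent's) when each player folds in the
    given round; None means that player never folds."""
    INF = float("inf")
    mine = my_fold_round if my_fold_round is not None else INF
    opp = opp_fold_round if opp_fold_round is not None else INF
    end = min(mine, opp)            # the game stops at the first fold
    paid = cost * (end - 1)         # both fight through rounds 1..end-1
    if mine == opp:                 # simultaneous fold: nobody wins the prize
        return (0.0 - paid, 0.0 - paid)
    if mine < opp:                  # I fold first: opponent takes the prize
        return (0.0 - paid, prize - paid)
    return (prize - paid, 0.0 - paid)

print(payoffs(1, 1))       # Example 1: both walk away with nothing
print(payoffs(1, None))    # Example 2: I net 0, opponent nets 5
print(payoffs(None, 3))    # Example 3: I net 3.50, opponent loses 1.50
```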

How many rounds of fighting would you be willing to go? How would your answer change with the size of the prize? With the size of the per-round fee? I’ll post analysis next week followed by interpretations and applications.

by Ian Crosby on September 11th, 2009 at 05:47

There does not appear to be any pure strategy to win here, so my inclination is to answer that it is best not to play. I hesitate only for a non-economic reason – because if I fold in the first round, there is at least some chance my opponent will not, and will get the five bucks. Even though this is no skin off my nose, it sticks in my craw. However, since my opponent is a computer, and therefore not only perfectly rational itself, but presumably programmed to know that a human player will act irrationally some portion of the time, it will probably play or fold in proportion to that likelihood. Of course, I could play that game too, but since, being human, I am more averse to a loss than I am avaricious for an equivalent probabilistically discounted gain, I will still stand pat and take satisfaction in the fact that the computer is not winning as much as it could because it has to take into account the likelihood that I am crazy.

by Austin Frakt on September 11th, 2009 at 06:21

@Ian Crosby – I like your answer. It has seeds of the classical (game theoretic) economics one, plus a bit more. Your intuition is quite good (or you have a bit of game theory training?). Stay tuned.

by Ian Crosby on September 11th, 2009 at 06:44

I have as much formal economics training as a legal education typically provides, plus what I’ve picked up in ten years of antitrust litigation, and a current interest going back several years in the field of behavioral economics (clearly in evidence). Looking forward to your analysis!

by Austin Frakt on September 11th, 2009 at 08:21

@Ian Crosby – You’re formally trained (antitrust) and informally read up (behavioral econ) in areas of personal interest. I’d welcome further e-mail exchanges with you on these topics. If you’re interested, contact me via my contact form.

by STaylor on September 12th, 2009 at 18:11

Interesting game. If each player only cares about maximizing his or her own expected payoff (although Ian mentions a few good reasons why this might not be a reasonable assumption), then I think the only stable equilibrium will be to choose to fold with some probability P in each round. I set the game up as follows:

Size of the prize = V

Cost of continuing to the next round = C

Probability that opponent folds = Q

Expected payoff if neither player folds and the game continues to round 2 = Y

Then, my expected payoff today can be written as follows:

X = P(0) + (1–P)[QV + (1–Q)(Y – C)]

If the game is played indefinitely and we assume no discounting, then X = Y. To get a symmetric equilibrium we need P = Q. So, the expected payoff can be written as follows:

X = (1–P)[PV + (1–P)(X–C)]

Solving for X:

X = [(1–P)PV – (1–P)(1–P)C] / [1–(1–P)(1–P)]

Now we can choose P to maximize X. I find that if V = $5 and C = $0.25, then we should choose P = 0.418, which would yield an expected payoff of $1.46. So, in each round, each player will choose to fold with a probability of 41.8%. If we increase the value of V, then our optimal P will decrease (we will be less likely to fold). If we increase the value of C, then our optimal P will increase (we will be more likely to fold).

I enjoy your blog and am interested to read your thoughts. Keep up the good work.

by Austin Frakt on September 12th, 2009 at 21:07

@STaylor – Your approach is very clever and it is hard to see what could be wrong with it. Yet it does not correspond to the correct solution as I know it. (Note first that the C in the problem is $0.75 not $0.25, but that doesn’t matter in evaluating your proposed solution.)

Algebraically embedded in your solution is the correct one, as you will see on Monday or by following what I’m about to describe.

In a mixed strategy Nash equilibrium (which is what you’re solving for) the probabilities with which each player mixes his pure strategies are such that each player is indifferent to his choices. Put another way, the expected payoff of fighting must equal that of folding. You already know that the expected payoff of folding is zero. Therefore, the expected payoff of fighting must also be zero. This holds for every round. Thus, in your notation X = Y = 0.

Working this through, one should arrive at something like QV – (1–Q)C = 0, or Q = C/(V+C), where Q is the probability of folding for either player (symmetric game). I’m using Q, not P, because my notation in the analysis on Monday uses P for the probability of fighting (not folding). So P = 1–Q.
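With the actual numbers from the problem (V = $5, C = $0.75), the indifference condition pins down the fold probability directly. A quick sketch (the function name is mine):

```python
def fold_probability(V, C):
    """Equilibrium per-round fold probability from the indifference
    condition Q*V - (1 - Q)*C = 0, i.e. Q = C / (V + C)."""
    return C / (V + C)

Q = fold_probability(V=5.0, C=0.75)
print(round(Q, 4))  # about 0.13: each player folds roughly 13% of the time per round
# Sanity check: the expected payoff of fighting is zero at this Q
assert abs(Q * 5.0 - (1 - Q) * 0.75) < 1e-12
```

Note the comparative statics match the earlier discussion: raising C raises the fold probability, and raising V lowers it.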

So, why don’t you arrive at this solution too? We can see from your algebra that you’re adding the expected payoff of folding to that of fighting, not equating them as is required in a mixed strategy Nash equilibrium. You’re adding them because you’re maximizing expected payoff. I’m not sure why this doesn’t produce the correct result though.

by STaylor on September 13th, 2009 at 12:26

I see that the player must be indifferent between folding and fighting so I agree that I did something wrong. I think my mistake was imposing P=Q before I maximized. The probability of my opponent folding (Q) should be taken as given when I choose P (using my notation). So, I should first solve the problem from my perspective taking Q as given, and then solve the problem from my opponent’s perspective taking P as given. This will yield two symmetric equations in terms of P and Q which I can solve to get P = Q = C/(C+V). (I haven’t actually worked it out, but this seems to make sense–your methodology is clearly more straightforward.)

Of course, if I am free to collude with my opponent, then I will just pay him or her $2.50 to fold in the first round:)

by Austin Frakt on September 13th, 2009 at 13:19

@STaylor – We agree what the solution is. That’s good. My colleague, an expert in game theory, confirmed that setting P=Q before maximizing is the root of the problem. He wrote me that doing so “implicitly means that I can change my opponent’s actions. One needs to take the derivative first and then apply symmetry.”

This instant I don’t see all the way through to the solution keeping P and Q distinct. I haven’t tried to write it out though. If you have, could you share it?

by STaylor on September 13th, 2009 at 17:09

I was afraid you were going to ask that. It’s been a while since I’ve attempted algebra like this, but here’s a stab at it.

After we recognize that X = Y, our profit function is as follows:

X = (1–P)[QV + (1–Q)(X – C)]

X = (1–P)QV + (1–P)(1–Q)X – (1–P)(1–Q)C

X = [(1–P)QV – (1–P)(1–Q)C] / [1– (1–P)(1–Q)]

X = (1–P) [QV – (1–Q)C] / [1– (1–P)(1–Q)]

We choose P to maximize X.

FOC: dX / dP = 0

To make things simpler, let X = f(P) / g(P) where,

f(P) = (1–P) [QV – (1–Q)C]

g(P) = 1– (1–P)(1–Q)

f’(P) = –[QV – (1–Q)C]

g’(P) = 1–Q

So, using the “quotient rule”:

dX / dP = [g(P) f’(P) – f(P) g’(P)] / [g(P) g(P)]

After verifying that the denominator is not equal to zero, we can write the FOC as follows:

[g(P) f’(P) – f(P) g’(P)] = 0

g(P) f’(P) = f(P) g’(P)

Plugging P and Q back in:

– [1– (1–P)(1–Q)] [QV – (1–Q)C] = (1–P) [QV – (1–Q)C] (1–Q)

[QV – (1–Q)C] is on both sides of the equation, so one solution is when QV – (1–Q)C = 0. Or, Q = C/(V+C).

The other solution is when

– [1– (1–P)(1–Q)] = (1–P)(1–Q)

But this is not really a solution, as it can never hold…

1– (1–P)(1–Q) = – (1–P)(1–Q)

1 = 0 (?)

So, Q = C / (V+C).
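A numerical spot-check of this conclusion: at Q = C/(V+C) the factor QV – (1–Q)C vanishes, so the expected payoff X is zero no matter what P is, consistent with the indifference logic above. A sketch with the problem’s values:

```python
def X(P, Q, V=5.0, C=0.75):
    """STaylor's expected payoff: X = (1-P)[QV - (1-Q)C] / [1-(1-P)(1-Q)]."""
    return (1 - P) * (Q * V - (1 - Q) * C) / (1 - (1 - P) * (1 - Q))

V, C = 5.0, 0.75
Q_star = C / (V + C)
for P in (0.1, 0.3, 0.5, 0.9):
    # the payoff is identically zero, so dX/dP = 0 holds trivially
    assert abs(X(P, Q_star)) < 1e-12
print("X is flat (and zero) in P at Q = C/(V+C)")
```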

by Austin Frakt on September 13th, 2009 at 17:18

@STaylor – I didn’t check every line of algebra but it looks like it worked out. That’s a relief! That second solution is strange (1=0). What’s the source of that oddity I wonder? Does it have any meaning at all?

I’m glad you illustrated this solution approach. When I was learning about how to find mixed strategy Nash equilibrium my first instinct was to do as you did. But then I was quickly taught to look for payoff equivalence. I never bothered to come back and think about explicitly maximizing the payoff. Thank goodness maximal payoff occurs at the Nash equilibrium!

Well, anyone reading our comments this far has little need for Monday’s post. Anyone puzzled by them may find Monday’s post an easier route to the solution.

by STaylor on September 13th, 2009 at 18:20

The second “solution” comes from dividing both sides of the equation by QV – (1–Q)C, which is only valid when that factor is nonzero. Since the resulting equation can never hold, it just illustrates that there is no other solution when QV – (1–Q)C does not equal zero, so QV – (1–Q)C = 0 is the unique solution.

I have had some training in game theory, but was having difficulty recalling much about mixed strategies. Like Ian, I also work on antitrust matters (but as a consultant, not a lawyer).

by Austin Frakt on September 13th, 2009 at 19:30

@STaylor – Well your recall of math is quite good! This has been a rewarding dialog.

by STaylor on September 14th, 2009 at 10:00

I was thinking about this some more, and I think that the math from my previous post only gets us halfway there. The purpose of the maximization is to find some value P that maximizes my expected profits. But my final condition does not contain P! What it actually tells me is that when QV – (1–Q)C = 0, it does not matter what I choose for P, my expected profits will be the same regardless. In fact, it seems that there is no interior solution to this maximization (which we probably would be able to see if we graphed the profit function). Rather,

when Q > C / (V+C), we have a corner solution at P = 0;

when Q < C / (V+C), we have a corner solution at P = 1; and

when Q = C / (V+C), any P is optimal.

The first statement tells us that I should always FIGHT if my opponent folds with a probability GREATER than C / (V+C). The second statement tells us that I should always FOLD if my opponent folds with a probability LESS than C / (V+C). The third statement tells us that I am indifferent between fighting and folding if my opponent folds with a probability equal to C / (V+C).
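These three cases can be checked numerically with STaylor’s payoff expression X = (1–P)[QV – (1–Q)C] / [1– (1–P)(1–Q)], a sketch using the problem’s V = $5 and C = $0.75:

```python
V, C = 5.0, 0.75
Q_star = C / (V + C)

def X(P, Q):
    """Expected payoff X = (1-P)[QV - (1-Q)C] / [1-(1-P)(1-Q)]."""
    return (1 - P) * (Q * V - (1 - Q) * C) / (1 - (1 - P) * (1 - Q))

Ps = [0.0, 0.25, 0.5, 0.75, 1.0]
high, low = Q_star + 0.1, Q_star - 0.1
# Opponent folds often (Q > Q*): X falls as my fold probability P rises,
# so the corner P = 0 (always fight) is best.
assert all(X(a, high) > X(b, high) for a, b in zip(Ps, Ps[1:]))
# Opponent rarely folds (Q < Q*): X rises with P, so P = 1 (fold) is best.
assert all(X(a, low) < X(b, low) for a, b in zip(Ps, Ps[1:]))
# Exactly at Q = Q*: every P gives the same payoff (zero).
assert all(abs(X(p, Q_star)) < 1e-12 for p in Ps)
```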

Because my opponent has analogous conditions, we can see that one equilibrium is at Q = P = C / (V+C). This is consistent with what we know about mixed strategies. I am actually indifferent between folding and fighting as long as my opponent randomly folds with a probability of C / (V+C). However, I need to choose P = C / (V+C) to keep my opponent indifferent. (This brings to mind your point in a previous post about the battle between the pitcher and the baserunner.)

Perhaps you or your game theorist friend can verify that this is true, but I think that there are two additional equilibria at

Q = 0, P = 1; and

Q = 1, P = 0.

In other words, it is also a stable equilibrium when one of us always fights and the other always folds.

by Austin Frakt on September 14th, 2009 at 10:44

@STaylor – You got it! I think today’s post with the solution says the same thing only at a far more intuitive level. Once again, it is nice to see the formal math correspond to both intuition and to the game theoretic (valid!) shortcuts.

I’ll call your additional comment to the attention of my game theorist friend. If he has anything to add beyond what you or I have already written perhaps he will.

by Julian on September 14th, 2009 at 13:47

As the ‘official’ game theorist here, I can say you both have it spot on. The fact that one chooses one’s own probability in order to make one’s opponent indifferent is a necessary but troubling aspect of mixed-strategy Nash equilibria. Several attempts have been made to add a little realism to this construct, most notably using incomplete information:

http://en.wikipedia.org/wiki/Purification_theorem

But there’s no real agreed-upon ‘solution’ to this issue…

by STaylor on September 14th, 2009 at 22:30

Thank you both. It’s always interesting to see how the math supports the intuition. Looking forward to the upcoming post with applications.