• Help me learn new things! – Game Theory

    This post is part of a series in which I’m dedicating a month to learning about twelve new things this year. The full schedule can be found here. This is month three. (tl;dr at the bottom of this post)

    I can’t believe March is already over. This month was game theory. I learned one big lesson – I’m not that fond of game theory.

    For some reason, I thought this would be fun, or something that would be intuitive and easy to do in my head. Instead, it turns out that game theory is much more of a grind-it-out mathematical exercise. It’s my fault; my expectations were off. Also, if I never hear about the Prisoner’s Dilemma again, it will be too soon. Enough with the Prisoner’s Dilemma! At this point, I want to choose “confess” just to get out of the conversation.

    That’s not to say I didn’t learn anything. I certainly did. For instance, I now get what John Nash was talking about in A Beautiful Mind when he figured out Nash equilibria (for which he won a Nobel Prize). Moreover, there were certain books along the way which were much more practical than others. You can all benefit from my experience. Let’s begin.

    The first book I read was Game Theory 101: The Complete Textbook, by William Spaniel. It wasn’t a bad place to start. It begins, as pretty much all of these do, with the Prisoner’s Dilemma. It then builds off of that example in increasingly complex matrices to illustrate different ways of determining Nash equilibria and optimal outcomes. You also get to learn about the many other classic games used to illustrate game theory, including stag hunt, matching pennies, chicken, ultimatum, centipede, hawk-dove, and the-one-where-the-couple-have-to-independently-decide-whether-to-go-to-the-ballet-or-a-boxing-match. There’s math, there are diagrams, and it’s pretty well described. If I had one complaint about this book, it’s that it’s a bit dry. I’m glad it was first.

    Next, I turned to Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life by Dixit and Nalebuff. This was much less dry than the first book. I was feeling pretty positive about it, until two books later. I felt even less positive about it once I got to Dixit’s textbook at the end of the month. Both were superior to this one. It did its best to make the subject relevant and interesting, but once it got to the usual “Prisoner’s Dilemma”, it lost a lot of its spark.

    I tried Game Theory: A Very Short Introduction (Very Short Introductions). It was short. It was also not illuminating beyond anything else I read.

    Game-Changer: Game Theory and the Art of Transforming Strategic Situations by David McAdams was a breath of fresh air (thanks, Peter Ubel!). This was the best written of all the books I read, immediately getting into real-world applications rather than focusing on the usual bread-and-butter games others do. This was the first book that got me thinking beyond simple games to actual strategy. Easy to read, and I got a lot out of it.

    William Poundstone’s Prisoner’s Dilemma was a different kind of book. It was much more of a biography of John Von Neumann, one of the architects of game theory, than about game theory itself. If that’s what you’re looking for, you might enjoy this book. I wanted to learn about game theory more than the men behind it, so this was sort of lost on me.

    Next, I tried Game Theory for Applied Economists, by Robert Gibbons. This might be a good book, but I can’t tell you. I should have paid more attention to the title. This is definitely written for skilled applied economists, because the math was way beyond what I wanted to do. Maybe I could have figured it out, but I wasn’t going to spend the time, as I had 8 books to read this month. It was very, very detailed and very, very complicated.

    The Compleat Strategyst, by J.D. Williams, is evidently a classic tome on the subject. I should have been tipped off by the olde English in the title. This wasn’t for me, either. It was too hard a read, and I couldn’t finish it.

    Finally, I read Games of Strategy, by Dixit, Skeath, and Reiley. This was an odd book to finish with, because it was the most comprehensive and thoughtful of the bunch. It was an actual textbook. The chapters were well designed, the examples were good, and I found it to be an easy read. If I had to pick one book to give you the foundation of game theory, this would be it. I’m glad I pushed my way through to the end of my list, because I would have missed this.

    Lastly, based on Austin’s recommendation, I watched ECON 159: GAME THEORY from the Open Yale Courses. You can read his post on the course. It’s totally worth it, but it’s a 25+ hour commitment, and I’d rather plow through a book in a couple of hours than spend the time it took to get through all of the lectures.

    So where does that leave me? Unlike previous months, I don’t feel like I learned any big lessons that will change what I do. I was already somewhat immersed in decision trees, utilities, backward induction, and decision analysis from my research. I didn’t realize how much overlap there was between game theory and this body of work. I also think that I’m a pretty good strategist, given my huge interest in games in general, as well as in human behavior. Maybe I’ve got a lot of this internalized without knowing all of the math and proofs behind it.

    If I were ranking my three months so far, this would unfortunately come in third. But that may be less a comment on game theory than on me, and what I knew coming in. I’m looking forward to April!

    tl;dr: If you want to read one enjoyable book, make it Game-Changer: Game Theory and the Art of Transforming Strategic Situations; if you are willing to read a textbook, then go with Games of Strategy; if you’re patient and willing to put in the time (or, if you hate to read), ECON 159: GAME THEORY is the way to go.

    @aaronecarroll

    Comments closed
     
  • Repost: What’s 2/3 of the average?

    I originally posted the following on August 13, 2009, but if you are unfamiliar with it, have some fun this weekend thinking about it. (That post is long enough ago that I bet even if you saw it, you can’t immediately remember how to think this through.)

    Suppose everyone in your town selects a real number between 0 and 100, inclusive (i.e. 0 and 100 are both possible choices, as is any other number between). The winner is the individual (or individuals) who selects the number closest to 2/3 of the average of numbers chosen. What number do you choose? Why?

    Just to be clear, suppose there are three players who select the numbers 10, 20, and 30. The average is 20, and 2/3 of the average is 13.333… . Therefore, the winner is the individual who selected 10 because it is closest to 2/3 of the average.
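    The worked example above is easy to sketch in a few lines of code (the function name is my own, not from the original post):

```python
def two_thirds_winners(guesses):
    """Return the guess(es) closest to 2/3 of the average of all guesses."""
    target = (2 / 3) * (sum(guesses) / len(guesses))
    best = min(abs(g - target) for g in guesses)
    return [g for g in guesses if abs(g - target) == best]

# The worked example from the post: three players choose 10, 20, and 30.
# The average is 20, the target is 13.33..., and 10 is closest.
print(two_thirds_winners([10, 20, 30]))  # → [10]
```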

    Yes, you can find out all about this problem on the internet. If that’s how you want to go about this, then just read my analysis and application to speculative bubbles in financial markets. But, do yourself a favor and think about the problem, talk it over with your colleagues, family, and friends (even play it with them). I bet you’ll have some fun, or 2/3 of the average of fun, or something.

    @afrakt

    Comments closed
     
  • Common knowledge

    A long time ago I blogged about the notion of common knowledge in game theory. Yesterday, over at the Cheap Talk blog, Jeff Ely put up a post with an interesting problem that hinges on the concept (Cheap Talk is a good blog for lovers of game theory):

    Two generals, you and me, have to coordinate an attack on the enemy.  An attack will succeed only if we both attack at the same time and if the enemy is vulnerable.

    From my position I can directly observe whether the enemy is vulnerable.  You on the other hand must send a scout and he will return at some random time. We agree that once you learn that the enemy is vulnerable, you will send a pigeon to me confirming that an attack should commence.  It will take your pigeon either one day or two to complete the trip.

    Suppose that indeed the enemy is vulnerable, I observe that is the case, and on day n your pigeon arrives informing me that you know it too.  I am supposed to attack.  But will I?

    Go to Cheap Talk to find out.

    Comments closed
     
  • War of Attrition: Filibuster Bluff Edition

    Taking on faith that Jonathan Chait’s interpretation of the reporting is accurate, it seems that Republicans have announced a plan to filibuster the first vote on the financial regulation bill and then fold. Something seems fishy about that plan.

    Now that the Democrats know the Republicans are planning to defect after the first vote, why on Earth would they compromise? Moreover, what is the point of taking the hit by filibustering reform in the first place? It could work, in theory, if you could bluff the Democrats into thinking the GOP might hold the line indefinitely. But I’m pretty sure the Democratic party has access to articles published in Politico, which means the jig is up. So now the Republicans are trying to bluff in poker when they and their opponent know they have the weaker hand, and their opponent has heard them admit that their strategy is to bet for a couple rounds and fold before the end. Why not just cut their losses now? This makes zero sense.

    Agreed. Though it isn’t necessary to appeal to game theory, this is an illustration of some game-theoretic ideas. After all, game theory is really just common sense with logic. But, in case the logic is lost on any Republicans (or Democrats), it isn’t hard to find a good review.

    Oh, let’s see, it turns out I wrote one in a post on the War of Attrition game.

    If you know with certainty that your opponent will fold [early] … then it is rational for you to fight because you will win. … However, if your opponent intends to fold in any round then it is only sensible for him to do so in round 1. Why pay [the costs of a] fight only to fold later? …

    Hence, if either player is not willing to fight forever he should fold in round 1. If the other player knows this to be the case, the other player should fight. It turns out these are the two pure strategy Nash equilibria (game theory jargon) in this game: (1) you fight, your opponent folds in round 1 and (2) you fold, your opponent fights in round 1.

    Is this really so hard to understand?

    Comments closed
     
  • Gaming Frivolity

    This past week, the Senate Judiciary Committee heard testimony on whether to overrule two recent Supreme Court decisions that made it much harder for plaintiffs to bring civil lawsuits.  The decisions, Bell Atlantic v. Twombly and Ashcroft v. Iqbal, held that a court may dismiss a suit if it finds that the plaintiff’s claims are “implausible” even before the plaintiff has had an opportunity to obtain information from the defendant to prove its case.

    Critics have slammed the decisions as barring the courthouse door to meritorious suits in which crucial information is solely in the hands of the defendant.  An example of such a suit might be one alleging price-fixing.  In such a case, evidence showing that the defendant’s prices were the result of a conspiracy rather than simply meeting competition is not likely to be available to a plaintiff unless the defendant is compelled to produce it in a lawsuit.  But under Twombly and Iqbal, a court could dismiss a price-fixing lawsuit before the plaintiff had the opportunity to obtain that information, on the ground that the defendant’s actions appeared equally consistent with fair competition.

    Supporters of these decisions counter that requiring a plaintiff to identify facts at the outset that tend to exclude innocent explanations is warranted in order to weed out frivolous lawsuits.  They claim that otherwise, plaintiffs may blindly file meritless lawsuits that defendants will be coerced to settle rather than face the cost of litigation.

    While the cases and editorial pages are replete with references to the scourge of frivolous litigation, it turns out that there is very little empirical support for the claim that courts are awash in it.  And there are reasons to believe that defendants settle too few rather than too many of the kind of suits that Twombly and Iqbal seek to eliminate.

    The game theory of frivolous litigation is nicely modeled in a 1997 article by law professor Robert G. Bone.*  Among the many scenarios he analyzes is the case, like our price-fixing example, where a defendant knows at the outset whether a plaintiff’s case has merit, but the plaintiff can only find out before filing suit, if at all, through a very expensive investigation (bribing an insider, for example).

    In this case, it turns out that the defendant’s best strategy is to fight every unmeritorious case, and offer to settle some but not all meritorious cases at the outset.  The plaintiff’s best strategy is to drop its case some of the time when the defendant declines to settle at the outset, not knowing whether or not its case has merit.  Thus defendants never pay to settle unmeritorious cases, but plaintiffs sometimes unknowingly drop meritorious ones.  The result is a transfer of wealth from uncompensated meritorious plaintiffs to guilty defendants, not from innocent defendants to frivolous plaintiffs.

    Twombly and Iqbal, it turns out, are a solution in search of a problem.  They should be overruled.

    * Modeling Frivolous Lawsuits, 145 U. Pa. L. Rev. 519 (1997).

    Comments closed
     
  • Interview with Ben Polak

    I learned of Ben Polak through his course Econ 159, available online through Open Yale Courses (see my review). In addition to being a superb teacher, Polak is an expert on decision theory, game theory, and economic history. His work explores economic agents whose goals are richer than those captured in traditional models. His contributions to game theory range from foundational theoretical work on common knowledge, to applied topics in corporate finance and law and economics.

    Most recently, he has made contributions to the theory of repeated games with asymmetric information. Other research interests include economic inequality and individuals’ responses to uncertainty. Professor Polak is currently engaged in an ambitious empirical project that tackles questions of industrial organization in the setting of industrial revolution in England. For a list of his achievements, awards, and selected papers visit his Yale School of Management page.

    Polak was kind enough to answer a few questions by e-mail.

    Austin Frakt (AF): The overarching hook of Econ 159 was to build toward a game theoretic model of human cooperation. In doing so, more and more aspects of human behavior were discussed and incorporated. What is the current and historical relationship between game theory and behavioral economics? How has one influenced the other?

    Ben Polak (BP): Some of the best work in behavioral economic theory is directly about games, Matt Rabin’s work for example.  Behavioral economics naturally introduces some new game-theoretic issues. For example, in many behavioral models, the agents’ preferences are not “dynamically consistent”.  This means that what they may want to do tomorrow may not be what they want today for themselves to do tomorrow.  If the agent anticipates this, then she is going to be playing a game with herself.  Many of the ideas from the class – such as backward induction – take on a new importance.  The agent now has to anticipate and roll back what she will do tomorrow to decide what she should do today.  And ideas like commitment take on a new role: agents may want to commit themselves not just to influence others’ choices (like burning the boats) but simply to control the scope of their future selves’ choices.

    AF: Nash is so closely associated with game theory. How much of the content of Econ 159 was pioneered by him? Which is due to the more recent work of others? Where is the frontier today?

    BP: Nash introduced the notion of Nash equilibrium but, at the time he was working, most of the work was on cooperative game theory rather than non-cooperative game theory.  So, for example, bargaining over a pie was analyzed on the basis of normative axioms rather than strategic choices.  One of Nash’s major papers was on normative bargaining solutions but another paper introduced a game in the sense we discuss in the class, and showed that the outcome suggested by normative criteria would be the outcome that would result in this game.  That paper set off a whole literature called “the Nash program”: the attempt to find games that produce particular ‘desirable’ outcomes.  The last Nobel prize given to game theorists (that given to Maskin, Myerson and Hurwicz) was given for studying “mechanism design” which (in a sense) is the field that descended from Nash’s first paper.   The early pioneers of game theory – Nash, Shapley, Shubik, von Neumann, Scarf – were (and are) extraordinary minds.

    AF: The game theoretic problems approached in 159 are, by necessity, simple enough to understand and analyze “by hand.” No doubt there are vastly more complex problems and models for which one requires the aid of a computer. What are the common computational packages and approaches? Are there parameter estimation algorithms that one could say are the analog of standard econometrics? What are some really big and important problems that are addressed with such things?

    BP: Here I am a bit out of my field when it comes to specific packages.  But one of the main advances in econometrics in the last two decades has been the introduction of so-called structural models.  Typically, these models are game-theory based and the econometric techniques use features of equilibria to help identify parameters in the model.  Two major examples are the econometric models used to estimate demand first for items such as cars and second behind bids in auctions.  These econometric models have revolutionized empirical studies of industry.  Hand in hand with this is a literature on computing equilibria in (possibly complicated) games.  Again, one use is in analyzing the data.

    AF: Do you teach other classes at Yale and might they one day be available via Open Yale Courses? (Would it help if I begged?)

    BP: Well, for my sins, I am about to take over as chair of economics at Yale.  Once that is over, I would like to develop a new course and (if it works well) I would like to try to have it added to the open courses.  In the meantime, Bob Shiller’s wonderful course on Financial Markets is available [reviewed on this blog].  And we are hoping that, one day soon, John Geanakoplos’s superb course on Financial Theory will be available.  John is one of the most inspiring teachers at Yale, so I am looking forward to it.

    Comments closed
     
  • Passing the Plate for Universal Health Insurance?

    My Regular Libertarian Foil ™ (RLF – an acquaintance who often provokes me with unorthodox points of view) recently posed the following question: Why can’t all the liberals who think universal health insurance is a moral issue take up a collection and fund it themselves without raising my taxes?

    This question put me in mind of a problem I encountered in Ben Polak’s game theory course.  Suppose that a person can make an investment of ten dollars that will return fifteen dollars (net return of five dollars), only if ninety percent of everybody also invests.  If less than ninety percent invest, an individual’s return on his or her ten dollar investment is nothing.  When Professor Polak first posed this problem to his class of Yale students, only about half chose to invest, and they all lost their investments.  A second round of the game produced very few investors, and a third round almost none.  He then explained the dynamics of the game and its payoffs.  If I believe that nearly everybody will invest, then my best response to their expected play is to also invest, and the same is true for everybody else who shares the same belief.  In this case, the solution set of everybody investing is said to be a “Nash equilibrium,” because nobody can do better by not investing given their belief that others will as well.

    But there is another Nash equilibrium in this game.  If I believe that not enough people will invest, then my best response to others’ expected play is not to invest either, and the same is again true for everybody else who shares that belief.  We saw actual play converge on this Nash equilibrium as the members of the class set their expectations in response to others’ play in previous rounds.  This was unfortunate, however, because the payoffs associated with the Nash equilibrium of everybody investing are superior for everybody (technically, “Pareto superior”).  Happily, solving the coordination problem that frustrated the superior equilibrium in the initial play was as simple as explaining the game, as everybody chose to invest during a final round of play at the end of the lesson.
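    A minimal sketch of the classroom investment game, assuming the payoffs described above (invest $10, get back $15 for a net gain of $5 only if at least 90% of players invest; otherwise the $10 is lost). The function name is illustrative, not from the course:

```python
def payoff(i_invest, fraction_investing):
    """Net payoff to one player, given the fraction of all players who invest."""
    if not i_invest:
        return 0
    return 5 if fraction_investing >= 0.9 else -10

# If I expect (nearly) everyone to invest, investing is my best response...
assert payoff(True, 1.0) > payoff(False, 1.0)
# ...but if I expect only half the class to invest, staying out is best.
assert payoff(False, 0.5) > payoff(True, 0.5)
```

The two assertions correspond to the two Nash equilibria: everybody investing and nobody investing.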

    It is simple to construct a similar model for funding universal health insurance through a voluntary tax on liberals.  Suppose that a voluntary tax on each liberal in the amount of X would be sufficient to fund universal health insurance, provided that ninety percent of liberals paid the tax.  Suppose also that each liberal experiences an increase in utility (happiness, moral satisfaction) equivalent to (i.e., for which they would have willingly paid) 2X when there is universal health insurance.  Clearly, each liberal is better off when all liberals choose to pay the tax than if no liberals do.  But is there a Nash equilibrium in which all liberals pay the tax?  There is not.  If I pay the tax and all the others do too, my expected payoff is X — the 2X utility from having universal health insurance less the X I paid in tax.  But if all the other liberals pay and I don’t, my payoff is 2X.  So not paying is my best response to the others’ expected play if I believe that they will pay the tax.  And of course, my best response to others’ expected play if I believe a sufficient number of them will not pay the tax is also not to pay the tax.  The game has one Nash equilibrium in which no liberal pays the tax and there is no universal health insurance, even though liberals would be collectively better off if everybody paid the tax.  It is, in short, a prisoner’s dilemma.
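    The voluntary-tax payoffs can be sketched the same way (with X = 1 for concreteness, and simplifying by treating my own payment as never pivotal to reaching the ninety percent threshold):

```python
X = 1

def payoff(i_pay, enough_others_pay):
    """My payoff: 2X of utility if universal coverage exists, minus any tax paid."""
    utility = 2 * X if enough_others_pay else 0
    tax = X if i_pay else 0
    return utility - tax

# Free-riding beats paying whether or not enough others pay:
assert payoff(False, True) > payoff(True, True)    # 2X beats X
assert payoff(False, False) > payoff(True, False)  # 0 beats -X
```

Since not paying is the best response no matter what I believe the others will do, it is a dominant strategy, which is exactly the prisoner’s dilemma structure the post describes.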

    One potential solution to a prisoner’s dilemma is cooperation.  We can all agree to pay the tax for our mutual benefit.  But of course, each of us has an incentive to cheat and receive the benefit without paying a share of the cost.  And studies of successive-round prisoner’s dilemma type games have shown that even players who are initially inclined to cooperate will revert to their equilibrium strategies as they perceive or fear that others are cheating.  So the expected outcome even if liberals all agree to pay the tax is eventually the same without some enforcement mechanism to prevent cheating.  We could make the liberal tax mandatory, but then we would have to identify the liberals.  Though I’m sure my RLF would rejoin that he finds them easy enough to spot, you get the picture.  Private charity, unsurprisingly, can’t solve the problem of universal health insurance.

    Comments closed
     
  • Yale’s Econ 159: Game Theory Made Fun

    There’s a reason I’ve posted a lot on game theory of late: I was taking a course on the topic, continuing my education by podcast. The Open Yale Courses (OYC) program makes it easy to turn your iPod or MP3 player into a classroom of sorts, with 15 Yale courses online and 30 more expected over the next three years. Each course includes lectures by video or audio, downloadable from the OYC website or available through iTunes.

    So far I’ve listened through Econ 252, which I reviewed previously, and Econ 159. The latter is Yale’s basic game theory course taught by Ben Polak. I don’t need to say very much about the content of the course since I’ve already given readers a large dose of basic game theory. In fact, all of the content of my game theory posts has been inspired by that of Econ 159.

    In this review I’ll comment on the style of the course and the quality of the instructor, Ben Polak. Polak is one of those virtuoso teachers. If you’re lucky you’ve had one or two such teachers in your life: the sort who entertains as he instructs, who presents his subject in such easy, digestible bites that one cannot help but learn it. In fact, Polak has won awards for his teaching. It helps that he’s also very funny and has a charming British accent.

    Game theory can’t be taught without use of a blackboard. There’s a lot of drawing of diagrams, graphs, arrays of numbers, and so forth. So one might think it impossible to learn the subject aurally by podcast. However, Polak speaks what he writes so clearly that I did not find it difficult to follow along and to “see” the visual in my head.

    I confess that there were details I could not keep track of (was that a “2, 1” payoff in the upper right of the array or the lower left?), but it didn’t matter. It isn’t the details that are important, it is the main concepts. Those Polak makes as plain as day, and they’re emphasized more than once so one cannot miss them.

    Polak told his students at the start of the class that the course was “moderately hard but moderately fun.” I think that’s fair. It requires a certain type of logical mind to enjoy game theory, and that is hard for some people. But Polak makes it about as fun as game theory is likely to ever be.

    In my review of Yale’s Econ 252 I wrote that its instructor, Robert Shiller, had set the bar high. Well, Ben Polak far exceeded it and Econ 159 is one of the best courses I’ve ever taken (and I’ve taken a lot of them). If you’re interested in learning basic game theory and/or if you’ve enjoyed my game theory posts, give it a try. Even if you only make it through part of the course you’ll still have learned something and probably had a good time doing so.

    Comments closed
     
  • “War of Attrition”: Interpretations and Applications

    Last week I posed the war of attrition game, and earlier this week I analyzed it. Building on that analysis, in this post I provide some interpretations and applications for the mixed strategy Nash equilibrium solution we found. As a reminder, here’s a short summary of the game in more general notation than originally posed:

    You and a competitor will battle in rounds for a prize worth V dollars. In each round each of you may choose to either fight or fold. The first one to fold wins $0. If the other player doesn’t also fold he wins the V dollar prize. In the case you both fold in the same round you each win $0. If you both choose to fight you both go on to the next round to face a fight or fold choice again. Moving on to each round after round 1 costs C dollars per round per player. Assume V > C.

    Recall that what we found in the analysis was that there was a mixed strategy Nash equilibrium to fight with probability p=V/(V+C). In the case V=$5 and C=$0.75, p=0.87. What does this mean?

    There are multiple ways to interpret mixed strategy Nash equilibria. One way is to interpret the probability as a statement about a population. Applied to the war of attrition, this interpretation would say that proportion p of the population are fighters and the rest are folders. That’s certainly plausible. I bet that upon reading the statement of this problem last week some folks immediately thought “I will not fight even one round,” while other folks immediately thought, “I would fight forever.” Even if nobody actually thought the latter, experiments show that people will really fight a very long time, even to the point that the cumulative fight fees exceed the prize. There really are “fighter” and “folder” personality types in the population.

    A second interpretation is that each individual will play a mixed strategy. That is, you yourself will “roll the dice” in your head and fight with probability p and fold otherwise. Notice that each round is an independent “roll of the dice.” Past fight fees have no bearing on your probability of fighting in the current round. They are sunk costs. With probability p you will fight on, and on, and on…

    What is the probability that this fight will go to round 2? It is the probability that both you and your opponent fight in round 1, or p². What is the probability the fight will enter round 3? It is the probability that you and your opponent both fight in round 1 and both fight in round 2. Those decisions are independent, so the probability of entering round 3 is p⁴. In general, the probability of fighting to round n+1 is p²ⁿ. When p is large (i.e., V is large relative to C) some very long fights can occur. With each round there is hope of earning some money (if you win), so it is rational for you to continue precisely when your expected winnings of doing so are equivalent to those if you don’t. That’s exactly what p promises.
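    For the numbers in this game, those probabilities fall off surprisingly slowly, as a quick computation shows:

```python
# Probability that a war of attrition with prize V and fee C reaches
# round n+1: both players must independently choose "fight" in each of
# the first n rounds, each with probability p = V/(V+C).

V, C = 5.0, 0.75
p = V / (V + C)  # ≈ 0.87

for n in (1, 5, 10, 20):
    print(f"P(reach round {n + 1}) = {p ** (2 * n):.3f}")
```

Even reaching round 11 has a meaningful chance, by which point each player has paid $7.50 in fees for a $5 prize.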

    In fact, long wars of attrition have occurred in history: in warfare, in competition between firms, and in politics. Wars of attrition also occur in auctions. Each side is rational to continue the war, but not because they wish to recoup past fight fees. Those are sunk costs; they cannot be recovered, and therefore they are irrelevant to current play. Each stage is independent of the next, so players fight on because the expected benefit is equivalent to not fighting (in mixed strategy Nash equilibrium play). Eventually one side’s resources are exhausted and the war of attrition comes to an end.

    My set of things to say about this game has also been exhausted so this series on the war of attrition game also ends here, at least for now.

    Comments closed
     
  • Analysis of the “War of Attrition” Game

    Last week I posed the following problem from game theory called “war of attrition.” It is a simple yet famous game that explains some strange real world behavior. More on that later; first the problem and analysis:

    You and a competitor will battle in rounds for a prize worth $5. Each round you may choose to either fight or fold. So may your competitor. The first one to fold wins $0. If the other player doesn’t also fold he wins the $5. In the case you both fold in the same round you each win $0. If you both choose to fight you both go on to the next round to face a fight or fold choice again. Moving on to each round after round 1 costs $0.75 per round per player (that is, both players pay $0.75 per round in which they both choose to fight onward). How many rounds of fighting would you be willing to go? How would your answer change with the size of the prize? With the size of the per-round fee?

    I will analyze this problem with a mostly intuitive approach, sidestepping some slightly more advanced game theoretic ideas. I claim that if this game is played under conditions of common knowledge of rationality there is a rational strategy to fight in each round with probability 0.87. That’s quite a specific claim. Let’s see if I can back it up.

    First, let’s notice a few things. If you know with certainty that your opponent will fold in any round 1-7 then it is rational for you to fight because you will win $5 and pay less than $5 in fight fees ($0.75 per round after round 1). However, if your opponent intends to fold in any round then it is only sensible for him to do so in round 1. Why pay fight fees only to fold later? One can make the same argument with the roles of the players reversed.

    Hence, if either player is not willing to fight forever he should fold in round 1. If the other player knows this to be the case, the other player should fight. It turns out these are the two pure strategy Nash equilibria (game theory jargon) in this game: (1) you fight, your opponent folds in round 1 and (2) you fold, your opponent fights in round 1. (You can think of a Nash equilibrium as a pair of strategies–one for each player–from which neither can profitably deviate. As I argued above, if one is to adopt a “fold” strategy one should do so in round 1. It is clear that the “fighter” cannot gain more by also folding.)

    In real life (as well as in the game) neither player knows for certain that the other player is going to fold. So the above analysis isn’t complete. There is another Nash equilibrium, one that mixes the two pure strategies–fight and fold–probabilistically. To find it, we can use the argument I made in The Game (Theory) within the Game: a mixed strategy Nash equilibrium is one in which the pure strategies are mixed in such a way to make the pure strategy payoffs to each player equivalent. Put another way, one mixes precisely in a way to make one’s opponent indifferent to his options.

    So, you fight with probability p such that your opponent receives the same expected payoff (expected dollar winnings) under each of his options. This is a symmetric game so your opponent is also going to fight with probability p as well. How do we find the value of p? It isn’t so hard, but it is worth doing it for a more general case than the one described in the game.

    To make things more general, let’s call the prize V (=$5 in the game) and the per-round fight fee C (=$0.75 in the game). We can find p in terms of V and C by using the property of a mixed strategy Nash equilibrium given above: p makes one’s opponent indifferent to his options. So, the solution approach is to figure out your opponent’s payoffs under each option and set them equal to each other.

    If your opponent fights in round 1 he expects to earn –C with probability p if you also fight, and he expects to earn V with probability 1-p if you do not fight. Therefore, his expected payoff if he fights is –pC+(1-p)V. On the other hand, if your opponent does not fight in round 1 he expects to earn $0 no matter what you do. Setting these two expected payoffs equal gives an equation with one unknown, p:

    -pC+(1-p)V = 0

    The solution is p=V/(V+C). Plugging in the values from the game we get p=5/(5+0.75)=0.87 (rounded). Notice how p changes with V and C: it increases with increasing V and decreases with increasing C. This should be consistent with one’s intuition. The greater the prize, the more likely one is to fight for it. The higher the fight fee, the less likely one is to fight. Notice also that there is a chance the fight could go on for a long time, even to the point that the cumulative sum of the fight fees is higher than the prize. In such a fight, players are exhausting resources they can never recoup. In real life the player who runs out of resources (money to pay the fight fee) will have to fold, hence the name “war of attrition.”
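    A quick numerical check of the algebra, using the values from the game:

```python
# p = V/(V+C) should make the opponent indifferent: his expected payoff
# from fighting, -pC + (1-p)V, should equal the $0 payoff from folding.

V, C = 5.0, 0.75
p = V / (V + C)

expected_fight = -p * C + (1 - p) * V
print(round(p, 2))      # → 0.87
print(expected_fight)   # ≈ 0, confirming the indifference condition
```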

    I’ll provide some interpretation of this solution and discuss applications in my next post about the war of attrition game.

    Later: Those really interested in the nitty-gritty mathematical details might want to read the comments to my original post of this problem.

    Comments closed