Thinking Strategically - Prof. Yair Tauman (uploaded by anya, 2023-09-18)

Presentation Transcript

1. Thinking Strategically. Prof. Yair Tauman

2. The Right Game: From Lose-Lose to Win-Win. "In the early 1990s, the U.S. automobile industry was locked into an all-too-familiar mode of destructive competition. End-of-year rebates and dealer discounts were ruining the industry's profitability. As soon as one company used incentives to clear excess inventory at year end, others had to do the same. Worse still, consumers came to expect the rebates. As a result, they waited for them to be offered before buying a car, forcing manufacturers to offer incentives earlier in the year. Was there a way out? Would someone find an alternative to practices that were hurting all the companies? General Motors may have done just that.

3. In September 1992, General Motors and Household Bank issued a new credit card that allowed cardholders to apply 5% of their charges toward buying or leasing a new GM car, up to $500 per year, with a maximum of $3,500. The GM card has been the most successful credit card launch in history. One month after it was introduced, there were 1.2 million accounts. Two years later, there were 8.7 million accounts. As Hank Weed, managing director of GM's card program, explained, the card helps GM build share through the "conquest" of prospective Ford buyers and others - a traditional win-lose strategy. But the program has engineered another, more subtle change in the game of selling cars. It replaced other incentives that GM had previously offered. The net effect has been to raise the price that a non-cardholder - someone who intends to buy a Ford, for example - would have to pay for a GM car.

4. The program thus gives Ford some breathing room to raise its prices. That allows GM, in turn, to raise its prices without losing customers to Ford. The result is a win-win dynamic between GM and Ford. If the GM card is as good as it sounds, what's stopping other companies from copying it? Not much, it seems. First, Ford introduced its version of the program with Citibank. Then Volkswagen introduced its variation with MBNA Corporation. Doesn't all this imitation put a dent in the GM program? Not necessarily. In business, imitation is often thought to be a killer compliment. Textbooks on strategy warn that if others can imitate something you do, you can't make money at it. Some go even further, asserting that business strategy cannot be codified. If it could, it would be imitated and any gains would evaporate.

5. Yet the proponents of this belief are mistaken in assuming that imitation is always harmful. It's true that once GM's program is widely imitated, the company's ability to lure customers away from other manufacturers will be diminished. But imitation can also help GM. Ford and Volkswagen offset the cost of their credit card rebates by scaling back other incentive programs. The result was an effective price increase for GM customers, the vast majority of whom do not participate in the Ford and Volkswagen credit card programs. This gives GM the option to firm up its demand or raise its prices further. All three car companies now have a more loyal customer base, so there is less incentive to compete on price. To understand the full impact of the GM card program, you have to use game theory. The key is to anticipate how Ford, Volkswagen, and other automakers will respond to GM's initiative.

6. When you change the game, you want to come out ahead. But what about the fact that GM's strategy helped Ford? One common mind-set - seeing business as war - says that others have to lose in order for you to win. There may indeed be times when you want to opt for a win-lose strategy. But not always. The GM example shows that there are also times when you want to create a win-win situation. Although it may sound surprising, sometimes the best way to succeed is to let others, including your competitors, do well. Looking for win-win strategies has several advantages. First, because the approach is relatively unexplored, there is greater potential for finding new opportunities. Second, because others are not being forced to give up ground, they may offer less resistance to win-win moves, making them easier to implement. Third, because win-win moves don't force other players to retaliate, the new game is more sustainable. And finally, imitation of a win-win move is beneficial, not harmful.

7. The term coopetition encourages thinking about both cooperative and competitive ways to change the game. It means looking for win-win as well as win-lose opportunities. Keeping both possibilities in mind is important because win-lose strategies often backfire. Consider, for example, the common - and dangerous - strategy of lowering prices to gain market share. Although it may provide a temporary benefit, the gains will evaporate if others match the cuts to regain their lost share. The result is simply to reestablish the status quo, but at lower prices - a lose-lose scenario that leaves all the players worse off. That was the situation in the automobile industry before GM changed the game.

8. The Three-Door Problem (created by Dov Samet). This is a problem presented to contestants in the TV game show Let's Make a Deal, hosted by Monty Hall (1967-1990).

9. Let the game begin... There are three doors on stage. Behind one door is a car; behind the others, goats. The contestant does not know where the car is. He/she picks one of the doors.

10. The host intervenes... The host, who knows what's behind the doors, opens one of the other two doors, which hides a goat. The contestant can now choose to open the door he first chose, or to open the one door left closed. If there is a car behind the opened door, he/she wins it.

11. To stick or to switch? That is the question!

12. What does it matter? From the participant's point of view, each door is equally likely to hide the car. Once the host has opened an "empty" door, the car is behind one of the two remaining doors with equal likelihood. Hence there is no advantage or disadvantage in changing the initial choice. Is this so?

13. Please meet... Marilyn vos Savant, columnist of the "Ask Marilyn" column in Parade magazine. Listed in the Guinness Book of World Records for the "highest I.Q." (228).

14. Marilyn's Claim. In her weekly column, she claimed that the participant would do better switching doors. She received about 10,000 letters, the great majority disagreeing with her. About 1,000 of the letters were written by mathematicians and scientists. During the heat of this debate, the New York Times published a large front-page article in the July 21, 1991, Sunday issue.

15. A typical letter... This was written by Robert Sachs, a professor of mathematics at George Mason University: "You blew it! Let me explain: If one door is shown to be a loser, that information changes the probability of either remaining choice - neither of which has any reason to be more likely - to 1/2. As a professional mathematician, I am very concerned with the general public's lack of mathematical skills. Please help by confessing your error and, in the future, being more careful."

16. In situations involving uncertainty, we tend to assume that the possible outcomes are equally likely.

17. There are two possible strategies: to stick to the door which was chosen first, or to switch and open the other door.

18. Clearly, if the contestant chooses the "sticky" strategy, his probability of winning the car is 1/3. If he chooses the "switchy" strategy, he wins the car with probability 2/3.

19. Why? Suppose that he chooses to switch doors. If he first chooses an empty door (which happens with probability 2/3), he wins for sure: the host opens the other empty door, and he then switches to the door with the car.

20. An alternative explanation: Suppose we change the rules of the game. Following the initial selection of a door, the contestant can either open it, or open both remaining doors and win the car if it is behind one of them. It is obvious that now the contestant should choose to open the two remaining doors; this way the odds of winning double. But this is exactly the situation in the original game, except that there the host helps the contestant by opening one of the two remaining doors for him - an empty one.
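The 1/3 vs. 2/3 claim is easy to check by simulation. A minimal sketch (the function name and trial count are illustrative, not part of the lecture):

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Estimate the win probability of the sticky (switch=False)
    or switchy (switch=True) strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial pick
        # The host opens a door that is neither the pick nor the car
        # (deterministic choice here; it does not affect the win rate).
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the single remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(False))  # close to 1/3
print(monty_hall(True))   # close to 2/3
```

Note that switching wins exactly when the initial pick was an empty door, which is the 2/3 argument of the slide in code form.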

21. Decision Making about Medical Diagnosis. In a certain population, one out of 1,000 people carries the HIV virus. The testing device is 100% accurate on HIV carriers (every HIV carrier is positively diagnosed). The testing device is 99% accurate on non-HIV carriers (99 out of 100 non-carriers are diagnosed negatively). A person chosen at random tests positive. What is the chance (probability) that he is an HIV carrier?

22. A surprising answer: less than 10%. Why? Suppose, for example, that the population consists of 100,000 people and all of them take the test:
Carriers: 100 (one per 1,000). All of them test positive: 100 positive, 0 negative.
Non-carriers: 99,900. Of them, 1% test positive: 999 positive, 98,901 negative.
The total number of positive outcomes is 100 + 999 = 1,099, but only 100 of them are HIV carriers. The probability that a person who tested positive is indeed an HIV carrier is therefore 100/1,099, roughly 9%.
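The same answer follows directly from Bayes' rule; a short sketch using the slide's numbers (the variable names are mine):

```python
# Posterior P(carrier | positive test) via Bayes' rule.
p_carrier = 1 / 1000        # prevalence: one person in 1,000
p_pos_given_carrier = 1.0   # every carrier tests positive
p_pos_given_non = 0.01      # 1% of non-carriers test positive

# Total probability of a positive test, then the posterior.
p_pos = p_carrier * p_pos_given_carrier + (1 - p_carrier) * p_pos_given_non
posterior = p_carrier * p_pos_given_carrier / p_pos
print(round(posterior, 3))  # about 0.091, i.e. roughly 9%
```

This matches the counting argument: 100 true positives out of 1,099 positives overall.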

23. Simultaneous Games: Dominant and Dominated Strategies

24. A Couple's Dispute. A couple, Alice and Bob, decide to divorce. Their total asset is $500K, and they need to reach an agreement on how to divide it. Each of them has two choices: to hire a Brilliant lawyer, who charges $150K for his work, or to hire an Ordinary lawyer, who charges a fee of $50K. If both of them hire the same type of lawyer, they will split the asset equally; namely, each will receive $250K. A Brilliant lawyer who faces an Ordinary lawyer will achieve $375K for her client, leaving $125K for the other side.

25. The situation can be described by the following table (net amounts in $K; Alice chooses the row, Bob the column):

               Bob: B       Bob: O
  Alice: B   100, 100     225, 75
  Alice: O    75, 225     200, 200

What will happen?

26. No matter what Bob decides to do, Alice is best off hiring a Brilliant lawyer. Similarly, no matter what Alice does, Bob is best off also hiring a Brilliant lawyer.

27. The only rational outcome is that both Alice and Bob hire Brilliant lawyers and net $100K each. Who makes the most of it? The lawyers.

28. Now let's assume that both Alice and Bob also have the option to negotiate without a lawyer. Assumptions: When one side is represented by a Brilliant lawyer and the other side is not represented by a lawyer, the lawyer will get $420,000 for his client and leave the other side with $80,000. When one side is represented by an Average lawyer and the other side is not represented by a lawyer, the lawyer will get $315,000 for his client and leave the other side with $185,000. Now the table is as follows:

29. (Net amounts in $K; Alice chooses the row, Bob the column)

                        Brilliant     Average     No Lawyer
  Brilliant Lawyer     100, 100      225, 75      270, 80
  Average Lawyer        75, 225     200, 200      265, 185
  No Lawyer             80, 270     185, 265      250, 250

Again, the dominant strategy of each of them is to choose the Brilliant lawyer - an inferior outcome.

30. The Prisoners' Dilemma. In 1950, Melvin Dresher and Merrill Flood (RAND Corporation) formulated a game that was subsequently named the Prisoner's Dilemma by Albert Tucker (Princeton). Tucker came up with the following story, which motivates a game equivalent to that of Dresher and Flood.

31. Two suspects in a major crime are held in custody. There is enough evidence to convict each of them of a minor offense, but not enough evidence to convict either of them of the major crime, unless one of them acts as an informer against the other. If they both stay quiet, each will be convicted of the minor offense and spend one year in prison.

32. If one and only one of them admits, he will be freed and used as a witness against the other, who will spend life in prison. If both admit, each will spend 15 years in prison.

                       Suspect 2: Admit       Suspect 2: Quiet
  Suspect 1: Admit    15 years, 15 years      free, life in prison
  Suspect 1: Quiet    life in prison, free    1 year, 1 year

33. A dominant strategy of each of them is to admit. The only rational outcome is for both of them to spend 15 years in prison.

34. Definition: A game is a Prisoners' Dilemma game if the following two conditions are satisfied:
(1) Every player has a strictly dominant strategy (a strategy that is strictly better than any of his other strategies, irrespective of the strategies chosen by the other players).
(2) When the players choose their dominant strategies, the outcome is inferior (namely, there is another outcome that is strictly better for all players).
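Both conditions in the definition are mechanical to check. A sketch below verifies them for the lawyers game of the earlier slides; the helper names and the dictionary encoding are mine, not part of the lecture:

```python
# Check the two Prisoners' Dilemma conditions on a two-player game.
# Payoffs are encoded as {(row_strategy, col_strategy): (payoff1, payoff2)}.

def payoff(payoffs, player, own, other):
    """Payoff of `player` (0 = row, 1 = column) when he plays `own`
    and the opponent plays `other`."""
    key = (own, other) if player == 0 else (other, own)
    return payoffs[key][player]

def strictly_dominant(payoffs, player, own_strats, other_strats):
    """Return the strictly dominant strategy of `player`, or None."""
    for s in own_strats:
        if all(
            payoff(payoffs, player, s, t) > payoff(payoffs, player, s2, t)
            for s2 in own_strats if s2 != s
            for t in other_strats
        ):
            return s
    return None

# Lawyers game (payoffs in $K): B = Brilliant, O = Ordinary.
game = {("B", "B"): (100, 100), ("B", "O"): (225, 75),
        ("O", "B"): (75, 225), ("O", "O"): (200, 200)}

d1 = strictly_dominant(game, 0, ["B", "O"], ["B", "O"])
d2 = strictly_dominant(game, 1, ["B", "O"], ["B", "O"])
print(d1, d2, game[(d1, d2)])  # B B (100, 100)
# Condition (2): (100, 100) is strictly worse for both than (200, 200).
```

Hiring the Brilliant lawyer is strictly dominant for both, yet the dominant-strategy outcome (100, 100) is strictly worse for both than (200, 200) - exactly the two defining conditions.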

35. The Tension Between Game Theory and the Invisible Hand Principle. Adam Smith in 1776 (The Wealth of Nations) advocated that competition between consumers, who are free to choose what to buy, and producers, who are free to choose what and how to produce, leads to a distribution of products that is beneficial to society as a whole.

36. That is, individuals' efforts to maximize their own gains (selfish behavior) in a free market benefit society even if they have no benevolent intentions. Smith refers to this phenomenon as the "invisible hand". The term applies to any individual action that has unplanned or unintended consequences, particularly those that arise from actions not orchestrated by a central planner (a government) and come as a "pleasant surprise" to society.

37. In a free and competitive marketplace, where every consumer and every producer has only a negligible effect on the market, individual self-interest produces the maximization of society's total utility. The sad reality is that most markets are not perfectly competitive: they include "significant" players (firms) whose actions have a non-negligible impact on all other players (imperfect competition). In such markets the distribution of products is, in general, not efficient.

38. In environments with significant players, the players act strategically: when choosing a strategy, each must take into account the counter-actions of his rivals, his own counter-actions to those, and so on. As the Prisoner's Dilemma demonstrates, the strategic outcome can be very inefficient. That is, every player may do the individually best thing, yet the result may be worst from their collective viewpoint.

39. Example: The Collapse of a Cartel. Two countries, Iran and Iraq, can each produce either 2 million or 4 million barrels of crude oil a day. The daily demand for oil is Q = 120/(P + 5), or equivalently P = 120/Q - 5, where P is the market price and Q is the total output (in millions of barrels). Extraction costs are $2 per barrel in Iran and $4 per barrel in Iraq.

40. The following tables give the market price and the profits of the two countries as a function of their outputs (q1 for Iran, q2 for Iraq):

Table 1: Market Prices
              q2 = 2           q2 = 4
  q1 = 2   Q = 4, P = 25    Q = 6, P = 15
  q1 = 4   Q = 6, P = 15    Q = 8, P = 10

Table 2: Profits (Iran, Iraq)
              q2 = 2     q2 = 4
  q1 = 2     46, 42     26, 44
  q1 = 4     52, 22     32, 24

41. Clearly, q1 = q2 = 4 (Q = 8) are strictly dominant strategies for the two countries. The outcome of this choice is (32, 24), which is inferior to the cooperative outcome (46, 42) obtained when both countries trust each other and produce 2 million barrels each. This cartel game is another example of the Prisoners' Dilemma.
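The profit table can be recomputed from the demand curve and the extraction costs; a quick sketch (function name mine):

```python
# Inverse demand P = 120/Q - 5; extraction costs $2 (Iran) and $4 (Iraq).
def profits(q1, q2):
    """Daily profits of (Iran, Iraq) for outputs in millions of barrels."""
    P = 120 / (q1 + q2) - 5
    return ((P - 2) * q1, (P - 4) * q2)

for q1 in (2, 4):
    for q2 in (2, 4):
        print(q1, q2, profits(q1, q2))
# (2,2)->(46,42)  (2,4)->(26,44)  (4,2)->(52,22)  (4,4)->(32,24)
```

Reading off the rows: Iran prefers q1 = 4 whether Iraq produces 2 (52 > 46) or 4 (32 > 26), and symmetrically for Iraq, yet (32, 24) is worse for both than (46, 42).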

42. Example 2: Consider the following two-person game:

           L        R
  U      4, 4     2, 6
  D      5, 1     x, y

What are the possible values of x and y that turn this game into a Prisoners' Dilemma? For D to strictly dominate U we need x > 2 (5 > 4 already holds); for R to strictly dominate L we need y > 1 (6 > 4 already holds); and for the outcome (x, y) to be inferior to (4, 4) we need x < 4 and y < 4. Answer: 2 < x < 4 and 1 < y < 4.

43. The Public Transportation Dilemma. Commuters from A to B can choose between public transportation (a bus) and private cars. Denote by p the percentage of commuters who travel by car. Traveling time is: by bus, 45 + 0.5p; by car, 30 + 0.5p. The car ride is shorter regardless of the percentage of car drivers; therefore everyone uses the car rather than taking the bus. Hence the trip takes 30 + 0.5*100 = 80 minutes. If all took the bus, it would take 45 + 0.5*0 = 45 minutes.

44. Is the More the Better? Increasing all payoffs of one player exclusively - does it make him happier? Not necessarily! Consider the following simultaneous two-person game G:

  Game G
           L        R
  U      4, 6     7, 7
  D      3, 5     6, 4

The strategy U is strictly dominant for player 1; therefore he will choose it. Player 2 knows this, and her best reply to U is to play R. The outcome is (7, 7).

45. Suppose next that only player 1 gets a bonus. If he plays U, his payoff increases by 1 no matter what 2 chooses; if he plays D, his payoff increases by 3, again no matter what 2 chooses. What can 1 lose? All his payoffs increased. Is that so? The new game G1 is:

  Game G1
           L        R
  U      5, 6     8, 7
  D      6, 5     9, 4

Now D is a strictly dominant strategy of 1, and 2 knows this; hence 2 will choose L to obtain 5 (and not 4). The outcome is (6, 5). The exclusive bonus of 1 hurts him!
Remark: This last game is not a Prisoners' Dilemma game. Yes, the outcome (6, 5) is inferior to (8, 7). Yes, player 1 has a strictly dominant strategy, D. But 2 does not have a dominant strategy.

46. Adding a dominant strategy to one player - does it make him happier? Not necessarily! Consider again the game G:

           L        R
  U      4, 6     7, 7
  D      3, 5     6, 4

Consider next the game G2, where N is a new strategy of player 1 which strictly dominates the other two strategies, U and D:

           L        R
  U      4, 6     7, 7
  D      3, 5     6, 4
  N      5, 5    100, 4

The only sensible outcome of G2 is (5, 5), which is inferior to (7, 7).

47. More on: Is the More the Better? Example 1: A firm has three board members: B1, B2 and B3. One of them, B1, is the CEO. The board has to choose a strategy for the firm out of three strategies: S1, S2 and S3. The strategy is selected by a vote: each board member selects a strategy and submits his choice in a sealed envelope. The strategy that obtains a majority wins. If no strategy has a majority (all three board members select different strategies), the strategy chosen by the CEO (B1) is implemented. The CEO is therefore endowed with extra power.

48. The following table represents the rankings of the three board members over the possible strategies:

  Table 1
          B1    B2    B3
  S1       E     B     G
  S2       B     G     E
  S3       G     E     B

  (E = Excellent, G = Good, B = Bad)

The three board members act strategically. How would they vote? What strategy will be implemented?

49. Solution: First observe that it is a (weakly) dominant strategy for the CEO to vote for S1 (the best strategy for him). If he does so, S1 will win unless both B2 and B3 vote for S2, or both of them vote for S3; in these two cases the CEO has no impact on the outcome (no matter how he votes). Consequently, B1 will vote for S1. The two other board members understand this, and their strategic game can be described by the following table (under the assumption that B1 votes for S1):

50. Table 2 shows the winning strategy given the decisions of B2 and B3 (B2 chooses the row, B3 the column):

  Table 2
          S1    S2    S3
  S1      S1    S1    S1
  S2      S1    S2    S1
  S3      S1    S1    S3

The ranking of S1 by (B2, B3) is (B, G) - see Table 1. Their ranking of S2 is (G, E), and their ranking of S3 is (E, B). We can use Table 2 to write the outcome for B2 and B3 as a function of their decisions.

51. (B2 chooses the row, B3 the column; entries are the rankings of (B2, B3))

  Table 3
           S1      S2      S3
  S1      B, G    B, G    B, G
  S2      B, G    G, E    B, G
  S3      B, G    B, G    E, B

Looking at Table 3, we see that S2 is a weakly dominant strategy for B3 (he can't do better no matter what B2 does). Hence B3 votes for S2. But then the best reply of B2 is also to vote for S2. Consequently S2 is selected, which is the worst strategy for the CEO. Extra power may hurt!

52. More on: Is the More the Better? Example 1, part 2 - sequential voting: Suppose that the votes are open and taken in sequence: first B1 announces his choice, then B2, and then B3. Claim: The winning strategy is S3.

          B1    B2    B3
  S1       E     B     G
  S2       B     G     E
  S3       G     E     B

  (E = Excellent, G = Good, B = Bad)

53. Explanation: Suppose first that B1 votes for S1. Then clearly B2 will not vote for her worst option, S1. If she votes for S3, then B3, who votes after her, will vote for either S1 or S2; in both cases the outcome is S1 (B2's worst option). Therefore B2 will vote for S2, and then B3 will also vote for S2. The winning strategy will be S2, the worst option for B1. Consequently, B1 will not vote for S1 (and clearly not for S2). If B1 votes for S3, he induces B2 to vote for S3 as well, and the winning strategy is S3 (a good option for B1 and excellent for B2). Consequently, B1 will vote for S3.
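The backward-induction argument above can be checked mechanically: B3 best-responds to the two announced votes, B2 best-responds anticipating B3, and B1 anticipates both. A sketch (the numeric ranking encoding and tie-breaking order are my assumptions; ties do not affect the winner here):

```python
# Backward induction for the sequential vote in the order (B1, B2, B3).
# Rankings from Table 1; higher is better: 2 = Excellent, 1 = Good, 0 = Bad.
RANK = {"B1": {"S1": 2, "S2": 0, "S3": 1},
        "B2": {"S1": 0, "S2": 1, "S3": 2},
        "B3": {"S1": 1, "S2": 2, "S3": 0}}
S = ("S1", "S2", "S3")

def winner(v1, v2, v3):
    """Majority wins; with no majority, the CEO's (B1's) choice wins."""
    for s in S:
        if [v1, v2, v3].count(s) >= 2:
            return s
    return v1

def best_v3(v1, v2):
    return max(S, key=lambda v3: RANK["B3"][winner(v1, v2, v3)])

def best_v2(v1):
    return max(S, key=lambda v2: RANK["B2"][winner(v1, v2, best_v3(v1, v2))])

v1 = max(S, key=lambda v: RANK["B1"][winner(v, best_v2(v), best_v3(v, best_v2(v)))])
v2 = best_v2(v1)
v3 = best_v3(v1, v2)
print(v1, v2, "->", winner(v1, v2, v3))  # S3 S3 -> S3
```

B1 and B2 both announce S3, so S3 already has a majority and B3's vote no longer matters - matching the claim on the slide.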

54. Exercise: Show that S3 will be selected if the order in which the board members vote is (B1, B3, B2). Show that S2 will be selected if the order of the votes is (B2, B1, B3), (B2, B3, B1) or (B3, B2, B1). Show that S1 will be selected if the order of the votes is (B3, B1, B2).

55. Remark: We showed that when the board members vote simultaneously (that is, each member casts his/her vote with no idea of the choices of the other two), it is a (weakly) dominant strategy for B1 to vote for S1. Yet in sequential voting, where B1 announces his choice first, then B2 announces her vote, and then B3 makes his choice, B1 votes for S3 and not for S1. Is there a contradiction here? The answer is no. The explanation is a little tricky. In the simultaneous case each member has three strategies - vote for S1, S2 or S3 - and for B1 the strategy S1 dominates the other two against any pair of choices of B2 and B3. However, in sequential voting with the order (B1, B2, B3) the situation is different and more complex.

56. Remark (cont.): While B1 still has the same three strategies, B2 now has 27 strategies (3^3), and B3 has 19,683 strategies (3^9). Now S1 is no longer the best choice of B1 against all (so many) possible strategy combinations of B2 and B3. We will elaborate on this point later, when we study dynamic games. But let us provide some intuition here. Consider B2. Her strategy is now a plan of how to vote for every possible announcement of B1, who precedes her. One strategy of B2 is to vote for S1 irrespective of B1's announcement; another is to vote for S2 irrespective of B1's announcement; a third is to vote for S3 irrespective of B1's announcement. These three strategies correspond to the three strategies S1, S2 and S3 of B2 in the simultaneous case, where B2 cannot condition her choice on the choice of B1, simply because she does not know it.

57. Remark (cont.): Another strategy of B2 is to mimic the choice of B1. But there are many more. For instance, the strategy (S2, S3, S1) means that B2 will vote for S2 if B1 votes for S1, for S3 if B1 votes for S2, and for S1 if B1 votes for S3. Any such triple defines a strategy of B2, and there are 3*3*3 = 27 such triples (why?).

58. Successive Elimination of Dominated Strategies. Consider the following 5x6 two-person game (player 1 chooses the row, player 2 the column):

            a        b        c        d        e        f
  A      8, 12    12, 8    5, 15   10, 10    11, 9    9, 11
  B      7, 13    11, 9    6, 14    9, 11    8, 12    7, 13
  C      8, 20    26, 4    4, 6     8, 19    7, 7     7, 3
  D     19, 14   12, 14    4, 15    9, 10   10, 10    8, 19
  E     10, 10    22, 6    5, 15    20, 0    18, 2    14, 6

59. First note that b is strictly dominated by c and is therefore eliminated. Next note that in the new game C is strictly dominated by E and is therefore eliminated. Next note that in the new game a is strictly dominated by c and is therefore eliminated.

60. Next notice that D is strictly dominated by A and thus eliminated. Next, f is strictly dominated by c and thus eliminated. Next, e is strictly dominated by c and thus eliminated. Next, d is strictly dominated by c and thus eliminated. Finally, A and E are both strictly dominated by B. The unique outcome is (6, 14): player 1 should play B and player 2 should play c.
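The whole elimination sequence can be automated. A sketch of iterated elimination of strictly dominated strategies for this 5x6 game (the dictionary encoding and function name are mine):

```python
# payoffs[(row, col)] = (u1, u2); rows belong to player 1, columns to player 2.
A = {
 ("A","a"):(8,12), ("A","b"):(12,8), ("A","c"):(5,15), ("A","d"):(10,10), ("A","e"):(11,9), ("A","f"):(9,11),
 ("B","a"):(7,13), ("B","b"):(11,9), ("B","c"):(6,14), ("B","d"):(9,11),  ("B","e"):(8,12), ("B","f"):(7,13),
 ("C","a"):(8,20), ("C","b"):(26,4), ("C","c"):(4,6),  ("C","d"):(8,19),  ("C","e"):(7,7),  ("C","f"):(7,3),
 ("D","a"):(19,14),("D","b"):(12,14),("D","c"):(4,15), ("D","d"):(9,10),  ("D","e"):(10,10),("D","f"):(8,19),
 ("E","a"):(10,10),("E","b"):(22,6), ("E","c"):(5,15), ("E","d"):(20,0),  ("E","e"):(18,2), ("E","f"):(14,6),
}

def eliminate(rows, cols):
    """Iteratively remove strictly dominated rows (player 1)
    and columns (player 2) until nothing more can be removed."""
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(A[(r2, c)][0] > A[(r, c)][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:
            if any(all(A[(r, c2)][1] > A[(r, c)][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

print(eliminate(list("ABCDE"), list("abcdef")))  # (['B'], ['c'])
```

Only the pair (B, c) survives, reproducing the outcome (6, 14) derived step by step on the slides. (For strict dominance, the final result does not depend on the elimination order.)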

61. Nash Equilibrium

62. Nash Equilibrium. All the previous examples could be solved by elimination of dominated strategies. But there are situations where none of the strategies is dominated, or none of the remaining strategies (after successive elimination of dominated strategies) is dominated, and the game still has more than one possible outcome. The next step is to use the well-known concept of strategic equilibrium (Nash, 1950), known as Nash equilibrium.

63. Definition: A Nash equilibrium is a collection of strategies, one for every player, such that no player has an incentive to unilaterally deviate from his strategy (given that all other players stick to their strategies).

64. This one-page article by Nash (1949) earned him the Nobel Prize in 1994.

65. Example 1: Consider the following 3x3 two-person game:

            B1       B2       B3
  A1      2, 0     3, 3     4, 2
  A2      5, 5     4, 4     1, 4.5
  A3      3, 7     6, 6     2, 5

66. The only Nash equilibrium is the pair (A2, B1), with the outcome (5, 5). Note that (5, 5) is inferior to the outcome (6, 6), which is obtained when 1 chooses A3 and 2 chooses B2. But given the choice A3 of player 1, player 2 is best off deviating from B2 to B1 (to obtain 7), and therefore (A3, B2) is not a Nash equilibrium.

67. Example 2: Consider the following 2x2 two-person game:

           L        R
  U      2, 2     0, 0
  D      0, 0     1, 1

Here there are two Nash equilibrium points: (U, L) with the outcome (2, 2), and (D, R) with the outcome (1, 1).

68. Example 3: Stag Hunt. Each of two hunters can either try to hunt a stag (an adult deer, a rather large meal) or hunt a rabbit (tasty, but substantially less filling). If an individual hunts a stag, he must have the cooperation of his partner in order to succeed; however, a hunter can easily get a rabbit by himself. Each player must choose an action without knowing the choice of the other.

                       Hunter II: stag    Hunter II: rabbit
  Hunter I: stag            4, 4               0, 2
  Hunter I: rabbit          2, 0               2, 2

69. Example 4: Chicken (also known as the dove-hawk game in biology; think of two drivers entering a junction).

                          Player II: chicken    Player II: tough
  Player I: chicken            3, 3                  0, 6
  Player I: tough              6, 0                -10, -10

Equilibria: (tough, chicken) and (chicken, tough).

70. Example 6: Consider the following 5x5 two-person game:

            B1        B2        B3       B4       B5
  A1     10, 5      5, 10     7, 7     8, 10    6, -3
  A2      4, 9     15, 3      4, 12    6, 10    9, 9
  A3      8, 7      7, 8     10, 9     7, 8    15, 7
  A4     14, 11   12, 13     10, 9     5, 5     5, 14
  A5      6, 9      9, 6      8, 8     7, 7    10, 6

There are exactly two Nash equilibrium points: (A1, B4) with the outcome (8, 10), and (A3, B3) with the outcome (10, 9).

71. Is there an easy way to find all Nash equilibrium points of a game? The answer is yes. Here is the procedure: For every strategy of player 1 (every row), mark with * the upper-right corner of the cell containing the highest payoff of player 2 in that row. For the row A1 we mark the two cells (A1, B2) and (A1, B4), where 2 obtains 10; for the row A2 we mark the cell (A2, B3), where 2 obtains 12; and so on. Next, for every strategy of player 2 (every column), mark with * the upper-left corner of the cell containing the highest payoff of player 1 in that column.

72. For the column B1 we mark the cell (A4, B1), where 1 obtains 14; for the column B2 we mark the cell (A2, B2), where 1 obtains 15; and so on. The cells that are marked twice are the Nash equilibrium points. In the example above these cells are (A1, B4) and (A3, B3).
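The double-marking procedure is exactly a mutual-best-reply test, which is easy to code. A sketch for the 5x5 game of Example 6 (the list encoding is mine):

```python
# A cell is a Nash equilibrium iff it is a best reply for both players:
# the row maximizes player 1's payoff in its column, and the column
# maximizes player 2's payoff in its row.
import itertools

U = [[(10, 5), (5, 10), (7, 7), (8, 10), (6, -3)],
     [(4, 9), (15, 3), (4, 12), (6, 10), (9, 9)],
     [(8, 7), (7, 8), (10, 9), (7, 8), (15, 7)],
     [(14, 11), (12, 13), (10, 9), (5, 5), (5, 14)],
     [(6, 9), (9, 6), (8, 8), (7, 7), (10, 6)]]

n, m = len(U), len(U[0])
equilibria = [
    (i, j) for i, j in itertools.product(range(n), range(m))
    if U[i][j][0] == max(U[k][j][0] for k in range(n))   # row best reply
    and U[i][j][1] == max(U[i][l][1] for l in range(m))  # column best reply
]
print([(f"A{i+1}", f"B{j+1}") for i, j in equilibria])
# [('A1', 'B4'), ('A3', 'B3')]
```

The scan recovers exactly the two equilibria found by the starring procedure.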

73. Example 6a: Eliminating weakly dominated strategies can delete an equilibrium.

           α        β
  a      1, 1     0, 0
  b      0, 0     0, 0

The strategies b and β are (weakly) dominated. After eliminating these strategies we are left with the strategy combination (a, α), which is an equilibrium of the original game. But the combination (b, β) is also an equilibrium of the original game, and it was eliminated.

74. Example 6b: Eliminating weakly dominated strategies can delete an equilibrium. Consider the following 3x3 two-person game:

            B1       B2       B3
  A1      1, 1     0, 1     0, 1
  A2      1, 0     1, 0     0, 1
  A3      1, 0     0, 1     1, 0

The only Nash equilibrium here is (A1, B1), with the outcome (1, 1). This outcome is the best outcome in the table, but it is not sensible: note that A1 is (weakly) dominated by both A2 and A3, and B1 is (weakly) dominated by both B2 and B3.

75. We see that a Nash equilibrium may consist of weakly dominated strategies! No strictly dominated strategy can be part of a Nash equilibrium (why?). Let us also mention that this game has other Nash equilibrium points, but in mixed strategies (strategies selected by a random device), which do not use any weakly dominated strategies: every player uses the mixed strategy (0, 1/2, 1/2), meaning that he chooses one of his last two strategies at random, equally likely. We will study mixed strategies later on.

76. Is the More the Better: Can More Information Hurt the Informed Person? Two firms, IBM and JBM, compete in the software market. Each considers developing either software A or software B. They are not sure which of the two fits consumers' needs better; the two are equally likely to fit.

77. IBM moves first and decides which software to produce. The competitor, JBM, observes IBM's decision and then makes its own: whether to mimic IBM or to counter IBM's choice. If JBM mimics IBM, then each of them will earn $2b if the software fits consumers, and zero otherwise. If JBM counters IBM, the firm with the appropriate software will be a monopolist and will earn $5b, while the other firm obtains zero. Assume that when making decisions the firms' objective is to maximize expected profit.

78. Problem 1: What are the optimal strategies of each firm? Solution: JBM can either mimic IBM or counter its choice. If JBM mimics IBM, it will earn at most $2b (actually, its expected profit will be 1/2*2 + 1/2*0 = $1b). If JBM counters IBM's choice, it will obtain either $5b or zero, equally likely; namely, its expected profit will be $2.5b. Therefore JBM's best (dominant) strategy is to choose the opposite software from IBM. IBM is indifferent between software A and B, since it has no information about which software is the appropriate one. The expected profit of each firm is therefore $2.5b.

79. Problem 2: Suppose next that some "maven" knows which software is the appropriate one, and that he reveals this information only to IBM (and not to JBM). But JBM knows that IBM now knows which software is appropriate. What are the optimal strategies of the firms now? Solution: Notice that now IBM has a (strictly) dominant strategy, namely to develop the appropriate software. If it does so, it will earn at least $2b (and perhaps $5b); if it develops the wrong software, it will surely earn zero. JBM understands that IBM will choose the appropriate software, and hence it should mimic IBM to obtain $2b (rather than zero). Thus both firms obtain $2b. IBM ends up with a lower payoff as a result of the extra information it received.
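The payoff comparison between the two information regimes is a one-line expected-value calculation each; a sketch (variable names mine, amounts in $ billions):

```python
# Problem 1: IBM is uninformed, JBM counters. In each of the two equally
# likely states, the firm holding the right software earns 5, the other 0.
ibm_uninformed = 0.5 * 5 + 0.5 * 0   # = 2.5

# Problem 2: IBM is known to be informed and develops the right software;
# JBM therefore mimics, and both earn 2 in every state.
ibm_informed = 2.0

print(ibm_uninformed, ibm_informed)  # 2.5 2.0
# Publicly known extra information lowered IBM's expected profit.
```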

80. Problem 3: Suppose that IBM obtains the information about the right software exclusively, and suppose that JBM does not know whether or not IBM knows which software is appropriate. Can this exclusive information hurt IBM? Solution: No. Exclusive information always has non-negative value in any strategic conflict situation. Why? This is a little tricky and beyond the scope of this course...

81. Is the More the Better: Network Design and the Braess Paradox

82. Problem: Consider a road network with four points A, B, C, D, on which 6,000 drivers travel from point A to point B. The travel time in minutes on road A-C is 2 + 10h, where h is the number of thousands of cars on this road (if the number of cars on A-C is 2,000, then the travel time from A to C is 22 minutes). Likewise, the travel time on D-B is also 2 + 10h. The travel time on C-B or on A-D is 48 + h. What is the travel time of a car that goes from A to B when the system is in equilibrium (meaning that traveling through C takes the same time as traveling through D)?

83. Solution:
By symmetry, 3,000 drivers will travel from A to B through C and the other 3,000 through D.
That is, h = 3, and the travel time is T = (2 + 30) + (48 + 3) = 83 minutes.
In equilibrium, the travel time through C is identical to the travel time through D, namely 83 minutes.
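The equilibrium check above can be written as a tiny sketch (function names are hypothetical; h is measured in thousands of cars, as in the text):

```python
# Route times in the baseline Braess network, h in thousands of cars.
def time_ACB(h):          # A -> C -> B: (2 + 10h) on AC, (48 + h) on CB
    return (2 + 10 * h) + (48 + h)

def time_ADB(h):          # A -> D -> B: (48 + h) on AD, (2 + 10h) on DB
    return (48 + h) + (2 + 10 * h)

# With the 6,000 drivers split evenly, h = 3 on each route and the times agree:
print(time_ACB(3), time_ADB(3))  # 83 83
```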

84. Problem:
Now suppose that the two points C and D are connected by a relatively short road, and that the travel time on CD is 6 + h (for simplicity, assume that the road CD is one-way and one can only travel from C to D).
What is now the equilibrium travel time?

85. Solution:
The equilibrium travel time is now T = 92: in equilibrium 2,000 drivers take A–C–B, 2,000 take A–D–B, and 2,000 take the new route A–C–D–B, so (for example) a car on A–C–B spends (2 + 40) + (48 + 2) = 92 minutes.
Every driver spends 9 minutes more on the roads.
Perhaps 6 + h is just not efficient enough. What will the equilibrium travel time be if one can drive from C to D in no time at all (zero!)?

86. Solution:
It can be shown easily (by solving two linear equations in two unknowns) that this is the worst-case scenario.
The travel time is now T = 98.5454…
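The "two linear equations in two unknowns" can be solved in a short sketch. Assumptions (mine, not spelled out on the slide): by symmetry the flow is x thousand cars on A–C–D–B and y thousand on each of A–C–B and A–D–B, with x + 2y = 6; equilibrium equates the route times, giving 46 − c = (10 + s)x + 9y when the C–D link takes c + s·x minutes. The function name is hypothetical.

```python
from fractions import Fraction as F

def braess_equilibrium(link_const, link_slope):
    """Equilibrium flows when the C-D link time is link_const + link_slope * x.

    Equating time(A-C-B) = time(A-C-D-B):
        48 + y = link_const + link_slope*x + 2 + 10*(x + y)
    and substituting the flow constraint x = 6 - 2y gives y in closed form."""
    ls, lc = F(link_slope), F(link_const)
    y = (6 * (10 + ls) - 46 + lc) / (2 * (10 + ls) - 9)
    x = 6 - 2 * y
    travel_time = 2 + 10 * (x + y) + 48 + y   # time on route A-C-B
    return x, y, travel_time

# 6+h link: x = 2, y = 2, T = 92 (slide 85); zero-time link: T = 1084/11 = 98.5454...
print(braess_equilibrium(6, 1))
print(braess_equilibrium(0, 0))
```

Exact fractions are used so that the repeating decimal 98.5454… comes out as 1084/11 rather than a rounded float.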

87. Example 2:
Consider the following transportation network with a constant flow of 4,000 cars going from A to B, either through C or through D.
The two segments AD and CB are insensitive to congestion: it takes a car 45 minutes to drive each of them, no matter how many cars are on these segments. The other two segments AC and DB are sensitive to congestion: if the number of cars on AC or DB is x, then the driving time of every car on that segment is x/100 minutes (for example, if 3,000 cars go from A to C, each car spends 30 minutes there).

88. Since the network is symmetric, in equilibrium half of the flow (2,000 cars) will go through C and the other half through D.
The driving time is then 2000/100 + 45 = 65 minutes for each of the 4,000 cars. Note that the total time on the roads of all drivers is 4,000 × 65 = 260,000 minutes.
The equilibrium here is efficient in the sense that there is no other traffic pattern that reduces the total time spent on the roads.

89. For instance, suppose that 3,000 drivers go through C and the other 1,000 through D.
Then every car through C spends 3000/100 + 45 = 75 minutes and every car through D spends 45 + 1000/100 = 55 minutes. The total time spent is
T = 3,000 × 75 + 1,000 × 55 = 280,000 minutes,
which is higher than the equilibrium total. We say that the social cost of the equilibrium pattern is 260,000 and the social cost of the above pattern is 280,000. The optimal pattern is the one that minimizes social cost (that is, minimizes the total time spent by all cars on the roads). In this example the equilibrium pattern, where half of the cars go through C and the other half through D, is optimal.
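The two social-cost figures can be reproduced with a one-function sketch (the function name is hypothetical):

```python
# Social cost (total minutes) of a traffic split in Example 2.
def social_cost(n_via_C, n_via_D):
    # A-C and D-B take x/100 minutes for x cars; C-B and A-D take a flat 45.
    time_C = n_via_C / 100 + 45     # per-car time on route A-C-B
    time_D = 45 + n_via_D / 100     # per-car time on route A-D-B
    return n_via_C * time_C + n_via_D * time_D

print(social_cost(2000, 2000))  # 260000.0 -- the equilibrium split
print(social_cost(3000, 1000))  # 280000.0 -- the lopsided split from the text
```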

90. Next, suppose that C is connected to D in the most efficient way, so that it takes zero minutes to cross the road CD, independently of the number of cars on it.
Let us first analyze the equilibrium pattern. With the new road CD, no car will choose to go directly from A to D. Why?
It takes a car 45 minutes to drive from A directly to D. On the other hand, it takes a car at most 40 minutes to reach D through C (even if all 4,000 cars drive to D through C).
Therefore, in equilibrium, all 4,000 cars will go from A to C.
For exactly the same reason, no car will go directly from C to B (since going through D takes at most 40 minutes).

91. We conclude that in equilibrium all cars go from A to C, then from C to D, and then from D to B. The time spent by every car is then 4000/100 + 0 + 4000/100 = 80 minutes, and again adding a road hurts every driver, who now spends 15 minutes more than when the road connecting C to D is absent. The equilibrium social cost in this case is 4,000 × 80 = 320,000 minutes.

92. This is clearly not optimal, since by forbidding travel from C to D every car spends 65 minutes on the road and the social cost is then 260,000 minutes. But even this is not the optimal pattern.
To find the pattern that minimizes the social cost, suppose that x cars (out of the 4,000) go from A to C and the other 4,000 − x cars go from A to D. Also suppose that y of the x cars that reach C travel from C to D.

93. The total time spent by all cars in the network is
T = x·(x/100) + 45(x − y) + (4,000 − x + y)·((4,000 − x + y)/100) + 45(4,000 − x).
The first-order conditions for minimizing T are
(1) ∂T/∂x = x/50 + 45 − (4,000 − x + y)/50 − 45 = 0
(2) ∂T/∂y = −45 + (4,000 − x + y)/50 = 0
By (1) and (2) we get x* = 2,250 and y* = 500: 2,250 cars drive A–C (500 of which continue C–D), 1,750 cars drive A–D, and each congested segment carries 2,250 cars, taking 22.5 minutes.

94. The total time spent by all cars in the optimal pattern is
T* = 2,250²/100 + 45 × 1,750 + 2,250²/100 + 45 × 1,750 = 258,750 minutes,
which is better than the pattern in which the road connecting C to D is eliminated.
To summarize, the equilibrium social cost is T = 320,000 and the minimum social cost is T* = 258,750.
Hence, if we can induce the drivers to use the optimal pattern (perhaps through proper tolls), then the road CD can be useful.
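The minimization can be double-checked by brute force over a grid, a sketch using the objective T(x, y) exactly as written above (the step size 50 is my choice; it happens to contain the exact optimum):

```python
# Brute-force check of the social-cost minimization on the augmented network.
# x cars go A->C (the rest go A->D); y of them continue C->D on the zero-time link.
def total_time(x, y):
    return (x * x / 100                    # A->C: x cars, x/100 minutes each
            + 45 * (x - y)                 # C->B: flat 45 minutes per car
            + (4000 - x + y) ** 2 / 100    # D->B: 4000-x+y cars
            + 45 * (4000 - x))             # A->D: flat 45 minutes per car

best = min((total_time(x, y), x, y)
           for x in range(0, 4001, 50)
           for y in range(0, x + 1, 50))
print(best)  # (258750.0, 2250, 500)
```

The grid search recovers x* = 2,250, y* = 500 and the minimum social cost 258,750, matching the first-order conditions.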

95. In a communication network, say e-mail, every bit travels from A to B through some nodes of the net.
At each node a processor makes an independent decision about where to send each bit (like the driver who makes his own decision about which road to take).
In this case, too, more links may sometimes hurt the system.
A careful design of networks is critical.

96. Location Games
Two service providers have to choose their locations. The possible locations are corners on a certain straight street. Customers are uniformly distributed along the street, and each of them approaches the closest provider. If the two providers choose the same location, each serves half of the customers.

97. Location Games – Two Providers
Problem 1: Suppose that there are three possible locations A, C and B, where A and B are the two ends of the street and C is in the middle. Where will the two providers locate?
Solution: It is a strictly dominated strategy for each provider to choose either A or B: the location C strictly dominates both. Therefore, both providers will choose location C and each of them will serve half of the customers.

98. Problem 2: As in Problem 1, but now there are five possible locations A, C, D, E, B, with equal distance between any two adjacent locations.
Solution: The locations A and B are both strictly dominated by the location D. Thus, these two locations can be eliminated.

99. After eliminating A and B, the locations C and E are strictly dominated by D. Thus, the two providers will choose location D and will again split the market equally.
By the same argument (successive elimination of strictly dominated strategies), we conclude that whenever there is an odd number of equally spaced locations, the two providers choose the middle location and each serves half of the market.

100. Problem 3: As in Problem 1, but now there are only two possible locations, A and B.
Solution: It is easy to verify that no matter where the providers are located, each of them will serve half of the market.

101. Problem 4: As in Problem 1, but now there are four possible locations A, C, D, B.
Solution: The location A is strictly dominated by the location C, and the location B is strictly dominated by the location D. After eliminating A and B, the remaining two locations C and D are equally good for both providers, and each provider will serve half of the market.
Using successive elimination of strictly dominated strategies, we conclude that whenever the number of equally spaced locations is even, the two middle locations are the only equilibrium locations. In the four-location case the four Nash equilibrium points are (C,C), (C,D), (D,C) and (D,D).

102. Two providers with a finite number of locations and arbitrary distances
Consider the case where customers are uniformly distributed on the interval [0, 1]. Suppose that there are four possible locations, A = 1/3, C = 3/4, D = 7/8 and B = 1, as in the figure.
Where will the two providers locate?
Solution: First note that the location B is strictly dominated by the location A.
After eliminating B, the location D becomes strictly dominated by A.

103. Two providers with a finite number of locations and arbitrary distances
After eliminating D, we are left with A = 1/3 and C = 3/4.
Suppose that Provider 1 is located at C. If Provider 2 is also located at C, then Provider 1 obtains 1/2, while by moving to A he would obtain 1/3 + (3/4 − 1/3)/2 = 13/24 > 1/2. If Provider 2 is located at A, then Provider 1, located at C, obtains 1/4 + (3/4 − 1/3)/2 = 11/24; hence he is better off moving to A to obtain 1/2. Thus both providers will choose A.

104. Two Providers – The General Case
Suppose that there are two providers with n possible locations and arbitrary distances between them.
Claim: the only location that survives successive elimination of strictly dominated strategies is the one closest to the median customer (the customer located at the middle of the street). If there are two such locations (as in Problems 3 and 4), both are equally good for the providers.
The two providers split the market equally (whether they are located at the same corner or at two adjacent corners). The proof of the claim is along the same lines as the solutions of the previous examples.

105. Two Providers – The General Case
Example: Consider six locations A = 1/8, C = 3/8, D = 4/8, E = 5/8, F = 6/8 and B = 1 on the interval [0, 1].
D is the location of the median customer, and hence the two providers will choose D as their location.
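The claim and the example can be sketched in a few lines (the helper names are hypothetical; customers are uniform on [0, 1], so the median customer sits at 1/2):

```python
# Market shares of two providers at positions p1 <= p2 on [0, 1], and the
# location surviving elimination (the one closest to the median customer).
def shares(p1, p2):
    if p1 == p2:
        return 0.5, 0.5
    mid = (p1 + p2) / 2          # customers left of mid walk to p1
    return mid, 1 - mid

def best_location(locations):
    return min(locations, key=lambda p: abs(p - 0.5))

# Slide 105's example: A=1/8, C=3/8, D=4/8, E=5/8, F=6/8, B=1 -> D survives.
print(best_location([1/8, 3/8, 4/8, 5/8, 6/8, 1.0]))  # 0.5
```

`shares` also reproduces the slide-103 numbers, e.g. `shares(1/3, 3/4)` gives (13/24, 11/24).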

106. Three or More Providers
Consider three providers with two possible locations, the end points A = 0 and B = 1.
With three or more providers there are no dominated strategies, and the previous procedure of successive elimination of dominated strategies cannot be applied. The solution concept we employ here is therefore the Nash equilibrium.
Is it a Nash equilibrium for all three to be located at the same point, say A? The answer is no. Suppose all three providers are located at A. Each obtains 1/3 of the market. Any one of them who unilaterally deviates to B will obtain 1/2 of the market, and therefore has an incentive to do so.

107. Three or More Providers
Suppose that two providers choose A and the other one chooses B.
Certainly, the one at B, who obtains 1/2, has no incentive to deviate to A, since at A he would obtain only 1/3. Also, no one at A can benefit from switching to B (in both cases the deviant obtains 1/4). We conclude that the only equilibrium outcome is: two providers located at A and the other at B, or vice versa.

108. Next suppose there are three locations A = 0, C = 1/2 and B = 1, with three providers.
Is the outcome in which one provider chooses each of A, C and B an equilibrium?
Answer: No! The provider at A, who serves 1/4 of the market, is better off deviating to C: joining the provider there, he obtains (1/2 + 1/4)/2 = 3/8 > 1/4.

109. Is the outcome in which two providers choose C and one chooses B an equilibrium?
Answer: No! The provider at B, who serves 1/4 of the market, is better off deviating to C, where he will serve 1/3 of the market.
The only equilibrium is that all three providers choose location C.

110. Consider the following three-providers, three-locations problem, with A = 1/3, C = 3/4 and B = 1:
Check that the only equilibrium outcome is that two providers choose A and the other provider chooses C.
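The "check" suggested above can be done exhaustively by brute force. This is a sketch under my own modeling of shares (co-located providers split their segment equally); the helper names are hypothetical:

```python
from itertools import product

LOCS = {"A": 1/3, "C": 3/4, "B": 1.0}   # the three corners of slide 110

def shares(positions):
    # Market share of each provider; customers uniform on [0, 1] walk to the
    # nearest occupied point, and co-located providers split that point's segment.
    pts = sorted(set(positions))
    share_at = {}
    for i, p in enumerate(pts):
        left = 0.0 if i == 0 else (pts[i - 1] + p) / 2
        right = 1.0 if i == len(pts) - 1 else (p + pts[i + 1]) / 2
        share_at[p] = (right - left) / positions.count(p)
    return [share_at[p] for p in positions]

def is_nash(profile):
    base = shares([LOCS[s] for s in profile])
    for i, s in enumerate(profile):
        for dev in LOCS:
            if dev != s:
                alt = list(profile); alt[i] = dev
                if shares([LOCS[a] for a in alt])[i] > base[i] + 1e-12:
                    return False   # a strictly profitable unilateral deviation
    return True

eqs = {tuple(sorted(p)) for p in product(LOCS, repeat=3) if is_nash(p)}
print(eqs)  # {('A', 'A', 'C')}
```

Up to renaming the providers, the unique equilibrium is indeed two providers at A and one at C.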

111. No Restriction on Locations
Suppose next that a provider can choose any point in [0, 1] as his location (a continuum of possible locations). Let us start with two providers.
Claim: The only equilibrium with two providers is when both choose the middle point, 1/2, as their location.
Proof: First let us show that both providers choosing the middle location constitutes a Nash equilibrium. If they both choose 1/2, each serves half of the market.

112. Suppose that one provider deviates to x, x < 1/2.
The provider at x obtains (x + 1/2)/2. Since x < 1/2, he obtains less than 1/2 and loses from this deviation. A similar outcome obtains if he deviates to x > 1/2.
We conclude that both providers choosing the middle point 1/2 is a Nash equilibrium.
Let us next show that there is no other Nash equilibrium. Suppose first that the two providers choose the same location x, x ≠ 1/2. Each then obtains 1/2. If x < 1/2, then a provider who deviates slightly to the right of x, say to x′, obtains more than 1/2 (namely 1 − x′ + (x′ − x)/2) and benefits from the deviation.

113. If x > 1/2, a beneficial deviation is to the left of x: a deviant to x′ obtains x′ + (x − x′)/2 > 1/2.
We conclude that choosing the same location other than 1/2 is not a Nash equilibrium.
We are left with the case where the two providers choose different locations, say x and x′ with x ≠ x′. If x < 1/2, then the provider at x′ benefits from moving to a location just to the right of x; if x > 1/2, then the provider at x′ has an incentive to move to a point just to the left of x.

114. What happens if there are three providers? There is no Nash equilibrium in this case! Why?
If all three are located at the same point, say A, each obtains 1/3. Suppose that A ≤ 1/2. Then a deviant to a point just to the right of A increases his payoff from 1/3 to nearly 1 − A ≥ 1/2.
Suppose next that not all providers are located at the same point. Then there is at least one provider located alone, at say A, with no other provider between A and either 0 or 1. By slightly moving toward the other providers, the provider at A increases his payoff, and hence he has an incentive to deviate.

115. Problem: The customers are uniformly distributed on a circle of length (circumference) 1. There are three providers. Find the Nash equilibria in locations.
Solution: We claim that the set of Nash equilibria consists of all triples of locations 1, 2 and 3 with arc distances a, b and c (see figure) such that a ≤ 1/2, b ≤ 1/2, c ≤ 1/2 and a + b + c = 1.
Note that provider 1 obtains (a + c)/2, and this does not change if he moves toward either 2 or 3. Hence, he may only benefit by deviating to a point between 2 and 3, in which case he obtains b/2. Therefore, in equilibrium we must have (a + c)/2 ≥ b/2, or equivalently a + c ≥ b. Since a + b + c = 1, we have 1 − b ≥ b, i.e. b ≤ 1/2. By a similar argument (considering deviations by 2 or 3) we have a ≤ 1/2 and c ≤ 1/2.

116. Political Contest – The Median Voter
Two political candidates compete for a certain job (president, prime minister, etc.).
They are completely opportunistic, and their only goal is to win the job.
Suppose first that there are three relevant positions: left (L), center (C) and right (R).

117. Candidates choose positions strategically, and voters vote for the candidate closest to their own position.
If both candidates are equally close to a voter, he chooses one of them at random.
The candidate with the highest number of votes wins the job.
If they obtain the same number of votes, one of them is selected at random.

118. Example 1
Suppose that there are 4 million voters:
- 1 million of them prefer R
- 2 million of them prefer C
- the other 1 million prefer L.
What position does each candidate choose?

119. Table 1 - Allocation of voters
We construct two tables. The first gives the allocation of the 4 million voters to the candidates as a result of the candidates' strategic choices (rows: candidate 1; columns: candidate 2):

          L        C        R
L       2, 2     1, 3     2, 2
C       3, 1     2, 2     3, 1
R       2, 2     1, 3     2, 2

120. Table 2 - Outcome
The second table describes the outcome, again as a result of the strategy choices of the two candidates: 1 for the winner, 0 for the loser, and (½, ½) if both have the same chance of winning.

          L        C        R
L      ½, ½     0, 1     ½, ½
C      1, 0     ½, ½     1, 0
R      ½, ½     0, 1     ½, ½
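Both tables can be generated mechanically from the voter distribution. A sketch for Example 1 (the positions are encoded as points 0, 1, 2 on a line; the function names are hypothetical):

```python
# Derive Table 1 / Table 2 entries from the voter distribution of Example 1.
VOTERS = {0: 1, 1: 2, 2: 1}    # millions preferring L (=0), C (=1), R (=2)

def votes(p1, p2):
    v1 = v2 = 0.0
    for pos, mass in VOTERS.items():
        d1, d2 = abs(pos - p1), abs(pos - p2)
        if d1 < d2:   v1 += mass
        elif d2 < d1: v2 += mass
        else:         v1 += mass / 2; v2 += mass / 2   # indifferent voters split
    return v1, v2

def outcome(p1, p2):
    v1, v2 = votes(p1, p2)
    return (1, 0) if v1 > v2 else (0, 1) if v2 > v1 else (0.5, 0.5)

print(votes(0, 1))    # (1.0, 3.0)  -- the (L, C) cell of Table 1
print(outcome(1, 1))  # (0.5, 0.5) -- both at C: equal chance of winning
```

Re-running with other `VOTERS` dictionaries reproduces Tables 3-8 in the same way.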

121. Note that in Table 2, C is a strictly dominant strategy for both candidates. Consequently, both will position themselves in the center, and each has an equal chance to win.

122. Example 2
Suppose that there are 2 candidates and 5 million voters:
- 2 million of them prefer L
- 1 million of them prefer C
- 2 million prefer R.

123. Table 3 – Strategic competition

          L        C        R
L      ½, ½     0, 1     ½, ½
C      1, 0     ½, ½     1, 0
R      ½, ½     0, 1     ½, ½

Again C is a strictly dominant strategy, and each candidate will position himself in the center.

124. Example 3
Suppose that there are 2 candidates and 4 million voters:
- 1 million of them prefer L
- 0.5 million of them prefer C
- 2.5 million prefer R.

125. Table 4 – Strategic competition

          L        C        R
L      ½, ½     0, 1     0, 1
C      1, 0     ½, ½     0, 1
R      1, 0     1, 0     ½, ½

R is a weakly dominant strategy for each candidate. Note also that R can be obtained by successive elimination of strictly dominated strategies. The two candidates will position themselves on the right.

126. Example 4
Suppose there are four positions, EL (extreme left), L, C and R, and 5 million voters:
- 1 million prefer EL
- 1 million prefer L
- 1 million prefer C
- 2 million prefer R.

127. The corresponding game is based on the allocation of voters to candidates:

Table 5
          EL        L        C        R
EL     2½, 2½    1, 4    1½, 3½    2, 3
L       4, 1    2½, 2½    2, 3    2½, 2½
C      3½, 1½    3, 2    2½, 2½    3, 2
R       3, 2    2½, 2½    2, 3    2½, 2½

128. The strategic game is:

Table 6
          EL        L        C        R
EL      ½, ½     0, 1     0, 1     0, 1
L       1, 0     ½, ½     0, 1     ½, ½
C       1, 0     1, 0     ½, ½     1, 0
R       1, 0     ½, ½     0, 1     ½, ½

129. Here C is a weakly dominant strategy for each candidate. Note also that C can be obtained by successive elimination of strictly dominated strategies. Both candidates will be located in the center.

130. What do we learn from these examples?

131. Definition
A median position is a position such that at least half of the voters prefer that position or positions to the left of it, and at least half of the voters prefer that position or positions to the right of it.
Example 1 (voters 1, 2, 1 at L, C, R): C is the median position.

132. Example 2 (voters 2, 1, 2 at L, C, R): C is again the median position.

133. Example 3 (voters 1, 0.5, 2.5 at L, C, R): R is the median position.

134. Example 4 (voters 1, 1, 1, 2 at EL, L, C, R): C is the median position.

135. Observation
In all these examples the median position is either a strictly dominant strategy for both candidates, or can be obtained by successive elimination of strictly dominated strategies.

136. PROPOSITION
Consider two candidates with any number of positions that can be ranked from left to right. Suppose that the median position is unique. Then the median position is a weakly dominant strategy for each candidate, and it can be obtained by successive elimination of strictly dominated strategies.

137. Multiple Medians
Consider 4 positions A, B, C and D, and 6 million voters:
- 1 million prefer A
- 2 million prefer B
- 2 million prefer C
- 1 million prefer D.
Here B and C are both median positions.

138. The allocation of voters is:

Table 7
          A        B        C        D
A       3, 3     1, 5     2, 4     3, 3
B       5, 1     3, 3     3, 3     4, 2
C       4, 2     3, 3     3, 3     5, 1
D       3, 3     2, 4     1, 5     3, 3

139. Table 8 – Strategic game

          A        B        C        D
A       ½, ½     0, 1     0, 1     ½, ½
B       1, 0     ½, ½     ½, ½     1, 0
C       1, 0     ½, ½     ½, ½     1, 0
D       ½, ½     0, 1     0, 1     ½, ½

Note that A and D are strictly dominated strategies, and each candidate is indifferent between B and C. Therefore, each candidate will choose either the median position B or the median position C.

140. There are four possible sensible outcomes: (B,B), (B,C), (C,B) and (C,C). In each of them, the two candidates have an equal chance to win.

141. Bids and Auctions

142. The Campeau Tender Offer
Robert Campeau tried to buy Federated Stores (and its crown jewel, Bloomingdale's). He used a two-tiered tender offer: a bidding strategy that offers a high price for the first shares tendered and a lower price for the later shares tendered. Let us be more specific. Suppose that:
(1) The pre-takeover market price is $100 per share.
(2) The first tier of the bid offers a higher price of $105 per share, for up to half of the total shares.

143. (3) The second tier of the bid offers a lower price of $90 per share for the remaining shares. If a fraction s > 50% of the shares is tendered, each tendered share is bought at the blended price
P(s) = [105 × 50% + 90 × (s − 50%)] / s.

144. For example, if all shares are tendered, each receives P(100%) = (105 + 90)/2 = $97.5 per share, and P(62.5%) = (105 × 50 + 90 × 12.5)/62.5 = $102 per share.

145. (4) The two-tiered offer is made unconditional on the number of shares tendered: even if the raider does not gain control (less than 50%), shares are still purchased at $105 per share, the first-tier price.
(5) If the offer attracts more than 50% of the shares, Campeau can force every other shareholder to sell his shares for $90, the second-tier price.
It so happened that, in addition to Campeau, Macy's made a conditional offer: it offered $102 per share provided it obtained a majority of the shares. In that case every share would be bought at $102, whether tendered or not.
To whom should one tender?

146. Solution
Claim: It is a dominant strategy to tender to the two-tiered offer (TTO).
Explanation: Suppose Bob is a shareholder. Consider three cases:
(1) With Bob, the TTO attracts less than 50% of the total shares.
(2) Without Bob, the TTO attracts more than 50% of the total shares.
(3) Bob is pivotal: without him the TTO attracts less than 50%, and with him it attracts more than 50%.

147. In the first case, if Bob tenders to the TTO he obtains $105 per share; otherwise he obtains either $100 per share (if both offers fail) or $102 (if Macy's offer succeeds). Hence he is best off tendering to the TTO.
In the second case, the TTO succeeds and Campeau is the winner whether or not Bob tenders his shares. If Bob does not tender, he will obtain $90 per share, while if he tenders he will obtain at least $97.5 per share.
In the third case, under the assumption that Bob is not a massive shareholder (he holds less than 12.5%), he will get more than $102 per share if he tenders to the TTO, and either $100 or $102 per share otherwise.
In all three cases Bob is best off tendering to the TTO.

148. Corollary: Every shareholder will tender to the TTO, and each of them will obtain $97.5 per share, less than the current market price.
This is another example of the Prisoner's Dilemma game, where every player has a dominant strategy, but once they all use it the outcome is inferior.
Lawyers view the TTO as coercive and have successfully used this as an argument to fight raiders in court. In the battle for Bloomingdale's, Campeau eventually won, but with a different offer that did not include any tiered structure.

149. Remarks
The fact that Bob tenders to the TTO in the second case relies on Macy's offer not being unconditional. If it were, it is not clear what Bob's best-reply strategy would be: if he believes that more than 62.5% of the shares are tendered to the TTO, he is best off selling to Macy's, and our conclusion is destroyed.
In the third case, if Bob holds less than 12.5%, then by tendering to the TTO he raises the fraction of shares tendered to the TTO to less than 62.5%. Since P(62.5%) = 102, Bob will obtain more than $102 per share.

150. EQUILIBRIUM IN CASE MACY'S MAKES AN UNCONDITIONAL OFFER
Clearly, in this case tendering to the TTO is neither a dominant strategy nor an equilibrium strategy.
If everyone tenders to the TTO, each obtains $97.5 per share. Bob then has an incentive to deviate and offer his shares to Macy's, in which case he will obtain $102 per share (since Macy's offer is unconditional on having a majority of the shares).
Note also that there is no equilibrium where everyone tenders to Macy's. In that case Bob again has an incentive to deviate, this time to the TTO: if he does so, he will obtain $105, since no one else tenders to the TTO.

151. Nevertheless, there is an equilibrium in which 62.5% of the shares are tendered to the TTO and the other 37.5% are tendered to Macy's. In this case every share obtains a price of $102, and no one has an incentive to deviate.
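The blended price behind the $97.5 and $102 figures can be sketched as a small helper (a hypothetical function, using the per-share prices from the text: $105 on the first 50% of total shares, $90 thereafter):

```python
# Average price per tendered share in the two-tiered offer, as a function of
# the percentage of all shares tendered (assumed > 50%).
def blended_price(tendered_pct):
    first_tier = min(tendered_pct, 50.0)       # shares paid $105
    second_tier = max(tendered_pct - 50.0, 0.0) # shares paid $90
    return (105 * first_tier + 90 * second_tier) / tendered_pct

print(blended_price(100.0))  # 97.5  -- everyone tenders (the Prisoner's Dilemma outcome)
print(blended_price(62.5))   # 102.0 -- the mixed equilibrium against Macy's $102 offer
```

The equilibrium split is exactly the tendering fraction at which the TTO's blended price matches Macy's $102.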

152. Acquisition
Example 1: Suppose you represent Company A (the acquirer), which is currently considering acquiring Company T (the target) by means of a tender offer. You are unsure what price to offer. The value of T depends on the outcome of a major oil exploration project it is currently undertaking. In the worst case (if the exploration fails completely), Company T under the current management will be worth nothing ($0 per share). In the best case (a complete success), the value under the current management will be $100 per share. Assume that all share values between $0 and $100 are equally likely. Company A is known to have a very skillful team, and by all estimates Company T will be worth 50% more in the hands of Company A. Finally, suppose that T will accept any offer that is greater than or equal to its value under its own management.
You are asked to make an offer for T's shares. The offer must be made now, before the outcome of the drilling project is known. But Company T will know the project's outcome when deciding whether or not to accept your offer.

153. Solution: Suppose you bid $60 per share. If your bid is accepted, then T must be worth at most $60 per share under the current management (otherwise T would reject your offer). Since all values between 0 and 60 are equally likely, on average T's share is worth $30 to the current owner and $45 to you. By bidding $60 per share you should expect, on average, a loss of $15.
In general, if you bid $B per share you should expect to lose B/4 per share. Why? If your offer is accepted, the value of T's share must lie between 0 and B, all values equally likely. Thus, on average, that value is B/2 under the current management. For you this value is 50% higher, namely 3B/4. You bid $B per share and obtain, on average, 3B/4 per share; hence on average you lose B/4 per share.
Consequently, your best bid is zero. This exercise produces the winner's curse, in which any positive bid yields an expected loss to the bidder.
An experiment on this example was run with MBA students at Northwestern University: 123 students had no monetary incentive and 66 students had some monetary incentive.
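The B/4-loss argument can be verified with a Monte Carlo sketch (my own illustration of the setup above; the function name is hypothetical, and the loss is measured per accepted offer, as in the text):

```python
# Monte Carlo check of the winner's curse: T's stand-alone value is uniform on
# [0, 100], worth 1.5x to A, and T accepts only bids covering its value.
import random

def profit_per_accepted_offer(bid, trials=200_000, seed=1):
    rng = random.Random(seed)
    gains, accepted = 0.0, 0
    for _ in range(trials):
        value = rng.uniform(0, 100)       # T's value under current management
        if value <= bid:                  # T accepts only if the bid covers it
            accepted += 1
            gains += 1.5 * value - bid    # value to A minus the price paid
    return gains / accepted

print(round(profit_per_accepted_offer(60)))  # ≈ -15, i.e. -B/4 for B = 60
```

Conditioning on acceptance is the crucial step the students' "bid between 50 and 75" argument misses.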

154. The table below summarizes the results:

Bids       No Monetary Incentive (N = 123)   With Monetary Incentive (N = 66)
0                       9%                               8%
1-49                   16%                              29%
50-59                  37%                              26%
60-69                  15%                              13%
70-79                  22%                              20%
80+                     1%                               4%

The majority of bids in the no-monetary-incentive experiment were in the range between $50 and $75. The (false) argument behind them is roughly: "The expected value of T is $50, which makes it worth $75 to A; therefore I should bid somewhere between 50 and 75." This argument fails to take into account the asymmetric information. A correct analysis must compute the expected value of T conditional on the bid being accepted.

155. Example 2: Company A considers buying Company B by means of a tender offer. Company B will accept any offer of A that reflects a fair value.
Company B is currently undertaking a major project. If the project is a complete failure, the fair value of each share of B will be $40; if it is a complete success, the fair value of a share will be $100. The outcome of the project can vary from a complete failure to a complete success, and all outcomes are equally likely. That is, the fair value of a share of B can be any number between 40 and 100, all equally likely.
The management of A is significantly more skillful than that of B: under the management of A, the share price of B will be 50% higher than under B's current management.

156. Suppose A makes an offer to acquire B before knowing the outcome of the project. Company B, on the other hand, will decide whether or not to accept the offer after knowing the outcome of the project.
What price per share should A offer B?
Solution: Suppose A offers to purchase all the shares of B at a price b per share. Then B will accept the offer only if the fair value of a share is between 40 and b. Since all numbers in that range are equally likely, the expected (average) fair value of a share of B is (40 + b)/2.

157. The expected fair value of a share of B under the skillful management of A is 50% higher, namely
(3/2) · (40 + b)/2 = 3(40 + b)/4.
Hence, conditional on B accepting the offer of A, the expected profit per share for A is
3(40 + b)/4 − b = (120 − b)/4.
But the probability that B accepts the offer b of A is
(b − 40)/60.

158. Therefore, the expected profit per share for A when offering b is
[(b − 40)/60] · [(120 − b)/4] = (b − 40)(120 − b)/240.
As a function of b this is an inverted parabola, maximized at b = 80.
Consequently, A is best off offering $80 per share of B. On average, A will then obtain a profit of $6.67 per share.
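The maximization can be confirmed numerically, using the expected-profit expression derived above (the function name is hypothetical):

```python
# A's expected profit per share as a function of the bid b (Example 2):
# acceptance probability (b - 40)/60 times conditional profit (120 - b)/4.
def expected_profit(b):
    accept_prob = (b - 40) / 60               # value is uniform on [40, 100]
    profit_if_accepted = 1.5 * (40 + b) / 2 - b   # equals (120 - b)/4
    return accept_prob * profit_if_accepted

best = max(range(40, 101), key=expected_profit)
print(best, round(expected_profit(best), 2))  # 80 6.67
```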

159. Auctions
One of the most striking applications of game theory is auction theory. William Vickrey won the Nobel Prize in 1996, mainly for his seminal 1961 paper on auctions. An auction is a strategic game. The players are the seller of an object and the potential buyers (bidders). The strategy of the seller is the choice of the auction format. The strategy of a bidder is how much to bid, and it varies from auction to auction.

160. Auctions serve as an efficient way to transact all kinds of objects: government contracts, houses, cars, art objects, antiques, fresh flowers, livestock, government bonds, mobile phone licenses, oil drilling rights. The most striking auctions are the ones facilitating the transfer of assets from public to private hands.
Auctions eliminate the use of market makers (brokers, specialists), induce competition, and usually award the objects to the bidders who value them the most. Sellers use them primarily because they are unsure about the value potential buyers assign to the object.

161. History of Auctions
Auctions have been used (according to Herodotus) in Babylon (500 B.C.), where wives were auctioned. The ancient Greeks auctioned mine concessions. In 193 A.D. the entire Roman Empire was put on auction by the Praetorian Guard. The winning bid of the winner, Didius Julianus, was a promise to pay 25,000 sesterces per man to the Guard. Julianus was beheaded two months later, serving as probably the first and most extreme example of the well-known winner's curse phenomenon.

162. VALUATIONS
The valuation a bidder assigns to an object is his maximum willingness to pay. Namely, if the value is $100, the bidder is indifferent between paying $100 to obtain the object and not buying the object; he will not pay a penny more for it. If every bidder knows his own value of the object at the time of bidding, we call this the PRIVATE VALUES case.

163. The value of a bidder is known to himself but not to the other bidders. In other situations the worth of the object is not known to the bidders. If these unknown values are equal, we call this the COMMON VALUES case; if they are not the same, they are INTERDEPENDENT VALUES. A typical example of common values is the sale of a tract of land with an unknown amount of oil underground.

164. Bidders may acquire different partial information about the value of the oil tract through access to different geological tests, and they will form estimates of this value based on the test results. Their bids will depend on these estimates, while bids in the private values case depend only on their values.

165. Strategies vs. Values
The strategy of a bidder is how much to bid in the specific auction in which he participates. This depends on the value he assigns to the object, if he knows this value, or on his estimate of the value if he does not. Valuation and strategy are two different notions: as a bidder I may assign a value of $1,000 to a certain picture yet bid only $500, if I strongly believe that the other bidders will bid below $500.

166. Several Well-Known Auctions
We distinguish two types of auctions: open-bid auctions and sealed-bid auctions.
Open-Bid Auctions
1. English Ascending Auction (EAA). The price of the object is gradually raised, usually in small increments, until only one bidder remains. This bidder wins the auction and pays the last price quoted. Once a bidder quits the auction, he is not allowed to come back.

167. 2. The Japanese Auction. Similar to the EAA except that the price rises continuously. All interested bidders keep pressing a button, and as long as at least two bidders press the button the price rises (on a screen). When the price gets too high for a bidder, she removes her finger from the button, indicating that she is out. Once only one active bidder remains, the auction is over. The winner (the last remaining bidder) pays the price at which the second-to-last bidder dropped out.

168. 3. The Dutch Descending Auction (DDA). This type of auction is used in the sale of flowers in the Netherlands. It works exactly opposite to the EAA: the auction starts at a very high price, which is then lowered continuously (usually by a giant clock that ticks down). The first bidder to call out "Mine" stops the clock and wins the object at the price at which he stopped the clock.
4. On-line auction. Similar to the EAA except that it is conducted online.

169. Sealed Bid Auctions
1. First Price Sealed Bid Auction (FPA). Bidders submit bids independently in sealed envelopes. The envelopes are opened simultaneously. The highest bidder wins the object and pays his bid. Such auctions are used for the sale of mineral rights on government-owned land. UK treasury securities are sold by a version of the FPA (a multi-unit auction).

170. 2. Second Price Sealed Bid Auction (SPA). Defined similarly to the FPA except that the winner pays the second highest bid. A version of the SPA is used, for instance, for the sale of US treasury securities.
Question: If you were selling an antique, would you sell it in a FPA or a SPA?

171. Answer: It is true that in a first price auction you obtain the highest bid, while in the second price auction you obtain only the second highest bid. But remember, bidders may bid higher in the SPA, since they know that if they win they pay only the bid below the winning bid. Indeed, in circumstances where bidders are risk neutral and, roughly speaking, have similar backgrounds (together with some other technical assumptions), the SPA outperforms the FPA for the seller.

172. HOW SHOULD BIDDERS BID?
The Private Values Case
RESULT 1: It is a dominant strategy for every bidder to raise his bid and outbid other bidders up to his value in the open bid ascending auctions: the English Auction, the Japanese Auction and the On-line Auction. It is a dominant strategy for every bidder to bid his valuation in the Second Price Sealed Bid Auction.
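The second claim of Result 1 can be checked mechanically. The sketch below is our own illustration (the payoff function, grid and fair-coin tie-break are assumptions, not from the slides): for every possible highest rival bid, bidding one's valuation in a SPA is never worse than any alternative bid.

```python
def spa_payoff(value, bid, rival_high):
    """Expected payoff in a second price sealed bid auction:
    the winner pays the highest rival bid; ties are broken by a coin flip."""
    if bid > rival_high:
        return value - rival_high
    if bid < rival_high:
        return 0.0
    return (value - rival_high) / 2

value = 100
# Compare bidding the valuation against every other bid, for every rival bid.
for other_bid in range(0, 201):
    for rival_high in range(0, 201):
        assert spa_payoff(value, value, rival_high) >= spa_payoff(value, other_bid, rival_high)
print("bidding the valuation is weakly dominant on this grid")
```

The exhaustive check is of course only over a finite grid, but it mirrors the logic of the proof: overbidding only risks paying more than the value, and underbidding only risks losing an object worth more than the price.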

173. By a dominant strategy we mean a strategy which is best for the player IRRESPECTIVE of the other players' strategies. In particular, if you bid on eBay, think very carefully about the maximum price you are willing to pay for the object (your valuation), say $100. Then place an automatic bid that raises your bid as much as needed up to your valuation and quits the auction if the price exceeds your valuation. This strategy is optimal no matter what the other bidders bid; it is also easy to implement and is not subject to any strategic manipulation.

174. RESULT 2: The optimal bidding strategy in a First Price Sealed Bid Auction is exactly the same as in the Dutch Descending Auction. In these two auctions bidders do not have a dominant bidding strategy, and the equilibrium strategy is usually very complicated to compute, as it depends (unlike the previous case) on the beliefs of the bidders about the other bidders' valuations of the object. Actually, the general bidding strategy is: BID YOUR ESTIMATE (the expected value) OF THE SECOND HIGHEST VALUATION AMONG THE BIDDERS, ASSUMING YOURS IS THE HIGHEST. But how do we compute this estimate?

175. Example: Suppose that there are two (n=2) bidders and it is commonly known that their valuations of the object lie between 0 and 100, equally likely. Namely, every bidder knows his own valuation and does not know the valuation of his rival. But he knows that it is a number between 0 and 100, and that all the numbers there have the same probability of occurring (uniform distribution). This reflects the fact that a bidder has no prior information about the taste of his rival. Suppose that a bidder has a valuation of 60. How much should he bid? By the above rule, he should assume (technically) that his valuation is the highest. Namely, his rival's valuation is between 0 and 60, equally likely.

176. Then the expected value of his rival's valuation must be 30, and he therefore should bid b(60)=30. Similarly, if the bidder's valuation is v, he should bid b(v) = v/2.
Suppose next that the bidders' valuations lie between a and B (instead of 0 to 100), equally likely. Then a bidder with valuation v should bid the midpoint of a and v:
b(v) = a/2 + v/2

177. Suppose next that there are n=3 bidders and their valuations lie between 0 and 100, equally likely. A bidder with a valuation of 60, when assuming that his valuation is the highest, knows that the valuations of the other two bidders are between 0 and 60, equally likely. The expected value of the higher of the two is 40 (and of the lower of the two, 20). He therefore should bid b(60)=40. If the valuations of the three bidders lie between a and B, equally likely, then a bidder whose valuation is v should bid
b(v) = (1/3)a + (2/3)v

178. The general case: Suppose that there are n bidders and their valuations are commonly known to lie between a and B, equally likely. Then in a First Price Sealed Bid Auction (or in a Dutch Auction) a bidder with valuation v should bid:
b(v) = (1/n)a + ((n-1)/n)v
Note that the bid is increasing in n and approaches v when n is large, meaning that if there are sufficiently many bidders, the winner will pay an amount very close to his valuation, and the seller, who has an incentive to increase the number of bidders, will obtain an amount very close to the highest valuation. Also note that the auction is efficient in the sense that the bidder with the highest valuation wins the auction.

179. Example: There are 4 bidders and their valuations all lie between 100 and 1,000, equally likely. What should be the bid of a bidder with a valuation of 700 in a FPA?
Answer: n=4, a=100, B=1,000 and v=700
b(700) = (1/4)*100 + (3/4)*700 = 550
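The formula b(v) = (1/n)a + ((n-1)/n)v and the worked examples above can be packaged in a few lines (a sketch; the function name is ours):

```python
def fpa_bid(v, a, n):
    """Equilibrium bid in a FPA with n bidders whose valuations are
    i.i.d. uniform on [a, B]: b(v) = a/n + (n-1)/n * v.
    Note that the upper bound B does not enter the formula."""
    return a / n + (n - 1) / n * v

print(fpa_bid(60, 0, 2))     # 30.0  (two bidders on [0, 100])
print(fpa_bid(60, 0, 3))     # 40.0  (three bidders on [0, 100])
print(fpa_bid(700, 100, 4))  # 550.0 (four bidders on [100, 1000])
```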

180. REMARK: Bidding your valuation, while a dominant strategy in the ascending open auctions and in the second price auction, is a dominated (non-sensible) strategy in the first price auction and in the Dutch auction. In the latter two auctions this strategy guarantees the bidder a payoff of zero, whether or not he wins the auction.

181. The Complete Information Case
Suppose that the valuations of n bidders for a given object are v1 > v2 ≥ v3 ≥ … ≥ vn, and suppose that these valuations are commonly known. In particular, every bidder knows not only his own valuation but also the valuations of all the other bidders.

182. Let us examine the equilibrium strategies in first, second and third price auctions under the assumption that bids must be integer numbers.
Example: Consider four bidders (n=4) whose valuations are v1=100, v2=90, v3=50 and v4=40.

183. First Price Auction
Consider several possible combinations of bids:
(1) b1=95, b2=94, b3=45 and b4=35
These bids constitute an equilibrium point, but a non-sensible one: bidder 2 uses a weakly dominated (w-dominated) strategy. With her bid of 94 she can either obtain zero or lose 4; bidding any number below her valuation is a better strategy for her.

184. (2) b1=90, b2=88, b3=45 and b4=35
This is not an equilibrium point, since bidder 1 is better off reducing his bid to 89 and still winning the auction.
(3) b1=88, b2=87, b3=45 and b4=35
This is not an equilibrium point, since bidder 2 is better off outbidding bidder 1 by bidding 89. In this case her payoff increases from 0 to 1.

185. (4) b1=91, b2=90, b3=45 and b4=35
This is an equilibrium point, but a non-sensible one: again, bidder 2 uses a w-dominated strategy (she obtains zero even if she wins; any bid below 90 dominates her bid of 90).
(5) b1=90, b2=89, b3<50 and b4<40
This constitutes the unique sensible equilibrium outcome. Note that here every bidder bids below his or her valuation.

186. Summary: Suppose that
v1 > v2 ≥ v3 ≥ … ≥ vn
Then in a first price auction:
Every sensible Nash equilibrium must satisfy: b1=v2, b2=v2-1, b3<v3, …, bn<vn
Every Nash equilibrium (sensible or not) is of the form: b1=x, b2=x-1, b3≤x-1, …, bn≤x-1, where v2 ≤ x ≤ v1
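The equilibrium claims in examples (1) to (5) can be verified by brute force. The helper below is our own sketch (the fair-coin tie-break is an assumption that matches the reasoning in the examples): it tests whether any bidder has a profitable unilateral deviation to another integer bid.

```python
def expected_payoff(i, bids, values):
    """Expected payoff of bidder i in a first price auction;
    ties for the highest bid are broken uniformly at random."""
    top = max(bids)
    winners = [j for j, b in enumerate(bids) if b == top]
    if i not in winners:
        return 0.0
    return (values[i] - bids[i]) / len(winners)

def is_nash(bids, values):
    """True if no bidder gains by unilaterally switching to another integer bid."""
    for i in range(len(bids)):
        base = expected_payoff(i, bids, values)
        for dev in range(max(values) + 2):
            trial = list(bids)
            trial[i] = dev
            if expected_payoff(i, trial, values) > base + 1e-9:
                return False
    return True

values = [100, 90, 50, 40]
print(is_nash([95, 94, 45, 35], values))  # True, though bidder 2 plays a dominated bid
print(is_nash([90, 88, 45, 35], values))  # False: bidder 1 gains by bidding 89
print(is_nash([88, 87, 45, 35], values))  # False: bidder 2 gains by bidding 89
print(is_nash([90, 89, 45, 35], values))  # True: the sensible equilibrium
```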

187. (B) Second Price Auction
First recall that in a second price auction it is a w-dominant strategy for every bidder to bid his or her valuation. Hence the only sensible equilibrium is b1=v1, b2=v2, …, bn=vn. But there are many more non-sensible equilibrium points. Consider the example above where v1=100, v2=90, v3=50 and v4=40.

188. The combination b1=95, b2=90, b3=50, b4=40 is also an equilibrium, with the same outcome as the sensible equilibrium; namely, bidder 1 wins and pays v2=90. There are other equilibrium points where bidder 2 wins.

189. Let b1=80, b2=105, b3=50, b4=40
This is an equilibrium where bidder 2 wins, pays 80 and obtains a payoff of 90-80=10. If bidder 1 outbids bidder 2, he will win but will have to pay 105 (more than his valuation). Clearly this equilibrium is not sensible, since bidder 2 bids above her valuation and bidder 1 bids below his valuation.

190. (C) Third Price Auction
It can easily be verified that bidding below one's valuation is a weakly dominated strategy. Hence in every sensible equilibrium bi ≥ vi must hold. Back to our example: v1=100, v2=90, v3=50 and v4=40.

191. Consider the following bid combination:
b1=100, b2=90, b3=50, b4=40
This is not an equilibrium, since bidder 2 is better off outbidding bidder 1: she will then win the object for the price of 50 (the third highest bid). For the same reason the following combination is not an equilibrium:
b1=115, b2=110, b3=60, b4=50

192. We conclude that to support an equilibrium where bidder 1 wins, bidder 3 must bid at least v2 (90 in our example). The following combination
b1=115, b2=110, b3=93, b4=60
is an equilibrium, and a sensible one. We can change the bid b3 of bidder 3 to any number between v2 and v1 (in our example, 90<b3<100) and the bids will still constitute a sensible equilibrium.

193. In general, a sensible equilibrium in a third price auction when v1 > v2 ≥ v3 ≥ … ≥ vn is characterized by:
For every i, bi ≥ vi
b1 is the highest bid
The third highest bid (the price the winner pays) is between v2 and v1 (including v2 and v1)
The winner is bidder 1, who pays the seller between v2 and v1 for the object.

194. Any number between v2 and v1 can be supported in a sensible equilibrium as the seller's revenue. For instance, the seller obtains a revenue of 99 from the following sensible equilibrium bid combination:
b1=115, b2=110, b3=99, b4=60
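A small helper (ours, for illustration) computes the winner and the price in a k-th price sealed bid auction and reproduces the third price examples above:

```python
def kth_price_outcome(bids, k):
    """The highest bidder wins and pays the k-th highest bid
    (k=1: first price, k=2: second price, k=3: third price).
    Bidders are indexed from 1, matching the text."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winner = order[0] + 1
    price = bids[order[k - 1]]
    return winner, price

print(kth_price_outcome([115, 110, 93, 60], 3))  # (1, 93): the sensible equilibrium
print(kth_price_outcome([115, 110, 99, 60], 3))  # (1, 99): seller revenue of 99
print(kth_price_outcome([55, 110, 105, 50], 3))  # (2, 55): bidder 2 wins cheaply
```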

195. Finally, note that there are (non-sensible) equilibrium points where bidder 2 is the winner and the seller obtains far below 90. Consider the following bid combination:
b1=55, b2=110, b3=105, b4=50
Here bidder 2 wins and pays 55 to the seller. If bidder 1 outbids bidder 2, he becomes the winner but has to pay 105, over his valuation. Note that bidder 1 uses a w-dominated strategy, as he bids below his valuation.

196. RISK-AVERSE BIDDERS
As a seller, do you wish to confront risk-loving, risk-neutral or risk-averse bidders? If your answer is risk-loving bidders, you are wrong. Risk-averse bidders bid more aggressively in a first price auction (or in the Dutch auction) than risk-neutral or risk-loving bidders. They raise their bids to ensure a higher probability of winning the object; the risk here is failing to win the object. This is in contrast to the second price auction (or the English, Japanese or On-line auctions), where bidding truthfully (your valuation) is a dominant strategy for any type of bidder, irrespective of risk attitude.

197. THE SELLER'S REVENUE: AUCTION COMPARISON
Assuming that all bidders are risk neutral, together with some other standard assumptions, it can be shown that for the same number of bidders in each auction: the English Ascending Auction OUTPERFORMS the Second Price Sealed Bid Auction; the Second Price Sealed Bid Auction OUTPERFORMS the First Price Sealed Bid Auction; and the First Price Sealed Bid Auction IS EQUIVALENT to the Dutch Descending Auction.

198. That is, if you believe that the bidders on eBay are risk neutral, it is a good idea to sell your item on their site, since on-line auctions are equivalent to English Auctions. The above comparison is valid for both the private and the common values cases.

199. HOWEVER: If the bidders are risk averse, then in certain scenarios (independent valuations) the first price auction yields the seller a higher revenue than the second price auction. The reason is that bidders bid more aggressively in the former auction. Also, if the seller is risk averse and the bidders are risk neutral, the seller again may benefit more from the first price auction than from the second price auction.

200. PROBLEMS with the English Ascending Auction
1. ENTRY DETERRENCE. As stated above, the English Auction is in many circumstances the best auction for the seller. However, this result is based on the assumption that the same number of bidders participate in all the auctions under comparison. In reality the English Auction may deter entry. For instance, in the auctions for mobile phone licenses, non-incumbent firms with even a small disadvantage will see no point in entering an open ascending auction.

201. Preparation for a large scale auction is usually quite costly, and whatever sensible bid a non-incumbent firm makes, it is likely that an incumbent firm will outbid it. A non-incumbent firm may also be trapped by the winner's curse if she wins an English Auction against incumbent firms. To ease this problem, Klemperer in 1998 suggested the Anglo-Dutch Auction. For example, in an Anglo-Dutch Auction for three licenses, the price rises continuously until, say, five bidders remain (the English stage).

202. Then these five bidders submit sealed bids with a minimum price, which is the price that cleared the English stage. The highest three bidders win the licenses and each pays the third highest price. Here "weak" bidders have an incentive to enter such an auction, since they have a reasonable chance to win the sealed bid competition if they can be among the five winners of the English stage.

203. 2. COLLUSION. Open ascending auctions are subject to sophisticated collusion using signals. It is quite amazing that over 80% of the violations of the antitrust law are in auctions. In Germany in 1999 ten blocks of spectrum were sold by an open ascending auction. The rule was that any new bid on every block (except for the initial bid) had to be higher by at least 10% than the previous bid. Two serious companies were bidding in this auction.

204. The first one, Mannesmann, opened with a bid per megahertz of 18.18m DM on the first five blocks and 20m DM on the second five blocks. The second company, T-Mobil, interpreted the first bid of 18.18m as an offer by Mannesmann: live and let live. Namely, raise our bid of 18.18m on the first five blocks by 10% and bid 20m, but do not outbid us on the second five blocks. T-Mobil understood the signal well, and indeed they won the first five blocks for 20m DM while Mannesmann won the second five blocks, also for the same price.

205. COMMON VALUES and the WINNER'S CURSE
A well known feature of auctions with common values is the WINNER'S CURSE. The bidders have the same value, but they do not know this common value. For example, when licenses to drill for oil in undersea tracts are put on auction, the amount of oil in a tract is the same for everybody. But the buyers' estimates of how much oil is likely to be in a tract will depend on their differing geological surveys. Such surveys are not only expensive, they are also very unreliable. Some buyers will therefore receive optimistic surveys, and others will receive pessimistic surveys. Who will win the auction?

206. The announcement that a bidder has won the object should lead to a decrease in his estimate. In this sense winning brings "bad news". If Bob treats his survey's estimate of the value of the tract as a private value, then he will win whenever his survey is the most optimistic. But when Bob realizes that his winning the auction implies that all the other surveys are more pessimistic than his, he will curse his bad luck at winning! If he had known at the outset that all the other surveys were more pessimistic than his, he wouldn't have bid so high. A failure to recognize this effect and take it fully into account when deciding on a bidding strategy may result in the winner paying more than the true value. This happens often in practice. In the example above, suppose that many oil companies compete for the drilling rights to a certain tract of land. Each company obtains an estimate of the value of this tract (the number of oil barrels that can be extracted times the oil price). On average these estimates are likely to be close to the true value.

207. But every firm observes only its own estimate and not those of the other companies. Given the difficulty of estimating the amount of oil underground, the estimates may vary substantially. Even if companies bid, say, 15% below their estimates, in the presence of many companies it is likely that the firm whose expert provides the highest estimate wins the auction but loses money. The news that your estimate is the highest is worse the larger the number of competitors. Thus, the magnitude of the winner's curse increases with the number of bidders in the auction. BIDDERS NEED TO SHADE THEIR BIDS WELL BELOW THEIR INITIAL ESTIMATES TO AVOID THE WINNER'S CURSE.
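A quick Monte Carlo sketch makes the point concrete. All numbers here (the common value, the noise range, the 15% shading, the number of bidders) are our illustrative assumptions, not data from the text:

```python
import random

random.seed(1)
TRUE_VALUE = 100.0   # common value of the tract, unknown to the bidders
NOISE = 30.0         # each estimate is off by up to +/- 30, uniformly
BIDDERS = 10
ROUNDS = 10_000

naive_profit = 0.0
for _ in range(ROUNDS):
    estimates = [TRUE_VALUE + random.uniform(-NOISE, NOISE) for _ in range(BIDDERS)]
    # Naive bidders shade only 15% below their own estimate; highest bid wins.
    winning_bid = max(0.85 * e for e in estimates)
    naive_profit += TRUE_VALUE - winning_bid

print(naive_profit / ROUNDS)  # negative: the winner systematically overpays
```

The winner is, by construction, the firm with the most optimistic estimate, so a shading rule that would be safe against an average estimate is not safe against the highest of ten estimates.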

208. Experimental Evidence for the Winner's Curse
A jar full of coins was auctioned off to MBA students at Boston University. Unknown to the students, each jar held $8 worth of coins. Students submitted sealed bids and were told that the auction was a first price one; namely, the highest bidder would receive the money in the jar minus his or her bid. A total of 48 auctions were conducted, 4 in each of 12 classes. No feedback was provided until the entire experiment was completed. Students were also asked to estimate the value of each jar. The mean estimate of the value of the jars was $5.13, which clearly works against the winner's curse (this low value suggests that students were risk averse). Nevertheless, the mean winning bid was $10.01.

209. Field data evidence on bidding for oil and gas drilling rights. Consider the 1,223 Gulf of Mexico leases issued between 1954 and 1969:
Firms suffered an average present value loss of $192,128 per lease, using a 12.5% discount rate.
62% of all leases were dry; consequently the winners had no revenues at all.
16% of all leases were unprofitable, although some production occurred.
The other 22% of all leases were profitable, and these leases earned only an 18.74% rate of return.
These results appear to reflect excessive enthusiasm for the amount of oil likely to be found. The data surprisingly suggest that bidders make systematic errors. The rationality assumption in economics turns out to be inconsistent with bidders' behavior.

210. THE MAX-BID AUCTION, for FUN
A car with a value of $35,000 is put up for sale in the following auction. A bid is not allowed to exceed $5,000, and every bidder is allowed to place as many bids as he wishes. But once the auction receives a total of 4,000 bids, it terminates. The winner is the bidder who places the highest UNIQUE bid, and he pays this bid. As an example, if two or more bids equal $5,000 then these bids are crossed out; if then two or more bids equal $4,999, these bids too are crossed out.

211. If there is only one bid of $4,998, then the bidder who placed this bid wins the car and pays $4,998. The seller is not a philanthropist, and he charges $10 per bid. Every bidder can buy as many bids as he wishes. Furthermore, a bidder who wishes to buy a number of bids has the right to do so only if he can place all of them (namely, when adding his bids to the existing bids the total number of bids does not exceed 4,000). The seller is guaranteed to extract almost $45,000 and to net around $10,000. We claim that a bidder who wins the car will pay substantially less than $35,000. HOW?

212. SOLUTION!
Place 2,001 consecutive bids from the top down (from $5,000 to $3,000) and you ensure your win.* The 1,999 remaining bids of the other bidders cannot cover all your bids, so you are guaranteed to have the unique highest bid. Your cost is $20,010 for the 2,001 bids plus the winning bid, a total of approximately $25,000.
*If there are already more than 1,999 bids and you cannot place 2,001 bids, stay out.
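The counting argument can be checked by simulation. In this sketch the 1,999 rival bids are drawn at random for illustration; the argument itself does not depend on what they are, since 1,999 bids can cancel at most 1,999 of your 2,001 distinct values, and any rival bid inside your range collides with one of yours.

```python
import random

random.seed(7)
MAX_BID, TOTAL_BIDS, FEE = 5_000, 4_000, 10

my_bids = list(range(3_000, 5_001))  # 2,001 bids, from $3,000 up to $5,000
others = [random.randint(1, MAX_BID) for _ in range(TOTAL_BIDS - len(my_bids))]

# Cross out every amount bid more than once; the highest surviving bid wins.
counts = {}
for b in my_bids + others:
    counts[b] = counts.get(b, 0) + 1
winning_bid = max(b for b, c in counts.items() if c == 1)

print(winning_bid in my_bids)            # True: one of my bids must win
print(len(my_bids) * FEE + winning_bid)  # my total cost: fees plus the winning bid
```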

213. Incentives
The owner of a high-tech company is trying to develop and market a new computer game. If he succeeds he will make a profit of $200,000; if he fails he makes nothing. Success or failure of this project depends on the expert employee and on the effort she exerts. With high-quality (H) effort she will succeed with probability 80%; with low-quality (L) effort the figure drops to 60%.

214. Such an expert can be hired for $50,000, but for this salary she will exert only the low effort level. To induce her to exert the H effort level, the owner has to pay her $20,000 more than a low-effort expert obtains; namely, he has to offer her $70,000. But this is not enough. If the salary does not depend on performance, she has no incentive to exert the high effort, since effort is costly. So what type of contract will induce the expert to exert high effort?

215.
           Chance of Success   Expected Revenue   Salary    Expected Profit (Revenue - Salary)
L-effort   60%                 $120,000           $50,000   $70,000
H-effort   80%                 $160,000           $70,000   $90,000
Clearly the owner would do better to hire an H-effort expert at the higher salary. But there is a problem: one can't tell by looking at the expert's working day whether she exerts H effort or L effort, since the project may fail in both cases with at least 20% probability. The owner has to base his reward scheme on something he can observe, namely on success (S) or failure (F). Without it, the only sensible outcome is that the owner pays the employee a fixed salary of $50,000 and she exerts L effort.

216. Both the expert and the owner could do better with a contract that is based on S or F. As an example, suppose that the owner pays the expert a bonus of 50% of the actual profit. Then if she exerts high effort she will earn an expected amount of:
0.5*0.8*200,000 = $80,000
If she exerts low effort she obtains:
0.5*0.6*200,000 = $60,000
So she will exert H effort. The owner will obtain:
0.5*0.8*200,000 = $80,000
and both of them are better off.

217. The owner should offer a larger sum upon the S outcome and a smaller sum upon the F outcome. The difference between the two sums (the bonus for success) should be just large enough that high effort raises the expected earnings of the expert by $20,000. Suppose that she is paid x if S and y if F. Then
0.8x + 0.2y = 70,000 (if she exerts H effort) and
0.6x + 0.4y = 50,000 (if she exerts L effort)

218. Incentives
Hence x - y = 100,000 must hold. On the other hand, the expected payoff should be 70,000. Therefore
0.8x + 0.2y = 70,000
This together with x - y = 100,000 (or x = 100,000 + y) implies
0.8*(100,000 + y) + 0.2y = 70,000
y + 80,000 = 70,000
y* = -10,000 and x* = 90,000

219. That is, the owner should pay the expert $90,000 if the outcome is S, and the expert should pay a fine of $10,000 if the outcome is F. This leaves the owner with an expected profit of:
0.8*200,000 - 70,000 = $90,000
The expert obtains an expected payoff of $70,000, and both the owner and the expert are better off.
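The algebra above can be double-checked in a few lines (a sketch that follows the text's substitution step):

```python
# Solve 0.8x + 0.2y = 70,000 and 0.6x + 0.4y = 50,000.
# Subtracting the equations gives 0.2(x - y) = 20,000, so x - y = 100,000.
diff = 100_000
y = 70_000 - 0.8 * diff   # from 0.8*(diff + y) + 0.2*y = 70,000
x = diff + y
print(x, y)               # 90000.0 -10000.0: the payment on S and the fine on F

# Check both incentive conditions:
assert abs(0.8 * x + 0.2 * y - 70_000) < 1e-9  # expected pay under H effort
assert abs(0.6 * x + 0.4 * y - 50_000) < 1e-9  # expected pay under L effort
```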

220. This is exactly what the owner could make if he could observe the quality of effort by direct supervision. This incentive scheme is basically a sale of 50% of the firm to the expert for a price of $10,000. Her net payoff is therefore $90,000 if S and -$10,000 if F, so it is in her interest to exert H effort in order to increase the chance of S. Note that with this scheme the worst case scenario is a profit of $10,000 to the owner and a loss of $10,000 to the expert. There may be implementation problems with these schemes. First, imposing a fine on an employee may not be legal, or the worker may not have the capital to pay the $10,000.


222. Suppose next that the expert must be guaranteed to be paid at least $30,000 (to pay mortgage, car insurance, basic needs). In this case y = $30,000, and since x - y = 100,000 must hold, x = $130,000. The expected profit of the owner is
0.8*(200,000 - 130,000) + 0.2*(-30,000) = 50,000

223. If he hires an expert and pays him a salary which is not based on performance, he will obtain an expected profit of
(0.6*200,000 + 0.4*0) - 50,000 = 70,000 > 50,000
Conclusion: If an expert must be guaranteed at least $30,000, the optimal strategy of the owner is to hire one for $50,000 only and to expect only a low-effort performance. For an increase of 20% in the success rate, a contract that induces the expert to exert high effort is too expensive for the owner.

224. PAY FOR PERFORMANCE (continuous)
Example (David Kreps)
Safelite Glass is the largest automobile-glass replacement company in the US. Naturally their technicians play a key role in the company. Safelite paid their technicians an hourly wage and, to no one's surprise, found that their technicians took a lot of time to complete a job, more than necessary or reasonable. Safelite decided to motivate their technicians to ALIGN their interest with that of Safelite. A piece-rate system was set up, called PPP.

225. Safelite in addition offered a guaranteed minimum hourly wage: if the sum of the piece-rate payments earned in a given week fell below what they would be paid according to this guaranteed hourly wage, they would be paid by the hour, at least as long as they achieved a minimum amount of work in their time on the job (say, what they had accomplished before PPP). Suppose that before the PPP was set up, technicians were paid $12 per hour for a 40-hour work week, giving gross pay of $480 per week. Suppose that

226. by and large they managed to do 10 windshields per week. Unit labor cost per windshield is therefore $48. Suppose Safelite's national load is 5,000 windshields per week; if technicians do on average 10 windshields per week, Safelite needs to employ 500 technicians, at a cost of 500*$480 = $240,000 per week. Suppose next that Safelite sets a piece rate of $30 per windshield. A technician must finish more than 16 windshields in a week to make more than

227. the $480 guaranteed wage rate. Suppose that 100 technicians do 20 windshields a week, earning $600 each week, while the other 400 technicians are unwilling to work this hard and continue to average 10 windshields each, taking their guaranteed $480. Since 5,000 windshields must be done, the 100 hardworking technicians do 2,000, leaving 3,000 for the lazier technicians, who on average do 10 per week each. Hence Safelite needs

228. only 400 technicians, and its wage bill is 100*$600 + 300*$480 = $204,000 per week, which is 85% of what it was before. No surprise: if part of the workforce has an average unit labor cost of $30 per windshield and the rest an average of $48 per windshield, the overall average is below $48.
Exercise: Suppose that whenever Safelite offers a per-piece rate of $(30+x), the 100 hardworking employees increase their performance by (10x)%, so each of them takes care of 20(100+10x)/100 windshields, while the other workers continue to

229. do 10 units a week. Find the optimal x for Safelite.
Solution: The number of units the 100 hardworking employees do a week is 100*[20(100+10x)/100], which is equal to 20(100+10x), and they are paid a total of (30+x)*20(100+10x). Safelite thus keeps [5,000-20(100+10x)]/10 other employees, paying each $480. In total the cost is
C = (30+x)*20(100+10x) + 480*[500-2(100+10x)]
Opening the brackets and rearranging terms, we have
C = 200x^2 - 1,600x + 204,000
The total cost is minimized at x* = 1,600/400 = 4. Consequently, Safelite should offer a per-piece rate of $34, and C = 204,000 - 3,200 = $200,800.

230. The company then hires only 320 technicians. We do not claim that a 100% guarantee is best for the company; it might do better with 70% of the previous guarantee. The assumption here is that the higher the per-piece rate, the more effort the technicians will choose to exert in order to be on the per-piece part of the compensation scheme. With a 100% guarantee, a technician's choice is to do 10 windshields per week and earn $480, or do 17 or more (at the $30 piece rate) to earn more.
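The cost function from the exercise can also be minimized numerically as a check (a sketch; the integer search range is our assumption):

```python
def weekly_cost(x):
    """Safelite's weekly wage bill at a per-piece rate of $(30 + x):
    the 100 hard workers each do 20*(100 + 10x)/100 windshields on piece rate,
    everyone else does 10 per week on the $480 guarantee."""
    hard_output = 100 * 20 * (100 + 10 * x) / 100
    piece_pay = (30 + x) * hard_output
    guaranteed_workers = (5_000 - hard_output) / 10
    return piece_pay + 480 * guaranteed_workers

# The quadratic 200x^2 - 1,600x + 204,000 is minimized at x* = 4:
best = min(range(0, 11), key=weekly_cost)
print(best, weekly_cost(best))  # 4 200800.0
```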

231. But with a 70% guarantee, which is $336 per week, the same technician only needs to increase his work rate by 20%, to 12 per week, to beat the guarantee. As an illustration, suppose that with a 70% guarantee and a $30 per-piece rate, 100 workers will do 20 units, another 100 workers 15 units, another 100 workers 13 units, and the rest 10 units, all per week. To do 5,000 windshields a week the company will hire a total of 320 workers: 20*100 + 15*100 + 13*100 + 10*20 = 5,000. Its total cost is 4,800*$30 + $336*20 = $150,720 < $204,000.

232. Even if the company raises the per-piece rate from $30 to $40, assuming no change in the effort level and performance, its total cost will be 4,800*$40 + $336*20 = $198,720 < $204,000.

233. The Two Stereo Stores: An Advertising Strategy
Crazy Eddie announced: "We cannot be undersold. We will not be undersold. Our prices are the lowest, guaranteed. Our prices are insane." His main competitor, Newmark & Lewis, is no less ambitious. With any purchase, you get the store's "Lifetime low-price guarantee". It promises to rebate double the difference if you can find a lower price elsewhere. They wrote:

234. "If, after you purchase, you find the same model advertised or available for sale for less (confirmed printed proof required) by any other local stocking merchant, in this marketing area, during the lifetime of your purchase, we, N&L, will gladly refund (by check) 100% of the difference, plus an additional 25% of the difference, or, if you prefer, N&L will give you a 200% gift certificate refund (100% of the difference plus an additional 100% of the difference, in gift certificates)."

235. ARE THE TWO STORES REALLY COMPETITIVE?
They sound very competitive. They promise to beat the rival's price, and one of them is even committed to refund customers if it is not the cheapest. SURPRISINGLY, THIS CAN LEAD TO A PRICE-SETTING CARTEL. HOW COME?

236. Suppose that the two stores sell the same model of DVD, which costs them $100 wholesale, and suppose both CE and N&L are selling it for $150 a piece. Also suppose that the monopoly price is $200; this is the price a monopolist would set to maximize its profit. What happens if N&L raises its price to $160? Effectively it sells the DVD for $140 (it refunds a customer double the difference between its price and CE's price). Customers will switch from CE, who charges $150, to N&L, who effectively charges only $140.

237. To avoid losing customers, CE has no choice but to raise its price to $160 as well. At this point N&L will again raise its price, say to $170, and CE will have to follow N&L until they reach the cartel price of $200. At this point neither of them has an incentive to change its price. For instance, CE has no incentive to slightly reduce its price in order to capture the market, since it knows that this would automatically trigger a doubled decrease in N&L's effective price.

238. MOST FAVORED CUSTOMER CLAUSE
This clause asserts that the seller will offer the favored customers the best price it offers to anyone. That is, if the seller later offers a discount to any new customer, it has to offer the same discount to all the previous favored customers. This makes it possible to sustain a cartel: if all cartel members are committed to this clause, then none of them will have an incentive to compete by offering selective discounts to attract new customers away from their rivals.

239. Doing so would trigger the same discount for all their established clients. A well known antitrust case before the Federal Trade Commission involved Du Pont, Ethyl, and other manufacturers of gasoline additives. They were charged with using the most favored customer clause. The Commission ruled that this clause has an anticompetitive effect, and forbade the companies from using such clauses in their contracts with customers.

240. Backward reasoning and forward looking. The inevitable truth about gambling is that one person's gain must be another person's loss. Thus it is especially important to evaluate a gamble from the other side's perspective before accepting it. For if they are willing to gamble, they expect to win, which means they expect you to lose. Someone must be wrong, but who? This case study looks at a bet that seems to profit both sides. That can't be right, but where's the flaw?

241. There are two envelopes, each containing an amount of money; the amount of money is either $5, $10, $20, $40, $80, or $160, and everybody knows this. Furthermore, we are told that one envelope contains exactly twice as much money as the other. The two envelopes are shuffled, and we give one envelope to Ali and one to Baba. After both envelopes are opened (but the amounts inside are kept private), Ali and Baba are given the opportunity to switch. If both parties want to switch, we let them.

242. Suppose Baba opens his envelope and sees $20. He reasons as follows: Ali is equally likely to have $10 or $40. Thus my expected reward if I switch envelopes is (10+40)/2 = $25 > $20. For gambles this small, the risk is unimportant, so it is in my interest to switch. By a similar argument, Ali will want to switch whether she sees $10 (since she figures she will get either $5 or $20, which has an average of $12.50) or $40 (since she figures to get either $20 or $80, which has an average of $50).

243. Something is wrong here. Both parties can't be better off by switching envelopes. What is the mistaken reasoning? Should Ali and/or Baba offer to switch? A switch should never occur if Ali and Baba are both rational and assume that the other is too. The flaw in the reasoning is the assumption that the other side's willingness to switch envelopes does not reveal any information. We solve the problem by looking deeper into what each side

244. thinks about the other's thought process. First we take Ali's perspective about what Baba thinks. Then we use this from Baba's perspective to imagine what Ali might be thinking about him. Finally, we go back to Ali and consider what she should think about how Baba thinks Ali thinks about Baba. Actually, this all sounds much more complicated than it is. Using the example, the steps are easier to follow. Suppose that Ali opens her envelope and sees $160. In that case, she knows that she has the greater amount and hence is unwilling to

245. participate in a trade. Since Ali won't trade when she has $160, Baba should refuse to switch envelopes when he has $80, for the only time Ali might trade with him occurs when Ali has $40, in which case Baba prefers to keep his original $80. But if Baba won't switch when he has $80, then Ali shouldn't want to trade envelopes when she has $40, since a trade will result only when Baba has $20. Now we have arrived at the case in hand. If Ali doesn't want to switch envelopes when she has $40, then there is no gain from trade when Baba finds $20, and by the same logic none when Ali finds $10.

246. The only person who is willing to trade is someone who finds $5 in the envelope, but of course the other side doesn't want to trade with him.
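The unraveling argument of the last few slides can be run mechanically: start by assuming every amount-holder is willing to switch, then repeatedly drop anyone whose remaining trading partners make switching a losing proposition. This is my own sketch of the argument, not part of the deck.

```python
# Sketch of the envelope unraveling: who is still willing to trade?
amounts = [5, 10, 20, 40, 80, 160]
willing = set(amounts)          # start: everyone imagines switching is good

changed = True
while changed:
    changed = False
    for x in sorted(willing, reverse=True):
        # A trade can only happen with a partner who is still willing.
        partners = [a for a in (x // 2, x * 2)
                    if a in amounts and a in willing]
        if partners and sum(partners) / len(partners) <= x:
            willing.discard(x)  # switching no longer gains in expectation
            changed = True

print(sorted(willing))  # [5] -- only the $5 holder still wants to trade
```

The elimination runs from the top down: $160 drops out first, then $80, $40, $20 and $10, leaving only the $5 holder, exactly as the slides conclude.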

247. Dynamic Games

248. Dynamic Games: Games in Tree Form
In the previous chapters we analyzed mostly simultaneous strategic interactions, where players choose their actions simultaneously. We next study sequential interactions, where players make alternating moves. The IBM versus JBM example had this feature: IBM made its software decision first, and JBM made its choice after observing IBM's choice. We use trees to describe the dynamic evolution of these games.

249. Example: A brand company B controls the market for air-conditioning units. A new company E is about to enter this market. If E enters, B has two choices: accommodate E by accepting a lower market share, or fight a price war. If B accommodates the entry, E will make a profit of $100,000, but if B starts a price war, it will cause E to lose $200,000. If E stays out, its profit is zero. The outcome for E is described by the following tree:
Figure 1: E chooses enter or stay out (stay out: 0); if E enters, B chooses fight (E loses 200,000) or accommodate (E gains 100,000).

250. Suppose next that B, as a monopolist, is able to make a profit of $300,000. Sharing the market with E reduces its profit to $100,000. Fighting a price war will leave B with a profit of $50,000. The tree game (payoffs listed as E, B) is then:
Figure 2: E enters, then B accommodates: (100,000, 100,000) or B fights: (-200,000, 50,000); E stays out: (0, 300,000).
Looking ahead and reasoning back, we see that if E enters, then the best reply of B is to accommodate E (since 100,000 > 50,000). E takes this into account and chooses to enter, obtaining 100,000.
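The look-ahead-and-reason-back logic on this slide can be written out as a tiny backward-induction computation. A minimal sketch (my own illustration, with the Figure 2 tree encoded by hand; payoffs are (E, B)):

```python
# Backward induction on the entry game: at each decision node the mover
# picks the branch that maximizes his own coordinate of the payoff.
def best(mover, options):
    """mover: 0 for E, 1 for B; options: branch name -> payoff tuple."""
    return max(options.values(), key=lambda payoff: payoff[mover])

# B moves second: accommodate (100k, 100k) vs fight (-200k, 50k).
b_reply = best(1, {"accommodate": (100_000, 100_000),
                   "fight": (-200_000, 50_000)})
# E moves first, anticipating B's best reply.
outcome = best(0, {"enter": b_reply, "stay out": (0, 300_000)})
print(outcome)  # (100000, 100000): E enters, B accommodates
```

Replacing the hand-coded payoffs reproduces any of the small trees in this chapter.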

251. Suppose next that there is uncertainty about B's profit if E enters. Suppose that E believes that in case B fights, B will obtain $50,000 with probability 1/3, $200,000 with probability 1/3, and $60,000 with the remaining probability of 1/3. If B accommodates E, then it will obtain $100,000 with certainty. The tree game is then:
Figure 3: as in Figure 2, except that after a fight, Nature chooses B's profit of 50,000, 200,000 or 60,000 with probability 1/3 each (E loses 200,000 in every case), for an expected payoff of (-200,000, 103,333 1/3).

252. If B fights, it will obtain an expected profit of (1/3)*50,000 + (1/3)*200,000 + (1/3)*60,000 = 103,333 1/3. If B accommodates, it obtains only $100,000. The resulting tree is:
Figure 4: E enters, then B accommodates: (100,000, 100,000) or B fights: (-200,000, 103,333 1/3); E stays out: (0, 300,000).
Hence B will fight, and E stays out.
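The expected-profit comparison on this slide can be checked with exact fractions (a sketch, not part of the deck):

```python
# B's expected profit from fighting, as in Figure 3, computed exactly.
from fractions import Fraction

fight = sum(Fraction(1, 3) * v for v in (50_000, 200_000, 60_000))
accommodate = 100_000
print(fight)                 # 310000/3, i.e. 103,333 1/3
print(fight > accommodate)   # True: B prefers to fight, so E stays out
```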

253. Solving dynamic games (games in tree form) backwards always yields a Nash equilibrium (a claim that needs to be proven!). To see this, let us examine the game in Figure 2. Its strategic form (Figure 5; payoffs in thousands, listed as E, B) is:

           Acc         Fight
Enter    100, 100    -200, 50
Out        0, 300       0, 300

Here (Enter, Acc) is the perfect equilibrium and (Out, Fight) is a non-perfect equilibrium.

254. There are two Nash equilibrium points here:
(Enter, Accommodate) → (100, 100)
(Out, Fight) → (0, 300)
But only the first is obtained by what we call backward induction. The other equilibrium is obtained when B threatens to fight any entry, and E, the potential entrant, takes the threat seriously and retreats. Is this equilibrium sensible? We argue that it is not! The threat of B is not credible: if E enters and B fights, then B obtains only 50,000, while if B accommodates E, B obtains 100,000. Thus E should not believe B's threat to fight, and E should enter. Notice that the strategy "Fight" of B is indeed weakly dominated by its strategy "Accommodate". The Nash equilibrium obtained by backward induction is called a perfect equilibrium (PE).
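The two equilibria of the strategic form can be found mechanically by checking every cell for a profitable unilateral deviation. A minimal sketch (my own illustration; payoffs in thousands as in Figure 5):

```python
# Enumerate pure-strategy Nash equilibria of the 2x2 entry game.
payoffs = {
    ("Enter", "Acc"):   (100, 100),
    ("Enter", "Fight"): (-200, 50),
    ("Out",   "Acc"):   (0, 300),
    ("Out",   "Fight"): (0, 300),
}
E_moves, B_moves = ("Enter", "Out"), ("Acc", "Fight")

equilibria = []
for e in E_moves:
    for b in B_moves:
        ue, ub = payoffs[(e, b)]
        # (e, b) is an equilibrium iff neither player gains by deviating.
        if (all(payoffs[(e2, b)][0] <= ue for e2 in E_moves) and
                all(payoffs[(e, b2)][1] <= ub for b2 in B_moves)):
            equilibria.append((e, b))

print(equilibria)  # [('Enter', 'Acc'), ('Out', 'Fight')]
```

Only the first of the two is the backward-induction (perfect) equilibrium; the check above cannot distinguish them, which is exactly the point of the slide.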

255. Example: Consider the following game in tree form.
Figure 6: Player 1 chooses l1 or r1; r1 ends the game with (5, -20). After l1, Player 2 chooses L, ending with (20, 7), or R, after which Player 1 chooses l2, ending with (3, 6), or r2, ending with (4, 8).
The perfect equilibrium is ((r1, r2), R) → (5, -20). Are there other equilibrium points in the game? Let us reduce this game to a game in strategic form. Note that Player 1 has four strategies: (l1, l2), (l1, r2), (r1, l2), (r1, r2).

256. The strategic form (payoffs listed as 1, 2):

               L           R
(l1, l2)     20, 7        3, 6
(l1, r2)     20, 7        4, 8
(r1, l2)     5, -20      5, -20
(r1, r2)     5, -20      5, -20

257. As we can see, there are two equilibrium points ((1) is the PE point):
(1) (r1, R) → (5, -20)
(2) ((l1, l2), L) → (20, 7)
Even though both players prefer equilibrium (2), it is not sensible and will not be chosen. Notice that the strategy (l1, l2) of 1 is weakly dominated (by (l1, r2)), and therefore 1 will not choose it. Also observe (Figure 6) that the equilibrium ((l1, l2), L) is based on a non-credible threat of 1 to play l2 if 2 plays R. But if 2 indeed plays R, why should 1 play l2? He obtains more if he plays r2. The only sensible equilibrium outcome is (5, -20).

258. Example: Pick Up Bricks. Five bricks have been stacked on the ground. The two players, A and B, take turns picking up either one or two bricks from the pile. The player who picks up the last brick loses $1; the other player wins $1.
Figure 7: the game tree for five bricks, with payoffs (A, B) of ±1 at each terminal node.
Solving backwards, the winning strategy of the first mover A is to pick up one brick in his first move. B loses no matter what she does: if she picks up 2 bricks, A picks up 1, and if she picks up 1 brick, A picks up 2, leaving B the last brick.

259. Can we conclude that the first mover always has the advantage? NO! Consider the same example, but now with four bricks only.
Figure 8: the game tree for four bricks.
The follower, B, has a winning strategy: if A picks up 1 brick, B picks up 2, and if A picks up 2 bricks, B picks up 1; either way A is left with the last brick. Here the second mover has the advantage!
Problem: Can you analyze the general case with any number of bricks?
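The general-case problem can be answered by the same backward reasoning, memoized over pile sizes. A sketch of my own (not the deck's solution):

```python
# For n bricks (take 1 or 2 per turn; the last-brick taker loses $1),
# decide whether the player about to move wins with optimal play.
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(n):
    """True if the mover wins from a pile of n > 0 bricks."""
    for k in (1, 2):
        if k == n:
            continue            # taking the last brick means losing
        if k < n and not mover_wins(n - k):
            return True         # leave the opponent a losing pile
    return False                # every move loses (includes n == 1)

print([n for n in range(1, 13) if not mover_wins(n)])  # [1, 4, 7, 10]
```

The first mover loses exactly when n leaves remainder 1 on division by 3, matching Figures 7 and 8: A wins with five bricks, B wins with four.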

260. Example: Consider the following three-person game in tree form.
Figure 9: the game tree (Players 1, 2 and 3 move in turn; branches are labeled a, b, c, ..., a'; each terminal node carries a payoff vector for the three players).
The perfect equilibrium outcome is (0, 1, 2). The perfect equilibrium strategies are: Player 1: (a, r, n, w); Player 2: (i, k, f, y, v); Player 3: (d, t, p).

261. Example: Consider the following two-person game in tree form.
Figure 10: Player 1 chooses a or b. After a, Nature chooses U with probability 3/10 (then Player 2 chooses e → (10, 0) or f → (0, 20)) or D with probability 7/10 (then Player 2 chooses g → (0, 30) or h → (40, 10)). After b, Nature chooses U → (-10, 40) or D → (30, -30), with probability 1/2 each.
Find the PE of this game. Taking into account that Player 2 plays f and g, if 1 chooses a the expected payoffs of the players are: (3/10)*(0,20) + (7/10)*(0,30) = (0,6) + (0,21) = (0,27). Also, if 1 chooses b, the expected payoffs of the players are: (1/2)*(-10,40) + (1/2)*(30,-30) = (-5,20) + (15,-15) = (10,5).

262. Therefore the relevant tree is:
Figure 11: Player 1 chooses a → (0, 27) or b → (10, 5).
The perfect equilibrium is: Player 1 chooses b, and the strategy of 2 is to choose f if Nature chooses U and g if Nature chooses D. The actual perfect equilibrium outcome is either (-10, 40) or (30, -30), equally likely. The expected payoffs are (10, 5).
The set of all Nash equilibrium points of this game: We convert the game tree of Figure 10 into a game in strategic form. This is a little tricky.

263. Notice that Player 2 has four strategies: (e,g), (e,h), (f,g) and (f,h). Suppose that Player 1 chooses a. Let us compute the expected payoffs for each one of the four strategies:
(e,g) → (3/10)*(10,0) + (7/10)*(0,30) = (3,21)
(e,h) → (3/10)*(10,0) + (7/10)*(40,10) = (31,7)
(f,g) → (3/10)*(0,20) + (7/10)*(0,30) = (0,27)
(f,h) → (3/10)*(0,20) + (7/10)*(40,10) = (28,13)
If 1 chooses b, the expected payoffs are, no matter what 2 does: (1/2)*(-10,40) + (1/2)*(30,-30) = (10,5).

264. We obtain the following table (rows: Player 1; columns: Player 2):

       (e,g)     (e,h)     (f,g)     (f,h)
a      3, 21     31, 7     0, 27    28, 13
b      10, 5     10, 5     10, 5     10, 5

There are two Nash equilibrium points:
(b, (e,g)) → (10,5)
(b, (f,g)) → (10,5)
The second equilibrium is perfect, but not the first. In the first equilibrium, Player 2 chooses e when Nature selects U. This is not a sensible action: choosing f is better for 2. Nevertheless, even if 2 chooses e rather than f, Player 1 is still better off with b than with a, and the outcome is the same.
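The four expected-payoff rows above can be recomputed exactly. A sketch of my own, with the Figure 10 subtree after move a encoded by hand:

```python
# Expected payoffs of Player 2's signal-contingent strategies after a.
from fractions import Fraction as F

# After a: Nature plays U w.p. 3/10 (then 2 picks e or f) and
# D w.p. 7/10 (then 2 picks g or h). Payoffs are (player 1, player 2).
up   = {"e": (10, 0), "f": (0, 20)}
down = {"g": (0, 30), "h": (40, 10)}
pU, pD = F(3, 10), F(7, 10)

def expected(first, second):
    """Expected payoffs when 2 plays `first` after U and `second` after D."""
    u, d = up[first], down[second]
    return tuple(pU * u[i] + pD * d[i] for i in (0, 1))

for s in (("e", "g"), ("e", "h"), ("f", "g"), ("f", "h")):
    a, b = expected(*s)
    print(s, (int(a), int(b)))
# ('e', 'g') (3, 21), ('e', 'h') (31, 7), ('f', 'g') (0, 27), ('f', 'h') (28, 13)
```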

265. Consider the IBM vs. JBM example, in the case where IBM knows the "right" software and JBM does not. IBM has the following four strategies: AB, BA, AA and BB, with the interpretation that the left component of the strategy is IBM's choice if the right software is A, and the right component is IBM's choice if the right software is B. Hence the strategy AB of IBM is to choose the right software; the strategy BA is to choose the bad software; the strategy AA is to choose software A no matter what the right software is (and similarly for BB). The strategies of JBM are denoted similarly, namely AB, BA, AA, BB, but their interpretation is different: the left component is JBM's choice if IBM chose A, and the right component is JBM's choice if IBM chose B.

266. The strategic form of this game is (rows: IBM; columns: JBM):

         AB           BA           AA           BB
AB      2, 2         5, 0        3.5, 1       3.5, 1
BA      0, 0         0, 5        0, 2.5       0, 2.5
AA      1, 1       2.5, 2.5       1, 1       2.5, 2.5
BB      1, 1       2.5, 2.5     2.5, 2.5       1, 1

Note that AB is a strictly dominant strategy of IBM, and (AB, AB) is the equilibrium. Namely, IBM will choose the right software and JBM will mimic IBM's choice.

267. The Dollar Auction Game (1). An auctioneer invites bids for a dollar. Bidding proceeds in steps of a quarter. The highest bidder gets the dollar and pays his bid, but the second-highest bidder also pays her bid to the auctioneer and gets nothing. Suppose the current highest bid is 50 cents and you are second with 25. You will lose your 25 cents if you do not raise to 75 cents. If you do raise your bid to 75 cents, the other bidder, knowing that he is about to lose his 50 cents, may raise his bid to $1, etc. The logic is the same if your bid is $3.50 and your opponent's bid is $3.75. How would you play this game?

268. The Dollar Auction Game (2). This is an attrition game. Once you start sliding, it is hard to recover. It is better not to take the first step unless you have figured out how to play the game down the slippery road. Imagine Alice and Bob, two competitors in a dollar auction. To avoid a technical problem related to indifferences, we assume that a bid costs 1 cent. One equilibrium outcome is where Alice starts with a bid of 75 cents and Bob is out. This outcome does not describe the strategy of either Alice or Bob: a strategy of Alice is a decision whether or not to raise her bid, and by how much, following any possible decision of Bob after her starting bid of 75 cents, and so forth. To support the above outcome as an equilibrium outcome, the strategy of Alice could be to start with a bid of 75 cents and to exit no matter if and how Bob reacts. Suppose Bob's strategy is to stay out no matter what Alice does. Is this an equilibrium? The answer is NO!

269. The Dollar Auction Game (3). If this is the strategy of Bob, then Alice is better off offering only 25 cents. In this case she will make 74 cents, better than the 24 cents if she starts with a bid of 75 cents. Let us try to fix it. Suppose that Bob's strategy is to outbid Alice and raise her bid by 25 cents if she starts with 25 or 50 cents, and to stay out otherwise. Is this a well-defined strategy? The answer is again NO. Why? If Alice starts with 25 cents, then Bob raises to 50 cents. But what will he

270. The Dollar Auction Game (4). do if Alice raises to 75 cents? His strategy has to specify his decision in this case as well. Here is a strategy that does the job for Bob: "He will raise Alice's bid by 25 cents if she starts with a bid of 25 or 50 cents. If she then outbids him, he exits. If she starts with 75 cents or more, he stays out." Now the two strategies are well defined and they constitute a Nash equilibrium. But it can be shown that this is not a perfect equilibrium, and the equilibrium outcome where Alice bids 75 cents and wins may not be a perfect equilibrium outcome. Let us analyze the strategic interaction between Alice and Bob from the last step back to the start.

271. The Dollar Auction Game (5). First note that whoever bids $2 (the total budget) wins the auction (but loses $1 and change). Hence whoever bids $1.25 (which is $1 + step, i.e. 2 - 1 + 0.25) wins the auction (and loses $0.25 and change). Why? If you bid $1.25 and your opponent outbids you, you will lose $1.25 if you stay out, while you will lose only $1 if you bid $2 and win. Note that your opponent, whose bid is below $1.25 ($1 or less), has no incentive to jump to $2, since he is better off by at least 1 cent staying out or exiting.

272. The Dollar Auction Game (6). Knowing that whoever bids $1.25 wins the auction, we conclude in the same manner that whoever bids $0.5 (1.25 - 1 + 0.25) wins the auction. Let me repeat the argument: if you bid $0.5 while your opponent bids either 0 or $0.25, it does not pay your opponent to jump to $1.25; and if he outbids you with either $0.75 or $1, you will bid $1.25 and win the auction (a fact that was established above). Note that in this case you lose $0.26, whereas if you stay out you lose $0.5. We conclude that whoever bids $0.5 wins the auction!

273. The Dollar Auction Game (7). Exercises:
1. Show that if each of them has $2.5 in his wallet, and this is commonly known, then whoever bids $0.25 wins the auction (and nets $0.75).
2. The same as (1), but now they have $3 each. Show that whoever bids $0.75 wins the auction.
3. Suppose that Alice has $2.5 and Bob has $1.75, and this is commonly known. Show that the perfect equilibrium outcome is where Alice bids $0.25 and wins the auction.
4. Suppose that Alice and Bob each have $2.5, and this is commonly known, but the bidding steps are in dimes ($0.1). Show that whoever bids $0.7 wins the auction.
Solution to (3): Let us show that only Alice can win at $2.5, $2.25, $2, $1.75 and so forth, down to $0.25. This will prove our claim.

274. The Dollar Auction Game (8). This is obvious for $2.5, $2.25 and $2. Suppose Bob outbids Alice with $1.75. If her previous bid is either $1.25 or $1.5, she is better off outbidding him with $2 and winning; he then loses $1.75. If her previous bid is $1, then his previous bid was at most $0.75, and his decision to jump to $1.75 causes him to lose at least 1 cent (there is never a justification for raising the bid by $1, as it costs him $1.01 and saves him only $1, if he wins). Hence only Alice can win with a bid of $1.75.

275. The Dollar Auction Game (9). Suppose Bob outbids Alice with $1.5. If her previous bid is either $1 or $1.25, she is better off outbidding him with $1.75 and winning (we have just proved this); he then loses $1.5. If her previous bid is $0.75, then his previous bid was at most $0.5, and his decision to jump to $1.5 causes him to lose at least 1 cent. Hence only Alice can win with a bid of $1.5, etc.
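The backward pattern in slides (5)-(9), where each critical bid is the previous one minus (prize - step), can be sketched in a few lines. This is my own illustration of that pattern, assuming a $1 prize and the budget stated in each exercise; it works in cents to avoid floating-point trouble.

```python
# The winning opening bid is the smallest positive number of the form
# budget - k * (prize - step), per the backward argument in the text.
def winning_bid(budget_cents, step_cents, prize_cents=100):
    drop = prize_cents - step_cents      # e.g. $1 - $0.25 = 75 cents
    bid = budget_cents
    while bid - drop > 0:
        bid -= drop
    return bid

print(winning_bid(200, 25) / 100)   # 0.5  (the $2-budget case above)
print(winning_bid(250, 25) / 100)   # 0.25 (exercise 1)
print(winning_bid(300, 25) / 100)   # 0.75 (exercise 2)
print(winning_bid(250, 10) / 100)   # 0.7  (exercise 4)
```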

276. The Centipede Game. On the table there is a pot with $10.5. Alice and Bob play the following game. Alice starts: she can stop the game, in which case they allocate the money in the pot so that Alice gets $10 and Bob gets $0.5. But Alice can be nice to Bob and let him decide. In this case there is a miracle and the money in the pot increases tenfold, to $105. If Bob decides to stop the game, he takes $100, leaving $5 to Alice. But Bob can be nice to Alice and turn the decision back to her; in this case the pot increases to $1,050, etc. The game ends when the pot holds at least $1 million.

277. Solving the game backwards, the only perfect equilibrium outcome is (10, 0.5): Alice obtains $10 and Bob $0.5.
Figure: the centipede tree. Alice and Bob alternately choose Nice (N) or Stop (S); the terminal payoffs (Alice, Bob) at successive stopping points are (10, 0.5), (5, 100), (1,000, 50), (500, 10,000), (100,000, 5,000) and (50,000, 1,000,000).
Moreover, it can be shown that (10, 0.5) is the only equilibrium outcome, although there are other equilibrium strategy profiles that support it.

278. Simultaneous vs. Sequential Interactions
There are cases where players engaged in a simultaneous game (actions taken simultaneously) are better off playing it sequentially.
Example: Consider the following two-person (simultaneous) game in strategic form:

        L        R
U      4, 2     6, 1
D      3, 3     5, 4

The strategy U of 1 strictly dominates the strategy D. Player 2, who knows that 1 will play U, will choose L. The only sensible outcome is (4, 2).

279. Suppose next that the two players agree that 1 will move first and 2 will move second, after observing the choice of 1. The new game is described in the tree below:
Figure: 1 chooses U or D; after U, 2 chooses L → (4, 2) or R → (6, 1); after D, 2 chooses L → (3, 3) or R → (5, 4).
Solving this game backwards, Player 1 chooses D and Player 2 chooses R, and the perfect equilibrium outcome is now (5, 4). That is, both players are better off compared with the outcome (4, 2) of the simultaneous game.

280. Suppose next that the two players agree that 2 moves first.
Figure: 2 chooses L or R; after L, 1 chooses U → (4, 2) or D → (3, 3); after R, 1 chooses U → (6, 1) or D → (5, 4).
Here the perfect equilibrium outcome, (4, 2), coincides with that of the simultaneous game.

281. Example: Consider the following game in strategic form:

        L        R
U      5, 2     6, 1
D      3, 3     4, 4

The only sensible outcome is (5, 2). Verify that this outcome is also the perfect equilibrium outcome of the sequential game, whether 1 moves first or 2 moves first.

282. Example: Consider the following game in strategic form:

        L        M         R
U      5, 2     7, 4      1, 3
D      6, 5    12, 2      4, 4

The only equilibrium outcome is (6, 5). Suppose next that 1 moves first. The tree of this game:
Figure: 1 chooses U or D; after U, 2 chooses L → (5, 2), M → (7, 4) or R → (1, 3); after D, 2 chooses L → (6, 5), M → (12, 2) or R → (4, 4).
The perfect equilibrium outcome is (7, 4). Player 1 improves his payoff (from 6 to 7), but the payoff of Player 2 decreases (from 5 to 4).

283. Mixed Strategies

284. The Princess Bride

                                      Man in black
                              drink V's glass   drink M's glass
Vizzini  poison in V's glass      10, -100         -100, 10
         poison in M's glass     -100, 10           10, -100

(Payoffs are listed as Vizzini, Man in black.)

285. “… I am not a great fool so I can clearly not choose the wine in front of you… But you must have known I was not a great fool; you would have counted on it, so I can clearly not choose the wine in front of me.”

286. “… Not remotely! Because Iocaine comes from Australia. As everyone knows, Australia is entirely peopled with criminals. And criminals are used to having people not trust them, as you are not trusted by me...”

287. “… So, I can clearly not choose the wine in front of you.”

288. “… Yes! Australia! And you must have suspected I would have known the powder's origin, so I can clearly not choose the wine in front of me…”

289. Is there an equilibrium in this game?

290. Mixed Strategies. The winner of a soccer game in a final tournament is decided by penalty kicks. Suppose for simplicity that the Kicker can either kick the ball to the left (L) or to the right (R). Similarly, the Goalie can either jump to the left (L) or to the right (R). They make their choices simultaneously. The next table gives the probability of a goal as a function of their decisions (rows: Kicker; columns: Goalie):

       L      R
L     0.3    0.9
R     0.8    0.1

291. Assume that this table is commonly known to the players. The Kicker wishes to maximize the probability of a goal, while the Goalie tries to minimize it. Equivalently, the Goalie is trying to maximize the negatives of the numbers in the table. The strategic game between the Kicker and the Goalie is given in the next table:

         L              R
L    0.3, -0.3     0.9, -0.9
R    0.8, -0.8     0.1, -0.1

Note that there is no Nash equilibrium in pure strategies in this game.

292. How should the players play this game? Let's assume that the game is played repeatedly, and let's analyze the situation from the viewpoint of the Kicker. Suppose that his policy is to kick to the left (L) all the time. Once his opponent has had time to observe and react to this policy, he too will choose L, reducing the probability of a goal to 0.3. Kicking to the right (R) all the time would be even worse: a goal probability of 0.1. So the Kicker would be well advised to mix his choices.

293. For example, he could choose L and R equally often, but in a random sequence, i.e. the mixed strategy (1/2, 1/2). If the Goalie jumps to the left, the probability of a goal is (1/2)*0.3 + (1/2)*0.8 = 0.55, and if the Goalie jumps to the right, the probability of a goal is (1/2)*0.9 + (1/2)*0.1 = 0.5. Hence the best reply of the Goalie is to jump to the right, holding the probability of a goal to 0.5. So the mixed strategy (1/2, 1/2) of the Kicker is definitely worthwhile compared with either pure policy of always kicking to the left or always kicking to the right.

294. But the Kicker can do even better by choosing L with probability 7/13 and R with probability 6/13. In this case, whether the Goalie jumps to the left or to the right, the probability of a goal is the same, namely 0.53077. Indeed, if the Goalie jumps to the left the probability of a goal is (7/13)*0.3 + (6/13)*0.8 = 0.53077, and if he jumps to the right it is (7/13)*0.9 + (6/13)*0.1 = 0.53077.

295. The numbers 7/13 and 6/13 are obtained when the Kicker uses the equalization method. Suppose that the Kicker kicks to the left with probability p and to the right with probability 1-p. We equate the payoffs of the Goalie in the two cases where he chooses L or R:
(-0.3)p + (-0.8)(1-p) = (-0.9)p + (-0.1)(1-p)
0.5p - 0.8 = -0.8p - 0.1
1.3p = 0.7
p = 7/13

296. Let us compute the optimal strategy of the Goalie. Suppose it is (q, 1-q). By the equalization method, we equate the payoffs of the Kicker in the two cases where he chooses L or R:
0.3q + 0.9(1-q) = 0.8q + 0.1(1-q)
0.9 - 0.6q = 0.1 + 0.7q
1.3q = 0.8
q = 8/13
The probability of a goal is 0.3*(8/13) + 0.9*(5/13) = 0.53077.

297. We conclude that both the Kicker and the Goalie should choose their pure strategies L or R at random. While the optimal strategy of the Kicker is to kick to the left with probability 7/13, the Goalie should jump to the left with a higher probability, namely 8/13. The probability of a goal is then 0.53077. How can these strategies be implemented? For instance, the Kicker can pull at random a piece of paper from a hat with 13 pieces of paper, 7 of them marked with the letter L and the other 6 with the letter R. Note that the mixed strategies (7/13, 6/13) of the Kicker and (8/13, 5/13) of the Goalie constitute a Nash equilibrium of this game. The equalization method guarantees that the Goalie is indifferent between L and R when the Kicker plays (7/13, 6/13), so he has no incentive to change his strategy; similarly, the Kicker is indifferent between L and R when the Goalie plays (8/13, 5/13), and he too has no incentive to change his strategy.
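The two equalization equations above can be solved symbolically in a few lines. A sketch of my own, using exact fractions so that 7/13 and 8/13 come out exactly:

```python
# Equalization method for the Kicker-Goalie game.
from fractions import Fraction as F

# P(goal | kicker's side, goalie's side)
goal = {("L", "L"): F(3, 10), ("L", "R"): F(9, 10),
        ("R", "L"): F(8, 10), ("R", "R"): F(1, 10)}

# Goalie indifferent: p*0.3 + (1-p)*0.8 = p*0.9 + (1-p)*0.1
p = (goal[("R", "L")] - goal[("R", "R")]) / (
    goal[("R", "L")] - goal[("R", "R")] + goal[("L", "R")] - goal[("L", "L")])
# Kicker indifferent: q*0.3 + (1-q)*0.9 = q*0.8 + (1-q)*0.1
q = (goal[("L", "R")] - goal[("R", "R")]) / (
    goal[("L", "R")] - goal[("R", "R")] + goal[("R", "L")] - goal[("L", "L")])
# Value of the game: probability of a goal at the equilibrium.
value = p * goal[("L", "L")] + (1 - p) * goal[("R", "L")]

print(p, q, float(value))   # p = 7/13, q = 8/13, value ~ 0.5308
```

The same two-line formulas solve any 2x2 game with no pure equilibrium, which is why the later examples in this chapter all reduce to the same computation.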

298. Example: Consider the following strategic game:

         L           R
U     50, -10     30, 20
D     20, 30      60, -20

Find the set of Nash equilibrium points.
Solution: First note that the game has no equilibrium in pure strategies. Let us use the equalization method to find the mixed strategy equilibrium. The surprising fact is that we equate the payoffs of 2 when we compute the mixed strategy of 1. Namely, the equilibrium mixed strategy of 1 depends only on the payoff structure of 2 (and similarly, the equilibrium mixed strategy of 2 depends only on the payoff structure of 1).

299. Suppose that 1 chooses the pure strategy U with probability p and the pure strategy D with probability 1-p. If 2 chooses L she obtains an expected payoff of -10*p + 30*(1-p) = 30 - 40p. If 2 chooses R she obtains an expected payoff of 20*p + (-20)*(1-p) = -20 + 40p. In equilibrium, 2 must be indifferent between her two strategies L and R. Why? If L yields her a payoff higher than R, she should choose L for sure (an average of two different numbers is smaller than the larger of the two).

300. Consequently the two expected payoffs should be equal:
30 - 40p = -20 + 40p
50 = 80p
p = 5/8
Namely, 1 should play U with probability 5/8 and D with probability 3/8. In this case 2 obtains an expected payoff of 30 - 40*(5/8) = 40*(5/8) - 20 = 5, whether she chooses L or R, and she has no incentive to deviate.

301. As for the mixed equilibrium strategy (q, 1-q) of 2, we equate the expected payoffs of 1 from his two pure strategies U and D:
U → 50q + 30(1-q) = 30 + 20q
D → 20q + 60(1-q) = 60 - 40q
30 + 20q = 60 - 40q → 60q = 30 → q = 1/2
That is, 2 should randomize equally between L and R. The payoff of 1 is 30 + 20*(1/2) = 60 - 40*(1/2) = 40. The unique Nash equilibrium of this game is thus ((5/8, 3/8), (1/2, 1/2)) → (40, 5).
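The algebra of the last three slides can be verified with exact arithmetic (a sketch of my own, not part of the deck):

```python
# Equalization check for the game U/D vs L/R above.
from fractions import Fraction as F

# Player 1 mixes (p, 1-p) so that 2 is indifferent:
#   -10p + 30(1-p) = 20p - 20(1-p)  =>  50 = 80p
p = F(50, 80)                      # 5/8
u2 = -10 * p + 30 * (1 - p)        # 2's equilibrium payoff
# Player 2 mixes (q, 1-q) so that 1 is indifferent:
#   50q + 30(1-q) = 20q + 60(1-q)  =>  30 = 60q
q = F(30, 60)                      # 1/2
u1 = 50 * q + 30 * (1 - q)         # 1's equilibrium payoff

print(p, q, u1, u2)                # 5/8 1/2 40 5
```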

302. Example: The Hidden Pearl. There are two large drawers, filled with a clutter of miscellaneous objects. Player 1 hides a pearl in one of them. Player 2, a burglar, then has one minute to open a drawer and look for the pearl. If he opens Drawer A and the pearl is there, he finds it with probability 1/2. If he opens Drawer B and the pearl is there, he finds it with probability 1/3. If he opens the wrong drawer, he gets nothing of value. Finding the pearl is worth $50 to the burglar and -$50 to 1. Failing to find the pearl is worth -$5 to the burglar. Find the Nash equilibrium of this game. Assume that 2 has no time to open more than one drawer.

303. Solution: Player 1 has two strategies, "Hide in A" and "Hide in B". Player 2 also has two strategies, "Open A" and "Open B". Let us find the expected payoffs as a function of their decisions.
(1) Suppose both players choose A. Then the expected payoff of 1 is (1/2)*(-50) + (1/2)*0 = -25, and the expected payoff of 2 is (1/2)*50 + (1/2)*(-5) = 22.5.
(2) Suppose both players choose B. Then 1 obtains (1/3)*(-50) + (2/3)*0 = -50/3, and Player 2 obtains (1/3)*50 + (2/3)*(-5) = 40/3.
We can summarize the above in the following table:

304.
          A                 B
A     -25, 22.5           0, -5
B       0, -5         -50/3, 40/3

Observe that there is no pure-strategy equilibrium. We use the equalization method to find the mixed strategy equilibrium, (p, 1-p) for 1 and (q, 1-q) for 2. Equating 2's payoffs from choosing A and from choosing B:
22.5p + (-5)(1-p) = -5p + (40/3)(1-p)    (multiply by 6)
135p - 30(1-p) = -30p + 80(1-p)
p = 2/5
Hence 1 hides the pearl in A with probability 2/5 and hides it in B with probability 3/5.

305. As for 2, equating 1's payoffs from hiding in A and from hiding in B:
-25q + 0*(1-q) = 0*q + (-50/3)*(1-q)    (multiply by 3)
-75q = -50 + 50q
q = 2/5
That is, 2 searches for the pearl in A with probability 2/5. The expected payoff of 1 is -25*(2/5) = -10, and the expected payoff of 2 is 22.5*(2/5) - 5*(3/5) = 6. The only equilibrium is ((2/5, 3/5), (2/5, 3/5)) → (-10, 6).
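The Hidden Pearl numbers can be rebuilt directly from the detection probabilities and checked exactly. A sketch of my own (the assertions confirm both indifference conditions):

```python
# Build the Hidden Pearl payoff matrix and verify the equalization.
from fractions import Fraction as F

find_prob = {"A": F(1, 2), "B": F(1, 3)}    # P(find | correct drawer)

def payoffs(hide, open_):
    """(hider's payoff, burglar's payoff)."""
    if hide != open_:
        return (0, -5)                      # wrong drawer: burglar fails
    f = find_prob[hide]
    return (f * (-50), f * 50 + (1 - f) * (-5))

uAA, uBB = payoffs("A", "A"), payoffs("B", "B")   # (-25, 22.5), (-50/3, 40/3)

# Hider mixes (p, 1-p); burglar is indifferent between opening A and B.
p = F(2, 5)
burglar_A = uAA[1] * p + (-5) * (1 - p)
burglar_B = (-5) * p + uBB[1] * (1 - p)
assert burglar_A == burglar_B == 6

# Burglar mixes (q, 1-q); hider is indifferent between hiding in A and B.
q = F(2, 5)
hider_A = uAA[0] * q
hider_B = uBB[0] * (1 - q)
assert hider_A == hider_B == -10

print(p, q, hider_A, burglar_A)   # 2/5 2/5 -10 6
```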

306. Sequential Location Game with Three Providers
Problem*: Three providers A, B and C select locations sequentially in the interval [0,1]. First, A selects location a; then B, who knows the selection of A, selects location b; and finally C, who knows a and b, selects location c. There are no restrictions on locations. Consumers are uniformly distributed on [0,1], and each buys from the closest provider. If two or three providers have the same location, they share their consumers equally. Find equilibrium locations.

307. Solution: We claim that the following strategy combination is a Nash equilibrium: a = ¼, b = 1 – a, and the strategy of C is:
(i) If a = ¼ and b = 1 – a, choose either a or b with probability ½ each.
(ii) Else, if ¼ < a < ¾ (or ¼ < b < ¾), choose a location very close to a (or b), between a (or b) and the edge of the interval.
(iii) Else, if a < ¼ (or b > ¾), choose c = ½.
(Remark: there exists a similar equilibrium with a = ¾.)
A and B obtain ⅜ each and C obtains ¼.

308. Explanation: If A deviates to a' < ¼, then b = 1 – a' and c = ½, and A will obtain (a' + ½)/2 = a'/2 + ¼ < ⅜. If A deviates to a' > ¼, then C locates at a' – ε, squeezing A, and A will obtain |b – a'|/2 < ¼ < ⅜. Thus A is better off not deviating. B is also better off not deviating, by a similar argument. If C deviates to c' < a (or c' > b), then he will obtain at most (c' + ¼)/2 < ¼. If C deviates to a < c' < b, then he will obtain (b – a)/2 = ¼, no more than before. Thus C has no incentive to deviate, and the strategy combination is a Nash equilibrium.
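The equilibrium shares (⅜, ⅜, ¼) can be checked numerically by placing consumers on a fine grid and assigning each to the closest provider, splitting ties. A sketch of my own (the grid size is an arbitrary choice):

```python
# Approximate market shares on [0,1]: each grid consumer buys from the
# closest provider; consumers equidistant from several providers are shared.
def shares(locs, grid=10_000):
    out = [0.0] * len(locs)
    for i in range(grid):
        x = (i + 0.5) / grid              # grid midpoints avoid exact ties
        d = [abs(x - loc) for loc in locs]
        m = min(d)
        closest = [j for j, dj in enumerate(d) if dj == m]
        for j in closest:
            out[j] += 1 / (grid * len(closest))
    return out

# a = 1/4, b = 3/4; C co-locates with A or B with probability 1/2 each.
at_a = shares([0.25, 0.75, 0.25])
at_b = shares([0.25, 0.75, 0.75])
avg = [(x + y) / 2 for x, y in zip(at_a, at_b)]
print([round(s, 3) for s in avg])   # [0.375, 0.375, 0.25], i.e. (3/8, 3/8, 1/4)
```

Rerunning `shares` with a deviating location (say A at 0.2, B at 0.8, C at 0.5) shows the deviator's share dropping below ⅜, in line with the explanation above.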

309. Cyber Attack

310. Example: One Attacker, one Defender. The Attacker can choose one of two targets: the Strong (S) one or the Weak (W) one; the implementation of S is costly. The Defender can choose one of two defense actions: Radical (R) or No Defense (N); the radical action is of high cost.

311. The payoff matrix (rows: Attacker; columns: Defender; payoffs listed as Attacker, Defender):

        N        R
S     4, -5    -2, -2
W     1, -1     2, -3

The Defender cannot identify the target of the Attacker with certainty.

312. The Defender operates an Intelligence System (IS) which is accurate with some probability α, α > ½ (say α = 75%). The IS sends one of two signals: s or w. If the Defender observes the signal s, the target is S with probability 0.75; similarly, if the Defender observes w, he knows that the target is W with probability 0.75.

313. [Game tree: the Attacker chooses S or W. The IS then sends a signal: s with probability 0.75 and w with probability 0.25 if the target is S; w with probability 0.75 and s with probability 0.25 if the target is W. After observing the signal, the Defender chooses R or N. The terminal payoffs are those of the matrix on slide 311.]

314. Here (R,R) is the strategy of the Defender to defend with the Radical action no matter what the signal of the IS is. The strategy (R,N) is to take the Radical action if the signal is s and to choose the action N (do nothing) if the signal is w, etc.

Attacker \ Defender    N,N      N,R           R,N            R,R
S                      4, -5    2.5, -4.25    -0.5, -2.75    -2, -2
W                      1, -1    1.75, -2.5     1.25, -1.5     2, -3
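The four columns can be derived mechanically from the slide-311 matrix and the 0.75 signal accuracy. A small sketch (the names are mine):

```python
# Sketch: derive the Defender's signal-contingent payoff table from the
# base game and the 75%-accurate intelligence signal.
base = {  # (target, action) -> (attacker payoff, defender payoff)
    ("S", "R"): (-2, -2), ("S", "N"): (4, -5),
    ("W", "R"): (2, -3),  ("W", "N"): (1, -1),
}
alpha = 0.75  # P(signal s | target S) = P(signal w | target W)

def expected(target, plan):
    """plan = (action if signal is s, action if signal is w)."""
    p_s = alpha if target == "S" else 1 - alpha
    branches = [(p_s, plan[0]), (1 - p_s, plan[1])]
    ua = sum(p * base[(target, act)][0] for p, act in branches)
    ud = sum(p * base[(target, act)][1] for p, act in branches)
    return ua, ud

for plan in [("N", "N"), ("N", "R"), ("R", "N"), ("R", "R")]:
    print(plan, expected("S", plan), expected("W", plan))
```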

315. Solution
The Attacker's optimal strategy is to choose S with probability 2/11 and W with probability 9/11. The Defender's optimal strategy is to not defend if the signal is w and, if the signal is s, to choose R with probability 12/19 and N with probability 7/19; that is, he mixes (R,N) with probability 12/19 and (N,N) with probability 7/19. The outcome (expected utility) is (22/19, -19/11) ≈ (1.16, -1.73).
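The mixing probabilities follow from the usual indifference conditions, which can be verified exactly with rational arithmetic. A sketch, using the payoffs from the slide-314 table:

```python
# Sketch: verify the mixed equilibrium via the indifference conditions,
# using exact rational arithmetic.
from fractions import Fraction as F

# Payoffs of the two defender strategies in the support, (R,N) and (N,N),
# against S and W (attacker payoff, defender payoff), from slide 314:
u = {("S", "RN"): (F(-1, 2), F(-11, 4)), ("S", "NN"): (F(4), F(-5)),
     ("W", "RN"): (F(5, 4), F(-3, 2)),  ("W", "NN"): (F(1), F(-1))}

# Attacker plays S with probability p = 2/11: the defender is indifferent.
p = F(2, 11)
d_RN = p * u[("S", "RN")][1] + (1 - p) * u[("W", "RN")][1]
d_NN = p * u[("S", "NN")][1] + (1 - p) * u[("W", "NN")][1]
assert d_RN == d_NN == F(-19, 11)   # defender's value, about -1.73

# Defender plays (R,N) with probability q = 12/19: the attacker is indifferent.
q = F(12, 19)
a_S = q * u[("S", "RN")][0] + (1 - q) * u[("S", "NN")][0]
a_W = q * u[("W", "RN")][0] + (1 - q) * u[("W", "NN")][0]
assert a_S == a_W == F(22, 19)      # attacker's value, about 1.16
print(float(a_S), float(d_RN))
```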

316. Cooperative Games316

317. Two-sided markets: employers - employees; hospitals - residents.
Single-sided markets: teaming up (roommates).
Markets without goods? universities - applicants; matchmaking.

318. The matching market of hospitals and medical interns in the USA318

319. Phase I: The tragedy of the doctors
The internship was first introduced around the turn of the century.
Competition by hospitals for interns manifested itself in a race to sign employment contracts earlier and earlier in a medical student's career. The date by which most internships had been finalized began to creep forward from the end of the senior year of medical school. This was regarded as costly and inefficient both by the hospitals, who had to appoint interns without knowing their final grades or class standings, and by the students and medical schools, who found that much of the senior year was disrupted by the process of seeking desirable appointments. By 1944 the standard appointment date had been advanced on the school calendar to the beginning of the junior year.

320. A happy end (?)The problem was solved in 1945 when the medical schools in the Association of American Medical Colleges (AAMC) undertook not to give out before some agreed date any information to hospitals regarding students' abilities. 320

321. Phase II: The tragedy turns into a farceA new problem appeared, and manifested itself in the waiting-period between the time offers of internships were first made, and the time students were required to accept them. Basically, the problem was that a student would be inclined to wait as long as possible before accepting the position he had been offered, in the hope of eventually being offered a preferable position. 321

322. Students who were pressured into accepting offers were unhappy if they were ultimately offered a preferable position. Hospitals whose candidates waited until the last minute to reject them were unhappy if their preferred alternate candidates had in the meantime already accepted positions. Hospitals were unhappier still when a candidate who had indicated acceptance subsequently failed to fulfill his commitment after receiving a preferable offer. 322

323. For 1945, it was resolved that hospitals should allow students ten days after an offer had been made to consider whether to accept or reject it. For 1946, it was resolved that there should be a uniform appointment date (July 1) on which offers should be tendered and that acceptance or rejection should not be required before July 8. By 1949, the AAMC proposed that appointments should be made by telegram at 12:01 AM (on November 15), with applicants not being required to accept or reject them until 12:00 Noon the same day. In 1950, the resolution again included a twelve-hour period for consideration, with the specific injunction that "Hospitals and/or students shall not follow telegrams of offers of appointment with telephone calls" until after the twelve-hour grace period. 323

324. Phase III: Arranged matching
In order to avoid these problems and the costs they imposed, it was proposed in 1951, and ultimately agreed, that a more centralized matching procedure should be tried.
Students would then rank in order of preference the hospital programs to which they had applied, hospitals would similarly rank their applicants, and all parties would submit these rankings to a central bureau, which would use this information to arrange a matching of students to hospitals, and inform the parties of the result.
This procedure was used between 1953 and 1998, matching roughly 20,000 physicians a year. It was replaced by a new one in 1998 (the National Resident Matching Program, NRMP).

325. Welcome to the Spin-Dating Club
Boys: Adam, Barry, Claud, Dave. Girls: Ann, Betty, Carol, Daisy.
[The participants' preferences are displayed as a 4×4 table in the original slide: each cell holds the boy's ranking (1-4) of the girl and the girl's ranking (1-4) of the boy.]

326. A Matchmaker's Tale
A Roman lady asked R. Jose b. Halafta (a famous rabbi and a student of Rabbi Akiva who lived in the late first century in Zippori): "In how many days did the Holy One, blessed be He, create His World?" He answered: "In six days, as it is written, For in six days the Lord made heaven and earth, etc."
She asked further: "And what has He been doing since that time?" He answered: "He is joining couples: A's wife [to be allotted] to A; A's daughter is allotted to B; (so-and-so's wealth is for so-and-so)."

327. Said she: ‘This is a thing which I, too, am able to do. See how many male slaves and how many female slaves I have; I can make them consort together all at the same time.’Said he: ‘If in your eyes it is an easy task, it is in His eyes as hard a task as the dividing of the Red Sea.’He then went away and left her.What did she do? She sent for a thousand male slaves and a thousand female slaves, placed them in rows, and said to them: ‘Male A shall take to wife female B; C shall take D and so on.’A Matchmaker’s Tale327

328. She let them consort together one night.In the morning they came to her; one had a head wounded, another had an eye taken out, another an elbow crushed, another a leg broken; one said: ‘I do not want this one,’ another said: ‘I do not want this one.’. . .A Matchmaker’s Tale328

329. Let's try matchmaking!
Claud - Carol, Barry - Betty, Adam - Ann, Dave - Daisy.
Adam would gladly swap Ann (3) for Daisy (1). But Daisy won't leave Dave (2) for Adam (4).
Carol, too, would swap Claud (2) for Barry (1). But Barry won't leave Betty (1) for Carol (4).

330. Let's try matchmaking!
Claud - Carol, Barry - Betty, Adam - Ann, Dave - Daisy.
But Betty and Claud - that's another story. Claud prefers Betty (1) to Carol (3). And Betty would willingly switch Barry (4) for Claud (3).
Betty and Claud object to this matching!

331. Stability of Matching
A boy and a girl object to a match if they prefer each other to their matched mates.
A match is stable if there is no boy-girl pair that objects to it.
Is there a stable match for every set of boy-girl preferences? Yes! The deferred acceptance algorithm described by Gale and Shapley (1962) yields a stable matching.

332. Before elaborating on this algorithm, it should be noted that only two-sided markets are guaranteed to have a stable matching. One-sided markets are not.
We want to make two two-person teams from this group: Al, Bert, Clive, Gus.
Their preferences (each row ranks the columns; * marks the person himself):

        Al   Bert  Clive  Gus
Al      *    1     2      3
Bert    2    *     1      3
Clive   1    2     *      3

Suppose that Gus is matched with X. Clearly, X prefers any other partner to Gus. Another person, Y, has X on the top of his list, but is not matched with him. The pair X and Y object to the matching.

333. DAA - with boys proposing
Day one: every boy proposes to the first girl on his list. Every girl asks her favorite boy among those who proposed to come again the next day, and rejects all others.
Day two: all boys who were asked to come again do so, and rejected boys go to the next girl on their list.

334. Girls - ditto: each girl again keeps her favorite among the day's proposers and rejects the rest.
Day three: ditto.

335. Day four: ditto.

336. This is a stable matching!

337. DAA - with boys proposing: Summary
Every day, each boy proposes to his favorite girl among those who have not yet rejected him. Every girl who has more than one proposer rejects all but the proposer she likes best.
Theorem: For any boy-girl preferences, the boy-proposing process ends with a stable matching.
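The daily rounds above condense into a few lines of code. Here is a sketch of the boy-proposing DAA together with a stability check, run on a tiny illustrative preference table (not the spin-dating one, whose full table appears only as a figure in the slides):

```python
# Sketch: boy-proposing deferred acceptance, equal numbers of boys and girls.
def daa(boys_pref, girls_pref):
    """boys_pref/girls_pref: dict name -> list of names, best first."""
    rank = {g: {b: i for i, b in enumerate(prefs)}
            for g, prefs in girls_pref.items()}
    next_prop = {b: 0 for b in boys_pref}   # index of the next girl to try
    engaged = {}                            # girl -> boy currently held
    free = list(boys_pref)
    while free:
        b = free.pop()
        g = boys_pref[b][next_prop[b]]
        next_prop[b] += 1
        cur = engaged.get(g)
        if cur is None:
            engaged[g] = b
        elif rank[g][b] < rank[g][cur]:     # g prefers the newcomer
            engaged[g] = b
            free.append(cur)
        else:
            free.append(b)
    return {b: g for g, b in engaged.items()}

def is_stable(match, boys_pref, girls_pref):
    """True iff no boy-girl pair prefers each other to their mates."""
    mate_of_girl = {g: b for b, g in match.items()}
    for b, g in match.items():
        for better in boys_pref[b][:boys_pref[b].index(g)]:
            rival = mate_of_girl[better]
            if girls_pref[better].index(b) < girls_pref[better].index(rival):
                return False
    return True

boys = {"X": ["a", "b"], "Y": ["a", "b"]}
girls = {"a": ["Y", "X"], "b": ["X", "Y"]}
m = daa(boys, girls)
print(m, is_stable(m, boys, girls))  # {'Y': 'a', 'X': 'b'} True
```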

338. Proof: Consider the girl-proposing DAA. Suppose Mary is paired with Larry, and Jack with Katie. Suppose to the contrary that Mary and Jack prefer each other over their mates. Since Mary prefers Jack to Larry, she proposed to Jack at some point and he rejected her; he rejects only in favor of a girl he prefers, and his held proposal only improves over time, ending with Katie, whom he therefore prefers to Mary. A contradiction.

339. DAA - with girls proposing
Day one. Day two.

340. This stable matching differs from the matching produced by the boy-proposing DAA.

341. Definition: A boy A and a girl a are feasible if there is some stable matching where they are matched.
Theorem: All proposers in DAA (whether boys or girls) are paired with their most preferred feasible choice. In boy- (girl-) proposing DAA, all girls (boys) are paired with their least preferred feasible choice.

342. Proof: Let S0 be the stable matching of the girl-proposing DAA. It is sufficient to prove that if Mary prefers Jack to what she got, then Jack is not feasible for her. Suppose to the contrary that Jack is feasible for Mary. Then Mary proposed to him and was rejected in favor of, say, Katie. Suppose this is the first rejection; namely, there were no other rejections prior to this one.
Since Jack is feasible for Mary, there is another stable matching, S1, where they are matched. Suppose Katie in S1 is matched to Larry: S1 = <MJ, KL, …>. Since S1 is stable and J prefers K to M (remember, he rejected M in favor of K), it must be that K prefers L to J.
But for K to have proposed to J in the DAA, she must previously have proposed to L and been rejected by him, a rejection prior to the first one. A contradiction, proving that J is not feasible for M.

343. What if BP-DAA and GP-DAA overlap?
If boys proposing and girls proposing yield the same stable matching, then there is a single stable matching.

344. If there is only one stable matching, it's clear that both procedures yield the same matching - the only stable one.
Suppose that both procedures yield the same matching, in which Brenda and Dylan are paired.
Since this is the result of BP-DAA, Brenda is Dylan's favorite girl of those feasible for him.
Since this is the result of GP-DAA, Brenda is Dylan's least favorite girl of those feasible for him.
Therefore, Brenda is the only feasible girl for Dylan. Hence, they are paired in every stable matching.
What if BP-DAA and GP-DAA overlap?

345. Lonely Hearts ClubSuppose that the number of boys is n, the number of girls is m, and n > m.In any stable matching, n-m boys will remain unmatched.Does the fate of the unmatched boys depend on the matching? Might an unmatched boy be paired in a different stable matching?Alas for the bachelors, the answer is:No!The unmatched boys are the same in every stable matching.345

346. To show this we take a boy who is unmatched in BP-DAA.We’ve seen that a girl who rejects a boy in this process isn’t paired with him in any stable matching.The boy in question was rejected by all the girls.Therefore, in any stable matching none of the girls are paired with him.Hence: the group of n-m boys that aren’t paired in BP-DAA is the group of boys not paired in any stable matching.Lonely Hearts Club346

347. Don’t be so picky…Who is to blame if a person remains unmatched?Maybe he is to blame. Maybe if changed his preferences he would find a mate.Perhaps an unemployed physicist would find a job, if he only put the municipal sanitation department, rather than Motorola, at the top of his list.No. A change of preferences will not change his position in any stable matching.347

348. Unemployed cannot help it Denote by B the set of all boys who are not matched in any stable matching for a preference table T.Let T’ be the preference table which differs from T only in the preferences of boys in B. In the GP-DAA according to T, no boy in B is visited by any girl.Therefore, in this process, preferences of boys in B do not come into play at all. Hence, the GP-DAA according to T’ is exactly the same as the one according to T. That is, the boys in B are unmatched in the GP-DAA according to T’ as well. Therefore they are unmatched in any stable matching. 348

349. Spin dating - exercise Show that in the spin-dating example there is exactly one stable matching in addition to the GP- and BP-DAA. 349

350. Spin dating - solution
In a stable matching, Adam is matched with either Ann or Carol, and Barry is matched with either Carol or Daisy.
Suppose that Barry is matched with Daisy. Since Daisy prefers Dave to Barry, Dave must be matched with Ann. Therefore, Adam is matched with Carol. This is the outcome of BP-DAA.
Suppose that Barry is matched with Carol. Adam is then matched with Ann. This can be completed either to the GP-DAA matching or by matching Claud-Betty, Dave-Daisy.

351. Exercise
1. All boys prefer Cleo to all other girls. Show that in any stable matching Cleo is matched to the same boy. Who is the lucky one?
2. Assume that there is the same number of boys and girls. Xantipha is last on every boy's preference list. Show that in any stable matching Xantipha is matched to the same boy.

352. ExerciseAll boys have the same preference over girls. Show that there exists a unique stable matching. 352

353. Kidney Exchange
House Swapping
The Top Trading Cycles Algorithm

354. Kidney ExchangeKidney exchange enables transplantation where it otherwise could not be accomplished. It overcomes the frustration of a biological obstacle to transplantation.A wife may need a kidney and her husband may want to donate, but they have a blood type incompatibility that makes donation impossible. Now they can do an exchange.

355. Kidney Exchange
The National Organ Transplant Act forbids the creation of binding contracts for organ transplants, so the procedures have to be performed roughly simultaneously. Two pairs of patients mean four operating rooms and four surgical teams acting in concert with each other. A three-pair exchange is even less feasible. A 12-party (six donors and six recipients) kidney exchange was performed in April 2008.
Exchanges are also made in which a donor-patient pair makes a donation to someone on the queue for a cadaver kidney, in return for the patient in the pair receiving the highest priority for a compatible cadaver kidney when one becomes available.

356. Kidney Exchange
Before kidney exchange was feasible, a patient identified a healthy donor (say, a spouse) and, if the transplant was feasible on medical grounds (compatible blood types and absence of "positive crossmatch" antibodies), it was carried out. Otherwise, the patient entered the queue for a cadaver kidney, while the donor returned home.
For direct exchange among living donor-recipient pairs, Roth extended Gale's Top Trading Cycles (TTC) mechanism. The ranking of a patient over all donors (including his/her own donor and including cadavers) comes from maximizing the probability of a successful transplant. One can choose w (waiting for a cadaver) as part of the ranking of a patient.

357. The House Swapping Game
There are n traders A, B, C, …, N. A owns house a, B owns house b, C owns house c, etc.
The traders can transfer ownership amongst themselves in any way they please, but at the end each has to have exactly one house.
Each trader has a strict ranking of all the houses (no ties). No side payments are allowed (a barter economy).

358. The House Swapping Game
Example (6 houses):
A: c e f a b d
B: b a c e f d
C: e f c a d b
D: c a b e d f
E: d c b f e a
F: b d e f a c
Note: B likes her house the best. The others have possibilities of "trading up" to something better. For example, E and F would each move up a notch if they exchanged houses.
Claim: There is always a re-allocation of the houses such that no coalition of traders could have done better for all of its members by trading only among themselves.

359. The Top Trading Cycles Algorithm
Step 1: Make a directed graph with each trader represented by a vertex from which an edge points to the owner of his/her top-ranked house.
A: c e f a b d
B: b a c e f d
C: e f c a d b
D: c a b e d f
E: d c b f e a
F: b d e f a c
[Graph: A→C, B→B, C→E, D→C, E→D, F→B]

360. The Top Trading Cycles Algorithm
Step 2: Find the top trading cycle(s) by starting at any vertex and following the arrows until the path loops back on itself: [C E D], [B].
Step 3: Delete from the ranking table all traders appearing in the TTCs discovered in Step 2 and return to Step 1 if any traders are left.
A: f a
F: f a
Now A→F and F→F, giving the cycle [F]; then A: a gives [A].

361. The Top Trading Cycles AlgorithmStep 4: When every trader has been assigned to a TTC, execute all the indicated trades.The final allocation a = < Aa, Bb, Ce, Dc, Ed, Ff >

362. Example
A: e d f b a c
B: d f c e a b
C: c a b e f d
D: f e c d a b
E: c e a d b f
F: e c f d b a
First iteration: [C]. Let us delete C and c.
A: e d f b a
B: d f e a b
D: f e d a b
E: e a d b f
F: e f d b a

363. Second iteration: [E]. Let us delete E and e.
A: d f b a
B: d f a b
D: f d a b
F: f d b a
Third iteration: [F].
A: d b a
B: d a b
D: d a b

364. Fourth iteration: [D].
A: b a
B: a b
Fifth iteration: [A, B].
The allocation is a = (Ab, Ba, Cc, Dd, Ee, Ff).
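Both worked examples can be replayed programmatically. Below is a sketch of TTC that removes one top cycle at a time (which yields the same allocation as trading all top cycles simultaneously); the two preference tables are those of slides 358 and 362.

```python
# Sketch of the Top Trading Cycles algorithm. Trader X owns house x;
# prefs maps each trader to their ranking of the houses, best first.
def ttc(prefs):
    remaining = set(prefs)            # traders still in the market
    allocation = {}
    while remaining:
        # each trader points to the owner of their best remaining house
        point = {t: next(h for h in prefs[t] if h.upper() in remaining).upper()
                 for t in remaining}
        # follow the arrows until a cycle closes, then trade along it
        seen, t = [], next(iter(remaining))
        while t not in seen:
            seen.append(t)
            t = point[t]
        cycle = seen[seen.index(t):]
        for trader in cycle:
            allocation[trader] = point[trader].lower()
        remaining -= set(cycle)
    return allocation

prefs1 = {"A": list("cefabd"), "B": list("bacefd"), "C": list("efcadb"),
          "D": list("cabedf"), "E": list("dcbfea"), "F": list("bdefac")}
print(ttc(prefs1))   # allocation of slide 361: Aa, Bb, Ce, Dc, Ed, Ff

prefs2 = {"A": list("edfbac"), "B": list("dfceab"), "C": list("cabefd"),
          "D": list("fecdab"), "E": list("ceadbf"), "F": list("ecfdba")}
print(ttc(prefs2))   # allocation of slide 364: Ab, Ba, Cc, Dd, Ee, Ff
```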

365. Market Crashes365

366. A market crash is usually considered an indication that the fundamentals of the economy have changed and a recession is around the corner. This is not necessarily true. For instance, in October 1987 Wall Street lost over 20% of its value in one day, and this was not followed by a recession. Moreover, in the days before the crash there were no significant external events or "bad news" that could justify the dramatic price fall.

367. We show that market crashes (or market bubbles) may well be the result of information processing by traders – and nothing else. Moreover, in terms of market observables (prices, volume, etc.) it looks as if nothing is changing. Still, underneath the surface, there is gradual updating of information by the traders. Then, at a certain point of time, this causes a sudden change of behavior.367

368. The phenomenon described here has to do with the step-by-step advance in levels of "mutual knowledge" (what one knows about what the other knows, and so on). Each trading day increases the level of information through the daily market observables (prices, quantity traded and so forth), which are common knowledge. The behavior of the traders can, however, be discontinuous: they can behave the same way for all levels of information up to a certain level, where a jump occurs.

369. In our example, except for the daily observation of the market, there is no new information, and there is no communication or coordination among the participants, whether expressed or tacit. In addition, the stationary, unchanging behavior of the market for arbitrarily long periods of time is no sign that nothing is happening. Underneath the surface, completely unobservable, information is being processed by the traders, which ultimately leads to a sudden change of behavior.
Before we present our basic example, let us introduce the concept of knowledge, or information.

370.-374. [Slides 370-374 present the formal setup in displayed formulas: the set of outcomes is Ω = {1, 2, …, 9}; Alice's information partition is {1,2,3}, {4,5,6}, {7,8,9}; Bob's information partition is {1,2,3,4}, {5,6,7,8}, {9}. When an outcome occurs, each of them learns only which cell of his or her own partition contains it.]

375. Suppose that the true outcome is 1. It seems likely that in this case it is common knowledge for Alice and Bob that 9 is not the true outcome. Even though Alice knows this, Bob knows this, Alice knows that Bob knows this and Bob knows that Alice knows this, still this is not common knowledge. Why?
Let K_A(E) be the set of outcomes in Ω where Alice knows that the event E occurred (K stands for knowledge, A for Alice and E for the event).

376. The event E, "9 did not occur", is the same event as E = {1, 2, …, 8} (namely, the outcome is any integer between 1 and 8). First note that K_A(E) = {1, 2, …, 6}: if the true outcome is either 7 or 8 (or 9), Alice is informed of {7, 8, 9}, and since 9 is still possible she will not know for sure that E occurred. Next, K_B(K_A(E)) is the set of outcomes where Bob knows that Alice knows that E occurred. By the same reasoning:

377.-378. [Slides 377-378 compute the chain of knowledge operators. Bob knows that Alice knows E exactly at the outcomes where his information cell is contained in K_A(E), and so on up the chain:
K_A(E) = {1, 2, 3, 4, 5, 6}
K_B(K_A(E)) = {1, 2, 3, 4}
K_A(K_B(K_A(E))) = {1, 2, 3}
K_B(K_A(K_B(K_A(E)))) = ∅]

379. K_B({1, 2, 3}) = ∅, since Bob can never be sure that the true outcome is either 1 or 2 or 3 (if the true outcome is 1, 2, or 3, Bob will know only that it is 1, 2, 3 or 4, and he will not be able to exclude 4). This equality means that there is no outcome at which Bob knows that Alice knows that Bob knows that Alice knows that 9 did not occur (that is, that E occurred).
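This chain of operators is easy to compute mechanically. A short sketch, using the two partitions of the example (Alice: {1,2,3}, {4,5,6}, {7,8,9}; Bob: {1,2,3,4}, {5,6,7,8}, {9}):

```python
# Sketch: iterate the knowledge operators K_A and K_B for the example.
ALICE = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
BOB = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def K(partition, event):
    """Outcomes at which the agent knows the event occurred: the union
    of the partition cells fully contained in the event."""
    known = set()
    for cell in partition:
        if cell <= event:
            known |= cell
    return known

E = set(range(1, 9))                       # the event "9 did not occur"
chain = [K(ALICE, E)]                      # {1,...,6}
chain.append(K(BOB, chain[-1]))            # {1,2,3,4}
chain.append(K(ALICE, chain[-1]))          # {1,2,3}
chain.append(K(BOB, chain[-1]))            # empty: the chain dies out
print(chain)
```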

380. The Basic Example
We use Example 2 as our benchmark. We interpret an outcome as a "state of the world". To know the state of the world is to know everything relevant to the decisions of Alice and Bob (for instance, the interest rate, the inflation rate, etc.). If Alice knows the true state of the world (the true outcome) she faces no uncertainty. Ω = {1, 2, …, 9} is the set of all possible states of the world, and the information partitions are, as before, {1,2,3}, {4,5,6}, {7,8,9} for Alice and {1,2,3,4}, {5,6,7,8}, {9} for Bob. In addition, it is assumed that every state of the world appears with probability 1/9 (a uniform distribution over Ω) and this is common knowledge.

381. Let E be the event consisting of the bad outcomes (for instance, the company's earnings will go down). Suppose that each one of the two traders behaves each day according to the following rule: send a SELL order if the probability of E is at least 1/3, and a BUY order otherwise.
The relevant probability is always computed given the current information.
Finally, assume that the true state of the world is 1.

382. Initially, Alice assesses the probability of E to be 1/3 (since at state 1 she knows that the true state of the world is either 1, 2 or 3). Only 1 belongs to E, and since each state has the same probability, the probability that Alice assigns to E, given her information, is 1/3. Therefore, Alice gives a SELL order. Bob assesses the probability of E given his information {1, 2, 3, 4} to be 1/4. Therefore, Bob gives a BUY order.

383. So a transaction takes place. We will show that this will happen not only on the first day but also on each one of the first four days. The assessments of Alice and Bob for the probability of E remain 1/3 and 1/4, respectively. On the fifth day, however, there is a sudden and major change: both assessments become 1/3 and both traders give orders to sell. So, a "crash" occurs after four seemingly "quiet and normal" days.

384. On day 2, it is common knowledge that Bob bought on the previous day. Therefore it is common knowledge that 9 did not occur. (If 9 occurred then Bob would know it for sure and he would sell and not buy.) So on day 2 the set of the states of the world is Ω2 = {1, 2, …, 8}, and the information partitions are therefore {1,2,3}, {4,5,6}, {7,8} for Alice and {1,2,3,4}, {5,6,7,8} for Bob.

385. The probability that Alice and Bob assign to the bad event E given their information is still the same, 1/3 and 1/4, even under Ω2. Again, Alice sends a SELL order and Bob sends a BUY order. A transaction takes place and this is common knowledge.

386. On day 3, it is common knowledge that 7 or 8 did not occur (otherwise, Alice would have been informed of {7, 8}, she would assign zero probability to E, and she would send a BUY order, not a SELL order). Thus the new set of states of the world is Ω3 = {1, 2, …, 6}, and the information partitions are therefore {1,2,3}, {4,5,6} for Alice and {1,2,3,4}, {5,6} for Bob. We have now, as before, probabilities 1/3 for Alice and 1/4 for Bob.

387. Again Alice sends a SELL order and Bob sends a BUY order. A transaction takes place and this is common knowledge. Therefore on day 4 it is common knowledge that 5 or 6 did not occur (otherwise, Bob would have been informed of {5, 6}, in which case the probability he assigns to E would be at least 1/3, and Bob would send a SELL order, not a BUY order).

388. So on day 4 the set of states of the world is Ω4 = {1, 2, 3, 4}. The information partitions are now {1,2,3}, {4} for Alice and {1,2,3,4} for Bob. Again the probabilities are 1/3 for Alice and 1/4 for Bob. Again, Alice sends a SELL order while Bob sends a BUY order, a transaction takes place and this is common knowledge. Therefore on day 5 it is common knowledge that 4 did not occur (since otherwise Alice, assigning E probability zero at {4}, would have sent a BUY order, not a SELL order).

389. The set of states of the world on day 5 is thus Ω5 = {1, 2, 3}, and the information partitions of both Alice and Bob are now {1, 2, 3}. But now both assign E probability 1/3, and both Alice and Bob send a SELL order: CRASH.

390. What is happening in this example is the following: Initially, both Alice and Bob know that the true state is 1, 2, 3 or 4 (Alice knows even more – she knows that the true state is not 4). But this is not common knowledge between them. For example, from Bob’s point of view the state could as well be 4, in which case Alice would have known that it is either 4, 5 or 6. So Bob does not know that Alice knows that it is 1, 2, 3 or 4. As time goes by, the trading increases the hierarchy of information toward common knowledge, until, on Day 5, it reaches its conclusion: it is common knowledge that the true state is 1, 2 or 3.390

391. Remarks:
1. For any number of days, a similar example consisting of 2n + 1 (instead of 9) states will yield n days where transactions occur, and a "crash" on day n + 1. So a "crash" can be preceded by arbitrarily many periods where trading occurs normally and nothing seems to change.
2. We assumed that the true state of the world is 1, that is, a "bad" state (a state in E) is the true state. In the end (day 5) both traders indeed want to sell. But exactly the same behavior would have resulted if the true state were 2 or 3, which are "good" states (not in E). Also note that if the true state is 4, both traders send a BUY order on day 4.
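The whole five-day story can be simulated. The slides' precise trading rule and the event E are displayed formulas that did not survive in this transcript, so the sketch below uses a reconstruction that reproduces every step of the example: E = {1, 5, 9} and the rule "SELL if the conditional probability of E is at least 1/3, otherwise BUY". After each day, the states at which the observed orders could not have been placed are removed from the commonly known state space.

```python
# Sketch: simulate the example. E and the trading rule are assumptions
# reconstructed to match the slides' narrative, not taken verbatim from them.
from fractions import Fraction as F

ALICE = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
BOB = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]
E = {1, 5, 9}                 # assumed set of "bad" states

def order(partition, omega, state):
    """SELL iff P(E | partition cell restricted to omega) >= 1/3."""
    cell = next(c for c in partition if state in c) & omega
    return "SELL" if F(len(cell & E), len(cell)) >= F(1, 3) else "BUY"

state, omega = 1, set(range(1, 10))
for day in range(1, 6):
    a, b = order(ALICE, omega, state), order(BOB, omega, state)
    print(f"Day {day}: states {sorted(omega)}  Alice {a}, Bob {b}")
    # the day's orders become common knowledge: keep only the states at
    # which both traders would have placed exactly these orders
    omega = {w for w in omega
             if order(ALICE, omega, w) == a and order(BOB, omega, w) == b}
```

Run at the true state 1, this prints a transaction (Alice SELL, Bob BUY) on days 1-4, with the public state space shrinking from {1,…,9} to {1,…,8}, {1,…,6}, {1,2,3,4} and finally {1,2,3}, at which point both traders SELL on day 5, exactly the sequence of slides 382-389.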

392. Exercise (1) Show that if the true state is 5 then a “crash” occurs in Day 3.(2) Show that if the true state is 4 then both traders send a BUY order in Day 4.(3) Show that if the true state is 7 (or 8) both traders send a BUY order in Day 2.392

393. Exercise
A "bubble" is the case where all traders send BUY orders while the state of the world is bad. Suppose Ω = {1, 2, …, 12}, both Alice and Bob have a uniform distribution over Ω, and E is the set of "bad" states. Show that if the true state is 4 (bad) then on day 3 a bubble bursts.

394. Solution
On day 1 Alice is informed of her partition cell and Bob of his. Hence Alice buys and Bob sells, and a transaction takes place. This is common knowledge, implying that the states 1, 2, 3, 10, 11, 12 did not occur (if either 1, 2 or 3 had occurred, Alice would have sold; if either 10, 11 or 12 had occurred, Bob would have bought).

395. So in Day 2NowAgain Alice buys and Bob sells. It becomes common knowledge that 9 did not occur (otherwise, Alice would have bought). 395

396. So on day 3, both Alice and Bob send a BUY order: a bubble bursts.