University of Kansas, Fall 2006
Philosophy 666: Rational Choice Theory
Ben Eggleston (eggleston@ku.edu)

Solutions to problems from Resnik’s Choices

section 5-1:

  1. games of perfect information?
    1. yes
    2. no
    3. yes
    4. no
    5. yes
  2. (Here’s one answer; other, contrary, ones can be good, too.) From the point of view of game theory, basketball is a game with two players (namely, the opposing teams). An actual basketball game is a single game, and the games are strictly competitive. Communication between the players (i.e., between the opposing teams) plays no role (though there is, of course, communication within each team).
  3. (Here’s one answer; other, contrary, ones can be good, too.) The players are the members of the political party selecting a nominee, the game is not strictly competitive, and the players can make agreements to distribute the payoffs.
  4. (Here’s one answer; other, contrary, ones can be good, too.) Prohibiting communication makes no difference to the outcome of a strictly competitive game among perfectly rational players because in a strictly competitive game, communication (if it were to occur) would not have any effect on perfectly rational players. Perfectly rational players would ignore their opponents’ announcements, threats, etc., and just focus on the payoffs.

section 5-2:

section 5-3:

  1. proof that if a game has a dominant row and column, that row and column determine an equilibrium pair for the game:
(In the proofs below, each numbered line states a claim, followed by its justification in brackets.)

1. Suppose row i is a dominant row and column j is a dominant column. [assumption for proving conditional]
2. Row i is a dominant row. [1]
3. vij is at least as large as any other value in column j. [2]
4. Column j is a dominant column. [1]
5. vij is at least as small as any other value in row i. [4]
6. (Ri, Cj) is an equilibrium pair. [3 and 5, minimax equilibrium test]
7. If row i is a dominant row and column j is a dominant column, then (Ri, Cj) is an equilibrium pair. [1–6]
  2. proof that every strategy pair that passes the equilibrium test is in equilibrium in the originally defined sense, and conversely:
1. Suppose (Ri, Cj) is a strategy pair that passes the equilibrium test. [assumption for proving conditional]
2. The payoff determined by (Ri, Cj) equals the minimal value of its row and the maximal value of its column. [1, definition of equilibrium test]
3. The payoff determined by (Ri, Cj) equals the minimal value of its row. [2]
4. The column player cannot do better by unilaterally changing his strategy. [3]
5. The payoff determined by (Ri, Cj) equals the maximal value of its column. [2]
6. The row player cannot do better by unilaterally changing his strategy. [5]
7. Neither player can do better by unilaterally changing his strategy. [4 and 6]
8. (Ri, Cj) is in equilibrium in the originally defined sense. [7, original definition of equilibrium]
9. If (Ri, Cj) is a strategy pair that passes the equilibrium test, then it is in equilibrium in the originally defined sense. [1–8]
10. Suppose (Ri, Cj) is in equilibrium in the originally defined sense. [assumption for proving conditional]
11. Neither player can do better by unilaterally changing his strategy. [10, original definition of equilibrium]
12. The row player cannot do better by unilaterally changing his strategy. [11]
13. The payoff determined by (Ri, Cj) equals the maximal value of its column. [12]
14. The column player cannot do better by unilaterally changing his strategy. [11]
15. The payoff determined by (Ri, Cj) equals the minimal value of its row. [14]
16. The payoff determined by (Ri, Cj) equals the minimal value of its row and the maximal value of its column. [13 and 15]
17. (Ri, Cj) passes the equilibrium test. [16, definition of equilibrium test]
18. If (Ri, Cj) is in equilibrium in the originally defined sense, then it passes the equilibrium test. [10–17]
19. If (Ri, Cj) is a strategy pair that passes the equilibrium test, then it is in equilibrium in the originally defined sense, and conversely. [9 and 18]
  3. an example of a game with an equilibrium pair whose row is dominated:
  C1 C2
R1 1 2
R2 1 3

(R1, C1) is an equilibrium pair, and yet row 1 is dominated.
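For readers who want to check such claims mechanically, here is a minimal Python sketch (not from Resnik or the handout; the helper names are ours). It applies the equilibrium test and the weak dominance check used in this answer to the matrix above.

def passes_equilibrium_test(m, i, j):
    # Resnik's equilibrium test for a zero-sum game given by the row player's payoffs:
    # the (i, j) entry must be a minimum of its row and a maximum of its column.
    column = [row[j] for row in m]
    return m[i][j] == min(m[i]) and m[i][j] == max(column)

def row_is_dominated(m, i):
    # row i is dominated if some other row does at least as well against every column
    return any(all(m[k][j] >= m[i][j] for j in range(len(m[i])))
               for k in range(len(m)) if k != i)

game = [[1, 2],
        [1, 3]]   # rows R1, R2; columns C1, C2

print(passes_equilibrium_test(game, 0, 0))  # True: (R1, C1) is an equilibrium pair
print(row_is_dominated(game, 0))            # True: row 1 is dominated (by row 2)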

  4. No, R cannot be half of an equilibrium pair for G.
    1. One way to see this is via the original definition of equilibrium: since every entry in row S is greater than its correspondent in R, the row player can always do better—regardless of what column he’s in—by unilaterally changing his strategy from R to S. The original definition of equilibrium thus means that R cannot be part of an equilibrium pair.
    2. Another way to see this is via the equilibrium test: since every entry in row S is greater than its correspondent in R, no entry in row R is the maximal value of its column. The equilibrium test thus means that R cannot be part of an equilibrium pair.

section 5-3a:

top of p. 135—tables 5-16 and 5-17:

(Instead of verifying that the mentioned strategy pair is the equilibrium strategy pair, we’ll just verify that the mentioned strategy pair is an equilibrium strategy pair.)

table 5-16:

To verify that [(½ R1, ½ R2), (½ C1, ½ C2)] is an equilibrium pair, we need to verify that neither the row player nor the column player can do better by unilaterally changing his strategy.

First consider the game from the row player’s point of view. Consider, in particular, the row player’s pure strategies, R1 and R2. If the column player is playing strategy (½ C1, ½ C2), then the expected utility of R1 is (½)(1) + (½)(–1), or ½ – ½, or 0. And—still assuming that the column player is playing strategy (½ C1, ½ C2)—the expected utility of R2 is (½)(–1) + (½)(1), or –½ + ½, or 0. Because the expected utilities of R1 and R2 are both 0, the expected utilities of all their mixtures are 0, too, and so any mixture of R1 and R2 is as good (for the row player) as any other. Thus, the strategy (½ R1, ½ R2) is as good (for the row player) as any other. So, the row player cannot do better by unilaterally changing his strategy.

Now consider the game from the column player’s point of view. Consider, in particular, the column player’s pure strategies, C1 and C2. If the row player is playing strategy (½ R1, ½ R2), then the expected utility of C1 is (½)(1) + (½)(–1), or ½ – ½, or 0. And—still assuming that the row player is playing strategy (½ R1, ½ R2)—the expected utility of C2 is (½)(–1) + (½)(1), or –½ + ½, or 0. Because the expected utilities of C1 and C2 are both 0, the expected utilities of all their mixtures are 0, too, and so any mixture of C1 and C2 is as good (for the column player) as any other. Thus, the strategy (½ C1, ½ C2) is as good (for the column player) as any other. So, the column player cannot do better by unilaterally changing her strategy.

Since neither the row player nor the column player can do better by unilaterally changing his or her strategy, [(½ R1, ½ R2), (½ C1, ½ C2)] is an equilibrium pair.

To compute the value of the game, we find the expected utility of playing [(½ R1, ½ R2), (½ C1, ½ C2)] by computing the weighted average of the values of the four possible outcomes of the game: (½)(½)(1) + (½)(½)(–1) + (½)(½)(–1) + (½)(½)(1) = ¼ – ¼ – ¼ + ¼ = 0.

table 5-17:

To verify that [(½ R1, ½ R2), (½ C1, ½ C2)] is an equilibrium pair, we need to verify that neither the row player nor the column player can do better by unilaterally changing his strategy.

First consider the game from the row player’s point of view. Consider, in particular, the row player’s pure strategies, R1 and R2. If the column player is playing strategy (½ C1, ½ C2), then the expected utility of R1 is (½)(22) + (½)(–18), or 11 – 9, or 2. And—still assuming that the column player is playing strategy (½ C1, ½ C2)—the expected utility of R2 is (½)(–18) + (½)(22), or –9 + 11, or 2. Because the expected utilities of R1 and R2 are both 2, the expected utilities of all their mixtures are 2, too, and so any mixture of R1 and R2 is as good (for the row player) as any other. Thus, the strategy (½ R1, ½ R2) is as good (for the row player) as any other. So, the row player cannot do better by unilaterally changing his strategy.

Now consider the game from the column player’s point of view. Consider, in particular, the column player’s pure strategies, C1 and C2. If the row player is playing strategy (½ R1, ½ R2), then the expected utility of C1 is (½)(22) + (½)(–18), or 11 – 9, or 2. And—still assuming that the row player is playing strategy (½ R1, ½ R2)—the expected utility of C2 is (½)(–18) + (½)(22), or –9 + 11, or 2. Because the expected utilities of C1 and C2 are both 2, the expected utilities of all their mixtures are 2, too, and so any mixture of C1 and C2 is as good (for the column player) as any other. Thus, the strategy (½ C1, ½ C2) is as good (for the column player) as any other. So, the column player cannot do better by unilaterally changing her strategy.

Since neither the row player nor the column player can do better by unilaterally changing his or her strategy, [(½ R1, ½ R2), (½ C1, ½ C2)] is an equilibrium pair.

To compute the value of the game, we find the expected utility of playing [(½ R1, ½ R2), (½ C1, ½ C2)] by computing the weighted average of the values of the four possible outcomes of the game: (½)(½)(22) + (½)(½)(–18) + (½)(½)(–18) + (½)(½)(22) = 22/4 – 18/4 – 18/4 + 22/4 = 8/4 = 2.
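As a cross-check, here is a small computational sketch of the two verifications above. It is written in Python (not part of Resnik’s text); the function names are ours, and each matrix holds the row player’s payoffs for tables 5-16 and 5-17.

from fractions import Fraction

def pure_strategy_eus(row_payoffs, opponent_mix):
    # row player's expected utility of each pure strategy against the opponent's mix
    return [sum(w * v for w, v in zip(opponent_mix, row)) for row in row_payoffs]

def game_value(m, row_mix, col_mix):
    # expected utility of the mixed-strategy pair (the weighted average of the outcomes)
    return sum(row_mix[i] * col_mix[j] * m[i][j]
               for i in range(2) for j in range(2))

half = Fraction(1, 2)
for m in ([[1, -1], [-1, 1]],        # table 5-16
          [[22, -18], [-18, 22]]):   # table 5-17
    columns = [list(c) for c in zip(*m)]   # the matrix read from the column player's side
    print(pure_strategy_eus(m, [half, half]),         # EU of R1 and R2: equal, so no row deviation helps
          pure_strategy_eus(columns, [half, half]),   # payoff under C1 and C2: equal, so no column deviation helps
          game_value(m, [half, half], [half, half]))  # 0 for table 5-16, 2 for table 5-17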

problems on p. 137:

  1. If the column player has k pure strategies, the general form of his mixed strategies is (x1 C1, x2 C2, . . . , xk Ck), where the xi are nonnegative and x1 + x2 + . . . + xk = 1. (See Resnik, top of p. 134.)
  2. Although we normally take the instruction ‘Show’ to call for a proof, in this case we’ll just provide the relevant calculations.

To verify that [(½ R1, ½ R2), (½ C1, ½ C2)] is an equilibrium pair, we need to verify that neither the row player nor the column player can do better by unilaterally changing his strategy.

First consider the game from the row player’s point of view. Consider, in particular, the row player’s pure strategies, R1 and R2. If the column player is playing strategy (½ C1, ½ C2), then the expected utility of R1 is (½)(a) + (½)(–b), or ½(a – b). And—still assuming that the column player is playing strategy (½ C1, ½ C2)—the expected utility of R2 is (½)(–b) + (½)(a), or ½(–b + a), or ½(a – b). Because the expected utilities of R1 and R2 are both ½(a – b), the expected utilities of all their mixtures are ½(a – b), too, and so any mixture of R1 and R2 is as good (for the row player) as any other. Thus, the strategy (½ R1, ½ R2) is as good (for the row player) as any other. So, the row player cannot do better by unilaterally changing his strategy.

Now consider the game from the column player’s point of view. Consider, in particular, the column player’s pure strategies, C1 and C2. If the row player is playing strategy (½ R1, ½ R2), then the expected utility of C1 is (½)(a) + (½)(–b), or ½(a – b). And—still assuming that the row player is playing strategy (½ R1, ½ R2)—the expected utility of C2 is (½)(–b) + (½)(a), or ½(–b + a), or ½(a – b). Because the expected utilities of C1 and C2 are both ½(a – b), the expected utilities of all their mixtures are ½(a – b), too, and so any mixture of C1 and C2 is as good (for the column player) as any other. Thus, the strategy (½ C1, ½ C2) is as good (for the column player) as any other. So, the column player cannot do better by unilaterally changing her strategy.

Since neither the row player nor the column player can do better by unilaterally changing his or her strategy, [(½ R1, ½ R2), (½ C1, ½ C2)] is an equilibrium pair.
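Here is a small symbolic version of the same check, assuming the sympy library is available (a sketch, ours; the matrix is taken to be a on the diagonal and –b off the diagonal, as the calculations above presuppose).

from sympy import Rational, simplify, symbols

a, b = symbols("a b")
half = Rational(1, 2)

eu_R1 = half * a + half * (-b)   # EU of R1 against (1/2 C1, 1/2 C2)
eu_R2 = half * (-b) + half * a   # EU of R2 against (1/2 C1, 1/2 C2)

print(simplify(eu_R1 - (a - b) / 2), simplify(eu_R2 - (a - b) / 2))  # 0 0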

  3. (skip)

section 5-3b:

  1. We’ll disregard the instruction ‘Then solve the two games.’ Here’s one right answer to the problem that remains:
  C1 C2
R1 2 1
R2 1 2
  2. To show that EU’(p, q) = EU(p, q) + k, we’ll derive the latter from the former (a symbolic check of this identity appears just after this list):
    EU’(p, q)
    = (a + k)pq + (b + k)p(1 – q) + (c + k)(1 – p)q + (d + k)(1 – p)(1 – q)
    = apq + kpq + bp(1 – q) + kp(1 – q) + c(1 – p)q + k(1 – p)q + d(1 – p)(1 – q) + k(1 – p)(1 – q)
    = apq + bp(1 – q) + c(1 – p)q + d(1 – p)(1 – q) + kpq + kp(1 – q) + k(1 – p)q + k(1 – p)(1 – q)
    = EU(p, q) + k[pq + p(1 – q) + (1 – p)q + (1 – p)(1 – q)]
    = EU(p, q) + k[p(q + 1 – q) + (1 – p)(q + 1 – q)]
    = EU(p, q) + k[p(1) + (1 – p)(1)]
    = EU(p, q) + k[p + 1 – p]
    = EU(p, q) + k(1)
    = EU(p, q) + k
  3. (skip)
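Here is the symbolic check of the identity EU’(p, q) = EU(p, q) + k mentioned above, assuming the sympy library is available (a sketch, ours):

from sympy import simplify, symbols

a, b, c, d, k, p, q = symbols("a b c d k p q")

def eu(w, x, y, z):
    # expected utility of the strategy pair (p, q), with payoffs listed in the
    # order used in the derivation above
    return w*p*q + x*p*(1 - q) + y*(1 - p)*q + z*(1 - p)*(1 - q)

print(simplify(eu(a + k, b + k, c + k, d + k) - (eu(a, b, c, d) + k)))  # 0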

section 5-3c:

  1. two tables:

table 5-25:

EU(C1) = EU(C2)
p(3) + (1 – p)(–7) = p(1) + (1 – p)(4)
3p – 7 + 7p = p + 4 – 4p
10p – 7 = –3p + 4
13p = 11
p = 11/13

EU(R1) = EU(R2)
q(3) + (1 – q)(1) = q(–7) + (1 – q)(4)
3q + 1 – q = –7q + 4 – 4q
2q + 1 = –11q + 4
13q = 3
q = 3/13

EU(game)
= (11/13)(3/13)(3) + (11/13)(1 – 3/13)(1) + (1 – 11/13)(3/13)(–7) + (1 – 11/13)(1 – 3/13)(4)
= (11/13)(3/13)(3) + (11/13)(10/13)(1) + (2/13)(3/13)(–7) + (2/13)(10/13)(4)
= (33/169)(3) + (110/169)(1) + (6/169)(–7) + (20/169)(4)
= 99/169 + 110/169 – 42/169 + 80/169
= 247/169
= 19/13

table 5-26:

EU(C1) = EU(C2)
p(4) + (1 – p)(5) = p(20) + (1 – p)(–3)
4p + 5 – 5p = 20p – 3 + 3p
–p + 5 = 23p – 3
–24p = –8
p = 8/24
p = 1/3

EU(R1) = EU(R2)
q(4) + (1 – q)(20) = q(5) + (1 – q)(–3)
4q + 20 – 20q = 5q – 3 + 3q
–16q + 20 = 8q – 3
–24q = –23
q = 23/24

EU(game)
= (1/3)(23/24)(4) + (1/3)(1 – 23/24)(20) + (1 – 1/3)(23/24)(5) + (1 – 1/3)(1 – 23/24)(–3)
= (1/3)(23/24)(4) + (1/3)(1/24)(20) + (2/3)(23/24)(5) + (2/3)(1/24)(–3)
= (23/72)(4) + (1/72)(20) + (46/72)(5) + (2/72)(–3)
= 92/72 + 20/72 + 230/72 – 6/72
= 336/72
= 14/3
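The same calculations can be packaged as a small Python routine (ours, not Resnik’s): equate the opponent’s expected utilities to find each player’s equalizing mixture, then take the weighted average of the outcomes for the value.

from fractions import Fraction

def solve_2x2(m):
    # m = [[a, b], [c, d]] holds the row player's payoffs; the game is assumed to
    # need mixed strategies. Returns (p, q, value), where p is the probability of
    # R1 and q is the probability of C1.
    (a, b), (c, d) = m
    p = Fraction(d - c, (a - c) + (d - b))   # from EU(C1) = EU(C2)
    q = Fraction(d - b, (a - b) + (d - c))   # from EU(R1) = EU(R2)
    value = p*q*a + p*(1 - q)*b + (1 - p)*q*c + (1 - p)*(1 - q)*d
    return p, q, value

print(solve_2x2([[3, 1], [-7, 4]]))    # table 5-25: p = 11/13, q = 3/13, value = 19/13
print(solve_2x2([[4, 20], [5, -3]]))   # table 5-26: p = 1/3, q = 23/24, value = 14/3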

  2. (skip)

section 5-3d:

p. 141:

To verify that [(2/3 mountain, 1/3 plain), (4/9 mountain, 5/9 plain)] is an equilibrium pair, we need to verify that neither the row player nor the column player can do better by unilaterally changing his strategy.

First consider the game from the row player’s point of view. Consider, in particular, the row player’s pure strategies, mountain and plain. If the column player is playing strategy (4/9 mountain, 5/9 plain), then the expected utility of mountain (for the row player) is (4/9)(–50) + (5/9)(100), or –200/9 + 500/9, or 300/9, or 100/3. And—still assuming that the column player is playing strategy (4/9 mountain, 5/9 plain)—the expected utility of plain (for the row player) is (4/9)(200) + (5/9)(–100), or 800/9 – 500/9, or 300/9, or 100/3. Because the expected utilities of mountain and plain are both 100/3, the expected utilities of all their mixtures are 100/3, too, and so any mixture of mountain and plain is as good (for the row player) as any other. Thus, the strategy (2/3 mountain, 1/3 plain) is as good (for the row player) as any other. So, the row player cannot do better by unilaterally changing his strategy.

Now consider the game from the column player’s point of view. Consider, in particular, the column player’s pure strategies, mountain and plain. If the row player is playing strategy (2/3 mountain, 1/3 plain), then the expected utility of mountain (for the column player) is (2/3)(–50) + (1/3)(200), or –100/3 + 200/3, or 100/3. And—still assuming that the row player is playing strategy (2/3 mountain, 1/3 plain)—the expected utility of plain (for the column player) is (2/3)(100) + (1/3)(–100), or 200/3 – 100/3, or 100/3. Because the expected utilities of mountain and plain are both 100/3, the expected utilities of all their mixtures are 100/3, too, and so any mixture of mountain and plain is as good (for the column player) as any other. Thus, the strategy (4/9 mountain, 5/9 plain) is as good (for the column player) as any other. So, the column player cannot do better by unilaterally changing her strategy.

Since neither the row player nor the column player can do better by unilaterally changing his or her strategy, [(2/3 mountain, 1/3 plain), (4/9 mountain, 5/9 plain)] is an equilibrium pair.

We also have to find the value of this game. We do that as follows:
EU(game)
= (2/3)(4/9)(–50) + (2/3)(1 – 4/9)(100) + (1 – 2/3)(4/9)(200) + (1 – 2/3)(1 – 4/9)(–100)
= (2/3)(4/9)(–50) + (2/3)(5/9)(100) + (1/3)(4/9)(200) + (1/3)(5/9)(–100)
= (8/27)(–50) + (10/27)(100) + (4/27)(200) + (5/27)(–100)
= –400/27 + 1000/27 + 800/27 – 500/27
= 900/27
= 100/3
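Here is the corresponding numerical check in Python (a sketch, ours, using exact fractions; the matrix holds the row player’s payoffs, with mountain listed first).

from fractions import Fraction as F

m = [[-50, 100],     # row player's payoffs; rows and columns are (mountain, plain)
     [200, -100]]
row_mix = [F(2, 3), F(1, 3)]   # (2/3 mountain, 1/3 plain)
col_mix = [F(4, 9), F(5, 9)]   # (4/9 mountain, 5/9 plain)

# row player's EU of each pure strategy against the column mix (both 100/3):
print([sum(c * v for c, v in zip(col_mix, row)) for row in m])
# payoff under each of the column player's pure strategies, given the row mix (both 100/3):
print([sum(row_mix[i] * m[i][j] for i in range(2)) for j in range(2)])
# value of the game (100/3):
print(sum(row_mix[i] * col_mix[j] * m[i][j] for i in range(2) for j in range(2)))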

problems on pp. 143–144:

  1. Sure—if one uses a mixed strategy, and faithfully employs it using a chance mechanism, then irrational biases have no way to creep in.
  2. Yes, because even if there is no threat of spies, an opponent might be able to predict a player’s choices by assuming that the player aims to maximize his expected utility. A mixed strategy makes one’s behavior less predictable.
  3. No—you should proceed on the basis of the best information you have. For example, if you believe your opponent will assume that you are equally likely to play either of your rows (and then will maximize her expected utility on the basis of that assumption), and that allows you to anticipate that she will play a particular pure strategy, then you should assume that she will play that pure strategy, even though it may be ill-advised for her to do so, from the point of view of game theory.

section 5-4a:

  1. (skip)
  2. If the two players coordinated their weekend by flipping a fair coin, then they would essentially be choosing to play a lottery with a 1/2 chance of (2, 1) and a 1/2 chance of (1, 2). For the row player, this amounts to a lottery with a 1/2 chance of 2 and a 1/2 chance of 1, obviously yielding an expected utility of 3/2. Analogously, for the column player, the joint lottery amounts to a lottery with a 1/2 chance of 1 and a 1/2 chance of 2, obviously yielding an expected utility of 3/2. So, this lottery has a value of 3/2 for each of them.
  3. proof that if Able plays (1/2 R1, 1/2 R2), then Baker can do better than 1/2 by playing C2:
1. Suppose Able plays (1/2 R1, 1/2 R2). [assumption for proving conditional]
2. The expected utility, for Baker, of playing C2 is (1/2)(0) + (1/2)(2). [1, table 5-29]
3. The expected utility, for Baker, of playing C2 is 1. [2, simplified]
4. 1 is greater than 1/2. [math]
5. The expected utility, for Baker, of playing C2 is greater than 1/2. [3 and 4]
6. If Able plays (1/2 R1, 1/2 R2), then Baker can do better than 1/2 by playing C2. [1–5]
  4. representation of power-failure situation as a clash of wills:
  do not call call
do not call –1, –1 2, 1
call 1, 2 0, 0
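A small Python sketch (ours) can confirm the clash-of-wills structure of this matrix: it lists every pure strategy pair from which neither player gains by deviating unilaterally.

def pure_equilibria(g):
    # g[i][j] = (row player's utility, column player's utility)
    n, m = len(g), len(g[0])
    return [(i, j) for i in range(n) for j in range(m)
            if all(g[k][j][0] <= g[i][j][0] for k in range(n))
            and all(g[i][k][1] <= g[i][j][1] for k in range(m))]

# strategies, in order: do not call, call
power_failure = [[(-1, -1), (2, 1)],
                 [(1, 2), (0, 0)]]

print(pure_equilibria(power_failure))
# [(0, 1), (1, 0)]: two equilibria, each favoring a different player (a clash of wills)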

section 5-4c:

  1. proof that if the column player’s sentences are as before but the row player is sentenced to fifteen years if both confess, five years if he does but the column player does not, sixteen years if he does not but the column player does, and six years if neither does, then the prisoner’s dilemma arises:
  confess do not
confess –15, –10 –5, –25
do not –16, –1 –6, –2


1. If the row player confesses, then confessing is better for the column player. [table, row 1]
2. If the row player does not confess, then confessing is better for the column player. [table, row 2]
3. Confessing is the dominant strategy for the column player. [1 and 2]
4. If the column player confesses, then confessing is better for the row player. [table, column 1]
5. If the column player does not confess, then confessing is better for the row player. [table, column 2]
6. Confessing is the dominant strategy for the row player. [4 and 5]
7. (confess, confess) is an equilibrium pair. [3 and 6]
8. The payoff, for each player, of (confess, confess) is worse than the payoff of (do not, do not). [table, cells for (confess, confess) and (do not, do not)]
9. The game is a prisoner’s dilemma. [7 and 8]
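The dominance and payoff comparisons cited in the proof can be checked directly; here is a minimal Python sketch (ours), with the sentences entered as negative utilities.

row = [[-15, -5],    # row player's utilities: rows and columns are (confess, do not)
       [-16, -6]]
col = [[-10, -25],   # column player's utilities
       [-1, -2]]

confess, do_not = 0, 1

row_dominance = all(row[confess][j] > row[do_not][j] for j in (0, 1))   # confessing dominates for row
col_dominance = all(col[i][confess] > col[i][do_not] for i in (0, 1))   # confessing dominates for column
worse_for_both = (row[confess][confess] < row[do_not][do_not]
                  and col[confess][confess] < col[do_not][do_not])      # (confess, confess) worse than (do not, do not)

print(row_dominance, col_dominance, worse_for_both)  # True True True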
  2. Here is how prisoner’s-dilemma games can be stated in nonnumerical terms: Two players each have two strategies, W and not-W, such that, for each player, W dominates not-W, and yet, for each player, the payoff of (W, W) is worse than the payoff of (not-W, not-W).
  3. final store example:

The situation can be set up as follows:

  $2.00 $1.90 $1.80 $1.70 $1.60
$2.00 $25, $25 $0, $40 $0, $30 $0, $20 $0, $10
$1.90 $40, $0 $20, $20 $0, $30 $0, $20 $0, $10
$1.80 $30, $0 $30, $0 $15, $15 $0, $20 $0, $10
$1.70 $20, $0 $20, $0 $20, $0 $10, $10 $0, $10
$1.60 $10, $0 $10, $0 $10, $0 $10, $0 $5, $5

Consider the following dominance considerations:

  1. For each player, $2.00 is dominated (by $1.90, and also by $1.80, for that matter). So neither player will play $2.00.
  2. For each player, if the row and column for $2.00 are ruled out, $1.90 is dominated (by $1.80, and indeed by $1.70). So neither player will play $1.90.
  3. For each player, if the rows and columns for $2.00 and $1.90 are ruled out, $1.80 is dominated (by $1.70). So neither player will play $1.80.
  4. For each player, if the rows and columns for $2.00, $1.90, and $1.80 are ruled out, $1.70 is dominated (by $1.60). So neither player will play $1.70.
  5. For each player, if the rows and columns for $2.00, $1.90, $1.80, and $1.70 are ruled out, then all that is left is $1.60. That leads them to split the lowest total profit.
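These five steps are an instance of iterated elimination of (weakly) dominated strategies. Here is a Python sketch of the procedure (ours, not Resnik’s), using the symmetric profit matrix above; by symmetry, one matrix serves for both players.

prices = ["$2.00", "$1.90", "$1.80", "$1.70", "$1.60"]
payoff = [[25, 0, 0, 0, 0],      # my profit at (my price, rival's price)
          [40, 20, 0, 0, 0],
          [30, 30, 15, 0, 0],
          [20, 20, 20, 10, 0],
          [10, 10, 10, 10, 5]]

def weakly_dominated(i, live):
    # strategy i is weakly dominated by some other live strategy, judged only
    # against the rival's live strategies
    return any(all(payoff[k][j] >= payoff[i][j] for j in live) and
               any(payoff[k][j] > payoff[i][j] for j in live)
               for k in live if k != i)

live = list(range(5))
while True:
    doomed = [i for i in live if weakly_dominated(i, live)]
    if not doomed:
        break
    live = [i for i in live if i not in doomed]   # eliminate for both players, by symmetry
    print("eliminated", [prices[i] for i in doomed], "-> remaining", [prices[i] for i in live])
# The prices are eliminated in the order $2.00, $1.90, $1.80, $1.70, leaving only $1.60.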
  4. chicken:

First, we have this descriptive matrix:

  hold fast veer
hold fast dead hero, dead hero live hero, chicken
veer chicken, live hero no change, no change

Second, we have the following utility assignments: live hero, 1; no change, 0; dead hero, –1; chicken, –2.

Combining these two pieces of information, we have this matrix:

  hold fast veer
hold fast –1, –1 1, –2
veer –2, 1 0, 0

For each player, hold fast dominates, and so (hold fast, hold fast) is the equilibrium pair, and yet the payoff of (hold fast, hold fast) is worse, for each player, than the payoff of (veer, veer). So this game is a variant of the prisoner’s dilemma.
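The same kind of check as before (a Python sketch, ours) confirms the claims about this matrix.

row = [[-1, 1],    # row player's utilities: rows and columns are (hold fast, veer)
       [-2, 0]]
col = [[-1, -2],   # column player's utilities
       [1, 0]]

print(all(row[0][j] > row[1][j] for j in (0, 1)),       # hold fast dominates for the row player
      all(col[i][0] > col[i][1] for i in (0, 1)),       # hold fast dominates for the column player
      row[0][0] < row[1][1] and col[0][0] < col[1][1])  # (hold fast, hold fast) is worse for both than (veer, veer)
# True True True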

section 5-4e:

  1. Yes, those outcomes are Pareto optimal.
  2. Yes, there can be Pareto-optimal outcomes in a zero-sum game. In fact, all of the outcomes in a zero-sum game are Pareto optimal.
  3. Even though the problem says ‘Show’, we’ll just provide an explanation rather than a proof.

First, we have this descriptive matrix:

  move sack stay home
move sack mule is fed by both of us, mule is fed by both of us mule is fed by me, mule is fed by the other guy
stay home mule is fed by the other guy, mule is fed by me mule is not fed, mule is not fed

Second, we have the following utility assignments: mule is fed by the other guy, 2; mule is fed by both of us, 1; mule is fed by me, 0; mule is not fed, –1.

Combining these two pieces of information, we have this matrix:

  move sack stay home
move sack 1, 1 0, 2
stay home 2, 0 –1, –1

There are two equilibrium strategy pairs—(move sack, stay home) and (stay home, move sack)—but one is better for one player, and the other is better for the other player. So we have a clash of wills.

  4. Again, we’ll provide an explanation rather than a formal proof.

Again, we have this descriptive matrix:

  move sack stay home
move sack mule is fed by both of us, mule is fed by both of us mule is fed by me, mule is fed by the other guy
stay home mule is fed by the other guy, mule is fed by me mule is not fed, mule is not fed

Second, we have the following utility assignments: mule is fed by the other guy, 2; mule is fed by both of us, 1; mule is not fed, 0; mule is fed by me, –1.

Combining these two pieces of information, we have this matrix:

  move sack stay home
move sack 1, 1 –1, 2
stay home 2, –1 0, 0

For each player, there is a dominant strategy—stay home—but the outcome of (stay home, stay home) is worse, for each, than the outcome of (move sack, move sack). So we have a prisoner’s dilemma.
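A short Python sketch (ours) brings out the structural difference between the two mule games: the problem-3 matrix has two pure equilibria favoring different players, while the problem-4 matrix has a single equilibrium that is worse for both than joint cooperation.

def pure_equilibria(g):
    # g[i][j] = (row player's utility, column player's utility)
    n = len(g)
    return [(i, j) for i in range(n) for j in range(n)
            if all(g[k][j][0] <= g[i][j][0] for k in range(n))
            and all(g[i][k][1] <= g[i][j][1] for k in range(n))]

# strategies, in order: move sack, stay home
clash_of_wills = [[(1, 1), (0, 2)],
                  [(2, 0), (-1, -1)]]
prisoners_dilemma = [[(1, 1), (-1, 2)],
                     [(2, -1), (0, 0)]]

print(pure_equilibria(clash_of_wills))      # [(0, 1), (1, 0)]
print(pure_equilibria(prisoners_dilemma))   # [(1, 1)]: (stay home, stay home)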

  5. expected utilities of remaining a cheater and of becoming a cooperator:

EU(remaining a cheater)
= (1 – p)u + p[q(1) + (1 – q)u]
= u – pu + p[q + u – qu]
= u – pu + pq + pu – pqu
= u + pq – pqu
= u + pq(1 – u)

EU(becoming a cooperator)
= (1 – p)[(1 – q)u + q(0)] + p[r(u’) + (1 – r)u]
= (1 – p)[u – qu] + p[ru’ + u – ru]
= u – qu – pu + pqu + pru’ + pu – pru
= u – qu + pqu + pru’ – pru
= u – qu(1 – p) + pr(u’ – u)
= u + pr(u’ – u) – qu(1 – p)
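Here is a symbolic check of both simplifications, assuming the sympy library is available (a sketch, ours; u2 stands in for u’).

from sympy import simplify, symbols

p, q, r, u, u2 = symbols("p q r u u2")

eu_cheater = (1 - p)*u + p*(q*1 + (1 - q)*u)
eu_cooperator = (1 - p)*((1 - q)*u + q*0) + p*(r*u2 + (1 - r)*u)

print(simplify(eu_cheater - (u + p*q*(1 - u))))                    # 0
print(simplify(eu_cooperator - (u + p*r*(u2 - u) - q*u*(1 - p))))  # 0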

  6. The expected gain from remaining a cheater is pq(1 – u), and the expected gain from becoming a cooperator is pr(u’ – u) – qu(1 – p). So, if we have the following values

    p = 1/2
    q = 3/4
    r = 1/4
    u = 1/4
    u’ = 3/4

    then the expected gain from remaining a cheater is

    (1/2)(3/4)(1 – 1/4)
    = (3/8)(3/4) = 9/32

    and the expected gain from becoming a cooperator is

    (1/2)(1/4)(3/4 – 1/4) – (3/4)(1/4)(1 – 1/2)
    = (1/8)(2/4) – (3/16)(1/2)
    = 2/32 – 3/32
    = –1/32

    Since the expected gain from remaining a cheater is more than the expected gain from becoming a cooperator (which is actually negative), it is more rational to remain a cheater. (This is not surprising, considering that there is a fairly high value for q (the probability that a cheater can exploit a cooperator) and a low value for r (the probability of being able to cooperate with a fellow cooperator).)

    Note, by the way, that the foregoing approach just compares the utility gains, relative to the status quo, of entering the specified scenario as either a cheater or a cooperator. You can also compare the total utilities (not just utility gains) to be expected from either course of action; if you did that, then you would just add the value of u to each of the two values. Since u = 1/4, the total utility of remaining a cheater is 9/32 + 1/4, or 9/32 + 8/32, or 17/32; and the total utility of becoming a cooperator is –1/32 + 1/4, or –1/32 + 8/32, or 7/32.
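The arithmetic can be double-checked with exact fractions in Python (a sketch, ours):

from fractions import Fraction as F

p, q, r, u, u_prime = F(1, 2), F(3, 4), F(1, 4), F(1, 4), F(3, 4)

gain_cheater = p * q * (1 - u)                              # 9/32
gain_cooperator = p * r * (u_prime - u) - q * u * (1 - p)   # -1/32

print(gain_cheater, gain_cooperator)           # 9/32 -1/32
print(gain_cheater + u, gain_cooperator + u)   # total utilities: 17/32 7/32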

section 5-5:

  1. Your graph would consist of the triangle whose vertices are (0, 0), (2, 1), and (1, 2). The triangle would be filled in.
  2. Your graph would consist of the quadrilateral whose vertices are (–25, –1), (–2, –2), (–1, –25), and (–10, –10). The quadrilateral would be filled in. It would look like a boomerang pointing to the northeast.
  3. Skipping the question in parentheses, we just have to prove that the set of Pareto optimal points constitutes the northeastern boundary of the polygon. Instead of providing a formal proof, we’ll let the following considerations suffice.
    1. If a Pareto-optimal point were not on the northeastern boundary, then it would be to the south or west of the boundary. If it were to the south, then the column player could do better if the point were moved to the north—which, since it would not be a move to the west, would not harm the row player. If it were to the west of the boundary, then the row player could do better if the point were moved to the east—which, since it would not be a move to the south, would not harm the column player. Either way, the point could be moved within the polygon in such a way as to benefit one player without harming the other. And yet then it would not really be Pareto-optimal.
    2. If a point on the northeastern boundary were not Pareto optimal, then it could be moved in such a way as to benefit the row player without harming the column player and/or benefit the column player without harming the row player. This means that it could be moved to the east without moving south, or to the north without moving west. Either way, it would move to the northeast (understood to include anything between due north and due east). And yet, if this were possible, then the northeastern boundary would not really be the northeastern boundary after all.
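As a finite illustration of Pareto optimality (a Python sketch, ours), here is a check of which of the vertices from problems 1 and 2 are Pareto optimal relative to one another; the filled regions themselves require the geometric argument just given.

def pareto_optimal(points):
    # keep the points not weakly dominated by any other listed point
    def dominated(p):
        return any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)
    return [p for p in points if not dominated(p)]

print(pareto_optimal([(0, 0), (2, 1), (1, 2)]))
# [(2, 1), (1, 2)]: the northeastern vertices
print(pareto_optimal([(-25, -1), (-2, -2), (-1, -25), (-10, -10)]))
# [(-25, -1), (-2, -2), (-1, -25)]: (-10, -10) is dominated by (-2, -2)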

section 6-1:

  1. No, it does not, because an SWF yields a social ordering, not just a first choice. An ordering includes all of the options, rather than just designating one as the best. (See condition O4 on p. 23.)
  2. rolling the die
    1. No, this method will not necessarily yield an SWF, because it will not necessarily yield a social ordering, because there is no guarantee that the transitivity condition will be satisfied. (One person might rank a above b, another might rank b above c, and a third might rank c above a.)
    2. Yes, this method could yield a dictatorial SWF, because this method could yield a social ordering matching the ordering of any of the individuals.

section 6-2a:

  1. proof that every dictatorial SWF must satisfy condition P:
1. Suppose some method—call it M—is dictatorial. [assumption for proving conditional]
2. There is some citizen—call her C—whose ordering M always yields as the social ranking (regardless of the other aspects of the profile containing C’s ordering). [1, definition of dictatorship]
3. Suppose that M does not satisfy the Pareto condition. [assumption for deriving a contradiction]
4. There exists some profile—call it P—in which every citizen ranks one alternative—call it x—above another—call it y—and for which M does not yield a social ordering that ranks x above y. [3, definition of Pareto condition]
5. In P, C ranks x above y. [4]
6. For P, M yields a social ordering that ranks x above y. [5 and 2]
7. We have a contradiction. [4 and 6]
8. We can reject the assumption that M does not satisfy the Pareto condition. [3 and 7]
9. M satisfies the Pareto condition. [8]
10. If some method is dictatorial, then it satisfies the Pareto condition. [1–9]
  2. (skip)
  3. SWF generating a social ordering based on the sum of the values assigned to alternatives by (or for) citizens:
    1. Would this SWF necessarily violate condition I? Yes, as shown in the following example.
      1. Consider the following profile.
        1. Jack ranks carrots above spinach by a score of 100 to 0.
        2. Janet also ranks carrots above spinach by a score of 100 to 0.
        3. Chrissy ranks spinach above carrots by a score of 500 to 0.
        4. Then the social ordering would rank spinach above carrots (500 to 200).
      2. Now consider the following profile.
        1. As before, Jack and Janet rank carrots above spinach by a score of 100 to 0 (each).
        2. Chrissy now ranks spinach above carrots by a smaller margin, 50 to 0.
        3. Then the social ordering would rank carrots above spinach (200 to 50).
      3. This is a violation of condition I, since each citizen ranks carrots and spinach the same way in both profiles, and yet the social ordering does not rank them the same way in both. (A numerical check of these two profiles appears at the end of this section.)
    2. Would this SWF necessarily violate condition D? No, because every individual is treated equally by this SWF. No person can be singled out to be a dictator, or for any other purpose.
    3. Would this SWF necessarily violate condition CS? No, because for every alternative x and every alternative y, there is some set of numbers that can be assigned to the alternatives, by (or for) the citizens, such that the sum of the scores for x will exceed the sum of the scores for y. And that would ensure that, in the social ordering, x is ranked higher than y.
  4. proof that the following implies condition I: if P1 and P2 are two profiles and S is any subset of the set of alternatives, then if the citizens’ relative rankings of the members of S are the same for P1 and P2, the SWF places the members of S in the same relative positions in both P1 and P2:

    (We will assume that the last phrase of this means the same as ‘places the members of S in the same relative positions in the social orderings it yields for both P1 and P2’.)

    There are a couple of ways to prove this. Here is a direct way:

1. Suppose, for some method M, that if P1 and P2 are two profiles and S is any subset of the set of alternatives, then if the citizens’ relative rankings of the members of S are the same for P1 and P2, then M places the members of S in the same relative positions in its social orderings for both P1 and P2. [assumption for proving conditional]
2. If P1 and P2 are two profiles and S is any two-alternative subset of the set of alternatives, then if the citizens’ relative rankings of the members of S are the same for P1 and P2, then M places the members of S in the same relative positions in its social orderings for both P1 and P2. [1]
3. M satisfies condition I. [2]
4. The condition stated in line 1 implies condition I. [1–3]

And here is an indirect way (based on proof by contradiction):

1. Suppose, for some method M, that if P1 and P2 are two profiles and S is any subset of the set of alternatives, then if the citizens’ relative rankings of the members of S are the same for P1 and P2, then M places the members of S in the same relative positions in its social orderings for both P1 and P2. [assumption for proving conditional]
2. Suppose M does not satisfy condition I. [assumption for deriving a contradiction]
3. There exist some alternatives x and y, and some profiles F1 and F2, such that each citizen ranks x and y in the same order in F1 and F2, but M yields social orderings for F1 and F2 in which x and y are not in the same order. [2, definition of condition I]
4. There exists some subset of the set of alternatives such that the citizens’ relative rankings of the members of that subset are the same for F1 and F2 but M does not place the members of that subset in the same relative positions in its social orderings for both F1 and F2. [3]
5. We have a contradiction. [1 and 4]
6. We can reject the assumption that M does not satisfy condition I. [2 and 5]
7. M satisfies condition I. [6]
8. The condition stated in line 1 implies condition I. [1–7]
  5. (skip)
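Here is the numerical check of the two carrot/spinach profiles promised above (a Python sketch, ours; the names and scores are the hypothetical ones from the example).

def social_ranking(profile):
    # rank alternatives by the sum of the values assigned to them
    totals = {}
    for scores in profile.values():
        for alternative, value in scores.items():
            totals[alternative] = totals.get(alternative, 0) + value
    return sorted(totals, key=totals.get, reverse=True), totals

profile_1 = {"Jack":    {"carrots": 100, "spinach": 0},
             "Janet":   {"carrots": 100, "spinach": 0},
             "Chrissy": {"carrots": 0,   "spinach": 500}}
profile_2 = {"Jack":    {"carrots": 100, "spinach": 0},
             "Janet":   {"carrots": 100, "spinach": 0},
             "Chrissy": {"carrots": 0,   "spinach": 50}}

print(social_ranking(profile_1))   # spinach above carrots (500 to 200)
print(social_ranking(profile_2))   # carrots above spinach (200 to 50)
# Each citizen ranks carrots vs. spinach the same way in both profiles, yet the social
# ordering flips: a violation of condition I.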

section 6-2b:

  1. simple majority rule (supplemented as in the preceding text)
    1. just two alternatives (call them x and y)
      1. CS is satisfied because you can easily generate a profile that will cause x to be ranked above y, and one that will cause y to be ranked above x.
      2. D is satisfied because with majority rule, there is no dictator.
      3. I is satisfied because majority rule just considers alternatives in a pairwise fashion.
      4. PA is satisfied because if the social ordering ranks x above y, that means x is preferred to y by a majority, and it is obvious that x will remain ranked above y in the social ordering if some additional citizens join that majority and make it larger.
      5. U is satisfied because majority rule always says x P y or y P x or x I y; and with just two options, no problem of intransitivity can arise. So we have completeness and transitivity.
      6. Just for good measure, let’s observe that P is satisfied because majority rule is certainly unanimity-respecting.
    2. With three alternatives and three citizens, condition U is no longer satisfied, because of the possibility of cyclical social preferences (as in the Condorcet paradox). (A numerical illustration appears at the end of this section.)
  2. almost decisive and decisive
    1. how a set of citizens can be almost decisive for some alternative over another without being decisive for it, due to PA not holding
      1. profile 1
        1. A’s ranking is x, y, z.
        2. B's ranking is z, y, x.
        3. C’s ranking is y, x, z.
        4. The social ranking is x, y, z.
      2. profile 2
        1. A’s ranking is x, y, z (unchanged from before).
        2. B’s ranking is z, y, x (unchanged from before).
        3. C’s ranking is x, y, z.
        4. The social ranking is y, x, z.
      3. the five conditions
        1. D: satisfied
        2. PA: violated
        3. CS: no evidence of violation
        4. U: no evidence of violation
        5. I: no violation
      4. A could be almost decisive for x over y, as illustrated in profile 1, but clearly A is not decisive for x over y, because of profile 2. So these two profiles could be part of a large array of profiles that would show how A can be almost decisive for x over y without being decisive for x over y.
    2. how a set of citizens can be almost decisive for some alternative over another without being decisive for it, due to I not holding: still lacking a satisfactory answer to this question
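Finally, here is the numerical illustration promised in the answer to problem 1 of this section (a Python sketch, ours): with three citizens and three alternatives, pairwise majority voting can be cyclical, so no transitive social ordering exists and condition U fails.

from itertools import combinations

rankings = [["a", "b", "c"],   # citizen 1
            ["b", "c", "a"],   # citizen 2
            ["c", "a", "b"]]   # citizen 3

def majority_prefers(x, y):
    # positive when more citizens rank x above y than y above x
    return sum(1 if r.index(x) < r.index(y) else -1 for r in rankings) > 0

for x, y in combinations("abc", 2):
    print(x, "beats", y, "by majority:", majority_prefers(x, y))
# a beats b and b beats c, but a does not beat c (c beats a): a cycle.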