University of Kansas, Fall 2005
Philosophy 666: Rational Choice Theory
Ben Eggleston (eggleston@ku.edu)

Final exam: answer key

Following are the questions and correct answers to them. After some answers, parenthetical remarks are provided. These did not need to be stated in order for full credit to be awarded; they are stated here in order to provide additional information about the questions and answers.

  1. (10 points) Either prove that t(x) = x + 3 is an ordinal transformation or give a counter-example showing that it’s not.

Here’s a proof that it is:

no.  claim                               justification
1    w ≥ v                               assumption for proving first half of biconditional
2    w + 3 ≥ v + 3                       1, add 3 to each side
3    t(w) ≥ t(v)                         2, definition of t(x)
4    if w ≥ v, then t(w) ≥ t(v)          1–3
5    t(w) ≥ t(v)                         assumption for proving second half of biconditional
6    w + 3 ≥ v + 3                       5, definition of t(x)
7    w ≥ v                               6, subtract 3 from each side
8    if t(w) ≥ t(v), then w ≥ v          5–7
9    w ≥ v if and only if t(w) ≥ t(v)    4 and 8

Question 2 refers to two of the three conditions defined as follows (in the glossary in Allingham’s Choice Theory):

  2. (5 points) Specify a set of no more than five choices violating the revelation condition but not the expansion condition (or, if there is no such set, say so).

From AB, choose A.
From ABC, choose B.

(This is just one of infinitely many possible correct answers.)

Question 3 refers to the following decision table:

      S1   S2   S3
A1     6    4   12
A2     2   11    8
A3    10    1    5
A4     9    7    3
  3. (5 points) Which act(s) would be selected by the maximin rule?

A1.
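(As an illustrative aside, the maximin computation can be checked with a short Python sketch; the code and its names are not part of the exam.)

```python
# Maximin over the decision table above: for each act, find its worst-case
# utility across the states S1-S3, then select the act(s) whose worst case is best.
table = {
    "A1": [6, 4, 12],
    "A2": [2, 11, 8],
    "A3": [10, 1, 5],
    "A4": [9, 7, 3],
}
minima = {act: min(utilities) for act, utilities in table.items()}
best_worst_case = max(minima.values())
selected = [act for act, m in minima.items() if m == best_worst_case]
print(selected)  # ['A1']: its worst case (4) beats 2, 1, and 3
```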

  4. (5 points) Give an example of an ordinal transformation that is not a linear transformation, by giving a short list of utilities and a corresponding list that is an ordinal transformation of the first list but is not a linear transformation of the first list.
x    t(x)
2    3
1    1
0    0

(This is just one of infinitely many possible correct answers.)

  5. (5 points) Suppose Bobby has $1,000 to invest. He believes that one investment opportunity has a 3/5 chance of increasing his investment to $1,500 and a 2/5 chance of reducing it to $500. He believes that a second investment opportunity—a savings account insured by the U.S. government—has a probability p of increasing his investment to $1,200 and a probability 1 – p of reducing it to $0. What number must p be greater than in order for the second investment opportunity to have a higher expected monetary value (EMV) than the first?

We need to solve the inequality ‘EMV(second) > EMV(first)’ for p:

EMV(second)                > EMV(first)
(p)($1,200) + (1 – p)($0)  > (3/5)($1,500) + (2/5)($500)
$1,200p                    > $900 + $200
$1,200p                    > $1,100
p                          > $1,100/$1,200
p                          > 11/12
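(As an illustrative aside, the comparison can be checked with exact fractions in Python; the code is not part of the exam.)

```python
from fractions import Fraction

# EMVs of Bobby's two investment opportunities, using the payoffs and
# probabilities given in the question.
emv_first = Fraction(3, 5) * 1500 + Fraction(2, 5) * 500   # $900 + $200 = $1,100

def emv_second(p):
    return p * 1200 + (1 - p) * 0

threshold = Fraction(11, 12)
print(emv_first)              # 1100
print(emv_second(threshold))  # 1100: the EMVs are equal exactly at p = 11/12
```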

  6. (5 points) Suppose Helen is deciding whether to buy (and ultimately sell) some property that might or might not contain oil that can be extracted and sold for a profit. Unfortunately, the current owners will not let her test the property for the presence of oil before buying, so Helen is deciding between (A1) buying the property, looking for oil (possibly finding that there is none), extracting any oil that is there, and selling it and (A2) just skipping the whole thing. Regarding strategy A1, she believes (A) that there’s a 2/5 chance that there’s oil on the property and that she can buy the property, find any oil that might be there, extract the oil, sell the property, and end up with a total profit of $50,000, and (B) that there’s a 3/5 chance that there’s no oil on the property and that buying it, finding that there is no oil, and selling the property will cost a total of $10,000. She also believes (C) that strategy A2 will result neither in a gain nor a loss, regardless of whether there is oil on the property. What is the EMV of strategy A1?

Let’s start with the following table:

                               oil          no oil
                               (2/5)        (3/5)
A1: buy, etc.                  $50,000      –$10,000
A2: just skip the whole thing  $0           $0

The EMV of A1 is just (2/5)($50,000) + (3/5)(–$10,000) = $20,000 – $6,000 = $14,000.

  7. (10 points) Now suppose Helen (from the previous question) learns that a geologist is willing to sneak onto the property without the consent of the current owners, test for oil, and tell Helen the results, so that she can buy if the test is positive and not buy if the test is negative. Helen believes A, B, and C from above; in addition, she believes this rogue geologist is astonishingly bad. Specifically, she believes (D) that if the property does contain oil that she can extract and sell (leading to that total profit of $50,000), there is only a 1/4 chance that the geologist will say so, and a 3/4 chance that he’ll mistakenly report the absence of oil, and (E) that if the property does not contain oil, then there is no chance that the geologist will say so, and a 100-percent chance that he’ll mistakenly report the presence of oil. How much should Helen be willing to pay the geologist for his services?

Let’s start with the following table:

                                 oil and “oil”       oil and “no oil”    no oil and “oil”   no oil and “no oil”
                                 (2/5 × 1/4 = 1/10)  (2/5 × 3/4 = 3/10)  (3/5 × 1 = 3/5)    (3/5 × 0 = 0)
A1                               $50,000             $50,000             –$10,000           –$10,000
A2                               $0                  $0                  $0                 $0
A3: follow geologist’s advice    $50,000             $0                  –$10,000           $0
A4: disobey geologist’s advice   $0                  $50,000             $0                 –$10,000

From above, we know that the EMV of A1 is just $14,000. This is obviously Helen’s best non-information-dependent strategy (since it is better than strategy A2, with its EMV of $0).

The EMV of A3 is (1/10)($50,000) + (3/10)($0) + (3/5)($–10,000) + (0)($0) = $5,000 + $0 – $6,000 + $0 = –$1,000.

The EMV of A4 is (1/10)($0) + (3/10)($50,000) + (3/5)($0) + (0)(–$10,000) = $0 + $15,000 + $0 + $0 = $15,000.

Helen’s best information-dependent strategy (A4) has an EMV of $15,000, which is $1,000 more than the $14,000 mentioned above. So, Helen should be willing to pay the geologist up to $1,000.

(Notice that she should be willing to pay him up to $1,000 so that she can then do the opposite of whatever he recommends!)
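(As an illustrative aside, the four EMVs and the value of the geologist's information can be recomputed from the joint probabilities in the table; the Python code and its names are not part of the exam.)

```python
from fractions import Fraction

# Joint probabilities of (oil state, geologist's report) from beliefs A, D, and E,
# and the payoff of buying in each state.
joint = {
    ("oil", "says oil"):       Fraction(2, 5) * Fraction(1, 4),  # 1/10
    ("oil", "says no oil"):    Fraction(2, 5) * Fraction(3, 4),  # 3/10
    ("no oil", "says oil"):    Fraction(3, 5) * Fraction(1, 1),  # 3/5
    ("no oil", "says no oil"): Fraction(3, 5) * Fraction(0, 1),  # 0
}
payoff = {"oil": 50_000, "no oil": -10_000}

def emv(buys):
    """EMV of a strategy: buys(report) says whether Helen buys given the report."""
    return sum(prob * (payoff[state] if buys(report) else 0)
               for (state, report), prob in joint.items())

emv_A1 = emv(lambda report: True)                     # always buy
emv_A2 = emv(lambda report: False)                    # never buy
emv_A3 = emv(lambda report: report == "says oil")     # follow the geologist
emv_A4 = emv(lambda report: report == "says no oil")  # disobey the geologist
print(emv_A1, emv_A2, emv_A3, emv_A4)  # 14000 0 -1000 15000
print(emv_A4 - emv_A1)                 # 1000: the most Helen should pay
```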

  8. (10 points) Describe a game (1) each of whose prizes is finite but (2) whose EMV is infinite (8 points). (You do not have to show that its EMV is infinite.) If the game you describe is often referred to by a particular name, provide that name (2 points).

A coin is tossed repeatedly, and no prize is awarded until the coin lands showing heads, at which point the game ends with a payoff of $2^n, where n is the number of tosses of the coin. This is known as the St. Petersburg game.
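(As an illustrative aside, the partial expected values of this game can be computed in Python; the code is not part of the exam.)

```python
from fractions import Fraction

# Partial expected values of the St. Petersburg game: the first heads on toss n
# has probability 1/2^n and pays $2^n, so every toss contributes exactly $1.
def partial_emv(max_tosses):
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, max_tosses + 1))

print(partial_emv(10), partial_emv(1000))  # 10 1000: the sums grow without bound
```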

  9. (5 points) Let F be the proposition that Lindsey will go to trial, let G be the proposition that Lindsey will impress her client, and let H be the proposition that Lindsey will either go to trial or impress her client (or both). (So, H is the disjunction of F and G.) Suppose Jimmy believes the probability of F is 30 percent, the probability of G is 40 percent, and the probability of H is 80 percent. If Eugene wants to make a Dutch book against Jimmy, what proposition(s) should he bet for and/or against? (Do not worry about the stakes that should be assigned to any proposition(s).)

Eugene should bet for F, bet for G, and bet against H.
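(As an illustrative aside, here is a Python sketch of Eugene's payoffs with those bets, assuming, purely hypothetically, a stake of $1 on each proposition; the question says not to worry about stakes, so this just shows that every outcome pays Eugene a positive amount.)

```python
from itertools import product
from fractions import Fraction

# Jimmy's probabilities for F, G, and H, at which Eugene can bet.
pF, pG, pH = Fraction(3, 10), Fraction(4, 10), Fraction(8, 10)

def bet_for(truth, price):       # pay the price, collect $1 if the proposition is true
    return 1 - price if truth else -price

def bet_against(truth, price):   # collect the price, pay $1 if the proposition is true
    return -(1 - price) if truth else price

# H is true exactly when F or G is; enumerate all truth-value combinations.
payoffs = [bet_for(f, pF) + bet_for(g, pG) + bet_against(f or g, pH)
           for f, g in product([True, False], repeat=2)]
print(payoffs)  # all four payoffs are positive: Eugene profits no matter what
```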

Question 10 refers to the following table:

                  bet on p                bet on not p
p      not p      for         against     for         against
T      T          1 – a       –(1 – a)    1 – b       –(1 – b)
T      F          1 – a       –(1 – a)    –b          b
F      T          –a          a           1 – b       –(1 – b)
F      F          –a          a           –b          b
  10. (10 points) Suppose there is some proposition p and some real numbers a and b such that Jimmy believes that the probability of p is a, that the probability of not p is b, and that these probabilities sum to some number less than 1. Using the foregoing table, explain why the strategy of betting for p and for not-p is a way for Eugene to make a Dutch book against Jimmy. (You do not have to give a proper proof, with numbered lines and a separate justification for each line. But your explanation should convey essentially the same information as a proper proof would.)

The first and fourth lines of this table represent impossibilities (both p and not p having the same truth value), so we can disregard those. The payoff of the specified bet in the second line is (1 – a) + (–b); and the payoff of the specified bet in the third line is (–a) + (1 – b). Each of these sums to 1 – a – b, or 1 – (a + b), which we know is positive since a + b < 1.
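(As an illustrative aside, the two possible lines can be checked in Python for sample values of a and b with a + b < 1; the code is not part of the exam.)

```python
from fractions import Fraction

# Betting for p and for not-p: in either possible case the book pays 1 - (a + b).
def payoff(p_is_true, a, b):
    for_p     = (1 - a) if p_is_true else -a
    for_not_p = -b if p_is_true else (1 - b)
    return for_p + for_not_p

a, b = Fraction(3, 10), Fraction(1, 2)          # any a, b with a + b < 1 would do
print(payoff(True, a, b), payoff(False, a, b))  # 1/5 1/5: equal and positive
```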

  11. (5 points) Consider the lottery L(a, (3/4, X, B), (4/5, B, W)), where 0 ≤ a ≤ 1. What inequality must a satisfy in order for the lottery just specified to yield more than a 70-percent chance at B?

We need to solve the inequality ‘chance at B > 7/10’, for a, as follows:

chance at B                    > 7/10
(a)(1 – 3/4) + (1 – a)(4/5)    > 7/10
(a)(1/4) + 4/5 – (a)(4/5)      > 7/10
10a + 32 – 32a                 > 28      (multiplying both sides by 40)
–22a                           > –4
a                              < 4/22    (dividing by –22 reverses the inequality)
a                              < 2/11
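(As an illustrative aside, the boundary can be checked in Python; the code is not part of the exam.)

```python
from fractions import Fraction

# Chance at B as a function of a: the first inner lottery (3/4, X, B) yields B
# with probability 1/4, and the second, (4/5, B, W), with probability 4/5.
def chance_at_B(a):
    return a * Fraction(1, 4) + (1 - a) * Fraction(4, 5)

print(chance_at_B(Fraction(2, 11)))                    # 7/10: exactly 70% at the boundary
print(chance_at_B(Fraction(1, 11)) > Fraction(7, 10))  # True: a better chance when a < 2/11
```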

  12. (5 points) Confining yourself to the basic prizes A, B, and C, along with any lotteries containing no other basic prizes than A, B, and C,
     a. give an example of a set of preferences violating the better-chances condition.

A P B
L(1/3, A, B) P L(2/3, A, B)

     b. give an example of a set of preferences violating the reduction-of-compound-lotteries condition.

L(1/2, L(1/2, A, B), L(1/2, A, B)) P L(1/2, A, B)

  13. (5 points) Assuming that B is defined as a prize than which there is none better, prove (using the rationality conditions) that there is no number a or basic prize x for which L(a, x, B) P L(a, B, B).
no.  claim                                               justification
1    Suppose there were a number a or basic prize x
     for which L(a, x, B) P L(a, B, B).                  assumption for indirect proof
2    x P B                                               1, via better-prizes condition
3    Line 2 is impossible.                               B is defined as a basic prize that is
                                                         at least as good as any other basic prize.
4    Line 1 leads to an impossibility.                   3
5    There is no number a or basic prize x for which
     L(a, x, B) P L(a, B, B).                            4
  14. (10 points) Prove (using the rationality conditions) that if xPy and yPz and a > b, then L(a, x, z) P L(b, y, z).
no.  claim                        justification
1    xPy and yPz and a > b        assumption for proving conditional
2    xPy                          1, via conjunction elimination
3    yPz                          1, via conjunction elimination
4    L(a, x, z) P L(a, y, z)      2, via better-prizes condition
5    L(a, y, z) P L(b, y, z)      3 and a > b (from 1), via better-chances condition
6    L(a, x, z) P L(b, y, z)      4 and 5, via ordering condition, part O5
  15. (5 points) What is the positive linear transformation that converts {70, 80, 100} to {70, 75, 85}?

u’ = (1/2)(u) + 35
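(As an illustrative aside, the transformation can be recovered from two of the three pairs and checked against the third in Python; the code is not part of the exam.)

```python
from fractions import Fraction

# Solve u' = mu + c from the first two utility pairs, then verify all three.
u, u_prime = [70, 80, 100], [70, 75, 85]
m = Fraction(u_prime[1] - u_prime[0], u[1] - u[0])  # (75 - 70)/(80 - 70) = 1/2
c = u_prime[0] - m * u[0]                           # 70 - (1/2)(70) = 35
print(m, c)                               # 1/2 35
print([m * x + c for x in u] == u_prime)  # True, and m > 0 as required
```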

  16. (5 points) Suppose Lucy prefers torts to contracts, contracts to civil procedure, and civil procedure to criminal procedure. Lucy prefers torts to contracts twice as strongly as she prefers contracts to civil procedure, which she prefers one third as strongly as she prefers civil procedure to criminal procedure. What is an interval scale that can be used to represent Lucy’s preferences?
prize               utility
torts               6
contracts           4
civil procedure     3
criminal procedure  0
  17. (10 points) If we measure an agent’s preferences on an interval scale, does it make sense to say that the agent prefers one prize twice as strongly as another? (You may answer this question by explaining why positive linear transformations do, or why they do not, preserve the relationships among the utilities assigned to a couple of prizes that might lead one to think, in a specific case, that an agent prefers one prize twice as strongly as another.)

No, it does not make sense to say, of an agent whose preferences are measured on an interval scale, that he or she prefers a given prize twice as much as another. This is because an interval scale is unique (i.e., appropriate to an agent) only up to positive linear transformations. So, a scale might suggest that an agent prefers one prize twice as strongly as another by assigning utilities of 20 and 10 to these two prizes, but this scale could legitimately be transformed, using the function u’ = (1/10)u + 155, into one that assigns utilities of 157 and 156 to these two prizes. And no one would look at those two utilities—157 and 156—and think that they were attached to prizes one of which was twice as preferred as the other.

(For more on this point, see the answer to problem 4 in section 4-3 of Resnik.)
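(As an illustrative aside, the transformation used in the example above can be checked in Python; the code is not part of the exam.)

```python
from fractions import Fraction

# Ratios of utilities are not preserved by positive linear transformations:
# the same preferences score 20 and 10 on one scale but 157 and 156 on another.
transform = lambda utility: Fraction(1, 10) * utility + 155

print(Fraction(20, 10))              # 2: the original scale suggests "twice as strong"
print(transform(20), transform(10))  # 157 156
print(transform(20) / transform(10)) # 157/156, nowhere near 2
```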

  18. (10 points) Suppose Ellenor is an expected-utility maximizer who prefers more money to less, but who is risk averse. Suppose also that we are representing Ellenor’s preferences with a utility function, and that we begin by assigning the utilities 70 and 100 to the prizes $400 and $500. Suppose, finally, that we want to assign a utility to the prize $480. What is the range within which the utility we assign to $480 must fall?

We want to find a lottery of the form L(a, $400, $500) that has an EMV of $480.
So, we need to solve the following equation for a: a*$400 + (1 – a)*$500 = $480. So:
$400a + $500 – $500a = $480
–$100a = –$20
a = $20/$100
a = 1/5

So, the lottery L(1/5, $400, $500) is what we need to work with. Its utility is the weighted average of the utilities of its prizes:

u(L(1/5, $400, $500))
= (1/5)*u($400) + (1 – 1/5)*u($500)
= (1/5)*u($400) + (4/5)*u($500)
= (1/5)*70 + (4/5)*100
= 14 + 80
= 94

Now we know that Ellenor’s being risk averse means that she prefers $480 to any lottery with an EMV of $480. Since we just assigned a utility of 94 to a lottery with an EMV of $480, then u($480) > 94.

We also know that Ellenor prefers more money to less. Since $480 < $500, it follows that u($480) < u($500); that is, u($480) < 100. So, the utility we assign to $480 must fall in the range between 94 and 100.
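(As an illustrative aside, the two computational steps above can be checked in Python; the code is not part of the exam.)

```python
from fractions import Fraction

# Step 1: find a with a*$400 + (1 - a)*$500 = $480.
# Step 2: take the expected utility of L(a, $400, $500) with u($400) = 70, u($500) = 100.
a = Fraction(500 - 480, 500 - 400)       # = 1/5
lottery_utility = a * 70 + (1 - a) * 100
print(a, lottery_utility)                # 1/5 94
# Risk aversion gives u($480) > 94; preferring more money gives u($480) < 100.
```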

  19. (5 points) Suppose (A) that Rebecca prefers arguments to continuances and continuances to sentences and (B) that she prefers continuances to L(1/2, arguments, sentences). If we want to represent Rebecca's preferences using a utility function with the expected-utility property, and we begin by assigning a utility of 0 to sentences and a utility of 1 to continuances, what is the range (of the form x < u(arguments) < y) within which our assignment of a utility to arguments must fall?

We need to solve the inequality implied by B for the variable u(arguments):
u(continuances)  > u(L(1/2, arguments, sentences))
1                > (1/2)*u(arguments) + (1/2)*u(sentences)
1                > (1/2)*u(arguments) + (1/2)(0)
1                > (1/2)*u(arguments)
2                > u(arguments)
u(arguments)     < 2

Since (by A) arguments are preferred to continuances and sentences, u(arguments) must be greater than u(continuances) and u(sentences). So, u(arguments) must be greater than 1 and 0.

So, we have 1 < u(arguments) < 2.

  20. (5 points) Suppose (A, same as in question 19) that Rebecca prefers arguments to continuances and continuances to sentences and (C) that she prefers L(2/5, arguments, sentences) to continuances. If we want to represent Rebecca's preferences using a utility function with the expected-utility property, and we begin by assigning a utility of 0 to sentences and a utility of 1 to continuances, what inequality (of the form u(arguments) > z) must our assignment of a utility to arguments satisfy?

We need to solve the inequality implied by C for the variable u(arguments):

u(L(2/5, arguments, sentences))          > u(continuances)
(2/5)*u(arguments) + (3/5)*u(sentences)  > 1
(2/5)*u(arguments) + (3/5)(0)            > 1
(2/5)*u(arguments)                       > 1
u(arguments)                             > 5/2

  21. (5 points) If A and B (from question 19) imply that u(arguments) falls within a range that is entirely below the lower bound on u(arguments) implied by A and C (from question 20), then is it possible that Rebecca is an expected-utility maximizer, or can we conclude that she is not? Why or why not?

We can conclude that she is not: our assumption that she is an expected-utility maximizer led us to contradictory claims regarding u(arguments), so that assumption must have been mistaken.

  22. (10 points) How is Newcomb’s problem a situation in which the dominance principle and the principle of expected-utility maximization give conflicting recommendations?

The dominance principle says to take both boxes, since doing so leads to an extra $1,000 whether there is money in the box or not. In contrast, the principle of expected-utility maximization says to take just the one box (the one that might or might not have $1,000,000 in it), since agents who do that tend to end up richer than those who don’t.