Skewed Noise

David Dillenberger (University of Pennsylvania)
Uzi Segal (Boston College and WBS)
Introduction

Compound lotteries (lotteries over lotteries over outcomes):

[Figure: two different compound-lottery trees over the prizes 100 and 0 that induce the same probability distribution over final outcomes.]

Same probability distribution over final outcomes. Same objects in a standard model = all satisfy Reduction of compound lotteries.
Motivation

Reduction is often violated in experiments (Kahneman and Tversky, 1979; Bernasconi and Loomes, 1992; Camerer and Ho, 1994; Harrison et al., 2012; Abdellaoui et al., 2013).

Timing of resolution of uncertainty may matter (e.g., Kreps and Porteus, 1978; Dillenberger, 2010). An individual may enjoy the suspense that builds up as a lottery yields a prize that is another lottery, or he may find it worrying (hope, fear, anxiety, disappointment).

Individuals may simply like or dislike not knowing the exact values of the probabilities (Camerer and Weber, 1992): betting on a known probability p vs. betting on a known distribution over the value of that probability with mean p.
Motivation

Halevy (2007) and Miao and Zhong (2013): individuals are averse to introducing symmetric noise, that is, a symmetric mean-preserving spread, into the first-stage lottery.

[Figure: symmetric noise around 8/10 — the simple lottery (100, 8/10; 0, 2/10) vs. two-stage lotteries whose first stage spreads the probability of winning symmetrically around 8/10.]

Possible explanation: realizations of symmetric noise that cancel each other out create confusion/complexity.
Motivation: Ellsberg two-urn experiment

Urn I: 50 red, 50 blue balls. We'll pick one ball at random. Do you prefer A or B?

      Red   Blue
  A   100   0
  B   0     100

Urn II: 100 balls, each is either red or blue. We'll pick one ball at random. Do you prefer C or D?

      Red   Blue
  C   100   0
  D   0     100

Typical answer: A ∼ B, C ∼ D, but A ≻ C and B ≻ D
Motivation

Based on symmetry arguments, it is plausible to assume that #Red = #Blue in urn II. But:
- Urn II is ambiguous; the exact distribution is unknown.
- Urn I is risky; the probabilities are known.

An ambiguity-averse DM who dislikes not knowing the exact values of the probabilities will prefer to bet on the risky urn.
Ambiguous urn as a compound lottery

[Figure: a first stage assigning probability q_i to the composition (i red, 100−i blue), i = 0, …, 100; each composition induces the bet (100, i/100; 0, 1−i/100).]

The average belief should be 1/2, that is, ∑_{i=0}^{100} q_i · (i/100) = 1/2.

It is plausible that the distribution over the composition of Urn II is symmetric, so the probability of (72, 28) should be the same as that of (28, 72).

Yet, the DM prefers the simple lottery (100, 1/2; 0, 1/2).
Motivation: n-colors experiment

But sometimes individuals actually prefer not to know the exact probabilities. Betting on the number i drawn from:
- A risky urn containing 100 balls numbered 1 to 100.
- An ambiguous urn containing 100 balls, each marked by a number from {1, 2, …, 100}, in an unknown composition.

Which option do you prefer?

Low probability of success in both options. Noise in the ambiguous urn is typically asymmetric: the distribution of possible values for the probability of the good outcome is positively skewed (the modal value is 0.01).

How about betting that the ball drawn has a number different from i?
Motivation Compound-risk aversion/seeking Three investment options Option A: the probability of success is 0.2 for sure Option B (negatively-skewed around 0.2): 90% likely to result in 0.22 probability of success 10% likely to result in 0.02 probability of success Option C (positively-skewed around 0.2): 90% likely to result in 0.18 probability of success 10% likely to result in 0.38 probability of success Robust evidence: many subjects are skew sensitive; they prefer C to A and A to B.
Experimental evidence

Boiney (1993): individuals prefer positively skewed distributions over probabilities of good events to their average values, and dislike negatively skewed such distributions.

Masatlioglu, Orhun, and Raymond (2014): strong preference for positively skewed noise over negatively skewed noise.

Viscusi and Chesson (1999), Kocher et al. (2015): ambiguity aversion for moderate/high-likelihood events, but ambiguity seeking for unlikely events.

Abdellaoui, Klibanoff, and Placido (2013), Abdellaoui, l'Haridon, and Nebout (2015): aversion to compound risk is an increasing function of p.

Consistent with a greater aversion to negatively skewed noise around high probabilities than to positively skewed noise around small probabilities.
Agenda

Analyze lotteries over the value of p in the lottery (x, p; x̲, 1−p), x > x̲. Is there a connection between attitudes toward symmetric noise and skewed noise?
1. Define (and characterize) a notion of skewed distributions
2. Using the recursive model without reduction (Segal, 1990), outline conditions under which if the DM rejects (small) symmetric noise, then the DM will also reject negatively skewed noise, but might seek positively skewed noise
3. Applications: allocation mechanisms, ambiguity seeking, ...
Recursive utility

Consider a two-stage lottery:

[Figure: a first stage assigning probability q_i to the second-stage lottery (100, i/100; 0, 1−i/100), i = 0, …, 100.]
Recursive utility

1. Find ("certainty") equivalents of the final branches: c(·) solves W((c(L), 1)) = W(L), for some W over lotteries
2. Replace each final branch with its certainty equivalent

[Figure: the first stage now yields the certainty equivalents c_0, c_1, …, c_100 with probabilities q_0, q_1, …, q_100.]

3. Use V over lotteries (not necessarily equal to W) to calculate the value of the original lottery (now viewed as a single-stage lottery over the certainty equivalents)
Remark

The value of a compound lottery is thus the V value of the simple lottery over the second-stage certainty equivalents (which were calculated using W).
- V = W and both are Expected Utility ⟹ indifference to noise
- V ≠ W but both EU (Kreps and Porteus, 1978) ⟹ if reject symmetric noise then reject all noise
- V = W and both are Cautious EU (Cerreia-Vioglio et al., 2015) ⟹ reject all noise (PORU; Dillenberger, 2010)

Therefore, we need something different.
Preliminaries: Recursive utility

Underlying lottery p = (x, p; x̲, 1−p), x > x̲.

Noise around p is any two-stage lottery ⟨p_1, q_1; …; p_n, q_n⟩ that yields with probability q_i the lottery (x, p_i; x̲, 1−p_i), i = 1, 2, …, n, and satisfies ∑_i p_i q_i = p.

Domain: L² = {⟨p_1, q_1; …; p_n, q_n⟩ : p_i, q_i ∈ [0, 1], i = 1, 2, …, n, and ∑_i q_i = 1}

⪰ is a preference relation over L², which is represented by
U(⟨p_1, q_1; …; p_n, q_n⟩) = V(c_{p_1}, q_1; …; c_{p_n}, q_n),
where V over one-stage lotteries is monotonic w.r.t. FOSD and continuous, and c is a certainty equivalent function.
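To make the recursive evaluation concrete, here is a minimal sketch in Python. The stage-2 functional W and the stage-1 functional V are illustrative stand-ins (EU with u = √ and EU with a more concave log utility), not the paper's specification; the point is only the two-step mechanics: compute certainty equivalents branch by branch, then evaluate the induced one-stage lottery.

```python
# Minimal sketch of U(<p1,q1;...;pn,qn>) = V(c_p1, q1; ...; c_pn, qn).
# W and V below are illustrative choices, not the paper's: W is EU with
# u(z) = sqrt(z), and V is EU with the more risk-averse v(z) = log(1+z).
import math

X_HIGH, X_LOW = 100.0, 0.0  # outcomes of the underlying lottery (x, p; x_, 1-p)

def cert_equiv(p):
    """Certainty equivalent under W (EU with u = sqrt) of (100, p; 0, 1-p)."""
    eu = p * math.sqrt(X_HIGH) + (1 - p) * math.sqrt(X_LOW)
    return eu ** 2  # apply u^{-1}

def U(noise):
    """noise = [(p_i, q_i), ...]: replace branches by CEs, then apply V."""
    return sum(q * math.log(1 + cert_equiv(p)) for p, q in noise)

# Degenerate 'no noise' at p = 0.5 vs. symmetric noise around 0.5:
print(U([(0.5, 1.0)]) > U([(0.4, 0.25), (0.5, 0.5), (0.6, 0.25)]))  # -> True
```

With these (hypothetical) choices V ≠ W and the DM rejects this symmetric noise, illustrating the Kreps–Porteus case mentioned in the previous remark.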
Preliminaries: Quasi-concave V

Let [x̲, x̄] be an interval of monetary prizes. Denote by F, G, H (CDFs of) simple lotteries over [x̲, x̄]; ℱ is the set of all such lotteries.

The function V over ℱ is quasi-concave if for any F, G ∈ ℱ and λ ∈ [0, 1],
V(F) ≥ V(G) ⟹ V(λF + (1−λ)G) ≥ V(G).

Preference for randomization or diversification ("deliberately stochastic")
Preliminaries: Machina's local expected utility analysis

V is smooth (Fréchet differentiable): for every F there exists a continuous local utility function u_F(x) s.t.
V(G) − V(F) = ∫_{x̲}^{x̄} u_F(x) d(G(x) − F(x)) + o(‖G − F‖)

In ranking differential shifts from an initial distribution F, the DM acts precisely as would an EU maximizer with local utility function u_F(x).
Smooth preferences

[Figure: second-order risk aversion — smooth indifference curves in the (x if E, y if not E) plane, tangent to the 45° certainty line.]
Preliminaries: Machina's Hypothesis II (fanning out)

Hypothesis II (fanning out): G ≥_FOSD F ⟹
−u″_G(x)/u′_G(x) ≥ −u″_F(x)/u′_F(x)

If G dominates F by first-order stochastic dominance, then u_G is more risk averse than u_F.

[Figure: indifference curves in the probability triangle over prizes w < m < b, with axes p(w) and p(b), fan out in the direction of increasing preference.]
Summary of assumptions

Recursive evaluation. For fixed x > x̲ and p = (x, p; x̲, 1−p):
U(⟨p_1, q_1; …; p_n, q_n⟩) = V(c_{p_1}, q_1; …; c_{p_n}, q_n), where c(·) solves W((c(L), 1)) = W(L)

V is quasi-concave: for all F and G, and λ ∈ [0, 1],
V(F) ≥ V(G) ⟹ V(λF + (1−λ)G) ≥ V(G)

V is smooth (Fréchet differentiable): there exists a continuous u_F(x) s.t.
V(G) − V(F) = ∫_{x̲}^{x̄} u_F(x) d(G(x) − F(x)) + o(‖G − F‖)

Hypothesis II: G ≥_FOSD F ⟹ −u″_G(x)/u′_G(x) ≥ −u″_F(x)/u′_F(x)
Skewed to the left distributions

For a cumulative distribution function F on [x̲, x̄] with expected value µ and for δ > 0, let
η₁(F, δ) = ∫_{x̲}^{µ−δ} F(x) dx
η₂(F, δ) = ∫_{µ+δ}^{x̄} [1 − F(x)] dx

Definition: The lottery X with distribution F on [x̲, x̄] whose expected value is µ is skewed to the left if for every δ > 0, η₁(F, δ) ≥ η₂(F, δ), that is, if the area below F between x̲ and µ−δ is larger than the area above F between µ+δ and x̄.

Recall that for δ = 0, the two areas are the same.
Skewed to the left distributions

X with distribution F on [x̲, x̄] whose expected value is µ is skewed to the left if the area below F between x̲ and µ−δ is larger than the area above F between µ+δ and x̄.

[Figure: η₁(F, δ) ≥ η₂(F, δ) — two CDFs with the area η₁ (below F, left of µ−δ) and the area η₂ (above F, right of µ+δ) shaded.]
Relation to other definitions of left-skewness

A possible definition is that X with distribution F and expected value µ is skewed to the left if ∫_{x̲}^{x̄} (y−µ)³ dF(y) ≤ 0.

Claim: If X with distribution F and expected value µ is skewed to the left, then for all odd n, ∫_{x̲}^{x̄} (y−µ)ⁿ dF(y) ≤ 0.

But the converse is not true:
F = (−10, 0.1; −2, 0.5; 0, 4/35; 7, 2/7), µ = 0
E[(x−µ)³] = −6 < 0
But η₁(F, 5) = 1/2 < 4/7 = η₂(F, 5)
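The counterexample can be verified mechanically; a short check with exact rational arithmetic (the distribution F and all target values are the ones on the slide):

```python
# Check: the third central moment of F is negative (-6), yet F fails the
# eta-based definition of left-skewness at delta = 5 (eta1 = 1/2 < 4/7 = eta2).
from fractions import Fraction as Fr

# F = (-10, 1/10; -2, 1/2; 0, 4/35; 7, 2/7), with mean mu = 0
F = [(-10, Fr(1, 10)), (-2, Fr(1, 2)), (0, Fr(4, 35)), (7, Fr(2, 7))]
mu = sum(x * q for x, q in F)

def cdf(t):
    return sum(q for x, q in F if x <= t)

def integral_cdf(a, b):
    """Exact integral of the step-function CDF over [a, b]."""
    pts = sorted({a, b, *[x for x, _ in F if a < x < b]})
    return sum(cdf(lo) * (hi - lo) for lo, hi in zip(pts, pts[1:]))

delta = 5
eta1 = integral_cdf(-10, mu - delta)                     # area below F on [x_, mu-delta]
eta2 = (7 - (mu + delta)) - integral_cdf(mu + delta, 7)  # area above F on [mu+delta, x-bar]

third_moment = sum(q * (x - mu) ** 3 for x, q in F)
print(mu, third_moment, eta1, eta2)   # 0  -6  1/2  4/7
```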
Main behavioral result

Definition: ⪰ rejects symmetric noise if for all p ∈ (0, 1), for all α ≤ min{p, 1−p}, and for all ε ≤ 1/2,
⟨p, 1⟩ ⪰ ⟨p−α, ε; p, 1−2ε; p+α, ε⟩.

Definition: ⪰ rejects negatively (resp., positively) skewed noise if for all p ∈ (0, 1),
⟨p, 1⟩ ⪰ ⟨p_1, q_1; …; p_n, q_n⟩
whenever ∑_i p_i q_i = p and the distribution of (p_1, q_1; …; p_n, q_n) is skewed to the left (resp., right).

Theorem: Suppose (i) V is quasi-concave, Fréchet differentiable, and satisfies Weak Hypothesis II; and (ii) ⪰ rejects symmetric noise. Then ⪰ rejects negatively skewed noise, but not necessarily positively skewed noise.
Remarks

The result does not restrict the location of the skewed distribution, but it is reasonable to find skewed-to-the-left distributions of beliefs over the true probability p when p is high, and skewed-to-the-right distributions when p is low.

The second part distinguishes our model from other known preferences over compound lotteries, which cannot accommodate rejection of all symmetric noise together with acceptance of some positively skewed noise.

Example: V(c_{p_1}, q_1; …; c_{p_n}, q_n) = E[w(c_p)] · E[c_p], where w(x) = (sx − xˢ)/(s−1) and c_p = αp + (1−α)pᵗ, satisfies all the conditions of the main theorem, and there is an open neighborhood of (α, s, t) ∈ ℝ³ for which for every p > 0 there exists a sufficiently small q > 0 s.t. ⟨p, q; 0, 1−q⟩ ≻ ⟨pq, 1⟩.
Sketch of proof: characterization of skewed distributions

Recall that X with distribution F on [x̲, x̄] whose expected value is µ is skewed to the left if the area below F between x̲ and µ−δ is larger than the area above F between µ+δ and x̄.

Definition: Lottery Y is obtained from lottery X by a left symmetric split if Y is the same as X except that one of the outcomes x of X with x ≤ µ is split into x+α and x−α, each with half of the probability of x.

Theorem: If Y = (y_1, p_1; …; y_n, p_n) with expected value µ is skewed to the left, then there is a sequence of lotteries X_i, each with expected value µ, such that X_1 = (µ, 1), X_i → Y, and X_{i+1} is obtained from X_i by a left symmetric split. Conversely, any such sequence does converge, and the limit distribution is skewed to the left.
Skewed to the left distributions

Example: Let X = (3, 1) and Y = (0, 1/4; 4, 3/4):
X = (3, 1) → (2, 1/2; 4, 1/2) → (0, 1/4; 4, 3/4) = Y

Example: Let X = (5, 1) and Y = (0, 1/6; 6, 5/6):
X = (5, 1) → (4, 1/2; 6, 1/2) → (2, 1/4; 6, 3/4) → (0, 1/8; 4, 1/8; 6, 3/4) → …
→ (0, ½∑_{i=1}^{n} 4⁻ⁱ; 4, ½·4⁻ⁿ; 6, ½ + ∑_{i=1}^{n} 4⁻ⁱ) → … → (0, 1/6; 6, 5/6) = Y
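The second sequence can be traced in a few lines of Python: starting from X₁ = (5, 1), repeatedly apply left symmetric splits (split an outcome x ≤ µ into x−α and x+α, half the probability each). The mean stays at 5 throughout, and the mass converges to the left-skewed limit Y = (0, 1/6; 6, 5/6).

```python
# Trace the split sequence (5,1) -> (4,1/2; 6,1/2) -> (2,1/4; 6,3/4) -> ...
from fractions import Fraction as Fr

def split(lott, x, alpha):
    """Left symmetric split of outcome x in lott (a dict outcome -> prob)."""
    out = dict(lott)
    q = out.pop(x)
    for y in (x - alpha, x + alpha):
        out[y] = out.get(y, Fr(0)) + q / 2
    return out

lott = {5: Fr(1)}                              # X1 = (5, 1)
for _ in range(30):
    interior = [x for x in lott if 0 < x < 6]  # only 5, 4, or 2 ever appears here
    if not interior:
        break
    x = interior[0]                            # the unique interior outcome, x <= mean
    lott = split(lott, x, 1 if x == 5 else 2)
    assert sum(y * q for y, q in lott.items()) == 5   # mean preserved exactly

print(lott)   # mass on 0 and 6 is within 1e-8 of 1/6 and 5/6
```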
Application (Matching/Allocation problem) Many goods are allocated by lotteries (public schools/course schedules/dormitory rooms to students; shifts/offices/tasks to workers; jury and military duties to citizens...) In the fair allocation literature there are results showing the equivalence (same distribution over assignments) of different randomized mechanisms (Abdulkadiroğlu and Sönmez, 1998; Pathak and Sethuraman, 2011). We demonstrate that agents with certain non-standard preferences may systematically prefer one mechanism to another even though ex-ante they are equivalent.
Setting

N = {1, 2, …, n} individuals. n goods of two types, x and y, to be allocated (#x + #y = n). p = proportion of x. A proportion q of people prefer x to y. Normalize u(worst) = 0 and u(best) = 1.
Two mechanisms

Serial dictatorship (SD): The order of agents is randomly determined (the probability of person i being in place j is 1/n). Agents then choose the goods according to this order.

Top cycle (TC):
Stage 1: The allocation of the goods among the agents is randomly determined (the probability of person i holding a unit of type x is p).
Stage 2: Those who did not get their desired outcome trade: if k people want to trade x for y and l < k want to trade y for x, then the latter l will trade and get their desired outcome, while l out of the former k will be selected at random and get their preferred option.
Large economies

Let p = #x/(#x + #y). Assume, wlog, p ≥ 1/2. q is the proportion of individuals who prefer x to y.

If p < q:
TC: If you prefer y, you get it for sure. If you prefer x, you face a lottery over the probability of getting it:
Y₁ = (1, p; p(1−q)/(q(1−p)), 1−p)
SD: If you prefer y, you get it for sure. If you prefer x, you get it iff your rank is less than p/q, so X₁ = (p/q, 1)

p ≥ 1/2 implies Y₁ is skewed to the left. Therefore, both groups will prefer SD to TC.
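A quick sanity check of the ex-ante equivalence in the p < q case: both mechanisms give an x-preferrer the same mean probability of success p/q, but TC delivers it as the lottery Y₁ over probabilities while SD gives X₁ for sure. The particular values p = 1/2, q = 3/4 are illustrative, not from the slides.

```python
# Ex-ante equivalence of TC and SD for an x-preferrer, case 1/2 <= p < q.
from fractions import Fraction as Fr

def tc_lottery(p, q):
    """Y1: with prob p you hold x (success prob 1); with prob 1-p you hold y
    and win the trade lottery with prob p(1-q)/(q(1-p))."""
    return [(Fr(1), p), (p * (1 - q) / (q * (1 - p)), 1 - p)]

def sd_lottery(p, q):
    """X1: success iff rank is below p/q, known for sure."""
    return [(p / q, Fr(1))]

mean = lambda lott: sum(pi * qi for pi, qi in lott)

p, q = Fr(1, 2), Fr(3, 4)   # illustrative values with 1/2 <= p < q
print(mean(tc_lottery(p, q)), mean(sd_lottery(p, q)))   # both equal p/q = 2/3
```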
Large economies

Let p = #x/(#x + #y). Assume, wlog, p ≥ 1/2. q is the proportion of individuals who prefer x to y.

If q < p:
TC: If you prefer x, you get it for sure. If you prefer y, you face a lottery over the probability of getting it:
Y₂ = (1, 1−p; q(1−p)/(p(1−q)), p)
SD: If you prefer x, you get it for sure. If you prefer y, you get it iff your rank is less than (1−p)/(1−q), so X₂ = ((1−p)/(1−q), 1)

p ≥ 1/2 implies Y₂ is skewed to the right. Theorem 1 does not tell us which of the two is better. For large p and small q, TC may be preferred (e.g., with the preferences in the example).
First-order risk aversion

Let E(x̃) = 0. Let t·x̃ = (tx_1, p_1; …; tx_n, p_n) and define π(t) by w − π(t) ∼ w + t·x̃.

Definition (Segal and Spivak, 1990): The preferences represent first-order risk aversion at w if π′(t)|_{t=0⁺} > 0, and they represent second-order risk aversion at w if π′(t)|_{t=0⁺} = 0 but π″(t)|_{t=0⁺} > 0.
Nonsmooth preferences

[Figure: first-order risk aversion — indifference curves in the (x if E, y if not E) plane, kinked at the 45° certainty line.]

First-order risk aversion induces kinked indifference curves.

Application: first-order risk-averse decision makers buy full insurance even if there is some marginal loading.
First-order risk aversion

Claim: Let ⪰ represent first-order risk aversion preferences and let ϑ be a differentiable function such that ϑ(0) = 0. Let E[x̃] = 0, and let π(t) be the risk premium the DM is willing to pay to avoid the lottery (…; ϑ(tx_i), p_i; …). Then π′(t)|_{t=0⁺} > 0.

In our case, the DM is facing the noise (−a, ε; 0, 1−2ε; a, ε). This noise is transformed to the lottery (c(p−a), ε; c(p), 1−2ε; c(p+a), ε). If the certainty equivalent of (x, r; 0, 1−r) is a differentiable function of r (in most models it is), then for a sufficiently small a the DM will reject the noise.
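The first- vs. second-order distinction can be illustrated numerically. The two functionals below are illustrative stand-ins, not the paper's example: under EU (second order) the risk premium for t·x̃ vanishes like t², so π(t)/t → 0; under RDU with a convex weighting function (first order) it vanishes like t, so π(t)/t tends to a positive constant.

```python
# Risk premia of t*x with x = (+1, 1/2; -1, 1/2), around wealth w = 100.
# EU uses u = log; RDU uses u = log and f(s) = s^2 (so the decumulative
# weights on a 50-50 bet are 3/4 on the worse outcome, 1/4 on the better).
import math

def ce_eu(t, w=100.0):
    """CE - w under EU with u = log (premium is the negative of this)."""
    return math.exp(0.5 * math.log(w + t) + 0.5 * math.log(w - t)) - w

def ce_rdu(t, w=100.0):
    """CE - w under RDU with u = log, f(s) = s^2: weight 3/4 on w - t."""
    return math.exp(0.75 * math.log(w - t) + 0.25 * math.log(w + t)) - w

for t in (1.0, 0.1, 0.01):
    print(t, -ce_eu(t) / t, -ce_rdu(t) / t)   # EU ratio -> 0, RDU ratio -> 0.5
```

The kink at t = 0 in the RDU case is exactly what makes rejection of small (here symmetric) noise a first-order effect.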
First-order risk aversion

What is nice about this model is that it imposes no restrictions on large noise. Therefore, it is possible to obtain different types of preferences for really skewed noises.

Example: the rank-dependent model (RDU). Order the prizes in the support of the lottery p, with x_1 < x_2 < … < x_n. The functional form for RDU is:
V(p) = u(x_n) f(p(x_n)) + ∑_{i=1}^{n−1} u(x_i) [f(∑_{j=i}^{n} p(x_j)) − f(∑_{j=i+1}^{n} p(x_j))]
where f : [0, 1] → [0, 1] is strictly increasing and onto, and u : [w, b] → ℝ is increasing.

This model represents first-order attitude towards risk.
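The RDU formula above can be implemented in a few lines; the u and f below are illustrative choices (concave u, convex f), not the calibration used in the paper.

```python
# Rank-dependent utility: weight u(x_i) by f(decumulative at x_i) minus
# f(decumulative just above x_i), with prizes ordered x_1 < ... < x_n.
def rdu(lottery, u, f):
    """lottery: list of (prize, prob); prizes need not be pre-sorted."""
    value, tail = 0.0, 1.0            # tail = sum_{j >= i} p(x_j)
    for x, prob in sorted(lottery):
        value += u(x) * (f(tail) - f(tail - prob))
        tail -= prob
    return value

u = lambda x: x ** 0.5    # concave utility
f = lambda t: t ** 2      # convex probability weighting (pessimism)

# Convex f puts weight 3/4 on the worst outcome of a 50-50 bet, so the
# RDU value falls well below the EU-with-u value of the same bet:
print(rdu([(100, 0.5), (0, 0.5)], u, f), u(50))   # 2.5 vs. ~7.07
```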
First-order risk aversion

In the paper, there is an RDU example with concave utility u and convex probability transformation function f that satisfies all our requirements:
- It represents risk aversion
- For all p and a, it prefers the known p to all symmetric noise of the form (p−a, ε; p, 1−2ε; p+a, ε)
- It rejects all skewed-to-the-left noise
- For small values of p, it accepts some skewed-to-the-right noise

The analysis of first-order risk aversion was relatively easy because the rejection of small noise requires only risk aversion. As we saw, things are a lot more complicated when indifference curves are smooth.
Proof of main theorem

Fix x > x̲, and let c_p be the certainty equivalent of the lottery (x, p; x̲, 1−p).

The two-stage lottery (p−α, ε; p, 1−2ε; p+α, ε), concerning the true value of p, translates in the recursive model into the one-stage lottery (c_{p−α}, ε; c_p, 1−2ε; c_{p+α}, ε).

Let δ_{c_p} be the distribution yielding c_p with probability 1. Since for every ε ≤ 1/2
δ_p ⪰ (δ_{p−α}, ε; δ_p, 1−2ε; δ_{p+α}, ε),
the local utility u_{δ_{c_p}} satisfies
u_{δ_{c_p}}(c_p) ≥ ½ u_{δ_{c_p}}(c_{p−α}) + ½ u_{δ_{c_p}}(c_{p+α})

By Hypothesis II, for every r > p,
u_{δ_{c_r}}(c_p) ≥ ½ u_{δ_{c_r}}(c_{p−α}) + ½ u_{δ_{c_r}}(c_{p+α})   (1)
Consider the lottery over the probabilities in (x, p; x̲, 1−p) given by Q = ⟨p_1, q_1; …; p_n, q_n⟩, where ∑_i q_i p_i = p.

Q is skewed to the left ⟹ there is a sequence Q_i = ⟨p_{i,1}, q_{i,1}; …; p_{i,n_i}, q_{i,n_i}⟩ → Q s.t. Q_1 = ⟨p, 1⟩ and Q_{i+1} is obtained from Q_i by a left symmetric split. Let Q_i^c be the corresponding sequence of lotteries over certainty equivalents.

Suppose p_{i,j} is split into p_{i,j}−α and p_{i,j}+α. Since p_{i,j} ≤ p, by (1),
E[u_{δ_{c_p}}(Q_i^c)] = q_{i,j} u_{δ_{c_p}}(c_{p_{i,j}}) + ∑_{m≠j} q_{i,m} u_{δ_{c_p}}(c_{p_{i,m}})
≥ ½ q_{i,j} u_{δ_{c_p}}(c_{p_{i,j}−α}) + ½ q_{i,j} u_{δ_{c_p}}(c_{p_{i,j}+α}) + ∑_{m≠j} q_{i,m} u_{δ_{c_p}}(c_{p_{i,m}}) = E[u_{δ_{c_p}}(Q_{i+1}^c)]

Continuity implies u_{δ_{c_p}}(c_p) ≥ E[u_{δ_{c_p}}(Q^c)].
By Fréchet differentiability, (∂/∂ε) V(εQ^c + (1−ε)δ_{c_p}) |_{ε=0} ≤ 0.
Quasi-concavity now implies that V(δ_{c_p}) ≥ V(Q^c), i.e., ⟨p, 1⟩ ⪰ Q.
Informal example

Underlying lottery p = (x, p; y, 1−p), x > y.

The DM rejects symmetric noise if he prefers (p, 1) to (p−a, ε; p, 1−2ε; p+a, ε), that is, if he prefers to know that the probability is p rather than to face the symmetric noise around p, for all (or maybe only for sufficiently small) a and ε. Equivalently,
(c(p), 1) ⪰ (c(p−a), ε; c(p), 1−2ε; c(p+a), ε)

We want to learn from these preferences how he'll react to the lottery over probabilities (p_1, q_1; …; p_m, q_m), where ∑ q_i p_i = p.
Informal example

The purpose of the example is to illustrate that under some conditions, rejection of all symmetric noise implies rejection of negatively skewed noise as well.

For instance, if these conditions hold, then the DM will prefer to know that the probability is p rather than face the lottery over probabilities (p−0.3, 1/4; p+0.1, 3/4).

[Figure: negatively skewed noise around p = 0.6 — the simple lottery (x, 0.6; 0, 0.4) vs. a first stage giving probability 1/4 to (x, 0.3; 0, 0.7) and probability 3/4 to (x, 0.7; 0, 0.3).]
Example cont.

Preferences over lotteries are quasi-concave if F ⪰ G ⟹ αF + (1−α)G ⪰ G. If preferences are quasi-concave and F ⪰ αF + (1−α)G for some α ∈ (0, 1), then F ⪰ G.

So if we assume quasi-concavity and show that (c(p), 1) is preferred to
(1−2ε)·(c(p), 1) + 2ε·(c(p−0.3), 1/4; c(p+0.1), 3/4)
= (c(p−0.3), ε/2; c(p), 1−2ε; c(p+0.1), 3ε/2)
for some ε > 0, then we know that (c(p), 1) is preferred to (c(p−0.3), 1/4; c(p+0.1), 3/4), as we want.
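The mixture identity above is easy to verify mechanically. The outcome labels below are placeholder strings for the three certainty equivalents, and ε = 1/10 is an arbitrary illustrative value:

```python
# Verify: (1-2e)*(c(p),1) + 2e*(c(p-0.3),1/4; c(p+0.1),3/4)
#       = (c(p-0.3), e/2; c(p), 1-2e; c(p+0.1), 3e/2)
from fractions import Fraction as Fr

def mix(lam, F, G):
    """lam*F + (1-lam)*G as probability dictionaries over outcomes."""
    out = {}
    for lott, w in ((F, lam), (G, 1 - lam)):
        for x, q in lott.items():
            out[x] = out.get(x, Fr(0)) + w * q
    return out

e = Fr(1, 10)
F = {"c(p)": Fr(1)}
G = {"c(p-0.3)": Fr(1, 4), "c(p+0.1)": Fr(3, 4)}
print(mix(1 - 2 * e, F, G))   # weights e/2 = 1/20, 1-2e = 4/5, 3e/2 = 3/20
```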
Example cont.

We know that the DM prefers not to replace p with (p−0.1, ε; p, 1−2ε; p+0.1, ε).

[Figure: the degenerate lottery (p, 1) vs. the first stage (p−0.1, ε; p, 1−2ε; p+0.1, ε).]
Example cont.

We will thus be done if we show that in (p−0.1, ε; p, 1−2ε; p+0.1, ε), the DM prefers not to replace the ε probability of getting p−0.1 with ε/2 chances of p−0.3 and p+0.1 each:

(p−0.1, ε; p, 1−2ε; p+0.1, ε) vs. (p−0.3, ε/2; p, 1−2ε; p+0.1, 3ε/2)
Example cont.

By rejection of symmetric noise, we know that when he believes that the probability is p−0.1, he'll prefer not to replace ε of it with ε/2 chances of p−0.3 and p+0.1 each:

(p−0.1, 1) vs. (p−0.3, ε/2; p−0.1, 1−ε; p+0.1, ε/2)
Example cont.

Machina's Hypothesis II: If F dominates G by FOSD, then local approximations around F represent a higher degree of risk aversion than around G.

Note that the lottery (p, 1) dominates (p−0.1, 1) by FOSD. Therefore, by Machina's Hypothesis II, when he believes that the probability is p, he'll prefer to replace ε of it with p−0.1 rather than with ε/2 chances of p−0.3 and p+0.1 each.
Example cont.

Conclusion:
- By rejection of symmetric noise, the DM prefers not to replace p with (p−0.1, ε; p, 1−2ε; p+0.1, ε)
- By Hypothesis II, he prefers not to replace the p−0.1 that comes with probability ε with (p−0.3, ε/2; p+0.1, ε/2)
- By transitivity, he prefers to know that the probability is p rather than facing the noise (p−0.3, ε/2; p, 1−2ε; p+0.1, 3ε/2)
- By quasi-concavity, he prefers p to the negatively skewed noise (p−0.3, 1/4; p+0.1, 3/4)

But we don't know whether he prefers p or the positively skewed noise (p−0.1, 3/4; p+0.3, 1/4).