Some exercises on coherent lower previsions

SIPTA school, September 2010

1. Consider the lower prevision P given by:

        f(1)  f(2)  f(3)  P(f)
   f1    2     1     0    0.5
   f2    0     1     2    1
   f3    0     1     0    1

(a) Does it avoid sure loss?

Answer. Yes. It suffices to see that there is a linear prevision that dominates P on its domain: the linear prevision Q given by Q(f) = f(2) satisfies this.

(b) Is it coherent?

Answer. No. Since P(f3) = 1 = max{f3(1), f3(2), f3(3)}, the only linear prevision that dominates P is precisely Q(f) = f(2) for all f. But Q does not coincide with P on all gambles in the domain: Q(f1) = 1 > 0.5 = P(f1). Since P is not the lower envelope of the set M(P), we deduce that it is not coherent.

2. Let P be the lower prevision on L({1, 2, 3}) given by

   P(f) = min{f(1), f(2), f(3)}/2 + max{f(1), f(2), f(3)}/2.

Is it coherent?

Answer. No. Since P is defined on a linear space, a necessary condition for coherence is that it is super-additive, meaning that P(f + g) ≥ P(f) + P(g) for any pair of gambles f, g. To see that this does not hold, consider f, g given by f(1) = 1, f(2) = −1, f(3) = −1 and g(1) = −1, g(2) = 1, g(3) = −1. Then P(f) = P(g) = 0, while P(f + g) = −1.

3. Vacuous lower previsions. Let A be a non-empty subset of a (not necessarily finite) set X. Say we only know that the lower probability of A is equal to 1. This assessment is embodied through the lower prevision P defined on the singleton {I_A} by P(A) = 1 (again, recall that we denote P(I_A) by P(A)).
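Before turning to Exercise 3, the answers to Exercises 1 and 2 can be double-checked numerically. The Python sketch below (an illustration, not part of the original exercise sheet) enumerates probability mass functions on {1, 2, 3} over a grid, finds those dominating P from Exercise 1, and compares the resulting lower envelope with the assessments; it also evaluates the super-additivity counterexample from Exercise 2:

```python
# Exercise 1: check "avoiding sure loss" and coherence by grid enumeration.
gambles = [(2, 1, 0), (0, 1, 2), (0, 1, 0)]   # f1, f2, f3 as vectors
lower = [0.5, 1, 1]                            # the assessments P(f_k)

def expectation(p, f):
    return sum(pi * fi for pi, fi in zip(p, f))

# All probability mass functions on {1, 2, 3} with step 1/100.
n = 100
grid = [(i / n, j / n, (n - i - j) / n)
        for i in range(n + 1) for j in range(n + 1 - i)]

# P avoids sure loss iff some linear prevision dominates it on its domain.
dominating = [p for p in grid
              if all(expectation(p, f) >= lo - 1e-9
                     for f, lo in zip(gambles, lower))]
print(len(dominating) > 0)    # True: P avoids sure loss

# P is coherent iff it is the lower envelope of the dominating previsions.
envelope = [min(expectation(p, f) for p in dominating) for f in gambles]
print(envelope)               # [1.0, 1.0, 1.0]: the envelope of f1 is 1, not 0.5

# Exercise 2: the min/max mixture is not super-additive.
P2 = lambda f: (min(f) + max(f)) / 2
f, g = (1, -1, -1), (-1, 1, -1)
fg = tuple(a + b for a, b in zip(f, g))
print(P2(f), P2(g), P2(fg))   # 0.0 0.0 -1.0: P(f+g) < P(f) + P(g)
```

The grid search recovers the unique dominating mass function (0, 1, 0), exactly the linear prevision Q(f) = f(2) used in the answer above.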

(a) Show that the vacuous lower prevision relative to A, defined by P_A(f) := inf_{x∈A} f(x) for every f ∈ L(X), is a coherent lower prevision on L(X).

Answer. The domain of P_A is a linear space, so it suffices to check that
(i) P_A(f) ≥ inf_{x∈X} f(x) for every f ∈ L(X),
(ii) P_A(λf) = λP_A(f) for every f ∈ L(X) and every λ > 0, and
(iii) P_A(f + g) ≥ P_A(f) + P_A(g) for every f, g ∈ L(X).
All three follow from the elementary properties of the infimum.

(b) Prove that the natural extension E of P is equal to the vacuous lower prevision relative to A:

   E(f) = P_A(f) = inf_{x∈A} f(x) for every f ∈ L(X).

Answer 1: Primal approach. E(f) is equal to the supremum achieved by the free variable γ subject to the constraint

   f − γ ≥ λ(I_A − 1),   (0.1)

with variable λ ≥ 0. Note that γ = inf_{x∈A} f(x) and λ = inf_{x∈A} f(x) − inf_{x∈X} f(x) yield a feasible solution of Eq. (0.1): the right-hand side is zero on A and equals −λ outside A. Therefore, E(f) ≥ inf_{x∈A} f(x). Now let γ and λ constitute any feasible solution of Eq. (0.1). Then, since inf_{x∈A} is monotone, we find in particular that

   inf_{x∈A} f(x) − γ ≥ inf_{x∈A} λ(I_A(x) − 1) = 0,

where the right-hand side is zero because I_A = 1 on A. Hence γ ≤ inf_{x∈A} f(x), and therefore also E(f) ≤ inf_{x∈A} f(x). But we already proved that E(f) ≥ inf_{x∈A} f(x), and hence E(f) = inf_{x∈A} f(x).

Answer 2: Dual approach. Again consider the set M(P) of linear previsions on L(X) that dominate P, and write P_x for the degenerate linear prevision given by P_x(f) = f(x). If we can show that

   ext M(P) = {P_x : x ∈ A},   (0.2)

then the claim is established, since in that case

   E(f) = inf_{Q∈M(P)} Q(f) = inf_{Q∈ext M(P)} Q(f) = inf_{x∈A} P_x(f) = inf_{x∈A} f(x).

We shall prove Eq. (0.2) in the case that X is a finite set (it holds in the general case, but the proof becomes a bit more complex). So assume that X is finite, and let Q ∈ M(P). For any x ∉ A we have A ⊆ X \ {x}, and hence

   Q({x}) = 1 − Q(X \ {x}) ≤ 1 − Q(A) = 0,

so that

   Σ_{x∈A} Q({x}) = Σ_{x∈X} Q({x}) = 1.

This implies that any Q ∈ M(P) is a convex mixture of the P_x for x ∈ A (see the exercise on linear previsions):

   Q(f) = Σ_{x∈X} Q({x}) f(x) = Σ_{x∈A} Q({x}) f(x) = Σ_{x∈A} Q({x}) P_x(f).

Since the P_x are linearly independent, Eq. (0.2) follows.

Answer 3: Least committal extension. The statement is established if we show that P_A is the point-wise smallest coherent lower prevision on L(X) which dominates P on {I_A}. Suppose Q is another coherent lower prevision on L(X) which dominates P on {I_A}, that is, Q(A) = 1. Let f ∈ L(X). It is easy to check that

   f ≥ P_A(f) + [P_A(f) − P_X(f)](I_A − 1),

where P_X(f) = inf_{x∈X} f(x): the right-hand side equals P_A(f) on A and P_X(f) outside A. Since Q is coherent, this implies that

   Q(f) ≥ Q(P_A(f) + [P_A(f) − P_X(f)](I_A − 1)) = P_A(f) + [P_A(f) − P_X(f)] Q(I_A − 1) = P_A(f),

since Q(I_A − 1) = Q(A) − 1 = 0. This establishes the proof.

(c) Extra exercise. Each one of the questions (b), (c) and (d) can be solved in three different ways, using either (i) the primal form (combinations of desirable gambles), (ii) the dual form (sets of probability measures), or (iii) the properties of coherence and natural extension, invoking the result proven in the preparatory exercise (a). Invoke each one of the methods (i), (ii) and (iii) to answer each one of the questions (b), (c) and (d). You may cheat when solving question (d) using method (ii): it is much easier if you assume that X is finite.
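For a finite toy space, both routes of question (b) can be sketched numerically. The space X, the event A and the gamble f below are illustrative assumptions (not from the exercise); both computations should return inf_{x∈A} f(x):

```python
# Toy setup (illustrative assumption): X = {0, 1, 2, 3}, A = {1, 2}.
X = [0, 1, 2, 3]
A = {1, 2}
f = {0: -5.0, 1: 2.0, 2: 3.0, 3: -1.0}

# Primal route: for fixed lambda >= 0, the largest feasible gamma in
# f - gamma >= lambda * (I_A - 1) is min over x of f(x) - lambda*(I_A(x) - 1);
# E(f) is the supremum of this over lambda.
def gamma_max(lam):
    return min(f[x] - lam * ((x in A) - 1) for x in X)

primal = max(gamma_max(lam) for lam in range(101))  # saturates for large lambda

# Dual route: by the argument above, every dominating linear prevision is a
# convex mixture of the degenerate previsions P_x with x in A.
n = 20
dual = min((i / n) * f[1] + ((n - i) / n) * f[2] for i in range(n + 1))

print(primal, dual, min(f[x] for x in A))  # 2.0 2.0 2.0
```

Note how the primal objective gamma_max(λ) increases with λ and stops growing once λ reaches inf_{x∈A} f(x) − inf_{x∈X} f(x), exactly the feasible λ exhibited in Answer 1.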

4. Consider an urn with 10 balls, of which 3 are red and the other 7 are either blue or yellow.

(a) Determine the set M of linear previsions that represent the possible compositions of the urn.

Answer. The set of possible compositions of the urn is given by the following table:

   Red  Blue  Yellow
    3    0     7
    3    1     6
    3    2     5
    3    3     4
    3    4     3
    3    5     2
    3    6     1
    3    7     0

It produces the following set of linear previsions (we give the probability mass functions, which are their restrictions to events):

         Red  Blue  Yellow
   P1    0.3  0     0.7
   P2    0.3  0.1   0.6
   P3    0.3  0.2   0.5
   P4    0.3  0.3   0.4
   P5    0.3  0.4   0.3
   P6    0.3  0.5   0.2
   P7    0.3  0.6   0.1
   P8    0.3  0.7   0

(b) Let f be the gamble given by f(blue) = 2, f(red) = 1, f(yellow) = −1. What is the lower prevision of f?

Answer. The lower prevision of f is the lower envelope of the set {P1(f), P2(f), ..., P8(f)}, which in this case is equal to P(f) = P1(f) = 0.3·1 − 0.7·1 = −0.4.

(c) Do the same for an arbitrary gamble g.
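The envelope computation in part (b) can be checked by brute force over the eight compositions; the short Python sketch below is an illustration, not part of the original sheet:

```python
# Enumerate the eight possible compositions (red, blue, yellow) of the urn
# and take the lower envelope of the expectation of f from part (b).
compositions = [(3, b, 7 - b) for b in range(8)]
f = {"red": 1, "blue": 2, "yellow": -1}

def prevision(counts, gamble):
    # Expectation of the gamble under the composition, counts out of 10 balls.
    r, b, y = counts
    return (r * gamble["red"] + b * gamble["blue"] + y * gamble["yellow"]) / 10

lower = min(prevision(c, f) for c in compositions)
worst = min(compositions, key=lambda c: prevision(c, f))
print(lower, worst)   # -0.4, attained at (3, 0, 7): no blue balls at all
```

Since f(blue) > f(yellow), the infimum is attained when all seven unknown balls are yellow, which is exactly the composition behind P1.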

Answer. Again, we have P(g) = min{P1(g), P2(g), ..., P8(g)}; since P2, ..., P7 are convex combinations of P1 and P8, it follows that

   P(g) = min{P1(g), P8(g)}
        = min{0.3·g(red) + 0.7·g(yellow), 0.3·g(red) + 0.7·g(blue)}
        = 0.3·g(red) + 0.7·min{g(blue), g(yellow)}.

5. Let X = {1, 2, 3}, and consider the following sets of desirable gambles:

   K1 := {f : f(1) + f(2) + f(3) ≥ 0},
   K2 := {f : max{f(1), f(2), f(3)} ≥ 0}.

(a) Are K1, K2 coherent?

Answer. To see that the set of gambles K1 is coherent, we check that it verifies axioms (D1)–(D5) (note that because we are dealing with finite spaces, we can use maximum instead of supremum and minimum instead of infimum):

(D1) Take a gamble f such that max f < 0; then f(1) + f(2) + f(3) ≤ 3 max f < 0, whence f ∉ K1.

(D2) Conversely, if f ≥ 0 then f(1) + f(2) + f(3) ≥ 0, whence f ∈ K1.

(D3) If f, g ∈ K1, then f(1) + f(2) + f(3) ≥ 0 and g(1) + g(2) + g(3) ≥ 0. As a consequence, (f+g)(1) + (f+g)(2) + (f+g)(3) = f(1) + f(2) + f(3) + g(1) + g(2) + g(3) ≥ 0, and this means that f + g ∈ K1.

(D4) Consider f ∈ K1 and λ > 0; then, since f(1) + f(2) + f(3) ≥ 0, we deduce that (λf)(1) + (λf)(2) + (λf)(3) = λ(f(1) + f(2) + f(3)) ≥ 0. This implies that λf ∈ K1.

(D5) Assume that f + ε belongs to K1 for every ε > 0; then (f+ε)(1) + (f+ε)(2) + (f+ε)(3) = f(1) + f(2) + f(3) + 3ε ≥ 0 for all ε > 0, whence f(1) + f(2) + f(3) ≥ −3ε for every ε > 0, and therefore f(1) + f(2) + f(3) ≥ 0. This means that f ∈ K1.

To see that the set K2 is not coherent, it suffices to note that it does not satisfy axiom (D3): consider the gambles f, g given by f(1) = 1, f(2) = f(3) = −2 and g(2) = 1, g(1) = g(3) = −2; then it holds that max f = max g = 1, whence both f and g belong to K2; however, (f+g)(1) = (f+g)(2) = −1 and (f+g)(3) = −4, whence f + g ∉ K2.

(b) If they are, what is the lower prevision they induce on the gamble f given by f(1) = 2, f(2) = 3, f(3) = −1?
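The additivity axiom (D3) can be tested mechanically for both sets. The Python check below (not in the original sheet) confirms the counterexample for K2 and spot-checks closure under addition for K1 on random gamble pairs:

```python
import random

def in_K1(f):   # f(1) + f(2) + f(3) >= 0
    return sum(f) >= 0

def in_K2(f):   # max{f(1), f(2), f(3)} >= 0
    return max(f) >= 0

# Counterexample from the answer above: K2 is not closed under addition.
f = (1, -2, -2)
g = (-2, 1, -2)
h = tuple(a + b for a, b in zip(f, g))
print(in_K2(f), in_K2(g), in_K2(h))   # True True False: (D3) fails for K2

# K1, by contrast, is closed under addition (random spot check).
random.seed(0)
for _ in range(1000):
    a = tuple(random.uniform(-5, 5) for _ in range(3))
    b = tuple(random.uniform(-5, 5) for _ in range(3))
    if in_K1(a) and in_K1(b):
        assert in_K1(tuple(x + y for x, y in zip(a, b)))
print("K1 closed under addition on all sampled pairs")
```

The random check is of course no proof, but it illustrates why (D3) holds for K1: membership is a linear condition, so it is preserved under sums.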

Answer. We only need to verify it for K1. Taking into account the correspondence between sets of desirable gambles and coherent lower previsions, we have that

   P(f) = sup{μ : f − μ ∈ K1}
        = sup{μ : (f−μ)(1) + (f−μ)(2) + (f−μ)(3) ≥ 0}
        = sup{μ : f(1) + f(2) + f(3) − 3μ ≥ 0}
        = sup{μ : 4 − 3μ ≥ 0} = 4/3.

6. Consider X = {1, 2, 3} and let D be the set of gambles given by D := {f1, f2, f3} with f1 = (1, −1, 0), f2 = (0, 1, −1) and f3 = (1, 0, −1), where we denote every gamble f by the vector (f(1), f(2), f(3)).

(a) Calculate the set E induced by D.

Answer. Using the formula for the natural extension of a set of desirable gambles, we obtain

   E = {g : ∀δ > 0 ∃λ1, λ2, λ3 ≥ 0 s.t. g ≥ λ1·f1 + λ2·f2 + λ3·f3 − δ}
     = {g : ∀δ > 0 ∃λ1, λ2, λ3 ≥ 0 s.t. g ≥ (λ1 + λ3, λ2 − λ1, −λ2 − λ3) − δ}.

Now, note that we can assume without loss of generality that λ3 = 0, by considering λ1' = λ1 + λ3, λ2' = λ2 + λ3, λ3' = 0 (indeed, f3 = f1 + f2). This allows us to express the set E as

   E = {g : ∀δ > 0 ∃λ1, λ2 ≥ 0 s.t. g ≥ (λ1, λ2 − λ1, −λ2) − δ}.

Finally, remark that we can also take δ = 0 in the above expression, because the set of gambles

   D' := {(λ1, λ2 − λ1, −λ2) : λ1, λ2 ≥ 0} = {(α1, α2, α3) : α1 ≥ 0, α3 ≤ 0, α1 + α2 + α3 = 0}

is closed in the Euclidean topology, and as a consequence if g + δ dominates a gamble in this set for every δ > 0 then so does g. This implies that

   E = {g : ∃λ1, λ2 ≥ 0 s.t. g ≥ (λ1, λ2 − λ1, −λ2)}.

(b) Determine the set M and the lower prevision P induced by E.
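The final characterization of E in (a) admits a simple membership test. One can check (a derived simplification, not stated in the text) that g ≥ (λ1, λ2 − λ1, −λ2) holds for some λ1, λ2 ≥ 0 exactly when the partial sums g(1), g(1) + g(2) and g(1) + g(2) + g(3) are all non-negative:

```python
# Membership test for E from Exercise 6(a), via the partial-sum condition:
# g in E  iff  g(1) >= 0, g(1)+g(2) >= 0 and g(1)+g(2)+g(3) >= 0
# (take lambda1 = g(1) and lambda2 = max(0, -g(3)) to see the equivalence).
def in_E(g):
    g1, g2, g3 = g
    return g1 >= 0 and g1 + g2 >= 0 and g1 + g2 + g3 >= 0

D = [(1, -1, 0), (0, 1, -1), (1, 0, -1)]
print(all(in_E(f) for f in D))     # True: the generators of D belong to E
print(in_E((0, 1, 2)))             # True
print(in_E((-0.1, 1, 2)))          # False: g(1) < 0
```

The partial-sum form also makes the dual set transparent: requiring P(g) ≥ 0 on all such g is exactly the condition P(1) ≥ P(2) ≥ P(3) derived in part (b).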

Answer. Using the correspondence between coherent sets of desirable gambles and sets of linear previsions, we obtain M_E := {P : P(g) ≥ 0 for every g ∈ E}. This means that M_E is the set of linear previsions P satisfying

   P(1)·λ1 + P(2)·(λ2 − λ1) + P(3)·(−λ2) ≥ 0 for all λ1, λ2 ≥ 0,

or, equivalently,

   λ1·(P(1) − P(2)) + λ2·(P(2) − P(3)) ≥ 0 for all λ1, λ2 ≥ 0.

As a consequence, M_E = {P : P(1) ≥ P(2) ≥ P(3)}, and the lower prevision induced by E is the lower envelope of M_E.

(c) What is the lower prevision of the gamble f = (0, 1, 2)?

Answer. For every prevision P in M_E we have that

   P(f) = f(1)·P(1) + f(2)·P(2) + f(3)·P(3) = P(2) + 2·P(3);

if we take the prevision P given by P(1) = 1, P(2) = P(3) = 0, we get P(f) = 0, and since this P belongs to M_E we deduce that the lower prevision of f is at most 0. But since from coherence the lower prevision of f is at least min f = 0, we deduce that in this case it is equal to 0.
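As a final check, the lower envelope over M_E can be computed from the extreme points of the credal set {P : P(1) ≥ P(2) ≥ P(3)}. That these extreme points are the three mass functions below is a standard fact about this ordered simplex, stated here as an assumption that is easy to verify:

```python
# Extreme points of {p : p1 >= p2 >= p3, p1 + p2 + p3 = 1, p >= 0}:
# uniform distributions on the down-sets {1}, {1,2} and {1,2,3}.
extreme = [(1, 0, 0), (0.5, 0.5, 0), (1 / 3, 1 / 3, 1 / 3)]
f = (0, 1, 2)

def prevision(p, g):
    return sum(pi * gi for pi, gi in zip(p, g))

lower = min(prevision(p, f) for p in extreme)
print(lower)   # 0: attained at the degenerate prevision on outcome 1
```

Since the lower envelope of a credal set is attained at an extreme point, checking these three previsions suffices, and the minimum agrees with the answer P(f) = 0 above.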