Universal Confidence Sets for Solutions of Optimization Problems


Silvia Vogel

Abstract
We consider random approximations to deterministic optimization problems. The objective function and the constraint set can be approximated simultaneously. Relying on concentration-of-measure results we derive universal confidence sets for the constraint set, the optimal value and the solution set. Special attention is paid to solution sets which are not single-valued. Many statistical estimators being solutions to random optimization problems, the approach can also be employed to derive confidence sets for constrained estimation problems.

Keywords: random optimization problems, universal confidence sets, convergence rate, constrained estimation
MSC2000: 90C15, 90C31, 62F25, 62F30

1 Introduction

Random approximations of deterministic or random optimization problems come into play if unknown quantities are replaced with estimates or for numerical reasons. Qualitative stability results, which make assertions on the (semi-)convergence of the constraint sets, the optimal values, and the solution sets, are available for convergence almost surely, in probability and in distribution, c.f. [11], [16], [7], [6], [18], [2]. Furthermore, there are quantitative results which estimate the distance between the optimal values and/or solution sets by suitable probability metrics, see [13] for an overview. Confidence bounds for optimal values and solution sets provide valuable additional information. In the traditional way confidence bounds are derived from the distribution of the statistic under consideration. However, the exact distribution is available only in rare cases. Therefore, often knowledge about the limit distribution is used as a surrogate. Hence qualitative stability

results for convergence in distribution can be employed to derive asymptotic confidence sets, see [11] and [2]. In [12] Pflug suggests an approach which can be used to derive even non-asymptotic confidence sets without knowledge about the exact distribution. The starting point are assertions on convergence in probability, supplemented with a suitable convergence rate and tail behavior. In [12] uniform convergence with known convergence rate and tail behavior of the objective functions together with a growth condition are assumed, and results on inner approximations of the solution sets are proved. In this way non-asymptotic confidence sets can be obtained if the solution set is single-valued. We will pursue the way proposed in [12] further, take into account also the approximation of the constraint set, and show how one can proceed if the solution set is not single-valued. The results are formulated in a general way, allowing e.g. for ε_n-relaxed constraint sets and ε_n-optimal solutions. We will also assume that suitable assertions on the (one-sided) uniform convergence in probability of the objective functions and/or the constraint functions with a convergence rate and tail function are available, albeit these conditions are rather restrictive. It is, however, also possible to derive similar, yet more technical assertions if one assumes some kind of continuous convergence in probability with convergence rate and tail behavior. This so-called pointwise approach opens the possibility to employ suitable concentration inequalities directly. This method will be investigated elsewhere. The results will be illustrated by three examples. Firstly, we discuss a simple example in order to show how one can deal with the uniform convergence assumptions. Secondly, we consider the approximation of a chance constraint. The third example was chosen to emphasize the applicability of our results in statistics.
Many statistical estimators being solutions to random optimization problems, our assertions can be employed to derive confidence bounds for estimators, even if one does not have full knowledge about the distribution of the underlying statistic. We will provide universal confidence bounds for quantiles, allowing for distribution functions which are not continuous. The paper is organized as follows. In Chapter 2 we introduce the mathematical model and show how universal confidence sets can be derived. In Chapter 3 and Chapter 4 we prove the needed convergence assertions for the constraint sets, the optimal values, and the solution sets. Chapter 5 contains the examples.

2 Universal confidence sets

Let (E, d) be a complete separable metric space and [Ω, Σ, P] a complete probability space. We assume that a deterministic optimization problem

(P_0)  min_{x ∈ Γ_0} f_0(x)

is approximated by a sequence of random problems

(P_n)  min_{x ∈ Γ_n(ω)} f_n(x, ω),  n ∈ N.

Additionally to (P_n), for a given ε > 0, we consider so-called ε-relaxations

(P_{nε})  min_{x ∈ Γ_{nε}(ω)} f_{nε}(x, ω),  n ∈ N.

The relaxed problems offer the possibility to deal with approximate constraint sets, objective functions and/or solution sets. Consequently, the approach can be applied, e.g., to constraint sets and functions which are obtained by Monte Carlo methods (c.f. [14], [10]) or to methods which use plug-in estimators. Moreover, the relaxed problems occur in a natural manner if we aim at deriving outer approximations of constraint sets and solution sets. The following results will be formulated for (P_{nε}). The problem (P_n) is then regarded as a special case of (P_{nε}) in which the objective functions and constraint sets do not depend on ε. Γ_0 is a nonempty closed subset of E, and f_0 : E → R^1 is a lower semicontinuous function. The Γ_{nε} : Ω → 2^E are closed-valued measurable multifunctions, and the f_{nε} : E × Ω → R^1 are lower semicontinuous random functions which are supposed to be (B(E) ⊗ Σ, B^1)-measurable. B(E) denotes the Borel σ-field of E and B^1 the σ-field of Borel sets of R^1. The measurability conditions imposed here do not have the weakest form. We use them for the sake of simplicity. They are satisfied in many applications and guarantee that all functions of ω needed in the following have the necessary measurability properties. Furthermore, it should be mentioned that the lower semicontinuity assumption on the objective functions f_{nε} can be dropped. Imposing this condition, however, we can omit some technical details in the proofs. Eventually, we assume that all objective functions are (almost surely) proper functions, i.e. functions with values in (−∞, +∞] which are not identically +∞.
Our main concern will be with the solution sets Ψ_0 of (P_0) and Ψ_{nε} of (P_{nε}). We aim at proving assertions of the form

∀ε > 0 ∀n ≥ n_0(ε):  P{ω : Ψ_{nε}(ω) \ U_{β_{nε}}Ψ_0 ≠ ∅} ≤ K(ε)   (1)

and

∀ε > 0 ∀n ≥ n_0(ε):  P{ω : Ψ_0 \ U_{β_{nε}}Ψ_{nε}(ω) ≠ ∅} ≤ K(ε).

Here (β_{nε}) is a sequence of nonnegative numbers which tends to zero for each ε > 0, and K : R_+ → R_+ is a function with lim_{ε→∞} K(ε) = 0. U_α X denotes an open neighborhood of the set X with radius α: U_α X := {x ∈ E : d(x, X) < α}. Because of the similarity with inner approximations in probability of sequences of random sets (c.f. [7], [19]) we will call a sequence (Ψ_{nε}) fulfilling the first relation an inner approximation in probability to Ψ_0 with convergence rate β and tail behavior function K (in short, an inner (β, K)-approximation). Correspondingly, a sequence (Ψ_{nε}) fulfilling the second relation will be called an outer approximation in probability to Ψ_0 with convergence rate β and tail behavior function K (in short, an outer (β, K)-approximation).

In order to derive universal confidence sets to the level ε_0, i.e. a sequence of random sets (C_n) with P{ω : Ψ_0 \ C_n(ω) ≠ ∅} ≤ ε_0 for all n ≥ n_0, we can proceed as follows: Suppose that an outer (β, K)-approximation (Ψ_{nε}) to Ψ_0 is available and choose, to ε_0 > 0, an ε > 0 such that K(ε) ≤ ε_0. The set C_n := U_{β_{nε}}Ψ_{nε} has the desired property. Of course, one is interested in small confidence sets, hence (β_{nε}) should go to zero as fast as possible and K should converge to zero as fast as possible as ε tends to infinity.

If one considers approximating problems (P_n) without relaxation, then under reasonable conditions one obtains inner approximations only. This is satisfying if all approximating problems (P_n) have solutions which are uniformly bounded and the solution set to the problem (P_0) is single-valued, because in this case inner approximations are also outer approximations and we can proceed as above. What can be done if the solution set to (P_0) is not single-valued? Taking into account that we need knowledge about convergence rates anyway, we can exploit this knowledge to determine suitable relaxing sequences (κ_{nε}), which tend to zero for each ε > 0, and consider κ_{nε}-optimal solutions Ψ^κ_{nε}.
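To make the recipe above concrete: a minimal numerical sketch, under assumed ingredients that are hypothetical placeholders (a tail function K(ε) = 2·exp(−2ε²) and a rate β_{nε} = ε/√n, neither derived at this point of the paper). It chooses ε with K(ε) ≤ ε_0 by bisection and returns the blow-up radius for C_n = U_β Ψ_{nε}.

```python
import math

# Hypothetical ingredients: tail K(e) = 2*exp(-2*e^2), rate beta_{n,eps} = e/sqrt(n).
def blow_up_radius(n, eps0,
                   tail=lambda e: 2.0 * math.exp(-2.0 * e * e),
                   rate=lambda n, e: e / math.sqrt(n)):
    """Choose eps with tail(eps) <= eps0, return the radius beta_{n,eps};
    C_n = U_beta Psi_{n,eps} is then a level-eps0 universal confidence set."""
    lo, hi = 0.0, 1.0
    while tail(hi) > eps0:          # bracket: tail is non-increasing
        hi *= 2.0
    for _ in range(80):             # bisection for the smallest admissible eps
        mid = 0.5 * (lo + hi)
        if tail(mid) > eps0:
            lo = mid
        else:
            hi = mid
    return rate(n, hi)
```

As expected, the radius shrinks like 1/√n while the level ε_0 enters only through the (slowly growing) inverse of the tail function.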

The problem that, without relaxing, only inner approximations can be obtained is also apparent for the constraint sets. There are problems where κ_{nε}-relaxations are needed to obtain outer approximations of the constraint sets. An example will be considered in Chapter 5. In order to obtain reasonable confidence sets one would like to have lim_{n→∞} β^(i)_{nε} = 0 and lim_{ε→∞} K_i(ε) = 0 for all sequences (β^(i)_{nε}) and functions K_i which occur in the following. As mentioned, in order to obtain small confidence sets, the limits should go to zero as fast as possible. These properties are, however, not needed to prove the results in Chapter 3 and Chapter 4. We only assume throughout the paper that the sequences (β^(i)_{nε}) are non-increasing sequences of positive numbers and that the functions K_i : R_+ → R_+ are non-increasing. As a by-product, from the results presented in the following, one can also derive confidence sets for the optimal values, proceeding as described above for the solution sets. Hence the approach can help to assess the quality of a solution to an approximate optimization problem and may be of interest also for model selection.

3 Approximation of the constraint set

In this section we consider constraint sets which are given by inequality constraints, and their approximations. Results of this kind are of course needed if approximation of the constraint set is inherent in the problem under consideration. Moreover, the statements will be employed to derive assertions on the behavior of the solution sets, regarding the difference between the true objective function and the optimal value as constraint function. As the optimal values for problems with κ_{nε}-relaxed constraint sets may depend on ε, the resulting constraint function may depend on ε, too. Furthermore, when applying Theorem 1 below to the solution sets, the multifunctions Q_{nε} will be interpreted as constraint sets, hence we have to allow that they depend on ε, too.

Consequently, we have to make sure that even the results for the constraint sets hold for relaxed constraint functions and multifunctions. Let Q_0 be a closed nonempty subset of E and J = {1, ..., j_M} a finite index set. We consider functions g^j_0 : E → R^1, j ∈ J, which are lower semicontinuous in all points x ∈ E, and define Γ_0 := {x ∈ E : g^j_0(x) ≤ 0, j ∈ J} ∩ Q_0. Γ_0 is assumed to be nonempty. For each ε > 0, the set Q_0 is approximated by a sequence (Q_{nε}) of

closed-valued measurable multifunctions, and the functions g^j_0, j ∈ J, are approximated by sequences (g^j_{nε}) of functions g^j_{nε} : E × Ω → R^1, j ∈ J, which are (B(E) ⊗ Σ, B^1)-measurable. Furthermore, we assume that the functions g^j_{nε}(·, ω) are lower semicontinuous for all ω ∈ Ω. Also these measurability and semicontinuity properties could be weakened. Eventually, Γ_{nε} is defined by Γ_{nε}(ω) := {x ∈ E : g^j_{nε}(x, ω) ≤ 0, j ∈ J} ∩ Q_{nε}(ω). Under our assumptions Γ_{nε} is a closed-valued measurable multifunction. If we have Q_0 = Q_{nε}(ω) ≡ E, we will use the notation Γ̂_0 and Γ̂_{nε}, respectively: Γ̂_0 = {x ∈ E : g^j_0(x) ≤ 0, j ∈ J} and Γ̂_{nε}(ω) := {x ∈ E : g^j_{nε}(x, ω) ≤ 0, j ∈ J}.

In the following we use functions ν, µ and λ. We assume that they map R_+ into R_+, fulfill ν(0) = µ(0) = λ(0) = 0, and are increasing and non-constant. By the superscript −1 we denote their right inverses: ν^{−1}(y) := inf{x ∈ R_+ : ν(x) > y}.

Theorem 1 (Inner Approximation of the Constraint Set) Assume that the following conditions are satisfied:

(CI1) There exist a function K_1 and to all ε > 0 a sequence (β^(1)_{nε}) such that
P{ω : Q_{nε}(ω) \ U_{β^(1)_{nε}}Q_0 ≠ ∅} ≤ K_1(ε).

(CI2) There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that for all j ∈ J
P{ω : inf_{x ∈ UQ_0 \ Γ_0} (g^j_{nε}(x, ω) − g^j_0(x)) ≤ −β^(2)_{nε}} ≤ K_2(ε)
for a suitable neighborhood UQ_0.

(CI3) There exists a function ν such that for all ε > 0
U_{ν(ε)}Q_0 ∩ U_{ν(ε)}Γ̂_0 ⊆ U_ε Γ_0.

(CI4) There exists a function µ such that for all ε > 0
x ∈ U_ε Q_0 \ U_ε Γ̂_0 ⇒ ∃ j ∈ J : g^j_0(x) ≥ µ(ε).

Then for all ε > 0, β_{nε} = max{ν^{−1}(β^(1)_{nε}), ν^{−1}(µ^{−1}(β^(2)_{nε}))}, and n_0(ε) = min{k : U_{β_{k,ε}}Q_0 ⊆ UQ_0} the relation

P{ω : Γ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅} ≤ K_1(ε) + j_M K_2(ε)
holds for all n ≥ n_0(ε).

Proof. Assume that for given ε > 0, n ≥ n_0(ε), and ω ∈ Ω the relation Γ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅ holds. Then there is x_{nε}(ω) ∈ Γ_{nε}(ω) which does not belong to U_{β_{nε}}Γ_0. Because of (CI3) we have x_{nε}(ω) ∉ U_{ν(β_{nε})}Q_0, or x_{nε}(ω) ∈ U_{ν(β_{nε})}Q_0 and x_{nε}(ω) ∉ U_{ν(β_{nε})}Γ̂_0. In the first case we obtain Q_{nε}(ω) \ U_{ν(β_{nε})}Q_0 ≠ ∅. Hence, because of ν(β_{nε}) ≥ β^(1)_{nε}, Q_{nε}(ω) \ U_{β^(1)_{nε}}Q_0 ≠ ∅, and we can employ (CI1). In the second case we obtain by (CI4) for at least one j ∈ J, g^j_0(x_{nε}(ω)) ≥ µ(ν(β_{nε})) ≥ β^(2)_{nε}, hence, because of U_{ν(β_{nε})}Q_0 ⊆ UQ_0,
inf_{x ∈ UQ_0 \ Γ_0} (g^j_{nε}(x, ω) − g^j_0(x)) ≤ −β^(2)_{nε}.
It remains to employ (CI2). □

The conditions (CI2) and (CI4) can be replaced using strict inequalities in the following form without changing the conclusion of Theorem 1. This holds correspondingly for further assertions.

(CI2-s) There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that for all j ∈ J
P{ω : inf_{x ∈ UQ_0 \ Γ_0} (g^j_{nε}(x, ω) − g^j_0(x)) < −β^(2)_{nε}} ≤ K_2(ε)
for a suitable neighborhood UQ_0.

(CI4-s) There exists a function µ such that for all ε > 0
x ∈ U_ε Q_0 \ U_ε Γ̂_0 ⇒ ∃ j ∈ J : g^j_0(x) > µ(ε).

If the set Q_0 remains fixed in all approximations, i.e. Q_{nε} ≡ Q_0, the assumptions in Theorem 1 can be weakened.

Corollary 1.1 Assume that the following assumptions are satisfied:

(CI2-W) There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that for all j ∈ J
P{ω : inf_{x ∈ Q_0 \ Γ_0} (g^j_{nε}(x, ω) − g^j_0(x)) ≤ −β^(2)_{nε}} ≤ K_2(ε).

(CI4-W) There exists a function µ such that for all ε > 0
x ∈ Q_0 \ U_ε Γ̂_0 ⇒ ∃ j ∈ J : g^j_0(x) ≥ µ(ε).

Then for all ε > 0 and β_{nε} = µ^{−1}(β^(2)_{nε}) the relation
P{ω : Γ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅} ≤ j_M K_2(ε)
holds for all n.

Proof. We can choose ν(ε) = ε. (CI1) is satisfied with K_1 ≡ 0 and an arbitrary β^(1)_{nε}. As we are interested in small values of β_{nε}, we use the form given in the assertion. □

If we want to employ Theorem 1 for solution sets we have to deal with one constraint function only. The following corollary is a specialization of Theorem 1 to j_M = 1 and g^1_0 =: g_0, g^1_{nε} =: g_{nε}. Additionally, we replace (CI4) with a growth condition.

Corollary 1.2 Assume that (CI1), (CI2), (CI3), and the following condition (Gr-g_0) are satisfied:

(Gr-g_0) There exist an increasing function ψ_1 : R_+ → R_+ and constants c_1 > 0, δ_1 > 0, and θ_1 > 0 such that
∀x ∈ UQ_0 : g_0(x) ≥ ψ_1(d(x, Γ̂_0)) and ∀θ with 0 < θ < θ_1 : ψ_1(θ) ≥ c_1 θ^{δ_1}.

Then for all ε > 0, β_{nε} = max{ν^{−1}(β^(1)_{nε}), ν^{−1}((β^(2)_{nε}/c_1)^{1/δ_1})}, and n_0(ε) = min{k : U_{β_{k,ε}}Q_0 ⊆ U_{θ_1}Q_0 ∩ UQ_0} the relation
P{ω : Γ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅} ≤ K_1(ε) + K_2(ε)
holds for all n ≥ n_0(ε).

If the functions g_{nε} do not depend on ε, Q_{nε} ≡ Q_0, and β^(2)_{nε} = ε/γ_n holds, instead of (CI2) the following condition can be used:

(CI2') There exist a function K_2 and a sequence (γ_n) such that
P{ω : γ_n inf_{x ∈ Q_0 \ Γ_0} (g_n(x, ω) − g_0(x)) ≤ −ε} ≤ K_2(ε).

In this case Corollary 1.2 can be simplified in the following way:

Corollary 1.3 Assume that (CI2') and (Gr-g_0) with Q_0 instead of UQ_0 are satisfied. Then for all ε > 0 the relation
P{ω : Γ_n(ω) \ U_{ε(γ_n)^{−1/δ_1}}Γ_0 ≠ ∅} ≤ K_2(c_1 ε^{δ_1})
holds.

Proof. With Corollary 1.1 and Corollary 1.2 we obtain P{ω : Γ_n(ω) \ U_{β_{nε}}Γ_0 ≠ ∅} ≤ K_2(ε) with β_{nε} = (ε/(c_1 γ_n))^{1/δ_1}. Replacing ε with c_1 η^{δ_1} yields the conclusion. □

As mentioned in the introduction, in general the sequence (Γ_n) approximates a subset of Γ_0 only. In order to obtain outer approximations, additional assumptions have to be imposed. Qualitative stability theory usually assumes that the condition Γ_0 ⊆ cl{x ∈ Q_0 : g^j_0(x) < 0, j ∈ J} is fulfilled, where cl denotes the closure. In order to obtain also a convergence rate and a tail function, we impose a quantified version of this assumption, see (CO3) below. Unfortunately, a condition of this kind is useless if one intends to employ the result for the solution set, because (CO3) cannot be satisfied by g_n := f_n − Φ_n, where Φ_n denotes the optimal value of the problem (P_n). Hence we will provide a second approach which considers inequality constraints of the form g^j_{nε}(x, ω) < κ_{nε}, j ∈ J, where (κ_{nε}) is a suitable sequence of positive reals with lim_{n→∞} κ_{nε} = 0 for all ε > 0.

For the formulation of (CO3) we need the ε-interior of Γ_0. Let, for a given ε > 0, CI(ε) := Γ_0 \ Ū_ε(E \ Γ_0), where Ū_ε denotes the closure of U_ε.

Theorem 2 (Outer Approximation of the Constraint Set, Inner Point) Assume that the following conditions are satisfied:

(CO1) There exist a function K_1 and to all ε > 0 a sequence (β^(1)_{nε}) such that
P{ω : Q_0 \ U_{β^(1)_{nε}}Q_{nε}(ω) ≠ ∅} ≤ K_1(ε).

(CO2) There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that for all j ∈ J
P{ω : sup_{x ∈ Γ_0} (g^j_{nε}(x, ω) − g^j_0(x)) ≥ β^(2)_{nε}} ≤ K_2(ε).

(CO3) There exist ε̄ > 0 and a function µ such that for all 0 < ε ≤ ε̄
Γ_0 ⊆ Ū_ε CI(ε) and x ∈ CI(ε) ⇒ ∀j ∈ J : g^j_0(x) ≤ −µ(ε).

Then for all 0 < ε ≤ ε̄, β_{nε} = max{β^(1)_{nε}, 2µ^{−1}(β^(2)_{nε})}, and n_0(ε) = min{k : β_{k,ε} ≤ 2ε̄} the relation
P{ω : Γ_0 \ (U_{β_{nε}}Γ̂_{nε}(ω) ∩ U_{β^(1)_{nε}}Q_{nε}(ω)) ≠ ∅} ≤ K_1(ε) + j_M K_2(ε)
holds for all n ≥ n_0(ε).

Proof. Assume that for given ε > 0, n ≥ n_0(ε), and ω ∈ Ω the relation Γ_0 \ (U_{β_{nε}}Γ̂_{nε}(ω) ∩ U_{β^(1)_{nε}}Q_{nε}(ω)) ≠ ∅ holds. Then there is x_{nε}(ω) ∈ Γ_0 which does not belong to U_{β_{nε}}Γ̂_{nε}(ω) ∩ U_{β^(1)_{nε}}Q_{nε}(ω). If x_{nε}(ω) ∉ U_{β^(1)_{nε}}Q_{nε}(ω) we can employ (CO1). If x_{nε}(ω) ∉ U_{β_{nε}}Γ̂_{nε}(ω) we can choose x̃_{nε}(ω) ∈ CI(β_{nε}/2) with x̃_{nε}(ω) ∉ Γ̂_{nε}(ω), i.e. g^{j_0}_{nε}(x̃_{nε}(ω), ω) > 0 for at least one j_0 ∈ J. Because of g^{j_0}_0(x̃_{nε}(ω)) ≤ −µ(β_{nε}/2) ≤ −β^(2)_{nε} we can employ (CO2). □

Now we consider κ_{nε}-relaxed inequality constraints. Let Γ̂^κ_{nε}(ω) := {x ∈ E : g^j_{nε}(x, ω) < κ_{nε}, j ∈ J} and Γ^κ_{nε}(ω) := Γ̂^κ_{nε}(ω) ∩ Q_{nε}(ω).

Theorem 3 (Outer Approximation of the Constraint Set, Relaxation) Assume that (CO1) and (CO2) are satisfied. Then for all ε > 0, κ_{nε} = β^(2)_{nε}, and β_{nε} = max{β^(1)_{nε}, β^(2)_{nε}} the relation
P{ω : Γ_0 \ (Γ̂^κ_{nε}(ω) ∩ U_{β^(1)_{nε}}Q_{nε}(ω)) ≠ ∅} ≤ K_1(ε) + j_M K_2(ε)
holds for all n.

Proof. Assume that for given ε > 0, n ∈ N, and ω ∈ Ω the relation Γ_0 \ (Γ̂^κ_{nε}(ω) ∩ U_{β^(1)_{nε}}Q_{nε}(ω)) ≠ ∅ holds. Then there is x_{nε}(ω) ∈ Γ_0 which does not belong to Γ̂^κ_{nε}(ω) ∩ U_{β^(1)_{nε}}Q_{nε}(ω). Hence g^j_0(x_{nε}(ω)) ≤ 0 for all j ∈ J and x_{nε}(ω) ∈ Q_0, but either x_{nε}(ω) ∉ U_{β^(1)_{nε}}Q_{nε}(ω) or g^j_{nε}(x_{nε}(ω), ω) ≥ κ_{nε} = β^(2)_{nε} for at least one j ∈ J. In the first case we obtain Q_0 \ U_{β^(1)_{nε}}Q_{nε}(ω) ≠ ∅. The second case yields sup_{x ∈ Γ_0} (g^j_{nε}(x, ω) − g^j_0(x)) ≥ β^(2)_{nε} for at least one j ∈ J. □

A corresponding result holds if U_{β^(1)_{nε}}Q_{nε} is replaced with Q_{nε} in the condition (CO1) and in the assertion. The question arises under what conditions (Γ^κ_{nε}) is also an inner approximation. Results of this kind will help to assess the quality of an outer approximation. An inspection of the proof of Theorem 1 shows that with κ_{nε} = β^(2)_{nε} the following statement can be obtained.

Theorem 4 (Inner Approximation of the Constraint Set, Relaxation) Assume that (CI1), (CI2), (CI3), and (CI4) are satisfied. Then for all ε > 0, κ_{nε} = β^(2)_{nε}, β_{nε} = max{ν^{−1}(β^(1)_{nε}), ν^{−1}(µ^{−1}(2β^(2)_{nε}))}, and n_0(ε) = min{k : U_{β_{k,ε}}Q_0 ⊆ UQ_0} the relation
P{ω : Γ^κ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅} ≤ K_1(ε) + j_M K_2(ε)
holds for all n ≥ n_0(ε).

Finally, we will summarize what we obtain for one constraint function under a growth condition.

Theorem 5 (Approximation of the Constraint Set, Relaxation) Assume that j_M = 1 and (CI1), (CO1), (CI2), (CO2), (CI3), and (Gr-g_0) are satisfied. Then for all ε > 0, κ_{nε} = β^(2)_{nε}, β_{nε} = max{β^(1)_{nε}, β^(2)_{nε}, ν^{−1}(β^(1)_{nε}), ν^{−1}((2β^(2)_{nε}/c_1)^{1/δ_1})}, and n_0(ε) = min{k : U_{β_{k,ε}}Q_0 ⊆ UQ_0} the relation
P{ω : (Γ^κ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅) or (Γ_0 \ (Γ̂^κ_{nε}(ω) ∩ Q_{nε}(ω)) ≠ ∅)} ≤ 2K_1(ε) + 2K_2(ε)
holds for all n ≥ n_0(ε).

Proof. We have
P{ω : (Γ^κ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅) or (Γ_0 \ (Γ̂^κ_{nε}(ω) ∩ Q_{nε}(ω)) ≠ ∅)}
≤ P{ω : Γ^κ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅} + P{ω : Γ_0 \ (Γ̂^κ_{nε}(ω) ∩ Q_{nε}(ω)) ≠ ∅}.
The assumptions of Theorem 3 (in the version with Q_{nε} instead of U_{β^(1)_{nε}}Q_{nε}) and Theorem 4 are satisfied with µ(ε) = c_1 ε^{δ_1}. □

Unfortunately, the result is not symmetric. In order to derive a result of the form

P{ω : (Γ^κ_{nε}(ω) \ U_{β_{nε}}Γ_0 ≠ ∅) or (Γ_0 \ U_{β_{nε}}Γ^κ_{nε}(ω) ≠ ∅)} ≤ K(ε)
we would need functions ν and conditions similar to (CI3) for each n.

Now assume that there is only one constraint function g_n, which does not depend on ε, and Q_{nε} ≡ Q_0. If β^(2)_{nε} = ε/γ_n, instead of (CO2) the following condition can be used:

(CO2') There exist a function K_2 and a sequence (γ_n) such that
P{ω : γ_n sup_{x ∈ Γ_0} (g_n(x, ω) − g_0(x)) ≥ ε} ≤ K_2(ε).

Then the following result can be derived.

Corollary 5.1 Assume that (CI2'), (CO2'), and (Gr-g_0) with Q_0 instead of UQ_0 are satisfied. Then for all ε > 0, κ_n = ε/γ_n, β_n = max{ε/γ_n, (2ε/(c_1 γ_n))^{1/δ_1}}, and n_0(ε) = min{k : β_{k,ε} ≤ θ_1} the relation
P{ω : (Γ^κ_n(ω) \ U_{β_n}Γ_0 ≠ ∅) or (Γ_0 \ Γ^κ_n(ω) ≠ ∅)} ≤ 2K_2(ε)
holds for all n ≥ n_0(ε).

Proof. The proof follows by Theorem 5, proceeding in a similar way as in the derivation of Corollary 1.3. □

If max{ε/γ_n, (2ε/(c_1 γ_n))^{1/δ_1}} = (2ε/(c_1 γ_n))^{1/δ_1}, the result can be rewritten as in Corollary 1.3.

4 Approximation of the optimal values and the solution sets

We turn to the optimal values and the solution sets of the problems (P_0) and (P_{nε}). Let Φ_{nε}(ω) := inf_{x ∈ Γ_{nε}(ω)} f_{nε}(x, ω) and let Ψ_{nε}(ω) denote the corresponding solution set. In the following, the constraint sets and their approximations are not supposed to have a special form. Especially, Γ_{nε} can have the form which was used in Chapter 3, but it can also denote a set originating from a relaxation like Γ^κ_{nε}.
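As an aside, the rate statements of Chapter 3 translate directly into computable confidence radii for the constraint set. A minimal numerical sketch of the Corollary 1.3 recipe, under assumed ingredients that are purely illustrative: a DKW-type tail K_2(ε) = 2·exp(−2ε²) (so γ_n = √n) and hypothetical growth constants c_1, δ_1.

```python
import math

# Illustrative ingredients: tail K2(e) = 2*exp(-2*e^2) with gamma_n = sqrt(n),
# growth constants c1, delta1 as in (Gr-g_0); all values are placeholders.
def constraint_radius(n, eps0, c1=0.5, delta1=1.0):
    """Radius beta with P{Gamma_n \\ U_beta Gamma_0 != empty} <= eps0,
    following Corollary 1.3: beta = (e / (c1 * gamma_n))**(1/delta1),
    where e solves K2(e) = eps0."""
    e = math.sqrt(math.log(2.0 / eps0) / 2.0)    # invert the tail function
    return (e / (c1 * math.sqrt(n))) ** (1.0 / delta1)
```

At level ε_0 the inflated set U_β Γ_0 then covers the approximating constraint set with probability at least 1 − ε_0, and the radius shrinks like γ_n^{−1/δ_1}.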

We do not impose compactness conditions on Γ_0 and Γ_{nε}. Instead we assume, for the sake of simplicity, that the original and the approximating problems have a solution.

Theorem 6 (Lower Approximation of the Optimal Value) Assume that the following conditions are satisfied:

(VL1) There exist a function K_1 and to all ε > 0 a sequence (β^(1)_{nε}) such that
P{ω : Γ_{nε}(ω) \ U_{β^(1)_{nε}}Γ_0 ≠ ∅} ≤ K_1(ε).

(VL2) There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that
P{ω : inf_{x ∈ UΓ_0} (f_{nε}(x, ω) − f_0(x)) ≤ −β^(2)_{nε}} ≤ K_2(ε)
for a suitable neighborhood UΓ_0.

(VL3) There exists a function λ such that for all ε > 0
∀x ∈ U_{λ(ε)}Γ_0 ∩ UΓ_0 : f_0(x) ≥ Φ_0 − ε.

Then for all ε > 0, β_{nε} = max{2λ^{−1}(β^(1)_{nε}), 2β^(2)_{nε}}, and n_0(ε) = min{k : U_{λ(β_{k,ε}/2)}Γ_0 ⊆ UΓ_0} the relation
P{ω : Φ_{nε}(ω) ≤ Φ_0 − β_{nε}} ≤ K_1(ε) + K_2(ε)
holds for all n ≥ n_0(ε).

Proof. Assume that for given ε > 0, n ≥ n_0(ε), and ω ∈ Ω the relation Φ_{nε}(ω) ≤ Φ_0 − β_{nε} holds. Then there exists x_{nε}(ω) ∈ Γ_{nε}(ω) such that f_{nε}(x_{nε}(ω), ω) = Φ_{nε}(ω) ≤ Φ_0 − β_{nε}. Firstly, let x_{nε}(ω) ∈ U_{λ(β_{nε}/2)}Γ_0. Then
inf_{x ∈ UΓ_0} (f_{nε}(x, ω) − f_0(x)) ≤ f_{nε}(x_{nε}(ω), ω) − f_0(x_{nε}(ω)) ≤ Φ_0 − β_{nε} − Φ_0 + β_{nε}/2 = −β_{nε}/2 ≤ −β^(2)_{nε}.
Eventually, if x_{nε}(ω) ∉ U_{λ(β_{nε}/2)}Γ_0, we have Γ_{nε}(ω) \ U_{β^(1)_{nε}}Γ_0 ≠ ∅. □

The proof also shows the following assertion.

Corollary 6.1 Assume that Γ_{nε} ⊆ Γ_0 and the following condition is satisfied:

(VL2') There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that
P{ω : inf_{x ∈ Γ_0} (f_{nε}(x, ω) − f_0(x)) ≤ −β^(2)_{nε}} ≤ K_2(ε).

Then for all ε > 0 the relation
P{ω : Φ_{nε}(ω) ≤ Φ_0 − β^(2)_{nε}} ≤ K_2(ε)
holds for all n.

In the following we consider upper approximations and distinguish two cases according to whether
∀ε > 0: P{ω : Γ_0 \ U_{β^(1)_{nε}}Γ_{nε}(ω) ≠ ∅} ≤ K_1(ε)
or
∀ε > 0: P{ω : Γ_0 \ Γ_{nε}(ω) ≠ ∅} ≤ K_1(ε)
is imposed. As expected, in the second case we obtain a better tail behavior.

Theorem 7 (Upper Approximation of the Optimal Value I) Assume that the following conditions are satisfied:

(VU1) There exist a function K_1 and to all ε > 0 a sequence (β^(1)_{nε}) such that
P{ω : Γ_0 \ U_{β^(1)_{nε}}Γ_{nε}(ω) ≠ ∅} ≤ K_1(ε).

(VU2) There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that
P{ω : sup_{x ∈ UΨ_0} (f_{nε}(x, ω) − f_0(x)) ≥ β^(2)_{nε}} ≤ K_2(ε)
for a suitable neighborhood UΨ_0.

(VU3) There exists a function λ such that for all ε > 0
∀x ∈ U_{λ(ε)}Ψ_0 ∩ UΨ_0 : f_0(x) ≤ Φ_0 + ε.

Then for all ε > 0, β_{nε} = max{2λ^{−1}(β^(1)_{nε}), 2β^(2)_{nε}}, and n_0(ε) = min{k : U_{λ(β_{k,ε}/2)}Ψ_0 ⊆ UΨ_0} the relation
P{ω : Φ_{nε}(ω) ≥ Φ_0 + β_{nε}} ≤ K_1(ε) + K_2(ε)
holds for all n ≥ n_0(ε).

Proof. Assume that for given ε > 0, n ≥ n_0(ε), and ω ∈ Ω the relation Φ_{nε}(ω) ≥ Φ_0 + β_{nε} holds. Then there exists x_{nε}(ω) ∈ Γ_{nε}(ω) such that f_{nε}(x_{nε}(ω), ω) = Φ_{nε}(ω) ≥ Φ_0 + β_{nε}. To x_{nε}(ω) we select x̃_{nε}(ω) ∈ Γ_{nε}(ω) such that d(x̃_{nε}(ω), Ψ_0) = min_{x ∈ Γ_{nε}(ω)} d(x, Ψ_0). Firstly, assume x̃_{nε}(ω) ∈ U_{λ(β_{nε}/2)}Ψ_0. Then
f_{nε}(x̃_{nε}(ω), ω) ≥ Φ_{nε}(ω) ≥ Φ_0 + β_{nε} ≥ f_0(x̃_{nε}(ω)) + β_{nε}/2
and consequently, sup_{x ∈ UΨ_0} (f_{nε}(x, ω) − f_0(x)) ≥ β_{nε}/2 ≥ β^(2)_{nε}. If x̃_{nε}(ω) ∉ U_{λ(β_{nε}/2)}Ψ_0, we have Γ_0 \ U_{β^(1)_{nε}}Γ_{nε}(ω) ≠ ∅. □

If we impose the special upper semicontinuity condition (UCon) for f_0, we obtain the following corollary.

Corollary 7.1 (Upper Approximation of the Optimal Value I) Assume that (VU1), (VU2) and the following condition (UCon) are satisfied:

(UCon) There exist an increasing function ψ_2 : R_+ → R_+ and constants c_2 > 0, δ_2 > 0 and θ_2 > 0 such that
∀x ∈ UΨ_0 : f_0(x) ≤ Φ_0 + ψ_2(d(x, Ψ_0)) and ∀θ with 0 < θ < θ_2 : ψ_2(θ) ≤ c_2 θ^{δ_2}.

Then for all ε > 0, β_{nε} = max{2c_2(β^(1)_{nε})^{δ_2}, 2β^(2)_{nε}}, and n_0(ε) = min{k : U_{λ_{k,ε}}Ψ_0 ⊆ UΨ_0 ∩ U_{θ_2}Ψ_0} with λ_{k,ε} = (β_{k,ε}/(2c_2))^{1/δ_2} the relation
P{ω : Φ_{nε}(ω) ≥ Φ_0 + β_{nε}} ≤ K_1(ε) + K_2(ε)
holds for all n ≥ n_0(ε).

Proof. We employ Theorem 7. Because of (UCon) we can choose λ(θ) = (θ/c_2)^{1/δ_2} and consequently λ^{−1}(ε) = c_2 ε^{δ_2}. □

A similar corollary can be proved for the lower approximation of the optimal values. We give only the result for the upper approximation because we will use it in the following. Furthermore, the assertions can be supplemented by results similar to Corollary 1.3 and Corollary 5.1.

Theorem 8 (Upper Approximation of the Optimal Value II) Assume that the following conditions are satisfied:

(VU1-R) There exist a function K_1 and to all ε > 0 a sequence (β^(1)_{nε}) such that
P{ω : Γ_0 \ Γ_{nε}(ω) ≠ ∅} ≤ K_1(ε).

(VU2-R) There exist a function K_2 and to all ε > 0 a sequence (β^(2)_{nε}) such that
P{ω : sup_{x ∈ Ψ_0} (f_{nε}(x, ω) − f_0(x)) ≥ β^(2)_{nε}} ≤ K_2(ε).

Then for all ε > 0 the relation
P{ω : Φ_{nε}(ω) ≥ Φ_0 + β^(2)_{nε}} ≤ K_1(ε) + K_2(ε)
holds for all n.

Proof. Assume that for given ε > 0, n ∈ N, and ω ∈ Ω the relation Φ_{nε}(ω) ≥ Φ_0 + β^(2)_{nε} holds. Then there exists x_{nε}(ω) ∈ Γ_{nε}(ω) such that f_{nε}(x_{nε}(ω), ω) = Φ_{nε}(ω) ≥ Φ_0 + β^(2)_{nε}. To x_{nε}(ω) we select x̃_{nε}(ω) ∈ Γ_{nε}(ω) such that d(x̃_{nε}(ω), Ψ_0) = min_{x ∈ Γ_{nε}(ω)} d(x, Ψ_0). If x̃_{nε}(ω) ∈ Ψ_0 we have f_{nε}(x̃_{nε}(ω), ω) ≥ Φ_{nε}(ω) ≥ Φ_0 + β^(2)_{nε} = f_0(x̃_{nε}(ω)) + β^(2)_{nε} and consequently, sup_{x ∈ Ψ_0} (f_{nε}(x, ω) − f_0(x)) ≥ β^(2)_{nε}. Otherwise we have Γ_0 \ Γ_{nε}(ω) ≠ ∅ and can employ the first assumption. □

Now we turn to the solution sets. We use the abbreviation Ψ̂_0 := {x ∈ E : f_0(x) ≤ Φ_0}.

Theorem 9 (Inner Approximation of the Solution Set) Assume that (VL1), (VL2) and the following assumptions are satisfied:

(SI3) There exist a function K_3 and to all ε > 0 a sequence (β̂_{nε}) and n_0(ε) such that for all n ≥ n_0(ε)
P{ω : Φ_{nε}(ω) ≥ Φ_0 + β̂_{nε}} ≤ K_3(ε).

(SI4) There exists a function ν such that for all ε > 0
U_{ν(ε)}Γ_0 ∩ U_{ν(ε)}Ψ̂_0 ⊆ U_ε Ψ_0.

(SI5) There exists a function µ such that for all ε > 0
x ∈ U_ε Γ_0 \ U_ε Ψ̂_0 ⇒ f_0(x) ≥ Φ_0 + µ(ε).

Then for all ε > 0, β_{nε} = max{ν^{−1}(β^(1)_{nε}), ν^{−1}(µ^{−1}(β^(2)_{nε} + β̂_{nε}))}, and n_1(ε) = min{k ≥ n_0(ε) : U_{β_{k,ε}}Γ_0 ⊆ UΓ_0} the relation
P{ω : Ψ_{nε}(ω) \ U_{β_{nε}}Ψ_0 ≠ ∅} ≤ K_1(ε) + K_2(ε) + K_3(ε)
holds for all n ≥ n_1(ε).

Proof. Let g_{nε}(x, ω) := f_{nε}(x, ω) − Φ_{nε}(ω) and g_0(x) := f_0(x) − Φ_0. Then Ψ_{nε}(ω) = Γ_{nε}(ω) ∩ {x ∈ E : g_{nε}(x, ω) ≤ 0} and Ψ_0 = Γ_0 ∩ {x ∈ E : g_0(x) ≤ 0}. Furthermore,
P{ω : inf_{x ∈ UΓ_0 \ Ψ_0} (g_{nε}(x, ω) − g_0(x)) ≤ −(β^(2)_{nε} + β̂_{nε})}
≤ P{ω : inf_{x ∈ UΓ_0 \ Ψ_0} (f_{nε}(x, ω) − f_0(x)) ≤ −β^(2)_{nε}} + P{ω : Φ_{nε}(ω) ≥ Φ_0 + β̂_{nε}}
≤ K_2(ε) + K_3(ε) =: K̃_2(ε).
It remains to apply Theorem 1 with β̃^(2)_{nε} = β^(2)_{nε} + β̂_{nε} and K̃_2. □

We emphasize that we can choose ν(ε) = ε if Γ_{nε} ⊆ Γ_0. Furthermore, if U_ε̄ Ψ_0 ⊆ Γ_0 for a suitable ε̄ > 0, we can also deal with ν(ε) = ε for all ε ≤ ε̄.

Corollary 9.1 (Inner Approximation of the Solution Set) Assume that (VL1), (VL2), (VU1), (VU2), (SI4), (UCon), and the following condition are satisfied:

(Gr-f_0) There exist an increasing function ψ_1 : R_+ → R_+ and constants c_1 > 0, δ_1 > 0, and θ_1 > 0 such that
∀x ∈ UΓ_0 : f_0(x) ≥ Φ_0 + ψ_1(d(x, Ψ̂_0)) and ∀θ with 0 < θ < θ_1 : ψ_1(θ) ≥ c_1 θ^{δ_1}.

Then for all ε > 0, β_{nε} = max{ν^{−1}(β^(1)_{nε}), ν^{−1}(((β̂_{nε} + β^(2)_{nε})/c_1)^{1/δ_1})}, β̂_{nε} = max{2c_2(β^(1)_{nε})^{δ_2}, 2β^(2)_{nε}}, and n_1(ε) = min{k ≥ n_0(ε) : U_{β_{k,ε}}Γ_0 ⊆ UΓ_0, U_{β_{k,ε}}Ψ_0 ⊆ UΨ_0, β_{k,ε} ≤ max{θ_1, θ_2}}

the relation
P{ω : Ψ_{nε}(ω) \ U_{β_{nε}}Ψ_0 ≠ ∅} ≤ 2K_1(ε) + 2K_2(ε)
holds for all n ≥ n_1(ε).

Proof. We apply Theorem 9 together with Corollary 7.1. (SI3) is satisfied with β̂_{nε} and K_3 = K_1 + K_2. (VL2) is satisfied with β̂_{nε} and K_2. Theorem 9 with µ^{−1}(ε) = (ε/c_1)^{1/δ_1} yields the conclusion. □

If for all ε > 0 the condition

(CK-R) There exist a function K_1 and to all ε > 0 a sequence (β^(1)_{nε}) such that
P{ω : (Γ_{nε}(ω) \ U_{β^(1)_{nε}}Γ_0 ≠ ∅) or (Γ_0 \ Γ_{nε}(ω) ≠ ∅)} ≤ K_1(ε)

is satisfied, then also (VL1) and (VU1) are fulfilled with K_1 and β^(1)_{nε}. Consequently, if Γ_{nε} = Γ^κ_{nε} we can employ Theorem 5 or Corollary 5.1 in order to determine a suitable κ_{nε} and formulate sufficient conditions for (VL1) and (VU1).

Now we consider outer approximations of the solution set via κ_{nε}-optimal solutions of the approximating problems. Let Ψ̂^κ_{nε}(ω) := {x ∈ E : f_{nε}(x, ω) < Φ_{nε}(ω) + κ_{nε}}. Γ_{nε} can, e.g., be specified as U_{β^(1)_{nε}}Γ_n or as Γ^κ_{nε}.

Theorem 10 (Outer Approximation of the Solution Set, Relaxation) Assume that for all ε > 0 there exist sequences (β^(i)_{nε}), i = 1, 2, and functions K_i, i = 1, 2, such that (VU1-R), (VU2-R) and the following assumption are satisfied:

(SO3) There exist a function K_3 and to all ε > 0 a sequence (β̂_{nε}) and n_0(ε) such that for all n ≥ n_0(ε)
P{ω : Φ_{nε}(ω) ≤ Φ_0 − β̂_{nε}} ≤ K_3(ε).

Then for all ε > 0, κ_{nε} = β^(2)_{nε} + β̂_{nε} and β_{nε} = max{β^(1)_{nε}, β^(2)_{nε} + β̂_{nε}} the relation

P{ω : Ψ_0 \ (Ψ̂^κ_{nε}(ω) ∩ U_{β^(1)_{nε}}Γ_{nε}(ω)) ≠ ∅} ≤ K_1(ε) + K_2(ε) + K_3(ε)
holds for all n ≥ n_0(ε).

Proof. We apply Theorem 3 and label the denotations in Theorem 3 with a tilde. With g_{nε}(x, ω) := f_{nε}(x, ω) − Φ_{nε}(ω) and g_0(x) := f_0(x) − Φ_0 we have Ψ_0 = {x ∈ Γ_0 : g_0(x) ≤ 0} and Ψ̂^κ_{nε}(ω) = {x ∈ E : g_{nε}(x, ω) < κ_{nε}}. Furthermore, let Q̃_{nε} = Γ_{nε}, Γ̂̃^κ_{nε} = Ψ̂^κ_{nε}, Q̃_0 = Γ_0, and Γ̂̃_0 = Ψ̂_0. Because of
P{ω : sup_{x ∈ Ψ_0} ((f_{nε}(x, ω) − Φ_{nε}(ω)) − (f_0(x) − Φ_0)) ≥ β^(2)_{nε} + β̂_{nε}}
≤ P{ω : sup_{x ∈ Ψ_0} (f_{nε}(x, ω) − f_0(x)) ≥ β^(2)_{nε}} + P{ω : Φ_{nε}(ω) ≤ Φ_0 − β̂_{nε}}
≤ K_2(ε) + K_3(ε) =: K̃_2(ε),
condition (CO2) is satisfied with β̃^(2)_{nε} = β^(2)_{nε} + β̂_{nε} and K̃_2. □

5 Examples

We will illustrate the approach by three examples. When applying our results to problems in decision theory or estimation theory, the most critical assumption is probably (SI4). Fortunately, there are several important applications where some quantities do not vary with n and ν is not needed at all or, as in our third example, (SI4) is easy to verify. In the general case, however, if the constraint set and the objective function are approximated simultaneously and the solution lies on the boundary of the constraint set, ν cannot be ignored. Only in rare cases will one have enough knowledge to determine it exactly. One way out are adaptive methods for successive approximation of ν. However, even if one does not succeed in determining ν with satisfactory accuracy, our results still yield assertions on the convergence rate, albeit without a reliable constant. Results of this kind can be used to derive asymptotic confidence sets if a limiting distribution is not available.

Firstly, we will discuss a simple example which is intended to give an idea of how one can deal with the uniform convergence assumption for the objective functions. At first glance this assumption seems to be rather restrictive. There is, however, a growing number of results from probability theory yielding assertions of this kind, c.f. [15], [1], [12].
A more refined investigation, which relies on stability assertions adjusted to the direct utilization of

concentration-of-measure inequalities for sequences of random variables, will be given elsewhere. Moreover, we would like to refer the reader to [12], where several sufficient conditions are gathered.

We assume E = R^p and consider a fixed compact constraint set K and a linear objective function q(z)^T x with x = (x_1, ..., x_p)^T, q : R^m → R^p. z is the realization of a random vector Z with given distribution P_Z on the σ-field of Borel sets of R^m. The range of q(Z) is supposed to be bounded. For the sake of simplicity we deal with the metric d(x, y) = max_{i=1,...,p} |x_i − y_i|. The problem
(P_0)  min_{x ∈ K} Eq(Z)^T x
is approximated, replacing the expectation with respect to P_Z by the expectation with respect to the empirical distribution based on a sequence Z^(j), j = 1, 2, ..., of i.i.d. P_Z-distributed random vectors:
(P_n)  min_{x ∈ K} (1/n) Σ_{j=1}^n q(Z^(j))^T x.
We assume Eq(Z) ≠ 0, because otherwise the problem becomes trivial, and abbreviate m̄ := max_{i=1,...,p} sup_ω |q_i(Z(ω))|.

We consider the sets K_k := {x ∈ K : k − 1 ≤ d(0, x) ≤ k}, k = 1, 2, ..., and let I_K := {k : K_k ≠ ∅}. Hence we obtain by Hoeffding's inequality ([3], [1]):
P{ω : sup_{x ∈ K} |(1/n) Σ_{j=1}^n q(Z^(j)(ω))^T x − Eq(Z)^T x| ≥ ε/√n}
≤ Σ_{k ∈ I_K} P{ω : sup_{x ∈ K_k} |(1/n) Σ_{j=1}^n q(Z^(j)(ω))^T x − Eq(Z)^T x| ≥ ε/√n}
≤ Σ_{k ∈ I_K} P{ω : max_{i=1,...,p} |(1/n) Σ_{j=1}^n q_i(Z^(j)(ω)) − Eq_i(Z)| ≥ ε/(k√n)}
≤ Σ_{k ∈ I_K} 2 e^{−ε²/(2k²m̄²)} =: K_2(ε).
Of course this inequality can be further improved. For example, due to the linearity, it is enough to consider a cover of the boundary of K only. Employing other concentration inequalities, the boundedness condition for q(Z) can also be overcome. Moreover, if the functions have a more involved form, one can often proceed in a similar way.

We aim at employing Theorem 9 and (in case of a non-unique solution) Theorem 10. (VL1), (VU1), and (VU1-R) are satisfied with K_1 ≡ 0. (VL2), (VU2), and (VU2-R) are fulfilled with K_2. (SI4) is satisfied with ν(ε) = ε.
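The Hoeffding-type bound above can be checked by simulation. The instance below is a hypothetical specialization, not taken from the paper: p = 2, q(z) = z, Z uniform on [−1, 1]² (so m̄ = 1) and K the unit box, in which case the uniform deviation of the empirical objective over K is controlled by the coordinatewise deviation of the sample mean.

```python
import math
import random

random.seed(0)

# Hypothetical instance: p = 2, q(z) = z, Z ~ Uniform([-1, 1]^2), K = [-1, 1]^2.
n, trials, eps = 400, 200, 2.0
hits = 0
for _ in range(trials):
    sums = [0.0, 0.0]
    for _ in range(n):
        sums[0] += random.uniform(-1.0, 1.0)
        sums[1] += random.uniform(-1.0, 1.0)
    dev = max(abs(s / n) for s in sums)   # max_i |(1/n) sum_j q_i(Z^(j)) - E q_i(Z)|
    hits += dev >= eps / math.sqrt(n)     # tail event at rate eps_n = eps / sqrt(n)

freq = hits / trials
# Union bound over the p = 2 coordinates with k = m_bar = 1:
hoeffding = 2 * 2 * math.exp(-eps ** 2 / 2.0)
```

The empirical tail frequency `freq` should stay (far) below the Hoeffding bound, illustrating how conservative, yet universal, the resulting tail function K_2 is.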

It remains to investigate the growth condition (Gr-f_0), where we can choose UΓ_0 = Γ_0, and the semicontinuity condition (UCon). We have

f_0(x) − Φ_0 ≤ max_{i=1,...,p} E|q_i(Z)| · d(x, Ψ_0),

hence (UCon) is satisfied with c_2 = max_{i=1,...,p} E|q_i(Z)| and δ_2 = 1. Let I := {i ∈ {1, ..., p} : E q_i(Z) ≠ 0}. According to our assumption, I is not empty. For x with d(x, Ψ̂_0) > ε there are x̂ ∈ Ψ̂_0 and i ∈ I such that |x_i − x̂_i| > ε. Consequently (Gr-f_0) is satisfied with c_1 = min_{i ∈ I} E|q_i(Z)| and δ_1 = 1.

Secondly, we consider the approximation of a constraint set which is determined by a probabilistic constraint. Again, replacing the true probability measure with the empirical measure, we obtain a sequence of approximating constraint functions. In detail, we assume that the constraint function has the special form

g_0(x) = α − P_Z((−∞, γ(x)]) = α − F_Z(γ(x)).

Here Z and P_Z are defined as above; F_Z denotes the corresponding distribution function. Additionally we restrict the considerations to p = 1. α ∈ (0, 1) denotes a probability level and γ: E → R^1 a given concave function. The inequality constraint g_0(x) ≤ 0 then reads P(Z ≤ γ(x)) ≥ α. We assume Γ_0 = Γ̂_0 = {x ∈ E : g_0(x) ≤ 0} ≠ ∅. The approximating constraint set has the form Γ_n(ω) = {x ∈ E : α − F_n(γ(x), ω) ≤ 0} with the empirical distribution function F_n.

In order to fulfil (CI2′) and (CO2′) we can directly apply the Dvoretzky-Kiefer-Wolfowitz inequality with Massart's bound ([9], [1]) and obtain

P{ω : sup_{x ∈ R^1} √n |(α − F_n(γ(x), ω)) − (α − F_Z(γ(x)))| > ε} ≤ 2 e^{−2ε²}.

(CI3) is not needed. In order to fulfil (Gr-g_0), we will impose growth conditions on F_Z and γ. Assume that, for the given probability level α, the α-quantile q_α of F_Z is unique, and consider a compact set K such that q_α ∈ int K. Furthermore, let X_K := {x ∈ E : γ(x) ∈ K} and suppose that the following conditions are satisfied:

(IG) There exist positive constants c_{1,γ}, c_{1,F}, δ_{1,γ}, δ_{1,F} such that
   ∀ y ∈ K with y < q_α : α − F_Z(y) > c_{1,F} (d(y, q_α))^{δ_{1,F}}   and
   ∀ x ∈ X_K \ Γ_0 : γ(x) < q_α − c_{1,γ} (d(x, Γ_0))^{δ_{1,γ}}.
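The two inequalities in (IG) compose in one line (reading the second one, as above, with the quantile on the right-hand side): for x ∈ X_K \ Γ_0 we have γ(x) ∈ K and γ(x) < q_α, hence d(γ(x), q_α) = q_α − γ(x) > c_{1,γ} (d(x, Γ_0))^{δ_{1,γ}}, and therefore

g_0(x) = α − F_Z(γ(x)) > c_{1,F} (d(γ(x), q_α))^{δ_{1,F}} > c_{1,F} (c_{1,γ})^{δ_{1,F}} (d(x, Γ_0))^{δ_{1,γ} δ_{1,F}},

i.e. g_0 grows at least at rate δ_{1,γ} δ_{1,F} in the distance d(x, Γ_0).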
(Gr-g_0), with K instead of UQ_0 and with a strict inequality, is then satisfied with c_1 = c_{1,F} (c_{1,γ})^{δ_{1,F}} and δ_1 = δ_{1,γ} δ_{1,F}. Of course, only one inequality in (IG) has to be strict.

If Γ_0 is single-valued, it remains to apply Theorem 1. Otherwise we employ Theorem 2 and assume that the following condition is satisfied:

(OG) There exist positive constants c_{2,γ}, c_{2,F}, δ_{2,γ}, δ_{2,F} such that
   ∀ y ∈ K with y > q_α : F_Z(y) − α > c_{2,F} (d(y, q_α))^{δ_{2,F}}.
Furthermore, there exists an ε̄ > 0 such that CI(ε̄) ≠ ∅ and
   ∀ x ∈ Γ_0 \ CI(ε̄) : γ(x) > q_α + c_{2,γ} (d(x, E \ Γ_0))^{δ_{2,γ}}.

Hence, with respect to (CO3), we obtain for all 0 < ε ≤ ε̄ that Γ_0 ⊂ Ū_ε CI(ε) and

∀ x ∈ Γ_0 \ CI(ε̄) : g_0(x) < −c_2 (d(x, E \ Γ_0))^{δ_2}

with c_2 = c_{2,F} (c_{2,γ})^{δ_{2,F}} and δ_2 = δ_{2,γ} δ_{2,F}. Thus in Theorem 2 we can choose μ(ε) = c_2 ε^{δ_2}. Consequently, for all ε ≤ ε̄, with

β_{n,ε} = max{(ε/c_1)^{1/δ_1} n^{−1/(2δ_1)}, (2ε/c_2)^{1/δ_2} n^{−1/(2δ_2)}}

and n_0(ε) = min{k : β_{k,ε} ≤ 2ε̄, γ(Γ_0 \ CI(β_{k,ε})) ⊂ K}, the relation

P{ω : (Γ_n(ω) \ U_{β_{n,ε}} Γ_0) ∪ (Γ_0 \ U_{β_{n,ε}} Γ_n(ω)) ≠ ∅} ≤ 4 e^{−2ε²}

holds for all n ≥ n_0(ε).

Eventually, we consider quantile estimation, because here a relaxation of the constraint set comes into play in a natural way. Papers dealing with quantile estimation usually assume that the distribution function is strictly increasing in a neighborhood of the quantile (cf. [4], [5]). There are, however, applications where one cannot assume a priori that the lower and the upper quantile coincide. We consider, as in the foregoing example, a real-valued random variable Z with distribution P_Z and distribution function F_Z. For a fixed α ∈ (0, 1) we investigate the lower α-quantile q_α^l := inf{x ∈ R^1 : F_Z(x) ≥ α}. We consider the constraint set Γ_0 := {x ∈ R^1 : F_Z(x) ≥ α} and the optimization problem

(P_0)   min_{x ∈ Γ_0} x.

As F_Z is nondecreasing and right-continuous, it is upper semicontinuous; hence the set Γ_0 is closed and the minimum q_α^l is attained. (P_0) could be approximated by replacing F_Z with the empirical distribution function F_n. Unfortunately, the set {x ∈ R^1 : F_n(x) ≥ α} does, in general, not approximate the whole set Γ_0.
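The failure of the unrelaxed empirical constraint set, and the way the ε_n-relaxation introduced below repairs it, can be checked numerically. This is a toy sketch: the two-point distribution, the sample size, the number of repetitions, and the choice ε_n = n^{−1/2} are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, trials = 0.5, 400, 300
eps_n = 1.0 / np.sqrt(n)          # relaxation of order n^(-1/2)

# Z takes the values 0 and 1 with probability 1/2 each, so F_Z(x) = 0.5
# on [0, 1).  Then Gamma_0 = {x : F_Z(x) >= alpha} = [0, inf), the lower
# quantile is q_alpha^l = 0, and the upper quantile is 1.
miss_plain = miss_relaxed = 0
for _ in range(trials):
    z = rng.integers(0, 2, size=n).astype(float)
    Fn_at_0 = np.mean(z <= 0.0)   # empirical distribution function at x = 0
    if Fn_at_0 < alpha:           # 0 not in {x : F_n(x) >= alpha}
        miss_plain += 1
    if Fn_at_0 <= alpha - eps_n:  # 0 not in {x : F_n(x) > alpha - eps_n}
        miss_relaxed += 1

frac_plain = miss_plain / trials
frac_relaxed = miss_relaxed / trials
```

The unrelaxed set loses the point 0, and with it the whole piece [0, 1) of Γ_0, in roughly half of the runs, while the relaxed set keeps it except on an event whose probability is controlled by the Dvoretzky-Kiefer-Wolfowitz bound 2e^{−2} ≈ 0.27 (and is in fact much smaller here).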
In [19] we showed that, with a suitable relaxation, the solutions of the approximate problems converge in probability to the desired quantile. Here we can proceed in a similar way: we consider the modified constraint set Γ̃_n with

Γ̃_n(ω) := {x ∈ R^1 : F_n(x, ω) > α − ε_n}

and investigate the approximating optimization problems

(P̃_n)   min_{x ∈ Γ̃_n(ω)} x.

(P̃_n) has a unique solution, too. In order to obtain a convergence rate, we need some knowledge about F_Z, e.g. a growth condition.

Theorem 11 (Quantile Estimation). Assume that there exist constants c > 0, δ > 0, and θ > 0 such that

∀ x̄ with 0 < d(x̄, Γ_0) ≤ θ : F_Z(x̄) < α − c (d(x̄, Γ_0))^δ.

Then for all ε > 0, with β_{n,ε} = max{ε n^{−1/2}, (2ε/c)^{1/δ} n^{−1/(2δ)}} and n_0(ε) = min{k : β_{k,ε} ≤ θ}, the relations

P{ω : (Γ̃_n(ω) \ U_{β_{n,ε}} Γ_0) ∪ (Γ_0 \ Γ̃_n(ω)) ≠ ∅} ≤ 2 e^{−2ε²},
P{ω : Ψ̃_n(ω) \ U_{β_{n,ε}} Ψ_0 ≠ ∅} ≤ 2 e^{−2ε²}

hold for all n ≥ n_0(ε).

Proof. We employ Corollary 5.1 and, since the solution is unique, Theorem 9. The objective function is not approximated, hence (VL2) and (VU2) are satisfied with K_2 ≡ 0. The conditions (UCon) and (Gr-f_0) are fulfilled with c_i = δ_i = 1, i = 1, 2. Due to the Dvoretzky-Kiefer-Wolfowitz inequality with Massart's bound ([9], [1]), strict variants of (CI2′) and (CO2′) are satisfied with γ_n = n^{−1/2} and K_2(ε) = e^{−2ε²}. The first assertion then follows by Corollary 5.1. Furthermore, taking into account that (SI4) is satisfied with ν(ε) = ε, the second assertion is implied by Theorem 9.

References

[1] L. Devroye and G. Lugosi. Combinatorial Methods in Density Estimation. Springer, New York, 2001.

[2] O. Gersch. Convergence in Distribution of Random Closed Sets and Applications in Stability Theory of Stochastic Optimisation. Dissertation thesis, Technical University Ilmenau.

[3] W. Hoeffding. Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc., 58:13–30, 1963.

[4] A.I. Kibzun and Y.S. Kan. Stochastic Programming Problems with Probability and Quantile Functions. Wiley, 1996.

[5] K. Knight. What are the limiting distributions of quantile estimators? Technical Report, University of Toronto.

[6] P. Lachout, E. Liebscher and S. Vogel. Strong convergence of estimators as ε_n-estimators of optimization problems. Ann. Inst. Statist. Math., 57, 2005.

[7] P. Lachout and S. Vogel. On continuous convergence and epi-convergence of random functions. Part I: Theory and relations. Kybernetika, 39:75–98, 2003.

[8] P. Lachout and S. Vogel. On continuous convergence and epi-convergence of random functions. Part II: Sufficient conditions and applications. Kybernetika, 39:99–118, 2003.

[9] P. Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Ann. Probab., 18:1269–1283, 1990.

[10] A. Nemirovski and A. Shapiro. Scenario approximations of chance constraints. SIAM J. Optimization, to appear.

[11] G.C. Pflug. Asymptotic dominance and confidence for solutions of stochastic programs. Czechoslovak J. Oper. Res., 1:21–30.

[12] G.C. Pflug. Stochastic optimization and statistical inference. In: Stochastic Programming (A. Ruszczynski and A. Shapiro, eds.), Handbooks in Operations Research and Management Science, Vol. 10, Elsevier, 2003.

[13] W. Römisch. Stability of stochastic programming problems. In: Stochastic Programming (A. Ruszczynski and A. Shapiro, eds.), Handbooks in Operations Research and Management Science, Vol. 10, Elsevier, 2003.

[14] A. Shapiro. Monte Carlo sampling methods. In: Stochastic Programming (A. Ruszczynski and A. Shapiro, eds.), Handbooks in Operations Research and Management Science, Vol. 10, Elsevier, 2003.

[15] A.W. van der Vaart and J.A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.

[16] S. Vogel. A stochastic approach to stability in stochastic programming. J. Comput. Appl. Math., Series Appl. Analysis and Stochastics, 56:65–96, 1994.

[17] S. Vogel. On stability in stochastic programming - sufficient conditions for continuous convergence and epi-convergence. Preprint, TU Ilmenau.

[18] S. Vogel. On semicontinuous approximations of random closed sets with application to random optimization problems. Ann. Oper. Res., 142, 2006.

[19] S. Vogel. Qualitative stability of stochastic programs with applications in asymptotic statistics. Statistics and Decisions, 23, 2005.


More information

Stochastic dominance with imprecise information

Stochastic dominance with imprecise information Stochastic dominance with imprecise information Ignacio Montes, Enrique Miranda, Susana Montes University of Oviedo, Dep. of Statistics and Operations Research. Abstract Stochastic dominance, which is

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization

Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012

More information

Local strong convexity and local Lipschitz continuity of the gradient of convex functions

Local strong convexity and local Lipschitz continuity of the gradient of convex functions Local strong convexity and local Lipschitz continuity of the gradient of convex functions R. Goebel and R.T. Rockafellar May 23, 2007 Abstract. Given a pair of convex conjugate functions f and f, we investigate

More information

Part III. 10 Topological Space Basics. Topological Spaces

Part III. 10 Topological Space Basics. Topological Spaces Part III 10 Topological Space Basics Topological Spaces Using the metric space results above as motivation we will axiomatize the notion of being an open set to more general settings. Definition 10.1.

More information

Bonus Homework. Math 766 Spring ) For E 1,E 2 R n, define E 1 + E 2 = {x + y : x E 1,y E 2 }.

Bonus Homework. Math 766 Spring ) For E 1,E 2 R n, define E 1 + E 2 = {x + y : x E 1,y E 2 }. Bonus Homework Math 766 Spring ) For E,E R n, define E + E = {x + y : x E,y E }. (a) Prove that if E and E are compact, then E + E is compact. Proof: Since E + E R n, it is sufficient to prove that E +

More information

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set

Analysis Finite and Infinite Sets The Real Numbers The Cantor Set Analysis Finite and Infinite Sets Definition. An initial segment is {n N n n 0 }. Definition. A finite set can be put into one-to-one correspondence with an initial segment. The empty set is also considered

More information

Document downloaded from: This paper must be cited as:

Document downloaded from:  This paper must be cited as: Document downloaded from: http://hdl.handle.net/10251/50602 This paper must be cited as: Pedraza Aguilera, T.; Rodríguez López, J.; Romaguera Bonilla, S. (2014). Convergence of fuzzy sets with respect

More information

The strictly 1/2-stable example

The strictly 1/2-stable example The strictly 1/2-stable example 1 Direct approach: building a Lévy pure jump process on R Bert Fristedt provided key mathematical facts for this example. A pure jump Lévy process X is a Lévy process such

More information

On a Class of Multidimensional Optimal Transportation Problems

On a Class of Multidimensional Optimal Transportation Problems Journal of Convex Analysis Volume 10 (2003), No. 2, 517 529 On a Class of Multidimensional Optimal Transportation Problems G. Carlier Université Bordeaux 1, MAB, UMR CNRS 5466, France and Université Bordeaux

More information

Asymptotic inference for a nonstationary double ar(1) model

Asymptotic inference for a nonstationary double ar(1) model Asymptotic inference for a nonstationary double ar() model By SHIQING LING and DONG LI Department of Mathematics, Hong Kong University of Science and Technology, Hong Kong maling@ust.hk malidong@ust.hk

More information

Individual confidence intervals for true solutions to stochastic variational inequalities

Individual confidence intervals for true solutions to stochastic variational inequalities . manuscript o. 2will be inserted by the editor Michael Lamm et al. Individual confidence intervals for true solutions to stochastic variational inequalities Michael Lamm Shu Lu Amarjit Budhiraja. Received:

More information

D I S C U S S I O N P A P E R

D I S C U S S I O N P A P E R I N S T I T U T D E S T A T I S T I Q U E B I O S T A T I S T I Q U E E T S C I E N C E S A C T U A R I E L L E S ( I S B A ) UNIVERSITÉ CATHOLIQUE DE LOUVAIN D I S C U S S I O N P A P E R 2014/06 Adaptive

More information

Chapter 4. Measure Theory. 1. Measure Spaces

Chapter 4. Measure Theory. 1. Measure Spaces Chapter 4. Measure Theory 1. Measure Spaces Let X be a nonempty set. A collection S of subsets of X is said to be an algebra on X if S has the following properties: 1. X S; 2. if A S, then A c S; 3. if

More information

Mathematics for Economists

Mathematics for Economists Mathematics for Economists Victor Filipe Sao Paulo School of Economics FGV Metric Spaces: Basic Definitions Victor Filipe (EESP/FGV) Mathematics for Economists Jan.-Feb. 2017 1 / 34 Definitions and Examples

More information

CHAPTER I THE RIESZ REPRESENTATION THEOREM

CHAPTER I THE RIESZ REPRESENTATION THEOREM CHAPTER I THE RIESZ REPRESENTATION THEOREM We begin our study by identifying certain special kinds of linear functionals on certain special vector spaces of functions. We describe these linear functionals

More information

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 CHRISTIAN LÉONARD Contents Preliminaries 1 1. Convexity without topology 1 2. Convexity

More information

Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon Measures

Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon Measures 2 1 Borel Regular Measures We now state and prove an important regularity property of Borel regular outer measures: Stanford Mathematics Department Math 205A Lecture Supplement #4 Borel Regular & Radon

More information

Goodness-of-Fit Tests for Time Series Models: A Score-Marked Empirical Process Approach

Goodness-of-Fit Tests for Time Series Models: A Score-Marked Empirical Process Approach Goodness-of-Fit Tests for Time Series Models: A Score-Marked Empirical Process Approach By Shiqing Ling Department of Mathematics Hong Kong University of Science and Technology Let {y t : t = 0, ±1, ±2,

More information

Concentration behavior of the penalized least squares estimator

Concentration behavior of the penalized least squares estimator Concentration behavior of the penalized least squares estimator Penalized least squares behavior arxiv:1511.08698v2 [math.st] 19 Oct 2016 Alan Muro and Sara van de Geer {muro,geer}@stat.math.ethz.ch Seminar

More information

Refined optimality conditions for differences of convex functions

Refined optimality conditions for differences of convex functions Noname manuscript No. (will be inserted by the editor) Refined optimality conditions for differences of convex functions Tuomo Valkonen the date of receipt and acceptance should be inserted later Abstract

More information

A GENERAL THEOREM ON APPROXIMATE MAXIMUM LIKELIHOOD ESTIMATION. Miljenko Huzak University of Zagreb,Croatia

A GENERAL THEOREM ON APPROXIMATE MAXIMUM LIKELIHOOD ESTIMATION. Miljenko Huzak University of Zagreb,Croatia GLASNIK MATEMATIČKI Vol. 36(56)(2001), 139 153 A GENERAL THEOREM ON APPROXIMATE MAXIMUM LIKELIHOOD ESTIMATION Miljenko Huzak University of Zagreb,Croatia Abstract. In this paper a version of the general

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Logical Connectives and Quantifiers

Logical Connectives and Quantifiers Chapter 1 Logical Connectives and Quantifiers 1.1 Logical Connectives 1.2 Quantifiers 1.3 Techniques of Proof: I 1.4 Techniques of Proof: II Theorem 1. Let f be a continuous function. If 1 f(x)dx 0, then

More information

Domains of analyticity for response solutions in strongly dissipative forced systems

Domains of analyticity for response solutions in strongly dissipative forced systems Domains of analyticity for response solutions in strongly dissipative forced systems Livia Corsi 1, Roberto Feola 2 and Guido Gentile 2 1 Dipartimento di Matematica, Università di Napoli Federico II, Napoli,

More information

On Approximations and Stability in Stochastic Programming

On Approximations and Stability in Stochastic Programming On Approximations and Stability in Stochastic Programming Peter Kall Abstract It has become an accepted approach to attack stochastic programming problems by approximating the given probability distribution

More information