RANDOM APPROXIMATIONS IN STOCHASTIC PROGRAMMING - A SURVEY


Chapter 1

RANDOM APPROXIMATIONS IN STOCHASTIC PROGRAMMING - A SURVEY

Silvia Vogel
Ilmenau University of Technology
E-mail address: silvia.vogel@tu-ilmenau.de

Keywords: Stochastic Programming, Stability, Confidence Sets

AMS Subject Classification: 90C15, 90C31, 62G15, 62G20

Abstract

The paper considers random approximations of deterministic optimization problems. Such approximations come into play if unknown parameters or probability distributions are replaced with estimates, or for numerical reasons. The objective function and the constraint set can be approximated simultaneously. Stochastic programming problems are of special interest in this context: they are deterministic models that heavily depend on the underlying probability distribution. Moreover, since many statistical estimators are solutions of random optimization problems, the approaches can also be applied to constrained estimation problems.

First, the focus is on qualitative stability results in terms of convergence almost surely and in probability for constraint sets, optimal values, and solution sets. Because in general only a subset of the true solution set can be approximated, one-sided approximations are explained in detail. The statements are supplemented with sufficient conditions for the convergence of the functions involved. Furthermore, it is shown how so-called near-optimal or ε-optimal solutions can be incorporated. In the second part of the paper universal confidence sets for constraint sets, optimal values, and solution sets are derived. These quantitative results require stronger convergence properties of the approximating constraint functions and objective function. Two approaches are presented. One approach employs a convergence notion which is related to Kuratowski-Painlevé convergence of sequences of random sets; the second relies on relaxation of the constraints or optimality conditions. While the first approach uses some knowledge about the true problem, the second can do without any knowledge about the true problem. Again, sufficient conditions for the required convergence properties are provided.

1 Introduction

Usually, real-life decision problems are optimization problems which are fraught with uncertainties. If at least a probability distribution for the uncertain quantities can be assumed or is available, stochastic programming models, such as two-stage problems or chance constrained problems, come into play. These models are deterministic models, but they heavily depend on the underlying probability distribution. Unfortunately, as a rule, this distribution is also not completely known. Often it is estimated by the empirical distribution function. Sometimes a special class of distributions has proven to be a good model, and it remains to estimate the parameters.
Decision makers usually take the estimates as surrogates for the unknown quantities and solve the problem as it then arises. They believe that the decisions obtained in that way come close to the true optimal decisions. Hence there is a need for assertions that can justify their belief, so-called stability assertions. Regarding the estimates as random variables, the decision problem which is actually solved is a realization of a random problem. Thus, from the mathematical point of view, a sequence $(P_n)$ of random approximate problems is given, where $n$ can be understood as the size of the sample from which the estimates have been derived. As reasonable estimates approximate the true value in some random sense if the sample size $n$ tends to infinity, one can ask in what sense and under what conditions the random decision problems, and particularly their solutions, approximate the true ones. Qualitative stability statements provide conditions that ensure convergence almost surely, in probability, or in distribution of optimal values and solution sets of the random surrogate problems. One possibility to derive stability assertions in terms of convergence almost surely consists in employing results from deterministic parametric optimization. Especially in stochastic programming the probability measure has been regarded as a parameter (cf. [10], [26], [27], [28]). However, considering convergence almost surely instead of convergence for all realizations usually offers the possibility to weaken the assumptions. From the viewpoint of a practitioner the almost-sure behavior is not worse than the deterministic

one. For weaker convergence notions such as convergence in probability or convergence in distribution the assumptions can be further weakened. The theory of random approximations as explained in this paper can always be employed if an arbitrary deterministic optimization problem is approximated by estimates. In stochastic programming, however, due to the special role of the probability distribution, random approximations are of special interest. Estimates for unknown quantities are not the only framework where random approximations occur. Random surrogate problems also come into play if completely known decision problems are solved with an algorithm that uses random steps. Sample Average Approximation ([31], [9]) is an important example. Generally, the number of algorithms that use random steps, such as genetic algorithms, artificial-life algorithms, or estimation-of-distribution algorithms, is rapidly increasing. Furthermore, bootstrap procedures (in a wider sense) often sample from an approximate model, which is obtained via an estimation procedure. Stability theory for random approximations exploits results of asymptotic statistics, as it relies on assertions about convergence almost surely or in probability (so-called strong or weak consistency), or convergence in distribution (often in the form of convergence to a normal distribution) of random variables. Furthermore, there are many results about the behavior of the empirical distribution function that can be employed ([32], [34]). However, asymptotic statistics can also benefit from stability theory of stochastic programming. Since many estimation procedures are in fact optimization problems, convergence of solutions of programming problems in the almost-sure or in-probability sense yields consistency assertions; see e.g. [6], [14], [40] and the papers quoted there. Convergence in distribution may be employed to derive asymptotic confidence sets ([8], [20]).
Of course, estimation procedures have already been backed with many theoretical results. Stability theory of stochastic programming can successfully contribute if the sets under consideration are not single-valued, e.g. in mixture models, or if constraints come into play ([14], [40]). For random variables it is well known that in the case of a deterministic limit, convergence in probability and convergence in distribution coincide (cf. [5]). This also holds true for the random approximations considered in this paper. As long as we deal with a deterministic limit problem, convergence in distribution does not yield additional results. The situation is different if random limits are investigated ([40], [41], [8]). Random limit problems can also occur if the approximating problems are blown up with suitable factors ([21]). The limit distribution obtained in this way can be used to derive asymptotic confidence sets. As we will deal with another approach for the derivation of (non-asymptotic) confidence sets, we will not investigate convergence in distribution in this survey. Convergence properties are qualitative assertions only. When the sample size can be increased without much effort, they are valuable tools. However, there are situations where the generation of new samples is expensive, if not impossible. Then quantitative results are asked for. Quantitative results, too, can be regarded from a deterministic or from a stochastic perspective. From the deterministic point of view, quantitative stability assertions have been proved that derive bounds for distances between optimal values or solution sets in terms of distances between the underlying probability measures or parameters; see e.g. [28], [27].

From a random point of view, confidence sets come into play. In statistics, particularly in parametric statistics, confidence sets are well-known tools. They are random sets that cover the true parameter (or, in a more general framework, the true set) not in every case, but with a prescribed high probability. They are derived from samples of size $n$ and they should shrink to the true parameter or set if $n$ increases. Thus they can provide important quantitative information. In the classical approach confidence sets are derived from a suitable statistic with known distribution, under reasonable assumptions about the sample. If this exact distribution is not known, one often makes use of the asymptotic distribution. For our purposes, however, this approach has two weak points. Often one does not have sufficient information about the accuracy for a fixed sample size $n$. Moreover, the approach fails if the sets under consideration, e.g. solutions of estimation-optimization problems, are not single-valued. In parametric statistics single-valuedness is usually enforced by an identifiability condition. When approximating optimization problems, however, sets that are not single-valued are the rule rather than the exception; think of constraint sets, sets of efficient points, etc. In [42] it was shown how confidence sets for optimization problems can be derived without knowledge of the distribution of a statistic. The approach is based on a quantified version of one-sided uniform convergence in probability for sequences of random functions. Concentration-of-measure results for sequences of real-valued random variables can be employed to derive sufficient conditions. Confidence sets in a different framework are also considered in [3], [7], and [17]. When approximating optimization problems, one has to be aware of the fact that under reasonable assumptions only a subset of the true solution set will be obtained as the limit.
This is well known from parametric programming. A similar situation occurs if a level set, e.g. a constraint set of an optimization problem, is approximated. In the case of level sets, an additional inner point condition can be imposed in order to enforce approximation of the whole set. For solution sets, however, such a condition does not make sense. Hence there is reason to consider one-sided approximations, so-called outer and inner approximations in the Kuratowski-Painlevé sense, or superset- and subset-approximations. Outer approximations in the Kuratowski-Painlevé sense tend to a superset of the true set and, for a fixed approximation level $n$, they contain the true set with a certain probability. Hence they can be regarded as so-called superset-approximations. Inner approximations in the Kuratowski-Painlevé sense do not have a corresponding property: they tend in the limit to a subset, but need not be contained in the true set for any $n$ and any realization of the approximating problem. Therefore so-called subset-approximations will also be considered. It is the aim of the present paper to explain how general random convergence assertions for sequences of random optimization problems and their solutions (almost surely and in probability) can be derived and how confidence sets can be determined. For the sake of simplicity we confine the considerations to decisions which belong to $\mathbb{R}^p$. Note that many results remain valid in a complete separable metric space. Furthermore, we will not consider multiobjective programming problems; optimization problems with more than one objective function are dealt with in [39] and [45]. We will also provide sufficient conditions for the assumptions of the stability results in different frameworks and illustrate the assertions by examples. Crucial assumptions are suitable convergence conditions for the objective and constraint functions. The availability of appropriate convergence notions is essential for the derivation of meaningful results. Therefore we devote relatively broad space to the convergence notions and also discuss relations between epi-convergence, (semi-)continuous convergence, and uniform convergence. Often the solution of an optimization problem, which arises for a fixed $n$ and a fixed realization of the involved random variables, is itself determined via an approximation procedure. Thus, for each approximating problem $(P_n)$, $n \in \mathbb{N}$, only near-optimal solutions may be available, i.e. solutions for which the value can deviate from the optimal value by at most a small value $\varepsilon_n$. Usually the set of $\varepsilon_n$-optimal solutions satisfies an inner point condition, which excludes some technical disadvantages that may come along with the true solution set. We show how $\varepsilon_n$-optimal solutions can be included into the considerations without much effort. The paper is organized as follows. In Section 2 the mathematical model is provided and the different kinds of approximations are explained. Furthermore, we present three examples which fit into our framework. In Section 3 we deal with qualitative stability assertions almost surely and in probability and present sufficient conditions. $\varepsilon_n$-optimal solutions are considered in Section 3.3. Section 4 shows how confidence sets can be derived from superset-approximations which are supplemented with a convergence rate and a tail behavior function. Approximations in the Kuratowski-Painlevé sense and approximations via relaxations are investigated as important special cases. Moreover, we explain how the quality of the confidence sets can be judged via subset-approximations or with inner approximations. Again, the considerations are completed with sufficient conditions for the underlying convergence notions for random functions.
2 Mathematical Model

We assume that a deterministic optimization problem
$$(P_0)\quad \min_{x \in \Gamma_0} f_0(x)$$
is approximated by a sequence $(P_n)$ of random problems
$$(P_n)\quad \min_{x \in \Gamma_n(\omega)} f_n(x, \omega), \qquad n \in \mathbb{N},\ \omega \in \Omega.$$
$[\Omega, \Sigma, P]$ is assumed to be a complete probability space and $\Gamma_0$ is a nonempty closed subset of $\mathbb{R}^p$. The function $f_0$, which maps into the extended reals $\bar{\mathbb{R}}^1 := \mathbb{R}^1 \cup \{-\infty\} \cup \{+\infty\}$, is lower semicontinuous (lsc). $\Gamma_n$, $n \in \mathbb{N}$, is a multifunction with values in the Borel subsets $\mathcal{B}^p$ of $\mathbb{R}^p$, and $f_n : \mathbb{R}^p \times \Omega \to \bar{\mathbb{R}}^1$ is a random function, which is supposed to be $(\mathcal{B}^p \otimes \Sigma, \bar{\mathcal{B}}^1)$-measurable. $\bar{\mathcal{B}}^1$ denotes the σ-field which is generated by the Borel σ-field $\mathcal{B}^1$ and $\{+\infty\}$, $\{-\infty\}$. Furthermore we assume that $f_n(\cdot, \omega)$, $n \in \mathbb{N}$, is lsc for almost all $\omega \in \Omega$, and that all functions are (almost surely) proper functions, i.e. functions with values in $(-\infty, +\infty]$ which are not identically $+\infty$. Constraint sets which are defined by inequality constraints are of special interest. In order to allow for certain extensions of the model, we furthermore allow for the intersection

with a closed set in the model $(P_0)$ and with closed-valued measurable multifunctions in the approximating models. Let $Q_0$ be a closed non-empty subset of $\mathbb{R}^p$ and $J = \{1, \ldots, j_M\}$ a finite index set. We consider functions $g_0^j : \mathbb{R}^p \to \bar{\mathbb{R}}^1$, $j \in J$, which are lsc at all points $x \in \mathbb{R}^p$, and define
$$\Gamma_0 := \{x : g_0^j(x) \le 0,\ j \in J\} \cap Q_0.$$
$\Gamma_0$ is assumed to be non-empty. The set $Q_0$ is approximated by a sequence $(Q_n)$ of closed-valued measurable multifunctions, and the functions $g_0^j$, $j \in J$, are approximated by sequences $(g_n^j)$ of functions $g_n^j : \mathbb{R}^p \times \Omega \to \bar{\mathbb{R}}^1$, $j \in J$, which are $(\mathcal{B}^p \otimes \Sigma, \bar{\mathcal{B}}^1)$-measurable. Furthermore, we assume that the functions $g_n^j(\cdot, \omega)$ are lsc for almost all $\omega \in \Omega$. Eventually, the approximate constraint set $\Gamma_n$ is defined by
$$\Gamma_n(\omega) := \{x \in \mathbb{R}^p : g_n^j(x, \omega) \le 0,\ j \in J\} \cap Q_n(\omega).$$
Under our assumptions $\Gamma_n$ is a closed-valued measurable multifunction. The measurability conditions imposed here do not have the weakest form; we use them for the sake of simplicity. Also, in the present form they are usually satisfied in applications. They guarantee that all functions of $\omega$ needed in the following have the necessary measurability properties. Moreover, the lower semicontinuity assumption on the objective functions $f_n$ could be dropped; imposing this condition, however, we can omit some technical details. Semicontinuity is often inherent in the model or can be enforced by replacing the original functions with their lsc regularizations. In the following, the optimal values are denoted by $\Phi$: $\Phi_n(\omega) := \inf_{x \in \Gamma_n(\omega)} f_n(x, \omega)$ is the optimal value of the realization $(P_n(\omega))$ of the approximate problem, while $\Phi_0 := \inf_{x \in \Gamma_0} f_0(x)$ is the optimal value of $(P_0)$. $\Psi_n(\omega)$ and $\Psi_0$ denote the corresponding solution sets (argmin sets). In order to indicate possible applications, we will present three illustrating examples which fit into this framework.

Example 1. Sample Average Approximation.
Assume that the objective function $f_0$ is the expectation of a random function:
$$f_0(x) = E\,\varphi(x, Z) = \int_{\mathbb{R}^m} \varphi(x, z)\, dP_Z(z),$$
where $Z : \Omega \to \mathbb{R}^m$ is a random variable with probability distribution $P_Z$ and $\varphi : \mathbb{R}^p \times \mathbb{R}^m \to \mathbb{R}^1$ is a Borel-measurable function such that the integrals exist. In order to ensure that $f_0$ is lsc, one can furthermore impose a convexity condition on $\varphi(\cdot, z)$, or assume that $\varphi(\cdot, z)$ is lsc for $P_Z$-almost all $z$ and that to each $x$ there is a neighborhood $U(x)$ such that $E \inf_{\tilde{x} \in U(x)} \varphi(\tilde{x}, Z)$ exists. $P_Z$ is approximated by the empirical measure $\hat{P}_n$, based on an i.i.d. sample $Z_1, \ldots, Z_n$. Then one has
$$f_n(x, \omega) = \int_{\mathbb{R}^m} \varphi(x, z)\, d\hat{P}_n(z, \omega) = \frac{1}{n} \sum_{i=1}^{n} \varphi(x, Z_i(\omega)).$$
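As a concrete illustration, the sample average objective $f_n$ can be minimized numerically. The following minimal Python sketch (with an assumed quadratic loss $\varphi(x, z) = (x - z)^2$ and normally distributed data; these modeling choices are ours, not the paper's) shows the SAA minimizer approaching the true minimizer as $n$ grows:

```python
import random

# Toy SAA sketch (illustrative assumptions, not from the paper):
# minimize f0(x) = E[(x - Z)^2] over x in [0, 10] with Z ~ N(2, 1),
# so f0(x) = (x - 2)^2 + 1 and the true minimizer is x* = 2.

random.seed(0)

def saa_objective(x, sample):
    """f_n(x) = (1/n) * sum of phi(x, Z_i) with phi(x, z) = (x - z)^2."""
    return sum((x - z) ** 2 for z in sample) / len(sample)

def solve_saa(sample, grid_step=0.01):
    """Approximate minimizer of f_n over a grid on [0, 10]."""
    grid = [i * grid_step for i in range(int(10 / grid_step) + 1)]
    return min(grid, key=lambda x: saa_objective(x, sample))

for n in (10, 100, 10000):
    sample = [random.gauss(2.0, 1.0) for _ in range(n)]
    print(n, round(solve_saa(sample), 2))  # minimizers approach 2.0
```

Here the SAA minimizer is (up to the grid resolution) the sample mean, so the convergence visible in the output is just the Law of Large Numbers at work.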

Example 2. Chance Constraints.

Consider a chance constrained problem (cf. [23]) where the probability distribution is approximated by the empirical distribution. Let
$$\Gamma_0 = \{x \in \mathbb{R}^p : P\{\omega : \gamma^j(x, Z(\omega)) \le 0\} \ge \eta_j,\ j \in J\}, \qquad 0 < \eta_j < 1.$$
Again $Z$ is a random variable with values in $\mathbb{R}^m$. $\gamma^j : \mathbb{R}^p \times \mathbb{R}^m \to \mathbb{R}^1$, $j \in J$, are measurable functions such that $\gamma^j(\cdot, z)$ is lsc for $P_Z$-almost all $z$. We write probabilities of events in the extended form $P\{\omega : \ldots\}$ in order to indicate which quantities are random and which are not. For the sake of simplicity we confine ourselves to individual chance constraints; joint chance constraints can be treated in a similar way. With the probability distribution $P_Z$ of the random variable $Z$ and the indicator function $\mathbf{1}$, the set $\Gamma_0$ can also be rewritten in the following form:
$$\Gamma_0 = \{x \in \mathbb{R}^p : P_Z\{z : \gamma^j(x, z) \le 0\} \ge \eta_j,\ j \in J\},$$
hence
$$g_0^j(x) = \eta_j - P_Z\{z : \gamma^j(x, z) \le 0\} = \eta_j - E\,\mathbf{1}_{(-\infty, 0]}(\gamma^j(x, Z)).$$
Furthermore, the probabilities can be regarded as expectations of the indicator functions of the sets $M^j(x) = \{z \in \mathbb{R}^m : \gamma^j(x, z) \le 0\}$. This yields
$$g_0^j(x) = \eta_j - E\,\mathbf{1}_{(-\infty, 0]}(\gamma^j(x, Z)) = \eta_j - E\,\mathbf{1}_{M^j(x)}(Z).$$
Approximating $P_Z$ with the empirical measure as in Example 1 we obtain
$$g_n^j(x, \omega) = \eta_j - \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{(-\infty, 0]}(\gamma^j(x, Z_i(\omega))) = \eta_j - \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{M^j(x)}(Z_i(\omega)).$$

Example 3. M-estimators.

When applying stability theory to statistical estimators the starting point is different: we are given a random optimization problem, which is solved for each realization in order to obtain the estimator. Consider, e.g., a maximum likelihood estimator for the parameter $\vartheta$ of the density $f_Z$ of a real-valued random variable $Z$. One assumes that an i.i.d. sample $Z_1, \ldots, Z_n$ is available and maximizes the function
$$f_n(\vartheta, \omega) := \frac{1}{n} \sum_{i=1}^{n} \ln f_Z(\vartheta, Z_i(\omega))$$
(assuming that certain regularity conditions are fulfilled). In introductory statistics courses the factor $\frac{1}{n}$ is usually omitted.
However, taking this factor into account, one realizes, by the Law of Large Numbers, that the limit function is
$$f_0(\vartheta) := E(\ln f_Z(\vartheta, Z)).$$
The limit problem can be identified as an optimization problem which yields the true parameter. Maximum likelihood estimators and least squares estimators belong to the class of M-estimators (cf. [34]), where
$$f_0(x) = \int m(x, z)\, dP_Z(z) \quad \text{and} \quad f_n(x, \omega) = \frac{1}{n} \sum_{i=1}^{n} m(x, Z_i(\omega)), \qquad Z_i \text{ i.i.d.}$$
Applications of the presented stability results in statistics are not restricted to the estimation of parameters. Often modes or level sets of densities or regression functions are sought. These sets are usually derived as modes or level sets of approximating functions. Then the methods that will be described in the following apply as well. An example is given in [33].
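Returning to Example 2, the empirical surrogate $g_n^j$ is simply $\eta_j$ minus the fraction of sample points satisfying the constraint. A minimal Python sketch (with an assumed one-dimensional constraint function $\gamma(x, z) = z - x$, standard normal $Z$, and level $\eta = 0.95$; these are our toy choices, not the paper's):

```python
import random

# Toy empirical chance constraint: x is feasible iff P(Z <= x) >= 0.95,
# Z ~ N(0, 1), so the true feasible set is [q_0.95, inf) with q_0.95 ~ 1.645.

random.seed(1)
ETA = 0.95

def g_n(x, sample):
    """Empirical g_n(x) = eta - (1/n) * #{i : gamma(x, Z_i) <= 0},
    with gamma(x, z) = z - x; x is declared feasible when g_n(x) <= 0."""
    frac = sum(1 for z in sample if z - x <= 0) / len(sample)
    return ETA - frac

sample = [random.gauss(0.0, 1.0) for _ in range(20000)]
print(g_n(2.5, sample) <= 0)  # well inside the feasible set: True
print(g_n(0.0, sample) <= 0)  # P(Z <= 0) = 0.5 < 0.95: False
```

Note that the empirical feasible set $\{x : g_n(x, \omega) \le 0\}$ is itself random; how such random constraint sets approximate $\Gamma_0$ is exactly what the stability theory below addresses.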

3 Convergence of Random Functions and Random Sets

3.1 Convergence Almost Surely

First, we will explain and discuss the convergence notions we shall deal with in the deterministic setting, in order not to obscure the interrelationships by technical details. These notions can immediately be transferred to the a.s. case. Corresponding notions in probability will be considered in Subsection 3.2. The relations between the convergence notions which will be explained in the deterministic case remain valid, in essence, also in the random settings. We consider a minimization problem
$$(P_{0,D})\quad \min_{y \in \Gamma_0} f_0(y)$$
which is approximated by a sequence of surrogate problems
$$(P_{n,D})\quad \min_{y \in \Gamma_n} f_n(y),$$
where $\{f_n,\ n \in \mathbb{N}_0\}$ is a family of objective functions $f_n : \mathbb{R}^p \to \bar{\mathbb{R}}^1$ and $\{\Gamma_n,\ n \in \mathbb{N}_0\}$ is the corresponding family of closed constraint sets. $\mathbb{N}_0$ is used as an abbreviation for $\mathbb{N} \cup \{0\}$. When dealing with solution sets which are not single-valued, we need convergence notions for sequences of sets. Kuratowski-Painlevé convergence has turned out to be an appropriate concept ([24], [35]). Let $(S_n)$ be a sequence of subsets of $\mathbb{R}^p$. The Kuratowski-Painlevé limit superior and the Kuratowski-Painlevé limit inferior (called outer and inner limits in [24]) are defined in the following way:
$$K\text{-}\limsup_{n \to \infty} S_n := \{s \in \mathbb{R}^p : \exists (s_n) : s_n \to s \text{ and } s_n \in S_n \text{ infinitely often}\},$$
$$K\text{-}\liminf_{n \to \infty} S_n := \{s \in \mathbb{R}^p : \exists (s_n) : s_n \to s \text{ and } s_n \in S_n\ \forall n \ge n_0(s)\}.$$
As the solution sets of approximate optimization problems tend to a subset of the solution set of the true problem only, we introduce the notion of an inner approximation, which describes this kind of approximation. Outer approximations are the completing part of Kuratowski-Painlevé convergence.

Definition.
(i) If $K\text{-}\limsup_{n \to \infty} S_n \subset S_0$, then $(S_n)$ is called an inner Kuratowski-Painlevé approximation to $S_0$ (abbreviated $S_n \xrightarrow{K\text{-}i} S_0$).
(ii) If $K\text{-}\liminf_{n \to \infty} S_n \supset S_0$, then $(S_n)$ is called an outer Kuratowski-Painlevé approximation to $S_0$ (abbreviated $S_n \xrightarrow{K\text{-}o} S_0$).

(iii) If $K\text{-}\limsup_{n \to \infty} S_n = K\text{-}\liminf_{n \to \infty} S_n = S_0$, then $(S_n)$ is convergent in the Kuratowski-Painlevé sense to $S_0$: $K\text{-}\lim_{n \to \infty} S_n = S_0$ (abbreviated $S_n \xrightarrow{K} S_0$).

A sequence $(S_n)$ is also an inner approximation to $S_0$ if $K\text{-}\limsup_{n \to \infty} S_n$ is empty. This possibility is usually undesirable and has to be excluded by additional assumptions. In the statistics literature, for instance, one often assumes that there is a sequence of estimators fulfilling certain conditions that guarantee convergence in the sense under consideration. For single-valued $S_n$, $n \in \mathbb{N}_0$, inner approximations reduce to convergence if there is a compact set $K$ such that $S_n \subset K$ for all $n \ge n_0$. In a random setting, this compactness condition usually appears in a weaker form which corresponds to the convergence notion under consideration. Single-valued outer approximations are always convergent. It is worth mentioning that convergence in the Kuratowski-Painlevé sense is metrizable [24]. This opens the possibility to employ the theory of convergence of random variables with values in metric spaces ([4], [5]) and particularly Skorokhod's Representation Theorem for convergence in distribution. Using Skorokhod's Representation Theorem, results for convergence in distribution can be derived from convergence almost surely. Furthermore, in the metric case, the Portmanteau Theorem provides useful equivalent characterizations of convergence in distribution. The topologies which describe one-sided approximations are studied in [12], [13], and [8]. Surrogate assertions for the Portmanteau Theorem that apply to one-sided approximations can be found in [41] and [8]. These results could be exploited to derive asymptotic confidence sets. The weakest condition on the objective functions (in the absence of constraints) which guarantees inner approximations is epi-convergence [24].
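A small worked example (ours, not from the paper) makes these notions concrete. Consider the oscillating sequence of sets $S_n = [0,\, 1 + (-1)^n] \subset \mathbb{R}^1$, i.e. $S_n = [0, 2]$ for even $n$ and $S_n = \{0\}$ for odd $n$. Then

```latex
K\text{-}\limsup_{n\to\infty} S_n = [0,2], \qquad
K\text{-}\liminf_{n\to\infty} S_n = \{0\},
```

since every point of $[0,2]$ is the limit of points $s_n \in S_n$ along the even indices, whereas only $0$ belongs to $S_n$ for all sufficiently large $n$. Hence $(S_n)$ is an inner approximation to every $S_0 \supset [0,2]$ and an outer approximation to every $S_0 \subset \{0\}$, but it is not Kuratowski-Painlevé convergent.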
In order to be able to deal also with constraints, one often considers modified objective functions which take the value $+\infty$ if the constraints are violated. These modified objective functions can be written in the form $\tilde{f} = f + \delta_\Gamma$ with the indicator function $\delta_\Gamma$, which takes the value $0$ if $x \in \Gamma$ and equals $+\infty$ otherwise. Therefore we shall explain epi-convergence for objective functions which map into $\bar{\mathbb{R}}^1$. We agree that $+\infty + \alpha = +\infty$ and $-\infty + \alpha = -\infty$ for all $\alpha \in \mathbb{R}^1$.

Definition. Let $\{f_n : \mathbb{R}^p \to \bar{\mathbb{R}}^1,\ n \in \mathbb{N}_0\}$ be a family of functions. The sequence $(f_n)$ is said to be epi-convergent to $f_0$ (abbreviated $f_n \xrightarrow{epi} f_0$) if the sequence of the corresponding epigraph multifunctions converges in the Kuratowski-Painlevé sense to the epigraph of $f_0$.

The following condition is an equivalent characterization of epi-convergence: for all $x_0 \in \mathbb{R}^p$,
$$\liminf_{n \to \infty} f_n(x_n) \ge f_0(x_0) \ \text{ for all sequences } (x_n) \text{ with } x_n \to x_0, \ \text{ and}$$
$$\limsup_{n \to \infty} f_n(x_n) \le f_0(x_0) \ \text{ for some sequence } (x_n) \text{ with } x_n \to x_0.$$
With respect to applications the following relations are of importance. Convergence in the spaces $C[-d, d]$, $d \in \mathbb{R}^1$, of continuous functions over $[-d, d] \subset \mathbb{R}^1$ with the uniform metric implies epi-convergence. Convergence in the spaces $D[-d, d]$, $d \in \mathbb{R}^1$, of càdlàg functions with the Skorokhod metric, or convergence in the spaces $l^\infty[-d, d]$, $d \in \mathbb{R}^1$, of bounded functions with the uniform metric, implies epi-convergence of the lsc regularizations. For details see [40].
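The difference between pointwise convergence and epi-convergence can be seen in a standard one-dimensional example (included here for illustration; it is not taken from the paper). Let

```latex
f_n(x) = \begin{cases} -1, & x = 1/n, \\ 0, & \text{otherwise}. \end{cases}
```

For every fixed $x$ we have $f_n(x) = -1$ for at most one index $n$, so $(f_n)$ converges pointwise to the zero function. However, taking $x_n = 1/n \to 0$ gives $\lim_n f_n(x_n) = -1$, and checking both conditions of the characterization above shows that the epi-limit is the function $f_0$ with $f_0(0) = -1$ and $f_0(x) = 0$ otherwise. Thus the epi-limit can lie strictly below the pointwise limit, which is exactly why it is the right notion for preserving minimizers and optimal values.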

Unfortunately, epi-convergence of the original objective functions together with Kuratowski-Painlevé convergence of the constraint sets does not, in general, imply epi-convergence of the modified objective functions. Therefore results for the simultaneous approximation of objective functions and constraint sets impose stronger convergence conditions on the original objective functions, usually continuous convergence. If, however, $\Gamma_n = \Gamma$ for all $n \in \mathbb{N}_0$ and a closed set $\Gamma$, then epi-convergence and pointwise convergence of $(f_n)$ imply that the modified objective functions are epi-convergent. In order to investigate epi-convergence and continuous convergence using the same methods as far as possible, one can split epi-convergence into an upper part, called epi-upper approximation, and a (more restrictive) lower part, called lsc approximation. Continuous convergence can then also be characterized via lsc approximations with respect to $(f_n)$ and $(-f_n)$; see the definitions below. When dealing with constrained problems, convergence of the objective functions is needed only on a set which contains the constraint set of the original problem. Hence we will restrict all types of convergence under consideration to a convergence region $X$, which is supposed to be closed. Of course this restriction is of importance especially for the (semi)continuous convergence. $\{f_n : \mathbb{R}^p \to \bar{\mathbb{R}}^1,\ n \in \mathbb{N}_0\}$ is a family of deterministic functions.

Definition.
(i) $(f_n)$ is called an epi-upper approximation to $f_0$ on $X$ (abbreviated $f_n \xrightarrow[X]{epi\text{-}u} f_0$) if for all $x_0 \in X$: $\limsup_{n \to \infty} f_n(x_n) \le f_0(x_0)$ for some sequence $(x_n)$ with $x_n \to x_0$.
(ii) $(f_n)$ is called an lsc approximation to $f_0$ on $X$ (abbreviated $f_n \xrightarrow[X]{l} f_0$) if for all $x_0 \in X$: $\liminf_{n \to \infty} f_n(x_n) \ge f_0(x_0)$ for all sequences $(x_n)$ with $x_n \to x_0$.
(iii) A sequence $(f_n)$ which is an lsc approximation to $f_0$ on $X$ and for which $(-f_n)$ is an lsc approximation to $-f_0$ on $X$ is continuously convergent to $f_0$ on $X$ (abbreviated $f_n \xrightarrow[X]{c} f_0$).
A pointwise convergent sequence of functions is an epi-upper approximation. Verification of lsc approximations, however, can require considerable effort. Hence sufficient conditions for this type of convergence are of special interest. Often the following assertion is helpful: if the limit function is continuous, then continuous convergence is equivalent to uniform convergence on each compact set. One-sided versions yield sufficient conditions for semicontinuous approximations. Now we turn to random approximate problems. Stability results are available also for random limit problems; in the following we confine ourselves to a deterministic limit problem and consider sequences of random functions $(f_n)$ as explained in Section 2. In order to define the above convergence notions almost surely, one can simply rewrite the definitions, requiring the convergence under consideration for almost all $\omega$. Thus we obtain, for instance, the following definition of an lsc approximation almost surely, which plays a central role in our investigations. All other definitions are given in detail in [37].
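The equivalence mentioned above between continuous convergence and uniform convergence on compact sets (for a continuous limit) is easy to check numerically in a toy case; the functions below are our own illustrative choice, not taken from the paper:

```python
import math

# Toy check: f_n(x) = x^2 + sin(n*x)/n converges uniformly to f_0(x) = x^2
# on the compact set [-2, 2], since sup |f_n - f_0| <= 1/n. As f_0 is
# continuous, this is equivalent to continuous convergence on [-2, 2].

def sup_distance(n, lo=-2.0, hi=2.0, points=4001):
    """Approximate sup over [lo, hi] of |f_n(x) - f_0(x)| = |sin(n*x)|/n."""
    step = (hi - lo) / (points - 1)
    return max(abs(math.sin(n * (lo + i * step))) / n for i in range(points))

for n in (1, 10, 100, 1000):
    print(n, round(sup_distance(n), 5))  # bounded by 1/n, tends to 0
```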

Definition. $(f_n)$ is called an lsc approximation almost surely to $f_0$ on $X$ (abbreviated $f_n \xrightarrow[X]{l\ a.s.} f_0$) if
$$P\{\omega : \forall x_0 \in X\ \forall (x_n) \to x_0 : \liminf_{n \to \infty} f_n(x_n, \omega) \ge f_0(x_0)\} = 1.$$
The following theorem relies widely on stability results for deterministic parametric programming problems; see e.g. [37] for references and proofs. By $\mathcal{C}^p$ we denote the family of compact subsets of $\mathbb{R}^p$. In order to formulate half-sided versions we use the following notation.

Definition. Let $(\xi_n)$ be a sequence of random variables with values in the extended reals and $\xi_0 \in \bar{\mathbb{R}}$.
(i) $(\xi_n)$ is said to be a lower approximation almost surely to $\xi_0$ (abbreviated $\xi_n \xrightarrow{l\ a.s.} \xi_0$) if $P\{\omega : \liminf_{n \to \infty} \xi_n(\omega) \ge \xi_0\} = 1$.
(ii) $(\xi_n)$ is said to be an upper approximation almost surely to $\xi_0$ (abbreviated $\xi_n \xrightarrow{u\ a.s.} \xi_0$) if $(-\xi_n)$ is a lower approximation almost surely to $-\xi_0$.

The following theorem is a slight generalization of Theorem 4.1 in [37]. The following conditions will be needed:

(AS1a) $f_n \xrightarrow[\Gamma_0]{u\ a.s.} f_0$.
(AS1b) There exists $x_0 \in \Psi_0$ such that $f_n \xrightarrow[\{x_0\}]{u\ a.s.} f_0$.

Theorem 1 (Optimal Value and Inner Approximation of the Solution Set a.s.)
(i) Let (AS1a) or (AS1b) hold and assume that $\Gamma_n \xrightarrow{K\text{-}o\ a.s.} \Gamma_0$. Then $\Phi_n \xrightarrow{u\ a.s.} \Phi_0$.
(ii) Assume that $f_n \xrightarrow[\Gamma_0]{l\ a.s.} f_0$, $\Gamma_n \xrightarrow{K\text{-}i\ a.s.} \Gamma_0$, and $P\{\omega : \exists K \in \mathcal{C}^p\ \exists n_0 \in \mathbb{N}\ \forall n \ge n_0 : \Gamma_n(\omega) \subset K\} = 1$. Then $\Phi_n \xrightarrow{l\ a.s.} \Phi_0$.
(iii) If $\Phi_n \xrightarrow{u\ a.s.} \Phi_0$, $f_n \xrightarrow[\Gamma_0]{l\ a.s.} f_0$, and $\Gamma_n \xrightarrow{K\text{-}i\ a.s.} \Gamma_0$, then $\Psi_n \xrightarrow{K\text{-}i\ a.s.} \Psi_0$.

Sufficient conditions for the assumptions of this theorem which apply to many problems will be given, together with corresponding conditions for the in-probability case, in Section 3.4. If the compactness condition in part (ii) is not satisfied, one can often make use of a so-called inf-compactness or equi-inf-boundedness condition. Furthermore, this condition is sometimes replaced with the condition that a bounded sequence of solutions exists; see [40] for details.
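The interplay of the assumptions in Theorem 1 can be observed numerically. The following hedged Python sketch uses a toy problem of our own (SAA objective with $Z \sim N(5,1)$ and a deliberately simple deterministic approximation $\Gamma_n = [0,\, 3 + 1/n]$ of $\Gamma_0 = [0, 3]$), for which $f_0(x) = (x-5)^2 + 1$, $\Phi_0 = f_0(3) = 5$, and $\Psi_0 = \{3\}$:

```python
import random

# Toy illustration of Theorem 1 (our modeling choices, not the paper's):
# f0(x) = E[(x - Z)^2] with Z ~ N(5, 1), Gamma_0 = [0, 3].
# f_n = sample average of (x - Z_i)^2, Gamma_n = [0, 3 + 1/n].
# Expectation: optimal values Phi_n -> 5 and minimizers -> Psi_0 = {3}.

random.seed(2)

def f_n(x, sample):
    return sum((x - z) ** 2 for z in sample) / len(sample)

def solve(sample, n, step=0.001):
    """Grid minimization of f_n over Gamma_n = [0, 3 + 1/n]."""
    upper = 3.0 + 1.0 / n
    grid = [i * step for i in range(int(upper / step) + 1)]
    x_star = min(grid, key=lambda x: f_n(x, sample))
    return x_star, f_n(x_star, sample)

for n in (10, 100, 10000):
    sample = [random.gauss(5.0, 1.0) for _ in range(n)]
    x_n, phi_n = solve(sample, n)
    print(n, round(x_n, 3), round(phi_n, 2))  # tends toward (3, 5)
```

Since the unconstrained SAA minimizer (the sample mean, near 5) lies outside $\Gamma_n$, the constrained minimizer sits at the right endpoint $3 + 1/n$, and both the solutions and the optimal values converge as the theorem predicts.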
The above theorem opens the possibility to deal with constraints directly. If constraints are hidden in a modified objective function, continuous convergence of the objective functions can be replaced with the weaker epi-convergence. We first formulate an auxiliary assertion (Theorem 4.2 in [37]):

Lemma 1. If $f_n \xrightarrow[\mathbb{R}^p]{epi\text{-}u\ a.s.} f_0$, or there is a minimizer $x_0$ of $f_0$ with $f_n \xrightarrow[\{x_0\}]{epi\text{-}u\ a.s.} f_0$, then $\Phi_n \xrightarrow{u\ a.s.} \Phi_0$.

Together with Theorem 1(iii) we obtain the following corollary:

Corollary. If $f_n \xrightarrow[\mathbb{R}^p]{epi\ a.s.} f_0$, then $\Psi_n \xrightarrow{K\text{-}i\ a.s.} \Psi_0$.

Lemma 1 shows that the assertion of the Corollary still holds if the convergence region for the epi-upper part of the assumption is restricted to $x_0 \in \Psi_0$. In [14] it was proved that the convergence region for the lsc part can also be further restricted.

3.2 Convergence in Probability

In the following we shall give the main definitions needed to prove stability assertions in probability. More information can be found in [37] and [15]. The definitions are based on investigations by Salinetti and Wets on Kuratowski-Painlevé convergence and (unrestricted) epi-convergence in probability ([29]). Let $(M_n)$ be a sequence of closed-valued measurable multifunctions and $M_0 \subset \mathbb{R}^p$ a closed set. $\mathrm{Epi}\,f$ is the epigraph multifunction of a function $f$. $U_\kappa X$ denotes the open $\kappa$-neighborhood of $X$: $U_\kappa X := \{y \in \mathbb{R}^p : \inf_{x \in X} \|x - y\| < \kappa\}$ with the Euclidean norm $\|\cdot\|$. $\bar{U}_\kappa X$, which is used later on, denotes the closure of $U_\kappa X$.

Definition.
(i) The sequence $(M_n)$ is an inner Kuratowski-Painlevé approximation in probability to $M_0$ (abbreviated $M_n \xrightarrow{K\text{-}i\ prob} M_0$) if
$$\forall \varepsilon > 0\ \forall K \in \mathcal{C}^p : \lim_{n \to \infty} P([M_n(\omega) \setminus U_\varepsilon M_0] \cap K \ne \emptyset) = 0.$$
(ii) $(M_n)$ is an outer Kuratowski-Painlevé approximation in probability to $M_0$ (abbreviated $M_n \xrightarrow{K\text{-}o\ prob} M_0$) if
$$\forall \varepsilon > 0\ \forall K \in \mathcal{C}^p : \lim_{n \to \infty} P([M_0 \setminus U_\varepsilon M_n(\omega)] \cap K \ne \emptyset) = 0.$$
(iii) $(M_n)$ is Kuratowski-Painlevé convergent in probability to $M_0$ (abbreviated $M_n \xrightarrow{K\ prob} M_0$) if it is an inner and an outer approximation in probability to $M_0$.

Definition.
(i) $(f_n)$ is an lsc approximation in probability to $f_0$ on $X$ (abbreviated $f_n \xrightarrow[X]{l\ prob} f_0$) if
$$\forall \varepsilon > 0\ \forall K \in \mathcal{C}^{p+1} : \lim_{n \to \infty} P\{\omega : [(\mathrm{Epi}\,f_n(\cdot, \omega) \cap (\bar{U}_1 X \times \mathbb{R})) \setminus U_\varepsilon(\mathrm{Epi}\,f_0 \cap (X \times \mathbb{R}))] \cap K \ne \emptyset\} = 0.$$
(ii) $(f_n)$ is an upper semicontinuous (usc) approximation in probability to $f_0$ on $X$ (abbreviated $f_n \xrightarrow[X]{u\ prob} f_0$) if $(-f_n)$ is an lsc approximation in probability to $-f_0$ on $X$.
(iii) $(f_n)$ is an epi-upper approximation in probability to $f_0$ on $X$ (abbreviated $f_n \xrightarrow[X]{epi\text{-}u\ prob} f_0$) if
$\forall \varepsilon > 0\ \forall K \in \mathcal{C}^{p+1}:\ \lim_{n \to \infty} P\{\omega : (\mathrm{Epi}f_0 \cap [X \times \mathbb{R}]) \setminus U_\varepsilon(\mathrm{Epi}f_n(\cdot,\omega)) \cap K \neq \emptyset\} = 0.$

These conditions are not immediately accessible. Sufficient conditions that apply in many situations will be delivered in Section 3.4.

In the following we will also need half-sided versions of the usual convergence in probability of random variables $\xi_n$ with values in $\mathbb{R}^1$. We repeat the definition from [37]. (We changed, however, the denotation and speak of lower and upper approximations instead of lower and upper semiconvergent sequences, respectively.) Equivalent characterizations are given in [15].

Definition. Let $(\xi_n)$ be a sequence of random variables with values in the extended reals and $\xi_0 \in \mathbb{R}$.
(i) $(\xi_n)$ is said to be a lower approximation in probability to $\xi_0$ (abbreviated $\xi_n \xrightarrow{l\ prob} \xi_0$) if
$\forall \varepsilon > 0:\ \lim_{n \to \infty} P\{\omega : \xi_n(\omega) < \min\{\xi_0 - \varepsilon,\ \tfrac{1}{\varepsilon}\}\} = 0.$
(ii) $(\xi_n)$ is said to be an upper approximation in probability to $\xi_0$ (abbreviated $\xi_n \xrightarrow{u\ prob} \xi_0$) if $(-\xi_n)$ is a lower approximation in probability to $-\xi_0$.

A sequence $(\xi_n)$ which is both a lower and an upper approximation in probability to $\xi_0$ is convergent in probability to $\xi_0$.

The following theorem was proved in [37] for a random limit problem. Part (i) uses condition (AP1a) or (AP1b). Although we generally assumed that the functions are lsc, we repeat this condition for $f_0$ in the following in order to indicate that we need this assumption not only for the simplification of certain proofs. The semicontinuity assumptions for $f_0$ result from general relations between the convergence notions almost surely and in probability, cf. [15], Theorem 3.1.

(AP1a) $f_0$ is upper semicontinuous (usc) on $\Gamma_0$ and $f_n \xrightarrow[\Gamma_0]{u\ prob} f_0$.
(AP1b) There exists $x_0 \in \Psi_0$ such that $f_0$ is usc at $x_0$ and $f_n \xrightarrow[\{x_0\}]{u\ prob} f_0$.

Theorem 2 (Optimal Value and Inner Approximation of the Solution Set in Probability)
(i) Let (AP1a) or (AP1b) hold and assume that $\Gamma_n \xrightarrow{K\text{-}o\ prob} \Gamma_0$. Then $\Phi_n \xrightarrow{u\ prob} \Phi_0$.
(ii) Let $f_0$ be lsc on $\Gamma_0$ and assume that $f_n \xrightarrow[\Gamma_0]{l\ prob} f_0$, $\Gamma_n \xrightarrow{K\text{-}i\ prob} \Gamma_0$, and there exists $K \in \mathcal{C}^p$ with $\lim_{n \to \infty} P\{\omega : \Gamma_n(\omega) \subset K\} = 1$. Then $\Phi_n \xrightarrow{l\ prob} \Phi_0$.
(iii) If $\Phi_n \xrightarrow{u\ prob} \Phi_0$, $f_0$ is lsc on $\Gamma_0$, $f_n \xrightarrow[\Gamma_0]{l\ prob} f_0$, and $\Gamma_n \xrightarrow{K\text{-}i\ prob} \Gamma_0$, then $\Psi_n \xrightarrow{K\text{-}i\ prob} \Psi_0$.

If there is a family $\{x_n, n \in \mathbb{N}\}$ of solutions to $(P_n)$ which is stochastically bounded, part (ii) of Theorem 2 can be replaced with the following statement from [40]:

Lemma 2. (ii') Let $f_0$ be lsc on $\Gamma_0$ and let $x_n$ be a solution to $(P_n)$, $n \in \mathbb{N}$. If $f_n \xrightarrow[\Gamma_0]{l\ prob} f_0$, $\Gamma_n \xrightarrow{K\text{-}i\ prob} \Gamma_0$, and $x_n = O_P(1)$, then $\Phi_n \xrightarrow{l\ prob} \Phi_0$.

An in-probability analogue to Lemma 1 can be derived making use of equivalent characterizations of the convergence notions in probability proved in [15]. Thus we obtain the following assertion:

Lemma 3. If $f_n \xrightarrow[\mathbb{R}^p]{epi\text{-}u\ prob} f_0$, or there is a minimizer $x_0$ of $f_0$ with $f_n \xrightarrow[\{x_0\}]{epi\text{-}u\ prob} f_0$, then $\Phi_n \xrightarrow{u\ prob} \Phi_0$.

Note that upper semicontinuity of $f_0$ is not needed. Hence, with Theorem 2(iii) we can derive an assertion on Kuratowski-Painlevé convergence in probability of the solution sets for a lsc objective function $f_0$.

Corollary. If $f_0$ is lsc and $f_n \xrightarrow[\mathbb{R}^p]{epi\ prob} f_0$, then $\Psi_n \xrightarrow{K\text{-}i\ prob} \Psi_0$.

3.3 $\varepsilon_n$-optimal Solutions

In this section we consider solutions of random optimization problems $(P_n)$ which are optimal up to a random variable $\varepsilon_n$. Let
$\Psi_{\varepsilon_n,n}(\omega) := \{x \in \Gamma_n(\omega) : f_n(x,\omega) \le \inf_{\tilde{x} \in \Gamma_n(\omega)} f_n(\tilde{x},\omega) + \varepsilon_n(\omega)\}.$
The elements of $\Psi_{\varepsilon_n,n}$ will be referred to as $\varepsilon_n$-optimal solutions. $\varepsilon_n$-optimal solutions are considered in [14] for the a.s. case. We will show how the above assertions on the behavior of (optimal) solutions can be extended to $\varepsilon_n$-optimal solutions, in the a.s. and the in-probability sense alike. We follow the considerations in [40].

Lemma 4. Let, for all $n \in \mathbb{N}$, $\varepsilon_n : \Omega \to \mathbb{R}^1_+$ and
$\tilde{f}_n(x,\omega) := \begin{cases} \varepsilon_n(\omega) + \inf_{\tilde{x} \in \Gamma_n(\omega)} f_n(\tilde{x},\omega), & \text{if } x \in \Psi_{\varepsilon_n,n}(\omega), \\ f_n(x,\omega), & \text{otherwise.} \end{cases}$
Then for all $\omega \in \Omega$ the relations $\Psi_{\tilde{f}_n}(\omega) = \Psi_{\varepsilon_n,n}(\omega)$ and $|\Phi_{\tilde{f}_n}(\omega) - \Phi_n(\omega)| \le \varepsilon_n(\omega)$ are valid, where $\Psi_{\tilde{f}_n}$ and $\Phi_{\tilde{f}_n}$ denote the solution set and the optimal value with respect to the objective function $\tilde{f}_n$.

The functions $\tilde{f}_n$ in the above lemma differ from $f_n$ by a non-negative random function $\tilde{\varepsilon}_n$, namely
$\tilde{\varepsilon}_n(x,\omega) := \begin{cases} \varepsilon_n(\omega) + \inf_{\tilde{x} \in \Gamma_n(\omega)} f_n(\tilde{x},\omega) - f_n(x,\omega), & \text{if } x \in \Psi_{\varepsilon_n,n}(\omega), \\ 0, & \text{otherwise.} \end{cases}$
Obviously, if $f_n$ is lsc, then $\tilde{f}_n$ has this property as well. Furthermore, $0 \le \tilde{\varepsilon}_n(x,\omega) \le \varepsilon_n(\omega)$ for all $\omega \in \Omega$ and all $x \in \Gamma_n(\omega)$.
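The construction of Lemma 4 is easy to check numerically. The following is a minimal finite-grid sketch; the quadratic objective, the grid, and the fixed $\varepsilon$ are illustrative assumptions, not taken from the survey:

```python
import numpy as np

def eps_optimal_set(f_vals, eps):
    """Indices of the eps-optimal points: f(x) <= min f + eps."""
    return np.flatnonzero(f_vals <= f_vals.min() + eps)

def modified_objective(f_vals, eps):
    """Lemma 4: replace f by (min f + eps) on the eps-optimal set."""
    g = f_vals.copy()
    g[eps_optimal_set(f_vals, eps)] = f_vals.min() + eps
    return g

# illustrative grid problem (not from the survey)
x = np.linspace(-2.0, 2.0, 401)
f = (x - 0.3) ** 2
eps = 0.05

g = modified_objective(f, eps)

# the exact minimizers of the modified objective are precisely the
# eps-optimal points of f ...
assert np.array_equal(np.flatnonzero(np.isclose(g, g.min())),
                      eps_optimal_set(f, eps))
# ... and the optimal values differ by at most eps
assert abs(g.min() - f.min()) <= eps + 1e-12
```

On a grid the two assertions of Lemma 4 can be verified exactly: the modified objective attains its minimum precisely on the $\varepsilon$-optimal set, and its optimal value exceeds the original one by exactly $\varepsilon$ here.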
Consequently, the following lemma can be used to carry over the results for the true optimal values and solution sets to the $\varepsilon_n$-optimal case.

Lemma 5. Let $(\tilde{\varepsilon}_n)$, $\tilde{\varepsilon}_n : \mathbb{R}^p \times \Omega \to \mathbb{R}^1_+$, be a sequence of random functions which are $(\mathcal{B}^p \otimes \Sigma)$-measurable, and let $\tilde{f}_n := f_n + \tilde{\varepsilon}_n$, $n \in \mathbb{N}$.
(i) If $\sup_{x \in U_\kappa \Gamma_0} \tilde{\varepsilon}_n(x,\cdot) \xrightarrow{a.s.} 0$ for a suitable $\kappa > 0$, then
$f_n \xrightarrow[\Gamma_0]{l\ a.s.} f_0 \;\Rightarrow\; \tilde{f}_n \xrightarrow[\Gamma_0]{l\ a.s.} f_0$ and $f_n \xrightarrow[\Gamma_0]{epi\text{-}u\ a.s.} f_0 \;\Rightarrow\; \tilde{f}_n \xrightarrow[\Gamma_0]{epi\text{-}u\ a.s.} f_0$.
(ii) If $\sup_{x \in U_\kappa \Gamma_0} \tilde{\varepsilon}_n(x,\cdot) \xrightarrow{prob} 0$ for a suitable $\kappa > 0$, then
$f_n \xrightarrow[\Gamma_0]{l\ prob} f_0 \;\Rightarrow\; \tilde{f}_n \xrightarrow[\Gamma_0]{l\ prob} f_0$ and $f_n \xrightarrow[\Gamma_0]{epi\text{-}u\ prob} f_0 \;\Rightarrow\; \tilde{f}_n \xrightarrow[\Gamma_0]{epi\text{-}u\ prob} f_0$.

3.4 Sufficient Conditions for Semicontinuous Approximations, Epi-upper Approximations, and Kuratowski-Painlevé Convergence of Constraint Sets

The stability theorems of the foregoing sections assume Kuratowski-Painlevé convergence of constraint sets given by inequality constraints and, mostly, semicontinuous approximations of the objective functions. We will provide sufficient conditions for these assumptions. It will turn out that convergence of the constraint sets heavily depends on semicontinuous approximations of the true constraint functions. Thus the convergence assumptions for sequences of random functions are the crucial conditions.

Sufficient conditions for Kuratowski-Painlevé convergence of constraint sets in the almost-surely setting can be derived from corresponding assertions in parametric programming. The case of convergence in probability was considered in [37]. For the reader's convenience we quote some results from [37] for the special case that $Q_n(\omega) = Q_0 = \mathbb{R}^p$ for all $n \in \mathbb{N}$ and $\omega \in \Omega$. As announced in the Introduction, one can only expect that a whole level set is approximated if an inner point condition is fulfilled. We use the following condition:

(OA) $\Gamma_0 \subset \mathrm{cl}\{x \in \mathbb{R}^p : g_0^j(x) < 0,\ j \in J\}$, where $\mathrm{cl}$ denotes the closure.

Theorem 3 (Constraint Set)
(i) If, for all $j \in J$, $g_n^j \xrightarrow[\mathbb{R}^p]{l\ a.s.} g_0^j$, then $\Gamma_n \xrightarrow{K\text{-}i\ a.s.} \Gamma_0$.
(ii) If, for all $j \in J$, the functions $g_0^j$ are lsc and $g_n^j \xrightarrow[\mathbb{R}^p]{l\ prob} g_0^j$, then $\Gamma_n \xrightarrow{K\text{-}i\ prob} \Gamma_0$.
(iii) Assume that (OA) is satisfied. If, for all $j \in J$, $g_n^j \xrightarrow[\Gamma_0]{u\ a.s.} g_0^j$, then $\Gamma_n \xrightarrow{K\text{-}o\ a.s.} \Gamma_0$.
(iv) Assume that (OA) is satisfied. If, for all $j \in J$, the functions $g_0^j$ are usc and $g_n^j \xrightarrow[\Gamma_0]{u\ prob} g_0^j$, then $\Gamma_n \xrightarrow{K\text{-}o\ prob} \Gamma_0$.

Sufficient conditions for lsc approximations almost surely and in probability are considered in detail in [16]. There are sufficient conditions which are relatively easy to check. Firstly, we consider a deterministic function of a random parameter. Suppose that there exists a Borel measurable function $f : \mathbb{R}^p \times \mathbb{R}^m \to \mathbb{R}^1$ such that $f_0(x) := f(x,y_0)$ for some $y_0 \in \mathbb{R}^m$. If $y_0$ is estimated by a sequence $(Y_n)$ of random variables, we obtain $f_n(x,\omega) = f(x, Y_n(\omega))$. Let $X \subset \mathbb{R}^p$ and $Y \subset \mathbb{R}^m$.

Lemma 6. Let $f$ be lsc on $X \times Y$ and $y_0 \in Y$.

(i) If $(Y_n)$ converges to $y_0$ almost surely, then $f_n \xrightarrow[X]{l\ a.s.} f_0$.
(ii) If $(Y_n)$ converges to $y_0$ in probability, then $f_n \xrightarrow[X]{l\ prob} f_0$.

For the epi-upper counterpart we recall the definition of an epi-upper semicontinuous function $f$ [25].

Definition. Let $\mathcal{N}(x_0)$ denote a neighborhood base of $x_0 \in X$ and $\mathcal{N}(y_0)$ a neighborhood base of $y_0 \in Y$. $f$ is called epi-upper semicontinuous (epi-usc) on $X \times Y$ if for each $x_0 \in X$, $y_0 \in Y$ the relation
$\sup_{V \in \mathcal{N}(x_0)}\ \inf_{W \in \mathcal{N}(y_0)}\ \sup_{y \in W}\ \inf_{x \in V} f(x,y) \le f(x_0,y_0)$
holds.

Lemma 7. Let $f$ be epi-usc on $X \times Y$ and $y_0 \in Y$.
(i) If $(Y_n)$ converges to $y_0$ almost surely, then $f_n \xrightarrow[X]{epi\text{-}u\ a.s.} f_0$.
(ii) If $(Y_n)$ converges to $y_0$ in probability, then $f_n \xrightarrow[X]{epi\text{-}u\ prob} f_0$.

In many cases, among them Sample Average Approximation and M-estimators, the objective functions of the underlying optimization problems can be written as an integral. If we consider problems with probabilistic constraints, the probabilities can be regarded as expectations of an indicator function $1_{\{\dots\}}$; hence integrals occur in the constraint functions. Thus in the limit problems we have an integral with respect to a deterministic measure, and in the approximations integrals with respect to random measures, often the empirical measure.

The following approach was called the pointwise approach in [38] and [16], and scalarization in [11]. It offers the possibility of reducing the verification of convergence of random functions to the verification of convergence of suitable random variables with values in $\mathbb{R}^1$. Consequently, existing Laws of Large Numbers and other limit theorems of probability theory can be employed. The method was first explained in [38] and is further elaborated in [16].

We consider an objective function $f_0$ with $f_0(x) = \int_{\mathbb{R}^m} \varphi(x,z)\,dP^Z(z)$ and approximation of the probability measure $P^Z$ on $\mathcal{B}^m$ by a sequence of random measures $(P_n)$. $\varphi : \mathbb{R}^p \times \mathbb{R}^m \to \mathbb{R}^1$ is supposed to be Borel measurable and integrable with respect to the second variable.
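In the simplest instance of this setting, $P_n$ is the empirical measure of an i.i.d. sample, so $f_n(x,\cdot)$ is a sample average of $\varphi(x, Z_i)$. The following self-contained sketch, whose integrand, distribution, and grid are illustrative choices and not from the survey, compares the Sample Average Approximation of $f_0$ with the true objective and also evaluates an empirical version of the auxiliary variables $H_n^\varepsilon$ appearing below, for which the inner infimum over a closed $\varepsilon$-ball is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x, z):
    return (x - z) ** 2              # integrand; f_0(x) = E[phi(x, Z)]

# illustrative true problem: Z ~ N(0, 1), so f_0(x) = x^2 + 1,
# with minimizer 0 and optimal value 1
z = rng.normal(size=50_000)          # i.i.d. sample -> empirical measure P_n
x_grid = np.linspace(-2.0, 2.0, 401)

# Sample Average Approximation: f_n(x) = (1/n) * sum_i phi(x, Z_i)
fn = np.array([phi(x, z).mean() for x in x_grid])
x_hat = x_grid[fn.argmin()]

assert abs(fn.min() - 1.0) < 0.05    # optimal value close to Phi_0 = 1
assert abs(x_hat) < 0.05             # minimizer close to 0

# empirical version of the auxiliary variables H_n^eps:
# the inner infimum of phi over the closed eps-ball around x0 is explicit
eps, x0 = 0.1, 0.5
inf_phi = np.where(np.abs(z - x0) <= eps,
                   0.0,
                   (np.abs(z - x0) - eps) ** 2)
H_n = inf_phi.mean()                 # -> H_0^eps(x0) by Kolmogorov's LLN
assert H_n <= phi(x0, z).mean()      # inf over the ball never exceeds phi(x0, .)
```

The last two lines illustrate the scalarization idea: for each fixed $x_0$ and $\varepsilon$, the convergence of the random function reduces to a Law of Large Numbers for the real-valued variables $\inf_{\tilde{x} \in \bar{U}_\varepsilon\{x_0\}} \varphi(\tilde{x}, Z_i)$.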
Note that the integrand $\varphi$ could also be approximated by a sequence $(\varphi_n)$; see e.g. [40]. Here we confine ourselves to approximating functions of the form
$f_n(x,\omega) = \int_{\mathbb{R}^m} \varphi(x,z)\,dP_n(z,\omega),\quad n \in \mathbb{N}.$
The following auxiliary random variables play a crucial role in the approach:
$H_n^\varepsilon(x_0,\omega) := \int_{\mathbb{R}^m} \inf_{\tilde{x} \in \bar{U}_\varepsilon\{x_0\}} \varphi(\tilde{x},z)\,dP_n(z,\omega)$ and $H_0^\varepsilon(x_0) := \int_{\mathbb{R}^m} \inf_{\tilde{x} \in \bar{U}_\varepsilon\{x_0\}} \varphi(\tilde{x},z)\,dP^Z(z).$
Combining Theorem 4.2 and Proposition 4.1 in [16] we obtain Theorem 4.

Theorem 4 (Semicontinuous Convergence, Pointwise Approach). Let $f_0$ be lsc on $X$ and assume that the following conditions are satisfied for all $x_0 \in X$:
(SC1) $\varphi(\cdot,z)$ is lsc at $x_0$ for $P^Z$-almost all $z$.

(SC2) $\int_{\mathbb{R}^m} \big| \inf_{\tilde{x} \in \bar{U}_{\bar{\varepsilon}}\{x_0\}} \varphi(\tilde{x},z) \big|\,dP^Z(z) < \infty$ for an $\bar{\varepsilon} > 0$.

Then
$H_n^\varepsilon(x_0,\cdot) \xrightarrow{a.s.} H_0^\varepsilon(x_0)\ \ \forall x_0 \in X\ \forall \varepsilon \in (0,\bar{\varepsilon}]$ implies $f_n \xrightarrow[X]{l\ a.s.} f_0$, and
$H_n^\varepsilon(x_0,\cdot) \xrightarrow{prob} H_0^\varepsilon(x_0)\ \ \forall x_0 \in X\ \forall \varepsilon \in (0,\bar{\varepsilon}]$ implies $f_n \xrightarrow[X]{l\ prob} f_0$.

Note that in (SC1) the set of $z \in \mathbb{R}^m$ where the lsc property may fail can vary with $x_0$. This is important if probabilities are approximated, e.g. for chance constraints. Then the integrand is an indicator function $1_{\{\dots\}}$ which is usually not lsc at each point. (SC1) allows one to include at least continuous probability measures; see [16].

If the true measure is approximated by the empirical measure, one can employ a Law of Large Numbers for the random variables $\inf_{\tilde{x} \in \bar{U}_\varepsilon\{x_0\}} \varphi(\tilde{x},Z_i)$, $x_0 \in X$. If the $Z_i$, $i = 1,2,\dots$, are i.i.d., the integrability condition (SC2) already implies, by Kolmogorov's Law of Large Numbers, the convergence of $(H_n^\varepsilon(x_0,\cdot))$ almost surely. Hence Theorem 4 immediately applies to the three examples. Finally, it should be emphasized that Theorem 4 is also applicable to dependent samples or to approximations of the probability measure by density estimators etc. Further methods employ equitightness [1] or lower equi-integrability [16].

4 Universal Confidence Sets

4.1 Preliminaries

In the foregoing sections we dealt with convergence of solutions and/or optimal values of optimization problems. These results do not say anything about the rate of the convergence. Rates could be included in many cases, see the approach in [35]. Convergence rates enjoy great popularity in statistics. However, rates do not make assertions about the accuracy of an approximation for a fixed sample size $n$. Fortunately, the development of concentration-of-measure results opens the possibility to supplement convergence in probability with a convergence rate and a tail behavior function. This quantified version of convergence in probability can be exploited to derive conservative non-asymptotic confidence sets.
Conservative confidence sets are sets that cover the true set with at least a prescribed high probability. We will explain a method which yields such confidence sets. Because the approach works for each $n$, the denotation universal confidence sets has been chosen. G. Ch. Pflug [22], who first considered confidence sets with this property, called them strong universal confidence sets. We will speak of universal confidence sets only, because this name fits into the denotation known from mathematical statistics. In [22] also so-called weak confidence sets are considered. They utilize the fact that in general only a subset of the true set is approximated.

In order not to overload the presentation with technical details, in this section we assume that there is a set $K \in \mathcal{C}^p$ such that $\Gamma_n(\omega) \subset K$ for all $\omega \in \Omega$ and $\Gamma_0 \subset K$. In this way we get rid of the intersection with compact sets in the definition of convergence in probability. In real-life problems this condition is usually satisfied. It can also be enforced by the intersection with suitable sets $Q_n$.

Confidence sets can be derived from so-called superset-approximations, which will be explained in the following. Let a set $M_0 \subset \mathbb{R}^p$ be given and assume that sequences $(M^{sup}_{n,\kappa})$, $\kappa > 0$, are available which have the following property:
$\forall \kappa > 0:\ \sup_{n \in \mathbb{N}} P\{\omega : M_0 \setminus M^{sup}_{n,\kappa}(\omega) \neq \emptyset\} \le H(\kappa). \quad (1)$
$H : \mathbb{R}^1_+ \to \mathbb{R}^1_+$ is a function with the property $\lim_{\kappa \to \infty} H(\kappa) = 0$. We can assume that the convergence is monotone. Then, for a prescribed probability level $\varepsilon_0$, a $\kappa_0$ can be chosen such that $H(\kappa_0) \le \varepsilon_0$, and the sequence $(M^{sup}_{n,\kappa_0})$ yields for each $n \in \mathbb{N}$ a conservative confidence set, i.e. a set which covers the true set $M_0$ with at least the prescribed probability $1 - \varepsilon_0$. We use the following denotation.

Definition. A family $\{(M^{sup}_{n,\kappa}), \kappa > 0\}$ of sequences of measurable multifunctions which satisfies condition (1) is called a superset-approximation to $M_0$ with tail behavior function $H$.

In order to derive useful confidence sets, one would like to have approximating sequences that become smaller with each $n$, i.e. the distance (in a suitable measure) between $M_0$ and $M^{sup}_{n,\kappa}$ should tend to zero with increasing $n$ for each $\kappa > 0$.

Examples of superset-approximations in the above sense can be derived from outer Kuratowski-Painlevé approximations as considered in the foregoing sections, but the approximations have to be supplemented with a so-called convergence rate and a tail behavior function. Superset-approximations are obtained as suitable supersets of the set under consideration, say the solution set, for each $n$: a superset is created by adding a ball with radius $\beta_{n,\kappa}$ around each point. Thus, instead of the $\varepsilon$-neighborhoods in the definition of convergence in probability in the Kuratowski-Painlevé sense, we use $\beta_{n,\kappa}$-neighborhoods. The tail behavior function quantifies the convergence to zero of the probabilities. We will also investigate another kind of outer approximations, the so-called relaxations.
For the definition of Kuratowski-Painlevé approximations we use the following denotations. $\mathcal{B}$ is the set of sequences of positive numbers that converge monotonically to zero. $\mathcal{H}$ denotes the set of functions $H : \mathbb{R}^1_+ \to \mathbb{R}^1_+$ with $\lim_{\kappa \to \infty} H(\kappa) = 0$.

Definition. Let for each $\kappa > 0$ a sequence $(\beta_{n,\kappa}) \in \mathcal{B}$ and a function $H \in \mathcal{H}$ be given. A sequence $(M_n)$ of measurable multifunctions is called an outer Kuratowski-Painlevé approximation to $M_0$ with convergence rate $\beta_{n,\kappa}$ and tail behavior function $H$ if
$\forall \kappa > 0:\ \sup_{n \in \mathbb{N}} P\{\omega : M_0 \setminus U_{\beta_{n,\kappa}} M_n(\omega) \neq \emptyset\} \le H(\kappa).$
The family $\{(U_{\beta_{n,\kappa}} M_n), \kappa > 0\}$ is then a superset-approximation in the above sense. Approximations of that kind were first considered in [42].

Superset-approximations via relaxation are obtained, roughly speaking, by relaxing determining properties. Thus constraints can be relaxed by replacing the restriction "$\le 0$" with "$\le \beta_{n,\kappa}$", where $\beta_{n,\kappa}$ denotes the convergence rate for the constraint function(s). In a similar way the optimality condition "$f_n(x,\omega) \le \Phi_n(\omega)$" for the objective function can be relaxed to "$f_n(x,\omega) \le \Phi_n(\omega) + \beta_{n,\kappa}$", where $\beta_{n,\kappa}$ is the convergence rate for the objective function. Superset-approximations via relaxation will be considered in Section 4.3.
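As a concrete illustration of relaxing the optimality condition, the following sketch uses Hoeffding's inequality with a union bound over a finite grid in the role of the concentration-of-measure result. The integrand, grid, distribution, and the resulting rate $\beta_{n,\kappa} = 2\kappa/\sqrt{n}$ with tail behavior function $H(\kappa) = 2m\,e^{-2\kappa^2}$ are illustrative assumptions, not taken from the survey. On the event $\sup_x |f_n(x) - f_0(x)| \le \kappa/\sqrt{n}$, every true minimizer $x^*$ satisfies $f_n(x^*) \le \Phi_n + 2\kappa/\sqrt{n}$, so the relaxed solution set covers the true solution set:

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative problem on a finite design set; phi(x, z) lies in [0, 1],
# so Hoeffding's inequality applies to each sample average f_n(x)
x_grid = np.linspace(0.0, 1.0, 51)

def phi(x, z):
    return np.clip((x - z) ** 2, 0.0, 1.0)

n, eps0 = 10_000, 0.05               # sample size, prescribed level
m = len(x_grid)

# Hoeffding + union bound over the m grid points:
#   P( sup_x |f_n(x) - f_0(x)| > kappa / sqrt(n) ) <= 2*m*exp(-2*kappa^2),
# i.e. tail behavior function H(kappa) = 2*m*exp(-2*kappa^2)
kappa0 = np.sqrt(np.log(2 * m / eps0) / 2.0)   # chosen so that H(kappa0) <= eps0
beta = 2 * kappa0 / np.sqrt(n)       # relaxation of the optimality condition

z = rng.uniform(0.4, 0.6, size=n)    # Z ~ U(0.4, 0.6); true minimizer x = 0.5
fn = np.array([phi(x, z).mean() for x in x_grid])

# relaxed solution set {x : f_n(x) <= Phi_n + beta} covers the
# true solution set with probability >= 1 - eps0, for this fixed n
conf_set = x_grid[fn <= fn.min() + beta]
assert np.any(np.isclose(conf_set, 0.5))
```

The point of the construction is that the guarantee is non-asymptotic: the coverage holds for the fixed sample size $n$, not only in the limit.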

If a superset-approximation with tail behavior function $H$ has been identified, each sequence of supersets of the original sequence inherits the property. Hence, in order to judge the accuracy of a given superset-approximation, subset-approximations, which are contained in the true set with high probability, are a useful tool.

Definition. Let a function $H \in \mathcal{H}$ be given. A family $\{(M^{sub}_{n,\kappa}), \kappa > 0\}$ of sequences of measurable multifunctions which satisfies the condition
$\forall \kappa > 0:\ \sup_{n \in \mathbb{N}} P\{\omega : M^{sub}_{n,\kappa}(\omega) \setminus M_0 \neq \emptyset\} \le H(\kappa)$
is called a subset-approximation to $M_0$ with tail behavior function $H$.

Outer Kuratowski-Painlevé approximations with convergence rate and tail behavior function can be supplemented with inner Kuratowski-Painlevé approximations with convergence rate and tail behavior function, see [42]. Unfortunately, as mentioned earlier, inner approximations need not be subsets of the true set. Subset-approximations obtained via relaxation will be considered in Section 4.3.

Definition. Let for each $\kappa > 0$ a sequence $(\beta_{n,\kappa}) \in \mathcal{B}$ and a function $H \in \mathcal{H}$ be given. A sequence $(M_n)$ is called an inner Kuratowski-Painlevé approximation to $M_0$ with convergence rate $\beta_{n,\kappa}$ and tail behavior function $H$ if
$\forall \kappa > 0:\ \sup_{n \in \mathbb{N}} P\{\omega : M_n(\omega) \setminus U_{\beta_{n,\kappa}} M_0 \neq \emptyset\} \le H(\kappa).$
Note that a sequence which is an inner and an outer approximation in probability with the same convergence rate and a common tail behavior function $H$ is Kuratowski-Painlevé-convergent in probability with this convergence rate and the tail behavior function $2H$.

Approximations in the Kuratowski-Painlevé sense are considered in Section 4.2, and relaxation is dealt with in Section 4.3. Because level sets are not only of importance as constraint sets, but can also be employed to derive results for the solution set, they will be considered in these sections together with the solution sets.
Sufficient conditions for the convergence properties of the involved functions that apply to both approaches will be considered in Section 4.4. Note that mixtures between the Kuratowski-Painlevé approach and the relaxation approach can also be useful, see for instance [42].

4.2 Confidence Sets via Kuratowski-Painlevé Approximations

For the derivation of confidence sets via Kuratowski-Painlevé approximations, quantified versions of the convergence in probability of random functions are employed. Furthermore, we need some knowledge about the true (or limit) problem, such as a growth function and/or a continuity function.

Firstly we consider approximations of the optimal values. We distinguish approximations from below, so-called lower approximations, and from above, so-called upper approximations; see the conclusions of Theorem 5 and Theorem 6 below. Both approximations with the same convergence rate yield convergence in probability (of a sequence of random variables) with this convergence rate and the sum of the tail behavior functions. As mentioned, the convergence properties of the objective and/or constraint functions, which will be explained in the following, can be regarded as quantified versions of the convergence notions in probability introduced above.
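To indicate how such quantified convergence yields a universal confidence set for the optimal value, here is a sketch under illustrative Hoeffding-type assumptions (bounded integrand, finite grid, i.i.d. sample; all assumptions of this sketch, not taken from the survey). Since $|\Phi_n - \Phi_0| \le \sup_x |f_n(x) - f_0(x)|$, a rate $\kappa/\sqrt{n}$ with tail behavior function $H(\kappa) = 2m\,e^{-2\kappa^2}$ turns the sample optimal value into a conservative confidence interval for $\Phi_0$, valid for the fixed $n$ at hand:

```python
import numpy as np

rng = np.random.default_rng(3)

# illustrative bounded problem: phi(x, z) = min((x - z)^2-like loss, 1),
# here |x - z| clipped to [0, 1]; Z ~ U(-0.25, 0.25), so
# Phi_0 = min_x E|x - Z| = E|Z| = 0.125 (minimum at x = 0)
x_grid = np.linspace(-1.0, 1.0, 201)

def phi(x, z):
    return np.clip(np.abs(x - z), 0.0, 1.0)

n, eps0 = 20_000, 0.05
m = len(x_grid)

# |Phi_n - Phi_0| <= sup_x |f_n(x) - f_0(x)|; Hoeffding + union bound
# gives the rate kappa/sqrt(n) with H(kappa) = 2*m*exp(-2*kappa^2)
kappa0 = np.sqrt(np.log(2 * m / eps0) / 2.0)   # H(kappa0) <= eps0
delta = kappa0 / np.sqrt(n)

z = rng.uniform(-0.25, 0.25, size=n)
Phi_n = min(phi(x, z).mean() for x in x_grid)

# conservative confidence interval for Phi_0, level 1 - eps0, fixed n
interval = (Phi_n - delta, Phi_n + delta)
assert interval[0] <= 0.125 <= interval[1]
```

The interval shrinks at the rate $\kappa_0/\sqrt{n}$, which is the quantified counterpart of the plain convergence-in-probability statements of Section 3.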


More information

PARTIAL SECOND-ORDER SUBDIFFERENTIALS IN VARIATIONAL ANALYSIS AND OPTIMIZATION BORIS S. MORDUKHOVICH 1, NGUYEN MAU NAM 2 and NGUYEN THI YEN NHI 3

PARTIAL SECOND-ORDER SUBDIFFERENTIALS IN VARIATIONAL ANALYSIS AND OPTIMIZATION BORIS S. MORDUKHOVICH 1, NGUYEN MAU NAM 2 and NGUYEN THI YEN NHI 3 PARTIAL SECOND-ORDER SUBDIFFERENTIALS IN VARIATIONAL ANALYSIS AND OPTIMIZATION BORIS S. MORDUKHOVICH 1, NGUYEN MAU NAM 2 and NGUYEN THI YEN NHI 3 Abstract. This paper presents a systematic study of partial

More information

Max-min (σ-)additive representation of monotone measures

Max-min (σ-)additive representation of monotone measures Noname manuscript No. (will be inserted by the editor) Max-min (σ-)additive representation of monotone measures Martin Brüning and Dieter Denneberg FB 3 Universität Bremen, D-28334 Bremen, Germany e-mail:

More information

AW -Convergence and Well-Posedness of Non Convex Functions

AW -Convergence and Well-Posedness of Non Convex Functions Journal of Convex Analysis Volume 10 (2003), No. 2, 351 364 AW -Convergence Well-Posedness of Non Convex Functions Silvia Villa DIMA, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy villa@dima.unige.it

More information

COSMIC CONVERGENCE. R.T. Rockafellar Departments of Mathematics and Applied Mathematics University of Washington, Seattle

COSMIC CONVERGENCE. R.T. Rockafellar Departments of Mathematics and Applied Mathematics University of Washington, Seattle COSMIC CONVERGENCE R.T. Rockafellar Departments of Mathematics and Applied Mathematics University of Washington, Seattle Roger J-B Wets Department of Mathematics and Institute of Theoretical Dynamics University

More information

Tools from Lebesgue integration

Tools from Lebesgue integration Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given

More information

106 CHAPTER 3. TOPOLOGY OF THE REAL LINE. 2. The set of limit points of a set S is denoted L (S)

106 CHAPTER 3. TOPOLOGY OF THE REAL LINE. 2. The set of limit points of a set S is denoted L (S) 106 CHAPTER 3. TOPOLOGY OF THE REAL LINE 3.3 Limit Points 3.3.1 Main Definitions Intuitively speaking, a limit point of a set S in a space X is a point of X which can be approximated by points of S other

More information

On Approximations and Stability in Stochastic Programming

On Approximations and Stability in Stochastic Programming On Approximations and Stability in Stochastic Programming Peter Kall Abstract It has become an accepted approach to attack stochastic programming problems by approximating the given probability distribution

More information

Martin Luther Universität Halle Wittenberg Institut für Mathematik

Martin Luther Universität Halle Wittenberg Institut für Mathematik Martin Luther Universität Halle Wittenberg Institut für Mathematik Lagrange necessary conditions for Pareto minimizers in Asplund spaces and applications T. Q. Bao and Chr. Tammer Report No. 02 (2011)

More information

A General Overview of Parametric Estimation and Inference Techniques.

A General Overview of Parametric Estimation and Inference Techniques. A General Overview of Parametric Estimation and Inference Techniques. Moulinath Banerjee University of Michigan September 11, 2012 The object of statistical inference is to glean information about an underlying

More information

An exponential family of distributions is a parametric statistical model having densities with respect to some positive measure λ of the form.

An exponential family of distributions is a parametric statistical model having densities with respect to some positive measure λ of the form. Stat 8112 Lecture Notes Asymptotics of Exponential Families Charles J. Geyer January 23, 2013 1 Exponential Families An exponential family of distributions is a parametric statistical model having densities

More information

9 Sequences of Functions

9 Sequences of Functions 9 Sequences of Functions 9.1 Pointwise convergence and uniform convergence Let D R d, and let f n : D R be functions (n N). We may think of the functions f 1, f 2, f 3,... as forming a sequence of functions.

More information

A projection-type method for generalized variational inequalities with dual solutions

A projection-type method for generalized variational inequalities with dual solutions Available online at www.isr-publications.com/jnsa J. Nonlinear Sci. Appl., 10 (2017), 4812 4821 Research Article Journal Homepage: www.tjnsa.com - www.isr-publications.com/jnsa A projection-type method

More information

The small ball property in Banach spaces (quantitative results)

The small ball property in Banach spaces (quantitative results) The small ball property in Banach spaces (quantitative results) Ehrhard Behrends Abstract A metric space (M, d) is said to have the small ball property (sbp) if for every ε 0 > 0 there exists a sequence

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

A PROOF OF A CONVEX-VALUED SELECTION THEOREM WITH THE CODOMAIN OF A FRÉCHET SPACE. Myung-Hyun Cho and Jun-Hui Kim. 1. Introduction

A PROOF OF A CONVEX-VALUED SELECTION THEOREM WITH THE CODOMAIN OF A FRÉCHET SPACE. Myung-Hyun Cho and Jun-Hui Kim. 1. Introduction Comm. Korean Math. Soc. 16 (2001), No. 2, pp. 277 285 A PROOF OF A CONVEX-VALUED SELECTION THEOREM WITH THE CODOMAIN OF A FRÉCHET SPACE Myung-Hyun Cho and Jun-Hui Kim Abstract. The purpose of this paper

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

Product metrics and boundedness

Product metrics and boundedness @ Applied General Topology c Universidad Politécnica de Valencia Volume 9, No. 1, 2008 pp. 133-142 Product metrics and boundedness Gerald Beer Abstract. This paper looks at some possible ways of equipping

More information

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06

A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 A SET OF LECTURE NOTES ON CONVEX OPTIMIZATION WITH SOME APPLICATIONS TO PROBABILITY THEORY INCOMPLETE DRAFT. MAY 06 CHRISTIAN LÉONARD Contents Preliminaries 1 1. Convexity without topology 1 2. Convexity

More information

I. The space C(K) Let K be a compact metric space, with metric d K. Let B(K) be the space of real valued bounded functions on K with the sup-norm

I. The space C(K) Let K be a compact metric space, with metric d K. Let B(K) be the space of real valued bounded functions on K with the sup-norm I. The space C(K) Let K be a compact metric space, with metric d K. Let B(K) be the space of real valued bounded functions on K with the sup-norm Proposition : B(K) is complete. f = sup f(x) x K Proof.

More information

Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones

Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones Noname manuscript No. (will be inserted by the editor Necessary Optimality Conditions for ε e Pareto Solutions in Vector Optimization with Empty Interior Ordering Cones Truong Q. Bao Suvendu R. Pattanaik

More information

arxiv: v2 [math.st] 21 Jan 2018

arxiv: v2 [math.st] 21 Jan 2018 Maximum a Posteriori Estimators as a Limit of Bayes Estimators arxiv:1611.05917v2 [math.st] 21 Jan 2018 Robert Bassett Mathematics Univ. California, Davis rbassett@math.ucdavis.edu Julio Deride Mathematics

More information

Strongly convex functions, Moreau envelopes and the generic nature of convex functions with strong minimizers

Strongly convex functions, Moreau envelopes and the generic nature of convex functions with strong minimizers University of Wollongong Research Online Faculty of Engineering and Information Sciences - Papers: Part B Faculty of Engineering and Information Sciences 206 Strongly convex functions, Moreau envelopes

More information

WHY SATURATED PROBABILITY SPACES ARE NECESSARY

WHY SATURATED PROBABILITY SPACES ARE NECESSARY WHY SATURATED PROBABILITY SPACES ARE NECESSARY H. JEROME KEISLER AND YENENG SUN Abstract. An atomless probability space (Ω, A, P ) is said to have the saturation property for a probability measure µ on

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Probability and Measure

Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

7: FOURIER SERIES STEVEN HEILMAN

7: FOURIER SERIES STEVEN HEILMAN 7: FOURIER SERIES STEVE HEILMA Contents 1. Review 1 2. Introduction 1 3. Periodic Functions 2 4. Inner Products on Periodic Functions 3 5. Trigonometric Polynomials 5 6. Periodic Convolutions 7 7. Fourier

More information

CHODOUNSKY, DAVID, M.A. Relative Topological Properties. (2006) Directed by Dr. Jerry Vaughan. 48pp.

CHODOUNSKY, DAVID, M.A. Relative Topological Properties. (2006) Directed by Dr. Jerry Vaughan. 48pp. CHODOUNSKY, DAVID, M.A. Relative Topological Properties. (2006) Directed by Dr. Jerry Vaughan. 48pp. In this thesis we study the concepts of relative topological properties and give some basic facts and

More information

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation

Statistics 612: L p spaces, metrics on spaces of probabilites, and connections to estimation Statistics 62: L p spaces, metrics on spaces of probabilites, and connections to estimation Moulinath Banerjee December 6, 2006 L p spaces and Hilbert spaces We first formally define L p spaces. Consider

More information

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed

More information

Contents. Index... 15

Contents. Index... 15 Contents Filter Bases and Nets................................................................................ 5 Filter Bases and Ultrafilters: A Brief Overview.........................................................

More information

Only Intervals Preserve the Invertibility of Arithmetic Operations

Only Intervals Preserve the Invertibility of Arithmetic Operations Only Intervals Preserve the Invertibility of Arithmetic Operations Olga Kosheleva 1 and Vladik Kreinovich 2 1 Department of Electrical and Computer Engineering 2 Department of Computer Science University

More information

Existence and Uniqueness

Existence and Uniqueness Chapter 3 Existence and Uniqueness An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation

Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation Andreas H. Hamel Abstract Recently defined concepts such as nonlinear separation functionals due to

More information

Banach Spaces V: A Closer Look at the w- and the w -Topologies

Banach Spaces V: A Closer Look at the w- and the w -Topologies BS V c Gabriel Nagy Banach Spaces V: A Closer Look at the w- and the w -Topologies Notes from the Functional Analysis Course (Fall 07 - Spring 08) In this section we discuss two important, but highly non-trivial,

More information

i=1 β i,i.e. = β 1 x β x β 1 1 xβ d

i=1 β i,i.e. = β 1 x β x β 1 1 xβ d 66 2. Every family of seminorms on a vector space containing a norm induces ahausdorff locally convex topology. 3. Given an open subset Ω of R d with the euclidean topology, the space C(Ω) of real valued

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures

Spring 2014 Advanced Probability Overview. Lecture Notes Set 1: Course Overview, σ-fields, and Measures 36-752 Spring 2014 Advanced Probability Overview Lecture Notes Set 1: Course Overview, σ-fields, and Measures Instructor: Jing Lei Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1

More information

Topology. Xiaolong Han. Department of Mathematics, California State University, Northridge, CA 91330, USA address:

Topology. Xiaolong Han. Department of Mathematics, California State University, Northridge, CA 91330, USA  address: Topology Xiaolong Han Department of Mathematics, California State University, Northridge, CA 91330, USA E-mail address: Xiaolong.Han@csun.edu Remark. You are entitled to a reward of 1 point toward a homework

More information

Implications of the Constant Rank Constraint Qualification

Implications of the Constant Rank Constraint Qualification Mathematical Programming manuscript No. (will be inserted by the editor) Implications of the Constant Rank Constraint Qualification Shu Lu Received: date / Accepted: date Abstract This paper investigates

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets

FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES CHRISTOPHER HEIL 1. Compact Sets Definition 1.1 (Compact and Totally Bounded Sets). Let X be a metric space, and let E X be

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

Banach Spaces II: Elementary Banach Space Theory

Banach Spaces II: Elementary Banach Space Theory BS II c Gabriel Nagy Banach Spaces II: Elementary Banach Space Theory Notes from the Functional Analysis Course (Fall 07 - Spring 08) In this section we introduce Banach spaces and examine some of their

More information

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence

Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Introduction to Empirical Processes and Semiparametric Inference Lecture 08: Stochastic Convergence Michael R. Kosorok, Ph.D. Professor and Chair of Biostatistics Professor of Statistics and Operations

More information

2 Statement of the problem and assumptions

2 Statement of the problem and assumptions Mathematical Notes, 25, vol. 78, no. 4, pp. 466 48. Existence Theorem for Optimal Control Problems on an Infinite Time Interval A.V. Dmitruk and N.V. Kuz kina We consider an optimal control problem on

More information

On the Converse Law of Large Numbers

On the Converse Law of Large Numbers On the Converse Law of Large Numbers H. Jerome Keisler Yeneng Sun This version: March 15, 2018 Abstract Given a triangular array of random variables and a growth rate without a full upper asymptotic density,

More information

Fragmentability and σ-fragmentability

Fragmentability and σ-fragmentability F U N D A M E N T A MATHEMATICAE 143 (1993) Fragmentability and σ-fragmentability by J. E. J a y n e (London), I. N a m i o k a (Seattle) and C. A. R o g e r s (London) Abstract. Recent work has studied

More information

ASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES ABSTRACT

ASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES ABSTRACT ASYMPTOTICALLY NONEXPANSIVE MAPPINGS IN MODULAR FUNCTION SPACES T. DOMINGUEZ-BENAVIDES, M.A. KHAMSI AND S. SAMADI ABSTRACT In this paper, we prove that if ρ is a convex, σ-finite modular function satisfying

More information

Sets, Structures, Numbers

Sets, Structures, Numbers Chapter 1 Sets, Structures, Numbers Abstract In this chapter we shall introduce most of the background needed to develop the foundations of mathematical analysis. We start with sets and algebraic structures.

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

Econometrica Supplementary Material

Econometrica Supplementary Material Econometrica Supplementary Material SUPPLEMENT TO USING INSTRUMENTAL VARIABLES FOR INFERENCE ABOUT POLICY RELEVANT TREATMENT PARAMETERS Econometrica, Vol. 86, No. 5, September 2018, 1589 1619 MAGNE MOGSTAD

More information

The main results about probability measures are the following two facts:

The main results about probability measures are the following two facts: Chapter 2 Probability measures The main results about probability measures are the following two facts: Theorem 2.1 (extension). If P is a (continuous) probability measure on a field F 0 then it has a

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Introduction to Topology

Introduction to Topology Introduction to Topology Randall R. Holmes Auburn University Typeset by AMS-TEX Chapter 1. Metric Spaces 1. Definition and Examples. As the course progresses we will need to review some basic notions about

More information

Chapter 2 Metric Spaces

Chapter 2 Metric Spaces Chapter 2 Metric Spaces The purpose of this chapter is to present a summary of some basic properties of metric and topological spaces that play an important role in the main body of the book. 2.1 Metrics

More information

Optimization and Optimal Control in Banach Spaces

Optimization and Optimal Control in Banach Spaces Optimization and Optimal Control in Banach Spaces Bernhard Schmitzer October 19, 2017 1 Convex non-smooth optimization with proximal operators Remark 1.1 (Motivation). Convex optimization: easier to solve,

More information

Approximation Metrics for Discrete and Continuous Systems

Approximation Metrics for Discrete and Continuous Systems University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science May 2007 Approximation Metrics for Discrete Continuous Systems Antoine Girard University

More information