Convex Stochastic Optimization


Ari-Pekka Perkkiö

January 30, 2019

Contents

1 Practicalities and background
2 Introduction
3 Convex sets
  3.1 Separation theorems
4 Convex functions
  4.1 Continuity of convex functions
  4.2 Convex conjugates
  4.3 Recessions, directional derivatives and subgradients
5 Conjugate duality
6 Stochastic optimization
  6.1 Set-valued mappings and normal integrands
7 Stochastic dynamic programming
8 Duality in stochastic optimization
  8.1 Lower semicontinuity of the value function
  8.2 Conjugates of integral functionals
  8.3 Dual representations
9 Optimal investment and indifference pricing
  9.1 Pricing formulas for indifference prices

1 Practicalities and background

Upon passing the exam, attending and solving the exercises give a bonus to the final grade.

We assume that the following concepts are familiar:

1. Vector spaces, topology.
2. Probability space, random variables, expectation, convergence theorems.
3. Conditional expectations, martingales.
4. The fundamentals of discrete time financial mathematics.

2 Introduction

3 Convex sets

Let $U$ be a real vector space. A set $A \subseteq U$ is convex if
\[ \lambda u + (1-\lambda)u' \in A \]
for every $u, u' \in A$ and $\lambda \in (0,1)$. For $\lambda \in \mathbb{R}$ and sets $A$ and $B$, we define the scalar multiplication and the sum of sets as
\[ \lambda A := \{\lambda u \mid u \in A\}, \qquad A + B := \{u + u' \mid u \in A,\ u' \in B\}. \]
The summation is also known as Minkowski addition. With this notation, $A$ is convex if and only if
\[ \lambda A + (1-\lambda)A \subseteq A \quad \forall \lambda \in (0,1). \]

Convex sets are stable under many algebraic operations. Let $X$ be another linear vector space.

Theorem 3.1. Let $J$ be an arbitrary index set, $(A^j)_{j \in J}$ a collection of convex sets and $A \subseteq X \times U$ a convex set. Then,

1. for $\lambda \in \mathbb{R}_+$, each scaled set $\lambda A^j$ is convex,
2. for finite $J$, the sum $\sum_{j \in J} A^j$ is convex,
3. the intersection $\bigcap_{j \in J} A^j$ is convex,
4. the projection $\{u \in U \mid \exists x : (x, u) \in A\}$ is convex.

Proof. Exercise.

Next we turn to topological properties of convex sets. Let $\tau$ be a topology on $U$ (the collection of open sets; their complements are called closed sets) and let $A \subseteq U$. The interior $\operatorname{int} A$ of $A$ is the union of all open sets contained in $A$, and the closure $\operatorname{cl} A$ is the intersection of all closed sets containing $A$. The set $A$ is a neighborhood of $u$ if $u \in \operatorname{int} A$. We denote the collection of neighborhoods of $u$ by $\mathcal{H}_u$ and the collection of open neighborhoods of $u$ by $\mathcal{H}^o_u$. Note that $A$ is a neighborhood of $u$ if and only if $A$ contains an open neighborhood of $u$.

Exercise. For $A \subseteq U$, $u \in \operatorname{cl} A$ if and only if $A \cap O \neq \emptyset$ for all $O \in \mathcal{H}^o_u$.

A function $g$ from $U$ to another topological space $V$ is continuous at a point $u$ if the preimage of every neighborhood of $g(u)$ is a neighborhood of $u$. A function is continuous if it is continuous at every point.

Exercise. A function is continuous if and only if the preimage of every open set is open.

A collection $\mathcal{E}$ of neighborhoods of $u$ is called a neighborhood base if every neighborhood of $u$ contains an element of $\mathcal{E}$. Evidently $\mathcal{H}^o_u$ is a neighborhood base.

Exercise. Given neighborhood bases $\mathcal{E}_u$ of $u$ and $\mathcal{E}_v$ of $v = g(u)$, $g$ is continuous at $u$ if and only if the preimage of every element of $\mathcal{E}_v$ contains an element of $\mathcal{E}_u$.

Given another topological space $(U', \tau')$, the product topology on $U \times U'$ is the smallest topology containing all the sets $\{O \times O' \mid O \in \tau,\ O' \in \tau'\}$. We always equip products of topological spaces with the product topology. Clearly $\{O \times O' \mid O \in \mathcal{H}^o_u,\ O' \in \mathcal{H}^o_{u'}\}$ is a neighborhood base of $(u, u')$.

Exercise. Let $p : U \times U' \to V$ be continuous. For every $u \in U$, $u' \mapsto p(u, u')$ is continuous.

The space $(U, \tau)$ is a topological vector space (TVS) if $(u, u') \mapsto u + u'$ is continuous from $U \times U$ to $U$ and $(u, \alpha) \mapsto \alpha u$ is continuous from $U \times \mathbb{R}$ to $U$.

Exercise. In a topological vector space $U$,

1. $\alpha O \in \mathcal{H}^o_0$ for all $\alpha \neq 0$ and $O \in \mathcal{H}^o_0$,
2. for all $u \in U$ and $O \subseteq U$, $(O + u) \in \mathcal{H}^o_u$ if and only if $O \in \mathcal{H}^o_0$,
3. the sum of a nonempty open set with any set is open,
4. for every $O \in \mathcal{H}^o_0$, there exists $O' \in \mathcal{H}^o_0$ such that $2O' \subseteq O$,
5. $\alpha A \in \mathcal{H}_0$ for all $\alpha \neq 0$ and $A \in \mathcal{H}_0$,

6. for all $u \in U$ and $A \subseteq U$, $(A + u) \in \mathcal{H}_u$ if and only if $A \in \mathcal{H}_0$,
7. for every $A \in \mathcal{H}_0$, there exists $A' \in \mathcal{H}_0$ such that $2A' \subseteq A$.

A set $C$ is symmetric if $x \in C$ implies $-x \in C$.

Lemma 3.2. In a topological vector space, every (resp. convex) neighborhood of the origin contains a symmetric (resp. symmetric convex) neighborhood of the origin.

Proof. Let $A \in \mathcal{H}_0$. By continuity of $p(\alpha, u) := \alpha u$ from $\mathbb{R} \times U$ to $U$, there are $\bar\alpha > 0$ and $O \in \mathcal{H}^o_0$ such that $\alpha O \subseteq A$ for all $|\alpha| \le \bar\alpha$. The set $B := \bigcup_{0 < |\alpha| \le \bar\alpha} (\alpha O)$ is the sought neighborhood. Assume additionally that $A$ is convex. The set $A \cap (-A)$ is convex and, since $B$ is symmetric, $B \subseteq A \cap (-A)$. Hence $A \cap (-A)$ is a symmetric convex set containing a neighborhood of the origin.

Lemma 3.3. Let $C$ be a convex set in a TVS. Then $\operatorname{int} C$ and $\operatorname{cl} C$ are convex.

Proof. Let $\lambda \in (0,1)$. We have $\operatorname{int} C \subseteq C$, so $\lambda(\operatorname{int} C) + (1-\lambda)\operatorname{int} C \subseteq C$. Since sums and strictly positive scalings of open sets are open, we see that $\lambda(\operatorname{int} C) + (1-\lambda)\operatorname{int} C \subseteq \operatorname{int} C$, since $\operatorname{int} C$ is the largest open set contained in $C$. Since $\lambda \in (0,1)$ was arbitrary, this means that $\operatorname{int} C$ is convex.

To prove that $\operatorname{cl} C$ is convex we use the results of the preceding exercises. Let $u, u' \in \operatorname{cl} C$, $\lambda \in (0,1)$ and $\tilde O \in \mathcal{H}^o_0$. It suffices to show that $(\lambda u + (1-\lambda)u' + \tilde O) \cap C \neq \emptyset$. There are $O, O' \in \mathcal{H}^o_0$ with $\lambda O + (1-\lambda)O' \subseteq \tilde O$, and there are $\tilde u \in C \cap (u + O)$ and $\tilde u' \in C \cap (u' + O')$. Thus
\[ \lambda \tilde u + (1-\lambda)\tilde u' \in \lambda(u + O) + (1-\lambda)(u' + O') \subseteq \lambda u + (1-\lambda)u' + \tilde O, \]
where the left side belongs to $C$.

3.1 Separation theorems

Sets of the form $\{u \in U \mid l(u) = \alpha\}$ are called hyperplanes, where $l$ is a real-valued linear function and $\alpha \in \mathbb{R}$. Each hyperplane generates two half-spaces (the opposite sides of the hyperplane)
\[ \{u \in U \mid l(u) \le \alpha\}, \qquad \{u \in U \mid l(u) \ge \alpha\}. \]
A hyperplane separates sets $C_1$ and $C_2$ if they belong to the opposite sides of the hyperplane. The separation is proper unless both sets are contained in the hyperplane. In other words, proper separation means that
\[ \sup\{l(u_1 - u_2) \mid u_i \in C_i\} \le 0 \quad\text{and}\quad \inf\{l(u_1 - u_2) \mid u_i \in C_i\} < 0. \]

A set $C \subseteq U$ is called algebraically open if $\{\alpha \in \mathbb{R} \mid u + \alpha u' \in C\}$ is open for any $u, u' \in U$. The set $C$ is algebraically closed if its complement is algebraically open, or equivalently, if the set $\{\alpha \in \mathbb{R} \mid u + \alpha u' \in C\}$ is closed for any $u, u' \in U$.

Exercise. In a topological vector space, open (resp. closed) sets are algebraically open (resp. closed), and the sum of a nonempty algebraically open set with any set is algebraically open.

The following separation theorem states that the origin and an algebraically open convex set not containing the origin can be properly separated.

Theorem 3.4. Assume that $C$ in a linear vector space $U$ is an algebraically open convex set with $0 \notin C$. Then there exists a linear $l : U \to \mathbb{R}$ such that
\[ \sup\{l(u) \mid u \in C\} \le 0, \qquad \inf\{l(u) \mid u \in C\} < 0. \]
In particular, $l(u) < 0$ for all $u \in C$.

Proof. This is an application of Zorn's lemma. Omitted.

The above separation theorem implies a series of other separation theorems for convex sets. In the locally convex setting below, we get separation theorems in terms of continuous linear functionals, or equivalently, in terms of closed hyperplanes, as the next theorem shows.

A real-valued function $g$ is bounded from above on $B \subseteq U$ if there is $M \in \mathbb{R}$ such that $g(u) < M$ for all $u \in B$. If $g$ is continuous at $u \in \operatorname{dom} g$, then it is bounded from above on a neighborhood of $u$. Indeed, choose the neighborhood $g^{-1}((-\infty, M))$ for some $M > g(u)$.

Theorem 3.5. Assume that $l$ is a real-valued linear function on a topological vector space. Then the following are equivalent:

1. $l$ is bounded from above on a neighborhood of the origin,
2. $l$ is continuous,
3. $\{u \in U \mid l(u) = \alpha\}$ is closed for all $\alpha \in \mathbb{R}$,
4. $\{u \in U \mid l(u) = 0\}$ is closed.

Proof. Exercise.

A topological vector space is locally convex (LCTVS) if every neighborhood of the origin contains a convex neighborhood of the origin.

Theorem 3.6. Assume that $C$ is a closed convex set in a LCTVS and $u' \notin C$. Then there is a continuous linear functional separating $u'$ and $C$ properly.

Proof. The origin belongs to the open set $(C - u')^c$, so there is a convex $O \in \mathcal{H}^o_0$ such that $0 \notin C - u' + O$. By Theorem 3.4, there is a linear $l$ such that $l(u) < 0$ for all $u \in C - u' + O$. This means that $l(u) < l(u')$ for all $u \in C + O$, so $l$ is continuous by Theorem 3.5, and it separates $u'$ and $C$ properly.

The following corollary is very important in the sequel. For instance, it will give the biconjugate theorem that is the basis of duality theory in convex optimization.

Corollary 3.7. The closure of a convex set in a LCTVS is the intersection of all closed half-spaces containing the set.

Proof. By Lemma 3.3, $\operatorname{cl} C$ is convex for convex $C$. For any $u' \notin \operatorname{cl} C$, there is, by the above theorem, a closed half-space $H_{u'}$ such that $\operatorname{cl} C \subseteq H_{u'}$ and $u' \notin H_{u'}$. We get
\[ \operatorname{cl} C = \bigcap_{u' \notin \operatorname{cl} C} H_{u'}. \]

4 Convex functions

Throughout the course, $\overline{\mathbb{R}} = \mathbb{R} \cup \{\pm\infty\}$ is the extended real line. For $a, b \in \overline{\mathbb{R}}$, the ordinary summation is extended as $a + b = +\infty$ if $a = +\infty$, and as $a + b = -\infty$ if $a \neq +\infty$ and $b = -\infty$.

Let $g : U \to \overline{\mathbb{R}}$. The function $g$ is convex if
\[ g(\lambda u + (1-\lambda)u') \le \lambda g(u) + (1-\lambda)g(u') \]
for all $u, u' \in U$ and $\lambda \in [0,1]$. A function is convex if and only if its epigraph
\[ \operatorname{epi} g := \{(u, \alpha) \in U \times \mathbb{R} \mid g(u) \le \alpha\} \]
is a convex set. Applying the last part of Theorem 3.1 to $\operatorname{epi} g$, we see that the domain
\[ \operatorname{dom} g := \{u \in U \mid g(u) < \infty\} \]
is convex when $g$ is convex.

Many algebraic operations also preserve convexity of functions.

Theorem 4.1. Let $J$ be an arbitrary index set, $(g^j)_{j \in J}$ a collection of convex functions, and $p : X \times U \to \overline{\mathbb{R}}$ a convex function. Then,

1. for finite $J$ and strictly positive $(\lambda^j)_{j \in J}$, the sum $\sum_{j \in J} \lambda^j g^j$ is convex,

2. for finite $J$, the infimal convolution
\[ u \mapsto \inf\Big\{ \sum_{j \in J} g^j(u^j) \;\Big|\; \sum_{j \in J} u^j = u \Big\} \]
is convex,
3. the supremum $u \mapsto \sup_{j \in J} g^j(u)$ is convex,
4. the marginal function $u \mapsto \inf_x p(x, u)$ is convex.

Proof. Exercise.

The function $g$ is called positively homogeneous if
\[ g(\alpha u) = \alpha g(u) \quad \forall u \in \operatorname{dom} g,\ \alpha > 0, \]
and sublinear if
\[ g(\alpha_1 u_1 + \alpha_2 u_2) \le \alpha_1 g(u_1) + \alpha_2 g(u_2) \quad \forall u_i \in \operatorname{dom} g,\ \alpha_i > 0. \]
The second part of the following exercise shows that norms are convex.

Exercise. Let $g$ be an extended real-valued function on $U$.

1. If $g$ is positively homogeneous and convex, then it is sublinear.
2. If $g$ is positively homogeneous, then it is convex if and only if
\[ g(u_1 + u_2) \le g(u_1) + g(u_2) \quad \forall u_i \in \operatorname{dom} g. \]
3. If $g$ is convex, then
\[ G(\lambda, u) := \begin{cases} \lambda g(u/\lambda) & \text{if } \lambda > 0, \\ +\infty & \text{otherwise} \end{cases} \]
is positively homogeneous and convex on $\mathbb{R} \times U$. In particular,
\[ p(u) := \inf_{\lambda > 0} G(\lambda, u) \]
is positively homogeneous and convex on $U$.

The third part above is sometimes a surprising source of convexity; a numerical illustration is sketched below. It also implies properties of recession functions and directional derivatives introduced later on.
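The convexity claim in part 3 can be probed numerically. The following sketch is an illustration only: the test function $g(u) = u^2$ and the sampling ranges are assumptions, and random sampling of course proves nothing.

```python
import numpy as np

# Numerical sanity check (not a proof): the perspective function
# G(lam, u) = lam * g(u / lam) of a convex g is convex on (0, inf) x R.
# The test function g(u) = u**2 is an assumption for illustration.
g = lambda u: u ** 2
G = lambda lam, u: lam * g(u / lam)  # only evaluated for lam > 0

rng = np.random.default_rng(0)
for _ in range(10_000):
    (l1, l2), (u1, u2) = rng.uniform(0.1, 5.0, 2), rng.uniform(-5.0, 5.0, 2)
    t = rng.uniform()
    lhs = G(t * l1 + (1 - t) * l2, t * u1 + (1 - t) * u2)
    rhs = t * G(l1, u1) + (1 - t) * G(l2, u2)
    assert lhs <= rhs + 1e-9, "convexity inequality violated"
print("convexity inequality held on all sampled points")
```

For $g(u) = u^2$ the perspective is $G(\lambda, u) = u^2/\lambda$, whose joint convexity on $(0,\infty) \times \mathbb{R}$ is a classical fact that the sampled inequality reflects.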

4.1 Continuity of convex functions

Assume now that $g$ is an extended real-valued function on $U$. The function $g$ is said to be proper if it is not identically $+\infty$ and if it never takes the value $-\infty$. The function $g$ is lower semicontinuous (lsc) if the level set
\[ \operatorname{lev}_{\le\alpha} g := \{u \in U \mid g(u) \le \alpha\} \]
is closed for each $\alpha \in \mathbb{R}$. Equivalently, $g$ is lsc if its epigraph is closed, or if, for every $u \in U$,
\[ \sup_{A \in \mathcal{H}_u} \inf_{u' \in A} g(u') \ge g(u). \]
When $U$ is sequential (e.g., a Banach space), the last condition is equivalent to the more familiar
\[ \liminf_{u^\nu \to u} g(u^\nu) \ge g(u). \]

Theorem 4.2. Let $J$ be an arbitrary index set, $g$ an lsc function and $(g^j)_{j \in J}$ a collection of lsc functions on a topological vector space $U$. Then,

1. for a continuous $F : V \to U$, $g \circ F$ is lsc,
2. for finite $J$ and strictly positive $(\lambda^j)_{j \in J}$, the sum $\sum_{j \in J} \lambda^j g^j$ is lsc,
3. the supremum $u \mapsto \sup_{j \in J} g^j(u)$ is lsc.

Proof. Exercise.

Given a set $A \subseteq U$,
\[ \operatorname{core} A := \{u \in U \mid \forall u' \in U\ \exists \bar\lambda > 0 : u + \lambda u' \in A\ \forall \lambda \in (0, \bar\lambda)\} \]
is known as the core (or algebraic interior) of $A$ and its elements are called internal points, not to be confused with interior points. We always have $\operatorname{int} A \subseteq \operatorname{core} A$.

Theorem 4.3. If a convex function is bounded from above on an open set, then it is continuous throughout the core of its domain.

Proof. Let $g$ be a function that is bounded from above on a neighborhood $O$ of $\bar u$. We show first that this implies that $g$ is continuous at $\bar u$. Replacing $g$ by $g(\cdot + \bar u) - g(\bar u)$, we may assume that $\bar u = 0$ and that $g(0) = 0$. Hence there exists $M > 0$ such that $g(u) \le M$ for all $u \in O$. Let $\epsilon > 0$ be arbitrary and choose $\lambda \in (0,1)$ with $\lambda M < \epsilon$. By convexity, $g(\lambda u) \le \lambda g(u) < \epsilon$ for each $u \in O$. Moreover,
\[ 0 = g\big(\tfrac12(-\lambda u) + \tfrac12(\lambda u)\big) \le \tfrac12 g(-\lambda u) + \tfrac12 g(\lambda u), \]

which implies that $g(-\lambda u) \ge -g(\lambda u)$ for all $u \in O \cap (-O)$. Thus $|g(u)| < \epsilon$ for all $u \in \lambda(O \cap (-O))$, so $g$ is continuous at the origin.

Assume now that $g$ is bounded from above on an open set $A$, i.e., there is $M$ such that $g(u) \le M$ for each $u \in A$. By the above, it suffices to show that $g$ is bounded from above on a neighborhood of each $u \in \operatorname{core}\operatorname{dom} g$. Fix such a $u$ and let $u' \in A$. Since $u \in \operatorname{core}\operatorname{dom} g$, there are $\bar u \in \operatorname{dom} g$ and $\lambda \in (0,1)$ with $u = \lambda\bar u + (1-\lambda)u'$. We have
\[ g(\lambda\bar u + (1-\lambda)\tilde u) \le \lambda g(\bar u) + (1-\lambda)M \quad \forall \tilde u \in A, \]
so $g$ is bounded from above on the open neighborhood $\lambda\bar u + (1-\lambda)A$ of $u$.

In $\mathbb{R}^d$, geometric intuition suggests that a convex function is continuous on the core of its domain. This idea extends to lsc convex functions on a barreled space. The LCTVS $U$ is barreled if every closed convex symmetric absorbing set is a neighborhood of the origin.¹ A set $C$ is called absorbing if $\bigcup_{\alpha \in \mathbb{R}_+}(\alpha C) = U$; a set is absorbing if and only if the origin belongs to its core. For example, every Banach space is barreled.²

¹ It is an exercise to show that in a LCTVS, every neighborhood of the origin contains a closed convex symmetric absorbing set.
² An application of the Baire category theorem: if $U = \bigcup_{n \in \mathbb{N}}(nC)$ for a closed $C$, then $\operatorname{int} C \neq \emptyset$.

Lemma 4.4. Let $A \subseteq U$ be a convex set. Then $\operatorname{int} A = \operatorname{core} A$ under any of the following conditions:

1. $U$ is finite dimensional,
2. $\operatorname{int} A \neq \emptyset$,
3. $U$ is barreled and $A$ is closed.

Proof. We leave the first case as an exercise. To prove the second, it suffices to show that, for $u \in \operatorname{core} A$, we have $u \in \operatorname{int} A$. Let $u' \in \operatorname{int} A$. Since $u \in \operatorname{core} A$, there are $\lambda \in (0,1)$ and $\bar u \in A$ with $u = \lambda\bar u + (1-\lambda)u'$. Now
\[ \lambda\bar u + (1-\lambda)\operatorname{int} A \subseteq A, \]
where the left side is an open neighborhood of $u$.

To prove the last claim, let $U$ be barreled and $A$ closed. Again, it suffices to show that, for $u \in \operatorname{core} A$, we have $u \in \operatorname{int} A$. Let $B := A - u$. Now $0 \in \operatorname{core} B$, so $0 \in \operatorname{core}(B \cap (-B))$. Thus $B \cap (-B)$ is a closed convex symmetric absorbing set and hence a neighborhood of the origin. Thus $0 \in \operatorname{int} B$ and $u \in \operatorname{int} A$.

Theorem 4.5. A convex function $g$ is continuous on $\operatorname{core}\operatorname{dom} g$ in the following situations:

1. $U$ is finite dimensional,

2. $U$ is barreled and $g$ is lower semicontinuous.

Proof. We leave the first part as an exercise. To prove the second, let $u \in \operatorname{core}\operatorname{dom} g$ and $\alpha > g(u)$. For every $u' \in U$, the function $\lambda \mapsto g(u + \lambda u')$ is continuous at the origin by the first part, so $u \in \operatorname{core}\operatorname{lev}_{\le\alpha} g$. By Lemma 4.4 and lower semicontinuity of $g$, $u \in \operatorname{int}\operatorname{lev}_{\le\alpha} g$. Thus $g$ is bounded from above on a neighborhood of $u$, and continuity follows from Theorem 4.3.

4.2 Convex conjugates

From now on, we assume that $U$ and $Y$ are vector spaces that are in separating duality under the bilinear form $\langle u, y \rangle$. That the bilinear form is separating means that for every $u \neq u'$ there is $y \in Y$ with $\langle u - u', y \rangle \neq 0$.

On $U$, the weak topology $\sigma(U, Y)$ is the weakest locally convex topology under which each $u \mapsto \langle u, y \rangle$ is continuous. That is, $\sigma(U, Y)$ is generated by the sets of the form
\[ \{u \in U \mid |\langle u, y \rangle| < \alpha\}, \]
where $\alpha > 0$ and $y \in Y$. Under $\sigma(U, Y)$, $U$ is a locally convex topological vector space. The Mackey topology $\tau(U, Y)$ is the strongest locally convex topology on $U$ under which each continuous linear functional can be identified with an element of $Y$. The Mackey topology is generated by the sets of the form
\[ \{u \in U \mid \sup_{y \in K} \langle u, y \rangle < 1\}, \tag{4.1} \]
where $K$ is convex, symmetric and $\sigma(Y, U)$-compact.

Turning the idea around, when $U$ is a locally convex topological vector space, a natural choice for $Y$ is the dual space of continuous linear functionals on $U$. By Theorem 3.6, the bilinear form is separating. For Banach spaces in particular, $\sigma(U, Y)$ is called the weak topology and $\sigma(Y, U)$ the weak*-topology, and the Mackey topology $\tau(U, Y)$ coincides with the norm topology. We call both of these topologies simply weak topologies when the spaces in question are clear. When $U = \mathbb{R}^d$, we always choose $Y = \mathbb{R}^d$ and the bilinear form as the usual inner product.

Example 4.6. Recall that, for $p \in [1, \infty)$, the Lebesgue space $L^p := L^p(\Omega, \mathcal{F}, P)$ is a Banach space under the norm $\|u\|_p := (E|u|^p)^{1/p}$. For $p > 1$, its dual space can be identified with $L^{p'}$ for $p'$ satisfying $1/p + 1/p' = 1$. For $p = 1$, we set $p' = \infty$, and the dual space of $L^1$ is the space $L^\infty := L^\infty(\Omega, \mathcal{F}, P)$ of essentially bounded random variables.

For all $p \in [1, \infty)$, the bilinear form between $L^p$ and $L^{p'}$ is given by $\langle u, y \rangle := E[uy]$.

Given an extended real-valued function $g$ on $U$, its conjugate $g^* : Y \to \overline{\mathbb{R}}$ is
\[ g^*(y) := \sup_{u \in U}\{\langle u, y \rangle - g(u)\}. \]
The function $g^*$ is also known as the Legendre-Fenchel transform, polar function, or convex conjugate of $g$. Since $g^*$ is a supremum of lower semicontinuous functions, $g^*$ is a lower semicontinuous function on $Y$. The Fenchel inequality
\[ g(u) + g^*(y) \ge \langle u, y \rangle \]
follows directly from the definition of the convex conjugate. In the exercises, we will familiarize ourselves with this transformation by calculating conjugates of convex functions defined on $\mathbb{R}^d$; a numerical sketch is given below as well.
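The conjugate on the real line is easy to approximate on a grid. The following sketch is an illustration only; the grid bounds and the self-conjugate test function $g(u) = u^2/2$ are assumptions.

```python
import numpy as np

def conjugate_on_grid(g, u_grid, y_grid):
    """Discrete Legendre-Fenchel transform:
    g*(y) is approximated by the max over the grid of u*y - g(u)."""
    U = u_grid[None, :]   # shape (1, n_u)
    Y = y_grid[:, None]   # shape (n_y, 1)
    return np.max(U * Y - g(U), axis=1)

# Example: g(u) = u**2 / 2 is its own conjugate, so g* should equal g.
u = np.linspace(-10, 10, 2001)
y = np.linspace(-3, 3, 7)
print(conjugate_on_grid(lambda t: t ** 2 / 2, u, y))
print(y ** 2 / 2)  # closed form for comparison
```

The approximation is exact up to the grid resolution as long as the maximizing $u$ for each slope $y$ lies inside the grid.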

The biconjugate of $g$ is the function
\[ g^{**}(u) := \sup_{y \in Y}\{\langle u, y \rangle - g^*(y)\}. \]
By the Fenchel inequality, we always have $g \ge g^{**}$. The following biconjugate theorem is the fundamental theorem on convex conjugates. The lower semicontinuous hull $\operatorname{lsc} g$ is the function defined via
\[ \operatorname{epi}(\operatorname{lsc} g) := \operatorname{cl}\operatorname{epi} g. \]

Theorem 4.7 (Biconjugate theorem). Assume that $g$ is a convex extended real-valued function on $U$ such that $(\operatorname{lsc} g)(u) > -\infty$ for all $u$. Then $\operatorname{lsc} g = g^{**}$, i.e.,
\[ (\operatorname{lsc} g)(u) = \sup_{y \in Y}\{\langle u, y \rangle - g^*(y)\}. \]
In particular, if $g$ is a lsc proper convex function, then $g = g^{**}$, i.e., $g$ has the dual representation
\[ g(u) = \sup_{y \in Y}\{\langle u, y \rangle - g^*(y)\}. \]

Proof. Recall that the closure of a convex set in a locally convex space is the intersection of all closed half-spaces containing the set. Applying this to the epigraph of $\operatorname{lsc} g$, we get that $\operatorname{lsc} g$ is the supremum of all affine continuous functions less than $g$, i.e.,
\[ (\operatorname{lsc} g)(u) = \sup_{y \in Y,\ \alpha \in \mathbb{R}}\{\langle u, y \rangle - \alpha \mid \langle u', y \rangle - \alpha \le g(u')\ \forall u' \in U\}. \]
Let $y \in Y$ be fixed. Then $u \mapsto \langle u, y \rangle - \alpha$ is smaller than $g$ if and only if $\langle u, y \rangle - \alpha \le g(u)$ for all $u$, i.e.,
\[ \alpha \ge \sup_u\{\langle u, y \rangle - g(u)\} = g^*(y). \]
We get that, if $g^*(y) < \infty$, then the largest affine function less than $g$ with slope $y$ is $u \mapsto \langle u, y \rangle - g^*(y)$, whereas if $g^*(y) = \infty$, then there are no such affine functions. Since this is valid for any $y$, we get that
\[ u \mapsto \sup_{y \in Y}\{\langle u, y \rangle - g^*(y)\} \]
is the supremum of all affine functions less than $g$. Combining this with the first paragraph of the proof, we get the result.

Exercise. A proper convex function is $\sigma(U, Y)$-lsc if and only if it is $\tau(U, Y)$-lsc.

Given a set $C \subseteq U$, the function
\[ \sigma_C(y) := \sup_{u \in C}\langle u, y \rangle \]
is known as the support function of $C$, the function
\[ j_C(u) := \inf_{\lambda > 0}\{\lambda \mid u \in \lambda C\} \]
as the gauge of $C$, and the set
\[ C^\circ := \{y \in Y \mid \sigma_C(y) \le 1\} \]
as the polar of $C$. Note that $\sigma_C$ is the conjugate of the indicator function
\[ \delta_C(u) := \begin{cases} 0 & \text{if } u \in C, \\ +\infty & \text{otherwise}. \end{cases} \]
In the following theorem, $\operatorname{cl} C = C^{\circ\circ}$ is known as the bipolar theorem.

Theorem 4.8. If a convex set $C$ contains the origin, then $\sigma_C = j_{C^\circ}$, $j_C^* = \delta_{C^\circ}$ and $\operatorname{cl} C = C^{\circ\circ}$.

Proof. Exercise.
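As a concrete finite-dimensional illustration (the polytope below is an assumed example, not part of the notes): for $C = \operatorname{conv}\{v^1, \dots, v^k\}$, the supremum defining $\sigma_C$ is attained at a vertex, so the support function is a finite maximum. For the square $[-1,1]^2$, $\sigma_C$ is the $\ell^1$-norm and $C^\circ$ is the $\ell^1$-ball, in line with the bipolar theorem.

```python
import numpy as np

# For a polytope C = conv{v_1, ..., v_k}, the supremum defining the
# support function is attained at a vertex, so sigma_C(y) = max_i <v_i, y>.
# The square [-1, 1]^2 is an assumed example; its support function is the
# l1-norm, so its polar C° = {y : sigma_C(y) <= 1} is the l1-ball.
vertices = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])

def sigma_C(y):
    return np.max(vertices @ y)

for y in ([1.0, 0.0], [0.5, -0.5], [2.0, 3.0]):
    y = np.array(y)
    print(y, sigma_C(y), np.abs(y).sum())  # sigma_C(y) equals ||y||_1
```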

Theorem 4.9. The set $C \subseteq U$ is a Mackey neighborhood of the origin if and only if $\{y \in Y \mid \sigma_C(y) \le \alpha\}$ is weakly compact for some $\alpha > 0$. In this case, $\{y \in Y \mid \sigma_C(y) \le \alpha\}$ is weakly compact for all $\alpha > 0$. In particular, $C$ is a Mackey neighborhood of the origin if and only if $C^\circ$ is weakly compact.

Proof. Let $C$ be a Mackey neighborhood of the origin. By (4.1), $K^\circ \subseteq C$ for some convex symmetric weakly compact $K$, for which
\[ \{y \in Y \mid \sigma_C(y) \le \alpha\} \subseteq \{y \in Y \mid \sigma_{K^\circ}(y) \le \alpha\} = \alpha K^{\circ\circ} = \alpha K. \]
Thus the closed set on the left side is contained in a weakly compact set and is hence weakly compact, for every $\alpha > 0$.

To prove the converse, fix $\alpha > 0$ with $\alpha C^\circ = \{y \in Y \mid \sigma_C(y) \le \alpha\}$ weakly compact. The convex symmetric set $K := \operatorname{co}(\alpha C^\circ \cup (-\alpha C^\circ))$ is weakly compact as well (an exercise). By (4.1), $K^\circ$ is a Mackey neighborhood of the origin. Since $C^\circ \subseteq K/\alpha$, we have $\alpha K^\circ \subseteq C^{\circ\circ} = \operatorname{cl} C$, so $C$ is a neighborhood of the origin.

Theorem 4.10. Let $g$ be a proper convex lower semicontinuous function on $U$. The following are equivalent:

1. $g$ is bounded from above on a $\tau(U, Y)$-neighborhood of $\bar u$,
2. for every $\alpha \in \mathbb{R}$, the set $\{y \in Y \mid g^*(y) - \langle \bar u, y \rangle \le \alpha\}$ is $\sigma(Y, U)$-compact.

Here 1 implies 2 even when $g$ is not lsc.

Proof. By translations, we may assume that $\bar u = 0$ and $g(0) = 0$. To prove that 1 implies 2, note that we have (see the exercises)
\[ g(u) \le \gamma + \delta_O(u)\ \forall u \quad \Longrightarrow \quad g^*(y) \ge \sigma_O(y) - \gamma\ \forall y, \]
so, for any $\alpha \in \mathbb{R}$,
\[ \{y \in Y \mid g^*(y) \le \alpha\} \subseteq \{y \in Y \mid \sigma_O(y) \le \alpha + \gamma\}, \]
where the set on the right side is weakly compact when $O$ is a Mackey neighborhood of the origin.

To prove that 2 implies 1, we may again translate so that $g^*(0) = \inf_y g^*(y) = 0$; the details are left as an exercise. Let $\gamma > 0$ and denote
\[ K := \{y \in Y \mid g^*(y) \le \gamma\}. \]
If $y \notin K$, we have
\[ j_K(y) = \inf_{\lambda > 0}\{\lambda \mid y \in \lambda K\} = \inf_{\lambda > 1}\{\lambda \mid y \in \lambda K\} = \inf_{\lambda > 1}\{\lambda \mid g^*(y/\lambda) \le \gamma\} \le \inf_{\lambda > 1}\{\lambda \mid g^*(y)/\lambda \le \gamma\} = g^*(y)/\gamma. \]
If $y \in K$, we have $j_K(y) \le 1$, so putting these together we get that $g^* \ge \gamma j_K - \gamma$. Conjugating, we get
\[ g(u) \le \delta_{K^\circ}(u/\gamma) + \gamma. \]
Therefore, $g$ is bounded from above on a neighborhood of the origin, since $K^\circ$ is the polar of a weakly compact set and hence a Mackey neighborhood by Theorem 4.9.

4.2.1 Exercises

Exercise. Let $f$ be a convex lower semicontinuous function on the real line. Convince yourself that, given a slope $v$, $f^*(v)$ is the smallest constant $\alpha$ such that the affine function $x \mapsto vx - \alpha$ is majorized by $f$. What does this mean geometrically?

Exercise. Calculate the conjugates of the following functions on the real line:

1. $f(x) = |x|$,
2. $f(x) = \delta_B(x)$, where $B = \{x \mid |x| \le 1\}$,
3. $f(x) = \frac{1}{p}|x|^p$, for $p > 1$,
4. $V(x) = (e^{ax} - 1)/a$.

Exercise. Let $V$ be a nondecreasing convex function on the real line. Analyze $V^*$ using the geometric idea from the first exercise.

1. Is $V^*$ positive?
2. Is $V^*$ zero somewhere?
3. Is $V^*$ monotone?
4. Where is $V^*$ finite?
5. Is $V^*$ necessarily finite at the origin?

Hint: For the last three questions, the answer depends on your choice of $V$.
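Item 3 of the second exercise can be checked numerically against the expected closed form $f^*(y) = \frac{1}{q}|y|^q$ with $1/p + 1/q = 1$. The grid below is an assumption of this sketch; the check is only reliable while the maximizer stays inside the grid.

```python
import numpy as np

# Numerical check (a sketch, not a solution to the exercise): for
# f(x) = |x|**p / p with p > 1 the conjugate should be f*(y) = |y|**q / q,
# where 1/p + 1/q = 1. Grid bounds are assumptions.
p = 3.0
q = p / (p - 1)
x = np.linspace(-20, 20, 400_001)

def f_star(y):
    return np.max(x * y - np.abs(x) ** p / p)

for y in (-2.0, -0.5, 0.0, 1.0, 3.0):
    print(y, f_star(y), np.abs(y) ** q / q)
```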

4.3 Recessions, directional derivatives and subgradients

Let $g$ be a convex function. Given $\bar u \in \operatorname{dom} g$, the function
\[ G(\lambda, u) := \begin{cases} \lambda\big(g(\bar u + u/\lambda) - g(\bar u)\big) & \text{if } \lambda > 0, \\ +\infty & \text{otherwise} \end{cases} \]
is positively homogeneous and convex on $\mathbb{R} \times U$ by the exercise in Section 4. The function $\lambda \mapsto G(\lambda, u)$ is decreasing on $\mathbb{R}_+$; equivalently, the difference quotient
\[ \lambda \mapsto \frac{g(\bar u + \lambda u) - g(\bar u)}{\lambda} \]
is increasing on $\mathbb{R}_+$. The function
\[ u \mapsto g'(\bar u; u) := \lim_{\lambda \searrow 0} \frac{g(\bar u + \lambda u) - g(\bar u)}{\lambda} \]
gives the directional derivative of $g$ at $\bar u$. We have
\[ g'(\bar u; u) = \inf_{\lambda > 0} \frac{g(\bar u + \lambda u) - g(\bar u)}{\lambda}, \]
so $g'(\bar u; \cdot)$ is positively homogeneous and convex by the exercise in Section 4. The function
\[ g^\infty(u) := \lim_{\lambda \to \infty} \frac{g(\bar u + \lambda u) - g(\bar u)}{\lambda} \]
is called the recession function of $g$. Note that $g^\infty$ is independent of the choice of $\bar u$, and
\[ g^\infty(u) = \sup_{\lambda > 0} \frac{g(\bar u + \lambda u) - g(\bar u)}{\lambda}, \]
so $g^\infty$ is positively homogeneous and convex. When $g$ is lsc, $g^\infty$ is lsc as well.

A vector $y \in Y$ is a subgradient of $g$ at $u \in \operatorname{dom} g$ if
\[ g(u') \ge g(u) + \langle u' - u, y \rangle \quad \forall u' \in U. \]
The subdifferential $\partial g(u)$ is the set of all subgradients of $g$ at $u$. Note that we avoid defining subgradients outside the domain.

Exercise. We have $y \in \partial g(u)$ if and only if $g(u) + g^*(y) = \langle u, y \rangle$.

We say that $g$ is subdifferentiable at $u$ if $\partial g(u) \neq \emptyset$.

Exercise. Assume that $g$ is a differentiable convex function on the real line. Then $\partial g(u) = \{g'(u)\}$, the derivative of $g$ at $u$. Give an expression for $g^*$ in terms of the derivative.

Exercise. Give an example of a proper lsc convex extended real-valued function on the real line that is not subdifferentiable at some point of its domain.

Theorem 4.11. Assume that $g$ is proper and bounded from above in a neighborhood of $u$. Then $\partial g(u)$ is nonempty and weakly compact, and $g'(u; \cdot)$ is the support function of $\partial g(u)$.

Proof. Exercise. Hint: show first that $g'(u; \cdot)$ is bounded from above in a neighborhood of the origin.

Theorem 4.12. For a proper convex $g$, we have $(g^*)^\infty = \sigma_{\operatorname{dom} g}$. If $g$ is also lsc, then $g^\infty = \sigma_{\operatorname{dom} g^*}$.

Proof. Exercise.

Theorem 4.13. For a proper lsc convex function $g$ and $\bar u \in \operatorname{dom} g$, the function
\[ G(\lambda, u) := \begin{cases} \lambda\big(g(\bar u + u/\lambda) - g(\bar u)\big) & \text{if } \lambda > 0, \\ g^\infty(u) & \text{if } \lambda = 0, \\ +\infty & \text{otherwise} \end{cases} \]
is lsc; that is, the lsc hull of the function $G$ of Section 4.3 is obtained by setting $G(0, \cdot) = g^\infty$.

Proof. Exercise.

For a convex $C$, the set
\[ C^\infty := \{x \mid \bar x + \lambda x \in C\ \forall \bar x \in C,\ \lambda > 0\} \]
is called the recession cone of $C$. We have $\delta_C^\infty = \delta_{C^\infty}$, so the recession cone is closed for a closed $C$. Evidently, Theorem 4.12 implies that a convex set $C$ is bounded if and only if $(\operatorname{cl} C)^\infty = \{0\}$.

Exercise. If $(x^\nu)$ is a sequence in a closed convex set $C$, $\lambda^\nu \searrow 0$ and $\lambda^\nu x^\nu \to x$, then $x \in C^\infty$.

5 Conjugate duality

Now we turn our attention to convex optimization problems. Given a vector space $X$ and a locally convex topological vector space $U$, assume that $F$ is a jointly convex extended real-valued function on $X \times U$ and consider the value function
\[ \varphi(u) := \inf_{x \in X} F(x, u). \]
This function gives the optimal value of the convex minimization problem
\[ \text{minimize} \quad F(x, u) \quad \text{over } x \in X \tag{P} \]
as a function of $u$. The value function is always convex, so, when it is proper and lower semicontinuous, the biconjugate theorem gives
\[ \inf_{x \in X} F(x, u) = \varphi(u) = \sup_{y \in Y}\{\langle u, y \rangle - \varphi^*(y)\}. \]
The optimization problem (P) is called the primal problem and
\[ \text{maximize} \quad \langle u, y \rangle - \varphi^*(y) \quad \text{over } y \in Y \tag{D} \]
is called the dual problem. When $\varphi$ is proper and lower semicontinuous, their optimal values coincide, and one says that there is no duality gap, or that there is strong duality between (P) and (D). Even when $\varphi$ is not lsc, the inequality $\varphi \ge \varphi^{**}$ means that, at least, the dual problem gives lower bounds for the optimal value of the primal problem. This is often referred to as weak duality between (P) and (D). A simple condition for the absence of a duality gap and the existence of dual solutions is given by continuity of $\varphi$.
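Before the formal statements, here is a small numerical illustration of the primal-dual relationship. The function $F$ and all grids below are assumptions of this sketch: the value function $\varphi$ is computed by brute force and then conjugated twice on grids; near-agreement of $\varphi$ and $\varphi^{**}$ is consistent with the absence of a duality gap for this example.

```python
import numpy as np

# Assumed jointly convex F; phi(u) = inf_x F(x,u) is the primal value
# function, and sup_y { u*y - phi*(y) } = phi**(u) is the dual value.
x = np.linspace(-10, 10, 2001)
u = np.linspace(-5, 5, 1001)
y = np.linspace(-2, 3, 5001)

F = lambda X, U: np.exp(X) + np.abs(X - U)
phi = np.array([np.min(F(x, ui)) for ui in u])                  # primal values

phi_star = np.array([np.max(u * yi - phi) for yi in y])         # phi*
phi_bidual = np.array([np.max(u0 * y - phi_star) for u0 in u])  # phi**

# phi is convex and finite here, so the discrepancy should be small
# (of the order of the grid resolution):
print(np.max(np.abs(phi - phi_bidual)))
```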

Theorem 5.1. Assume that $\varphi$ is proper and bounded from above in a neighborhood of $u$. Then the optimal values of (P) and (D) coincide, and (D) has a solution.

Proof. Exercise.

For the rest of the section, $X$ is also a locally convex space, with dual space $V$.

Theorem 5.2. Assume that $F$ is a proper lsc convex function. The conjugate of $x \mapsto F(x, u)$ is $\operatorname{lsc}\gamma_u$, where
\[ \gamma_u(v) := \inf_{y}\{F^*(v, y) - \langle u, y \rangle\}. \]
The function $\varphi$ is lsc at $u$ if and only if $\gamma_u$ is lsc at the origin. If $\varphi$ is lsc everywhere, then
\[ \varphi = \operatorname{lsc}\inf_{x \in X} F(x, \cdot). \]

Proof. Exercise. Hint: to prove the last claim, use the biconjugate theorem.

Theorem 5.3. Assume that $g : \mathbb{R}^n \times \mathbb{R}^m \to \overline{\mathbb{R}}$ is a proper lsc convex function such that
\[ L := \{x \in \mathbb{R}^n \mid g^\infty(x, 0) \le 0\} \]
is a linear space. Then the infimum in
\[ p(u) := \inf_{x \in \mathbb{R}^n} g(x, u) \]
is attained, $p$ is a proper lsc convex function, and
\[ p^\infty(u) = \inf_{x \in \mathbb{R}^n} g^\infty(x, u). \]

Proof. To prove that $p$ is lsc, it suffices to show that $\operatorname{lev}_{\le\beta} p \cap B$ is compact (and hence closed) for every closed ball $B$. That $g(x + x', u) = g(x, u)$ for all $x' \in L$ is left as an exercise. Let
\[ L^\perp := \{x \in \mathbb{R}^n \mid x \cdot x' = 0\ \forall x' \in L\} \]
and $\bar g := g + \delta_{L^\perp \times B}$, so that
\[ p(u) + \delta_B(u) = \bar p(u) := \inf_{x \in \mathbb{R}^n} \bar g(x, u). \]
For a proper lsc convex function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$, we have that $\operatorname{lev}_{\le\beta} f$ is bounded (and hence compact) for every $\beta$ if and only if $\{x \mid f^\infty(x) \le 0\} = \{0\}$ (an exercise). Applying this to $x \mapsto \bar g(x, u)$, this function has inf-compact level sets, and thus the infimum in the definition of $\bar p$, and hence in that of $p$, is attained for every $u$. In particular, we have
\[ \operatorname{lev}_{\le\beta} p \cap B = \Pi(\operatorname{lev}_{\le\beta} \bar g), \]
where $\Pi$ is the projection from $\mathbb{R}^n \times \mathbb{R}^m$ to $\mathbb{R}^m$. We have
\[ \{(x, u) \mid \bar g^\infty(x, u) \le 0\} = \{(x, u) \mid u = 0,\ x \in L^\perp,\ g^\infty(x, u) \le 0\} = \{(0, 0)\}, \]
so $\bar g$ has compact level sets as well. Thus $\operatorname{lev}_{\le\beta} p \cap B$ is a projection of a compact set and hence compact.

To prove the recession formula, we apply the first part to $g^\infty$ to get that $u \mapsto \inf_{x \in \mathbb{R}^n} g^\infty(x, u)$ is a proper lsc convex function; the result then follows from Theorem 5.2.

Exercise. Assume that $g : \mathbb{R}^n \times \mathbb{R}^m \to \overline{\mathbb{R}}$ is a proper lsc convex function such that $L := \{x \in \mathbb{R}^n \mid g^\infty(x, 0) \le 0\}$ is a linear space. Then $g(x + x', u) = g(x, u)$ for every $x' \in L$.

Exercise. For a proper lsc convex function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$, we have that $\operatorname{lev}_{\le\beta} f$ is bounded (and hence compact) for every $\beta$ if and only if $\{x \mid f^\infty(x) \le 0\} = \{0\}$.

6 Stochastic optimization

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space with a filtration $(\mathcal{F}_t)_{t=0}^T$ of complete sub-sigma-algebras of $\mathcal{F}$, and consider the parametric dynamic stochastic optimization problem
\[ \text{minimize} \quad Ef(x, u) := \int_\Omega f(x(\omega), u(\omega), \omega)\,dP(\omega) \quad \text{over } x \in \mathcal{N}, \tag{$P_u$} \]
where, for given integers $n_t$ and $m$,
\[ \mathcal{N} := \{(x_t)_{t=0}^T \mid x_t \in L^0(\Omega, \mathcal{F}_t, P; \mathbb{R}^{n_t})\}, \]
$u \in L^0(\Omega, \mathcal{F}, P; \mathbb{R}^m)$ is the parameter and $f$ is an extended real-valued $\mathcal{B}(\mathbb{R}^n) \otimes \mathcal{B}(\mathbb{R}^m) \otimes \mathcal{F}$-measurable function, where $n := n_0 + \dots + n_T$. Here and in what follows, we define the expectation of a measurable function $\phi$ as $+\infty$ unless the positive part $\phi^+$ is integrable.³ The function $Ef$ is thus a well-defined extended real-valued function on $\mathcal{N} \times L^0(\Omega, \mathcal{F}, P; \mathbb{R}^m)$.

³ In particular, the sum of extended real numbers is defined as $+\infty$ if any of the terms equals $+\infty$.

We will assume throughout that the function $f(\cdot, \cdot, \omega)$ is proper, lower semicontinuous and convex for every $\omega \in \Omega$. This fits the conjugate duality framework with $X = \mathcal{N}$, $U = L^p$, and
\[ F(x, u) = Ef(x, u). \]
The value function becomes
\[ \varphi(u) = \inf_{x \in \mathcal{N}} Ef(x, u). \]

For a stochastic process $x$, we denote $\Delta x_t := x_t - x_{t-1}$ and $x_{-1} := 0$.

Example 6.1 (Superhedging). Let $(s_t)_{t=0}^T$ be an adapted $\mathbb{R}^d$-valued stochastic process describing the asset prices. Here $\mathcal{N}$ models the set of adapted portfolio processes, so
\[ \sum_{t=0}^T s_t \cdot \Delta x_t \]
gives the total trading cost of buying (and selling) $x_t$ assets at each time $t$. When the portfolio is liquidated at time $T$, meaning that $x_T = 0$ almost surely, integration by parts gives
\[ \sum_{t=0}^T s_t \cdot \Delta x_t = -\sum_{t=0}^{T-1} x_t \cdot \Delta s_{t+1}, \]
where the stochastic integral on the right side is the terminal wealth. We define
\[ f(x, u, \omega) := \begin{cases} 0 & \text{if } \sum_{t=0}^{T-1} x_t \cdot \Delta s_{t+1}(\omega) \ge u, \\ +\infty & \text{otherwise}. \end{cases} \]
The value function of ($P_u$) becomes $\varphi = \delta_C$ with
\[ C = \Big\{u \in L^p \;\Big|\; \exists x \in \mathcal{N} : \sum_{t=0}^{T-1} x_t \cdot \Delta s_{t+1} \ge u \text{ a.s.}\Big\}. \]
Interpreting $u$ as a claim, $C$ is the set of claims in $L^p$ that can be superhedged with zero initial capital.
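As a sanity check on Example 6.1, the following one-period binomial sketch computes the least initial capital needed to superhedge a claim. The market parameters and the claim are assumptions; in this complete two-state market the superhedge is simply the replicating portfolio, and its cost agrees with the expectation of the claim under the martingale measure.

```python
import numpy as np

# One-period binomial sketch of the superhedging idea (assumed
# parameters, zero interest rate): s_0 = 1, s_1 in {up*s_0, dn*s_0}.
# The least w with w + z*(s_1 - s_0) >= claim(s_1) in both states is
# found here with equality in both states (replication).
s0, up, dn = 1.0, 2.0, 0.5
claim = lambda s: max(s - 1.0, 0.0)   # an assumed call with strike 1

A = np.array([[1.0, up * s0 - s0], [1.0, dn * s0 - s0]])
b = np.array([claim(up * s0), claim(dn * s0)])
w, z = np.linalg.solve(A, b)

# Dual check: w equals E_Q[claim] under the martingale measure Q with
# q*up + (1-q)*dn = 1, i.e. q = (1 - dn) / (up - dn).
q = (1.0 - dn) / (up - dn)
print(w, q * claim(up * s0) + (1 - q) * claim(dn * s0))
```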

6.1 Set-valued mappings and normal integrands

Throughout, every finite dimensional space is equipped with the usual topology and the usual Borel $\sigma$-algebra, and $S : \Omega \rightrightarrows \mathbb{R}^d$ is a set-valued mapping, i.e., for every $\omega$, $S(\omega) \subseteq \mathbb{R}^d$. The mapping $S$ is measurable if the preimage
\[ S^{-1}(O) := \{\omega \in \Omega \mid S(\omega) \cap O \neq \emptyset\} \]
of every open $O \subseteq \mathbb{R}^d$ is measurable, i.e., $S^{-1}(O) \in \mathcal{F}$ for every open $O$. The mapping $S$ is closed-valued (resp. convex-, cone-valued, etc.) when $S(\omega)$ is closed (resp. convex, conical, etc.) for each $\omega \in \Omega$. The set
\[ \operatorname{dom} S := \{\omega \mid S(\omega) \neq \emptyset\} \]
is the domain of $S$. Being the preimage of the whole space, it is measurable as soon as $S$ is measurable. Evidently, if $S$ is measurable, then its image-closure mapping $\omega \mapsto \operatorname{cl} S(\omega)$ is measurable, since its preimages of open sets coincide with those of $S$. The function
\[ d(x, A) := \inf_{x' \in A}|x - x'| \]
is the distance mapping.

Theorem 6.2. Let $S : \Omega \rightrightarrows \mathbb{R}^d$ be closed-valued. The following are equivalent:

1. $S$ is measurable,
2. $S^{-1}(C) \in \mathcal{F}$ for every compact set $C$,
3. $S^{-1}(C) \in \mathcal{F}$ for every closed set $C$,
4. $S^{-1}(B) \in \mathcal{F}$ for every closed ball $B$,
5. $S^{-1}(O) \in \mathcal{F}$ for every open ball $O$,
6. $\{\omega \in \Omega \mid S(\omega) \subseteq O\} \in \mathcal{F}$ for every open $O$,
7. $\{\omega \in \Omega \mid S(\omega) \subseteq C\} \in \mathcal{F}$ for every closed $C$,
8. $\omega \mapsto d(x, S(\omega))$ is measurable for every $x \in \mathbb{R}^d$.

Proof. Exercise.

Measurability is preserved under algebraic operations.

Theorem 6.3. Let $J$ be a countable index set and let each $S^j$, $j \in J$, be a measurable set-valued mapping. Then

1. $\omega \mapsto \bigcap_{j \in J} S^j(\omega)$ is measurable if each $S^j$ is closed-valued,
2. $\omega \mapsto \bigcup_{j \in J} S^j(\omega)$ is measurable,
3. $\omega \mapsto \sum_{j \in J} \lambda^j S^j(\omega)$ is measurable for finite $J$, where $\lambda^j \in \mathbb{R}$,
4. $\omega \mapsto (S^1(\omega), \dots, S^J(\omega))$ is measurable for finite $J$; here we may allow $S^j : \Omega \rightrightarrows \mathbb{R}^{d_j}$.

Proof. 4. Let $R(\omega) := (S^1(\omega), \dots, S^J(\omega))$. Every open set $O$ in the product space is expressible as a countable union of rectangular open sets $\prod_j O^j_\nu$. Thus
\[ R^{-1}(O) = \bigcup_\nu\Big(\bigcap_j (S^j)^{-1}(O^j_\nu)\Big), \]
where each $(S^j)^{-1}(O^j_\nu)$ is measurable by assumption.

3. Let $R(\omega) := \sum_{j \in J}\lambda^j S^j(\omega)$. For an open $O$, the set
\[ \tilde O := \Big\{(x^1, \dots, x^J) \;\Big|\; \sum_j \lambda^j x^j \in O\Big\} \]
is open in $\prod_j \mathbb{R}^d$. Now
\[ R^{-1}(O) = \Big\{\omega \;\Big|\; \Big(\sum_j \lambda^j S^j\Big)(\omega) \cap O \neq \emptyset\Big\} = \{\omega \mid (S^1(\omega), \dots, S^J(\omega)) \cap \tilde O \neq \emptyset\}, \]
where the set on the right-hand side is measurable by part 4.

2. $\big(\bigcup_{j \in J} S^j\big)^{-1}(O) = \bigcup_{j \in J}(S^j)^{-1}(O)$ for any open $O$.

1. Assume first that $J = \{1, 2\}$. Take any compact $C \subseteq \mathbb{R}^d$ and denote $R^j(\omega) := S^j(\omega) \cap C$. Then
\[ (S^1 \cap S^2)^{-1}(C) = \{\omega \mid S^1(\omega) \cap S^2(\omega) \cap C \neq \emptyset\} = \{\omega \mid 0 \in R^1(\omega) - R^2(\omega)\} = (R^1 - R^2)^{-1}(\{0\}). \]
Here $R^1 - R^2$ is measurable by part 3; let us show that it is closed-valued as well. Since the $S^j$ are closed-valued, the $R^j$ are compact-valued, so $R^1 - R^2$ is compact-valued (an exercise). Hence $S^1 \cap S^2$ is measurable. The case of finite $J$ follows by induction.

Suppose finally that $J$ is countable, $J = \{1, 2, 3, \dots\}$. Denote $\bar S^\mu := \bigcap_{\nu=1}^\mu S^\nu$. Note that $\bigcap_{\nu=1}^\infty S^\nu(\omega) = \bigcap_{\mu=1}^\infty \bar S^\mu(\omega)$, and that the $\bar S^\mu$ are measurable by the preceding.

The proof is complete as soon as we show that
\[ \Big(\bigcap_{\nu=1}^\infty S^\nu\Big)^{-1}(C) = \bigcap_{\mu=1}^\infty (\bar S^\mu)^{-1}(C) \]
for every compact $C$. If $\omega \in (\bigcap S^\nu)^{-1}(C)$, it is straightforward to check that $\omega \in \bigcap_\mu(\bar S^\mu)^{-1}(C)$. For the converse, take $\omega \in \bigcap_\mu(\bar S^\mu)^{-1}(C)$. Since $(\bar S^\mu(\omega) \cap C)_{\mu=1}^\infty$ is a nested sequence of nonempty compact sets, $\bigcap_\mu(\bar S^\mu(\omega) \cap C) \neq \emptyset$. Since $\bigcap_\nu S^\nu(\omega) = \bigcap_\mu \bar S^\mu(\omega)$, this means that $\omega \in (\bigcap S^\nu)^{-1}(C)$.

A function $h : \mathbb{R}^d \times \Omega \to \overline{\mathbb{R}}$ is a normal integrand on $\mathbb{R}^d$ if the epigraphical mapping
\[ S_h(\omega) := \operatorname{epi} h(\cdot, \omega) = \{(x, \alpha) \in \mathbb{R}^d \times \mathbb{R} \mid h(x, \omega) \le \alpha\} \]
is measurable and closed-valued. The mapping
\[ S^o_h(\omega) := \{(x, \alpha) \in \mathbb{R}^d \times \mathbb{R} \mid h(x, \omega) < \alpha\} \]
is the $\omega$-wise strict epigraphical mapping of $h$. Since the preimages of open sets under $S_h$ and $S^o_h$ are the same, one is measurable if and only if the other is.

Theorem 6.4. For a normal integrand $h$ on $\mathbb{R}^d$ and $\beta \in L^0$, the level-set mapping
\[ \omega \mapsto \{x \in \mathbb{R}^d \mid h(x, \omega) \le \beta(\omega)\} \]
is measurable and closed-valued. Conversely, a function $h : \mathbb{R}^d \times \Omega \to \overline{\mathbb{R}}$ is a normal integrand if
\[ \operatorname{lev}_{\le\beta} h(\cdot, \omega) := \{x \in \mathbb{R}^d \mid h(x, \omega) \le \beta\} \]
is measurable and closed-valued for every $\beta \in \mathbb{R}$.

Proof. Let $S(\omega) := \{x \in \mathbb{R}^d \mid h(x, \omega) \le \beta(\omega)\}$. For a closed $C \subseteq \mathbb{R}^d$,
\[ R(\omega) := C \times \{\alpha \mid \alpha \le \beta(\omega)\} \]
is measurable and closed-valued (an exercise). Now
\[ S^{-1}(C) = \{\omega \mid S(\omega) \cap C \neq \emptyset\} = \{\omega \mid S_h(\omega) \cap R(\omega) \neq \emptyset\} = \operatorname{dom}(S_h \cap R), \]
which shows the measurability of $S$.

To prove the second claim, note first that the closedness of the level sets implies that $S_h$ is closed-valued. Let $(\beta^\nu)$ be a dense sequence in $\mathbb{R}$. Since countable unions of measurable mappings are measurable, the mappings
\[ \operatorname{lev}_{<\beta^\nu} h(\cdot, \omega) := \{x \in \mathbb{R}^d \mid h(x, \omega) < \beta^\nu\} \]
are measurable. We have
\[ S^o_h(\omega) = \bigcup_\nu\big(\operatorname{lev}_{<\beta^\nu} h(\cdot, \omega) \times [\beta^\nu, \infty)\big), \]
so $S_h$ is measurable.

The function $h$ is a Caratheodory integrand if $h(\cdot, \omega)$ is continuous for each $\omega \in \Omega$ and $h(x, \cdot)$ is measurable for each $x \in \mathbb{R}^d$.

Theorem 6.5. A Caratheodory integrand is a normal integrand.

Proof. Let $\{x^\nu \mid \nu \in \mathbb{N}\}$ be a dense set in $\mathbb{R}^d$ and define $\alpha^{\nu,q}(\omega) := h(x^\nu, \omega) + q$, where $q \in \mathbb{Q}_+$. Since $h(\cdot, \omega)$ is continuous, the set
\[ \hat O := \{(x, \alpha) \mid h(x, \omega) < \alpha\} \]
is open. For any $(x, \alpha) \in \operatorname{epi} h(\cdot, \omega)$ and for any open neighborhood $O$ of $(x, \alpha)$, $O \cap \hat O$ is open and nonempty, and there exists $(x^\nu, \alpha^{\nu,q}(\omega)) \in O \cap \hat O$; i.e., $\{(x^\nu, \alpha^{\nu,q}(\omega)) \mid \nu \in \mathbb{N},\ q \in \mathbb{Q}_+\}$ is dense in $\operatorname{epi} h(\cdot, \omega)$. Thus for any open set $O \subseteq \mathbb{R}^d \times \mathbb{R}$,
\[ \{\omega \mid \operatorname{epi} h(\cdot, \omega) \cap O \neq \emptyset\} = \bigcup_{\nu,q}\{\omega \mid (x^\nu, \alpha^{\nu,q}(\omega)) \in O\} \]
is measurable.

Theorem 6.6. Assume that $f : \mathbb{R}^n \times \mathbb{R}^m \times \Omega \to \overline{\mathbb{R}}$ is a normal integrand on $\mathbb{R}^n \times \mathbb{R}^m$ and let
\[ p(u, \omega) := \inf_{x \in \mathbb{R}^n} f(x, u, \omega). \]
The function defined by $(\operatorname{lsc}_u p)(u, \omega)$ is a normal integrand on $\mathbb{R}^m$. In particular, if $p(\cdot, \omega)$ is lsc for every $\omega$, $p$ is a normal integrand.

Proof. Let $\Pi(x, u, \alpha) := (u, \alpha)$ be the projection from $\mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}$ to $\mathbb{R}^m \times \mathbb{R}$. It is easy to check that $\Pi S^o_f(\omega) = S^o_p(\omega)$, so, for an open $O \subseteq \mathbb{R}^m \times \mathbb{R}$,
\[ (S^o_p)^{-1}(O) = (S^o_f)^{-1}(\Pi^{-1}(O)), \]
where the right side is measurable, since $f$ is a normal integrand and $\Pi$ is continuous. Thus $S^o_p$ is measurable. Moreover, $\omega \mapsto \operatorname{cl} S^o_p(\omega)$ is measurable, which shows that $(u, \omega) \mapsto (\operatorname{lsc}_u p)(u, \omega)$ is a normal integrand. If $p$ is lsc in the first argument, it is thus a normal integrand.

Exercise. For a measurable closed-valued $S : \Omega \rightrightarrows \mathbb{R}^d$ and $x \in L^0(\mathbb{R}^d)$, the projection mapping
\[ \omega \mapsto P_{S(\omega)}(x(\omega)) := \operatorname*{argmin}_{x' \in S(\omega)}|x' - x(\omega)| \]
is measurable and closed-valued. Hint: express the projection mapping as a level-set mapping of a normal integrand.

A function $x : \Omega \to \mathbb{R}^d$ is called a selection of $S$ if $x(\omega) \in S(\omega)$ for all $\omega \in \operatorname{dom} S$. The set of measurable selections of $S$ is denoted by $L^0(S)$. The sequence $(x^\nu)$ of measurable selections of $S$ in the following theorem is known as a Castaing representation of $S$.

Theorem 6.7 (Castaing representation). Let $S : \Omega \rightrightarrows \mathbb{R}^d$ be closed-valued. Then $S$ is measurable if and only if $\operatorname{dom} S$ is measurable and there exists a sequence $(x^\nu) \subseteq L^0(\Omega; \mathbb{R}^d)$ such that, for all $\omega \in \operatorname{dom} S$,
\[ S(\omega) = \operatorname{cl}\{x^\nu(\omega) \mid \nu = 1, 2, \dots\}. \]

Proof. Assuming the Castaing representation exists, we have, for an open $O$,
\[ S^{-1}(O) = \operatorname{dom} S \cap \bigcup_{\nu=1}^\infty (x^\nu)^{-1}(O), \]
so $S$ is measurable.

Assume now that $S$ is measurable. Let $J$ be the countable collection of vectors $q = (q^0, q^1, \dots, q^d)$ with $q^i \in \mathbb{Q}^d$ such that $\{q^0, q^1, \dots, q^d\}$ are affinely independent. For each $q \in J$, we define recursively
\[ S^{q,0}(\omega) := P_{S(\omega)}(q^0) \quad\text{and}\quad S^{q,i}(\omega) := P_{S^{q,i-1}(\omega)}(q^i). \]
These mappings are measurable and closed-valued by the exercise above. Moreover, $S^{q,d}(\omega)$ is a singleton, a point in $S(\omega)$ nearest to $q^0$. Setting $x^q(\omega) := S^{q,d}(\omega)$, $(x^q)_{q \in J}$ is a Castaing representation of $S$.

Let us verify that $S^{q,d}$ is single-valued. We fix $\omega$ and omit it from the notation. By the recursive definition of $S^{q,i}$, for each $q^i$ there is $r^i \ge 0$ such that $S^{q,d} \subseteq B(q^i, r^i)$. Thus, for any $x \in S^{q,d}$, $|x - q^i|^2 = (r^i)^2$ for all $i$. Now it is an elementary exercise to check that these equations give a unique representation of $x$ in the basis $\{q^i - q^0 \mid i = 1, \dots, d\}$.

Corollary 6.8 (Measurable selection theorem). Any measurable closed-valued $S : \Omega \rightrightarrows \mathbb{R}^n$ admits a measurable selection.
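A toy illustration of Theorem 6.7 on a finite sample space (all data below are assumptions of this sketch): projecting a countable set of rationals onto an interval-valued mapping yields measurable selections whose pointwise closure recovers $S(\omega)$.

```python
import numpy as np

# Castaing-style representation for the interval-valued mapping
# S(w) = [a(w), b(w)] on a finite sample space (an assumed example):
# the projections of countably many rationals q onto S(w) are
# measurable selections, and together they fill out S(w).
omegas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # finite sample space
a, b = -1.0 + omegas, 1.0 + omegas ** 2          # S(w) = [a(w), b(w)]

qs = [i / 8 for i in range(-40, 41)]             # stand-in for a dense set
selections = [np.clip(q, a, b) for q in qs]      # x^q(w) = P_{S(w)}(q)

stacked = np.stack(selections)
print(stacked.min(axis=0), a)   # lower envelope recovers a(w)
print(stacked.max(axis=0), b)   # upper envelope recovers b(w)
```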

A function $M : \mathbb{R}^n \times \Omega \to \mathbb{R}^m$ is a Caratheodory mapping if $M(\cdot, \omega)$ is continuous for all $\omega$ and $M(x, \cdot)$ is measurable for all $x \in \mathbb{R}^n$.

Exercise. For a Caratheodory mapping $M : \mathbb{R}^n \times \Omega \to \mathbb{R}^m$,
\[ S(\omega) := \operatorname{gph} M(\cdot, \omega) := \{(x, u) \in \mathbb{R}^n \times \mathbb{R}^m \mid u = M(x, \omega)\} \]
defines a measurable closed-valued mapping. Hint: construct a Castaing representation of $S$.

Theorem 6.9. Let $S : \Omega \rightrightarrows \mathbb{R}^n$ be measurable and closed-valued and let $M(\cdot, \omega) : \mathbb{R}^n \to \mathbb{R}^m$ be such that
\[ G(\omega) := \operatorname{gph} M(\cdot, \omega) := \{(x, u) \mid u = M(x, \omega)\} \]
defines a measurable closed-valued mapping. Then $R(\omega) := M(S(\omega), \omega)$ defines a measurable set-valued mapping.

Proof. Let $\Pi : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m$ be the projection mapping. We have $R = \Pi Q$ for
\[ Q(\omega) := [S(\omega) \times \mathbb{R}^m] \cap G(\omega), \]
which is measurable by Theorem 6.3. Thus $R$ is measurable, since $R^{-1}(O) = Q^{-1}(\Pi^{-1}(O))$, where $\Pi^{-1}(O)$ is open for any open $O$.

Theorem 6.10. Let $S : \Omega \rightrightarrows \mathbb{R}^m$ be a measurable closed-valued mapping and $M : \mathbb{R}^n \times \Omega \to \mathbb{R}^m$ a Caratheodory mapping. Then
\[ \omega \mapsto \{x \in \mathbb{R}^n \mid M(x, \omega) \in S(\omega)\} \]
is a measurable mapping.

Proof. Exercise.

Theorem 6.11. Assume that $f : \mathbb{R}^n \times \mathbb{R}^m \times \Omega \to \overline{\mathbb{R}}$ is a normal integrand on $\mathbb{R}^n \times \mathbb{R}^m$ and that $u \in L^0$. Let
\[ h(x, \omega) := f(x, u(\omega), \omega), \qquad S(\omega) := \operatorname*{argmin}_{x \in \mathbb{R}^n} f(x, u(\omega), \omega). \]
Then $h$ is a normal integrand, $S$ is a measurable closed-valued mapping, and there exists $x \in L^0$ such that, for all $\omega \in \operatorname{dom} S$,
\[ \inf_{x' \in \mathbb{R}^n} f(x', u(\omega), \omega) = f(x(\omega), u(\omega), \omega). \]

Proof. Defining $M(x, \omega) := (x, u(\omega))$, which is a Caratheodory mapping, we have, for $\beta \in \mathbb{R}$, that
\[ \operatorname{lev}_{\le\beta} h(\cdot, \omega) = \{x \in \mathbb{R}^n \mid M(x, \omega) \in \operatorname{lev}_{\le\beta} f(\cdot, \cdot, \omega)\}, \]
where the right side defines a measurable mapping by Theorems 6.4 and 6.10. Evidently, the level sets of $h(\cdot, \omega)$ are closed, so $h$ is a normal integrand by Theorem 6.4 again. By Theorem 6.6, $p(\omega) := \inf_x h(x, \omega)$ is measurable. Since
\[ S(\omega) = \{x \mid f(x, u(\omega), \omega) \le p(\omega)\}, \]
$S$ is measurable by Theorem 6.4. Moreover, $S$ is closed-valued since $f(\cdot, \cdot, \omega)$ is lsc. The last claim follows from the measurable selection theorem, Corollary 6.8.

7 Stochastic dynamic programming

The purpose of this section is to prove a general dynamic programming recursion which generalizes the classical Bellman equation for convex stochastic optimization. We will use the notion of a conditional expectation of a normal integrand. In certain financial applications, the linearity condition appearing below turns out to be equivalent to the classical no-arbitrage condition.

Let $X$ be a nonnegative $\mathcal{F}$-measurable function and let $\mathcal{G} \subseteq \mathcal{F}$ be another sigma-algebra. Then there is a $\mathcal{G}$-measurable nonnegative function $E^{\mathcal{G}}X$, unique up to sets of $P$-measure zero, such that
\[ E[\chi_A X] = E[\chi_A(E^{\mathcal{G}}X)] \quad \forall A \in \mathcal{G}, \tag{7.1} \]

where $\chi_A$ denotes the characteristic function of $A$; see e.g. Shiryaev [II.7]. The function $E^{\mathcal{G}}X$ is called the $\mathcal{G}$-conditional expectation of $X$. For a general $\mathcal{F}$-measurable extended real-valued function $X$, we set
\[ E^{\mathcal{G}}X := E^{\mathcal{G}}X^+ - E^{\mathcal{G}}X^-, \]
where again the convention $\infty - \infty = \infty$ is used. It is easily checked that, with the extended definition of the integral, (7.1) is then valid for any measurable function $X$. Our choice of setting $\infty - \infty = \infty$ is not arbitrary but specifically directed towards minimization problems.

The $\mathcal{G}$-conditional expectation of a normal integrand $h$ is a $\mathcal{G}$-measurable normal integrand $E^{\mathcal{G}}h$ such that
\[ (E^{\mathcal{G}}h)(x(\omega), \omega) = E^{\mathcal{G}}[h(x(\cdot), \cdot)](\omega) \quad P\text{-a.s.} \]
for all $x \in L^0(\Omega, \mathcal{G}, P; \mathbb{R}^n)$. The proof of the following will be given in the next subsection.

Lemma 7.1. Let $\mathcal{G} \subseteq \mathcal{F}$ be a sigma-algebra and assume that $h$ is an $\mathcal{F}$-normal integrand with an integrable lower bound, i.e., there is an integrable function $m$ such that $h(x, \omega) \ge m(\omega)$ for every $x$ and $\omega$. Then $h$ has a well-defined conditional expectation $E^{\mathcal{G}}h$ which has the integrable lower bound $E^{\mathcal{G}}m$.

We will study problem ($P_u$) for a fixed $u \in L^0(\Omega, \mathcal{F}, P; \mathbb{R}^m)$, so we will omit it from the notation and define
\[ h(x, \omega) := f(x, u(\omega), \omega). \]
By [14.45(c)], $h$ is a normal integrand. The convexity of $f$ implies that of $h$. We will use the notation $E_t := E^{\mathcal{F}_t}$ and $x^t := (x_0, \dots, x_t)$, and define extended real-valued functions $h_t, \tilde h_t : \mathbb{R}^{n_0 + \dots + n_t} \times \Omega \to \overline{\mathbb{R}}$ recursively for $t = T, \dots, 0$ by
\[
\begin{aligned}
h_T &:= h, \\
\tilde h_{t-1}(x^{t-1}, \omega) &:= \inf_{x_t \in \mathbb{R}^{n_t}} h_t(x^{t-1}, x_t, \omega), \\
h_{t-1} &:= E_{t-1}\tilde h_{t-1}.
\end{aligned} \tag{SDP}
\]
In the above formulation, one does not separate the decision variables $x_t$ into state and control like in the classical dynamic programming models. A formulation closer to the classical dynamic programming equations will be given in Corollary 7.4 below.

In order to ensure that $h_t$ and $\tilde h_t$ are well defined, it suffices to require that the function $h$ has an integrable lower bound and that $h(\cdot, \omega)$ is inf-compact (i.e., $\{x \in \mathbb{R}^n \mid h(x, \omega) \le \alpha\}$ is compact for every $\alpha \in \mathbb{R}$) for every $\omega \in \Omega$. We define the recession function of a normal integrand $\omega$-wise, that is,
\[ h^\infty(x, \omega) := \sup_{\lambda > 0}\frac{h(\lambda x + \bar x, \omega) - h(\bar x, \omega)}{\lambda}, \]
where $\bar x \in \operatorname{dom} h(\cdot, \omega)$.

Since countable suprema of normal integrands are normal, the function $h^\infty$ is a convex normal integrand. If $h(\cdot, \omega)$ has an integrable lower bound, then $h^\infty(x, \omega) \ge 0$ for every $x \in \mathbb{R}^n$.

Lemma 7.2. Assume that $h_t$ is a normal integrand and that the set-valued mapping
\[ N_t(\omega) := \{x_t \in \mathbb{R}^{n_t} \mid h_t^\infty(x^t, \omega) \le 0,\ x^{t-1} = 0\} \]
is linear-valued. Then $\tilde h_{t-1}$ is a normal integrand with
\[ \tilde h_{t-1}^\infty(x^{t-1}, \omega) = \inf_{x_t \in \mathbb{R}^{n_t}} h_t^\infty(x^{t-1}, x_t, \omega). \]
Moreover, given an $x \in \mathcal{N}$, there is an $\mathcal{F}_t$-measurable $\bar x_t$ such that $\bar x_t(\omega) \perp N_t(\omega)$ and
\[ \tilde h_{t-1}(x^{t-1}(\omega), \omega) = h_t(x^{t-1}(\omega), \bar x_t(\omega), \omega). \]

Proof. By Theorem 5.3, the linearity condition implies that the infimum in the definition of $\tilde h_{t-1}$ is attained and that $\tilde h_{t-1}(\cdot, \omega)$ is a lower semicontinuous convex function with
\[ \tilde h_{t-1}^\infty(x^{t-1}, \omega) = \inf_{x_t \in \mathbb{R}^{n_t}} h_t^\infty(x^{t-1}, x_t, \omega). \]
By Theorem 6.6, the lower semicontinuity implies that $\tilde h_{t-1}$ is an $\mathcal{F}_t$-measurable convex normal integrand. By Theorem 6.11, the function $p(x, \omega) := h_t(x^{t-1}(\omega), x, \omega)$ is then also an $\mathcal{F}_t$-measurable normal integrand and there is an $\mathcal{F}_t$-measurable $\bar x_t$ that attains the minimum for every $\omega$. By Theorem 5.3, the value of $h_t(x^{t-1}(\omega), \bar x_t(\omega), \omega)$ does not change if we replace $\bar x_t(\omega)$ by its projection to the orthogonal complement of $N_t(\omega)$. By Exercise 6.1.1, such a projection preserves measurability.

It is clear that if $h_t$ has an integrable lower bound, then so does $\tilde h_{t-1}$. Applying Lemmas 7.1 and 7.2 recursively backwards for $t = T, \dots, 0$, we then see that if $h$ has an integrable lower bound, the functions $h_t$ and $\tilde h_t$ are well defined for every $t$ provided that $N_t$ is linear-valued at each step.

Theorem 7.3. Assume that $h$ has an integrable lower bound and that $N_t$ is linear-valued for $t = T, \dots, 0$. The functions $h_t$ are then well-defined normal integrands and we have, for every $x \in \mathcal{N}$,
\[ Eh_t(x^t(\omega), \omega) \ge \inf\,(P_u) \quad t = 0, \dots, T. \tag{7.2} \]
Optimal solutions $x \in \mathcal{N}$ exist and they are characterized by the condition
\[ x_t(\omega) \in \operatorname*{argmin}_{x_t \in \mathbb{R}^{n_t}} h_t(x^{t-1}(\omega), x_t, \omega) \quad P\text{-a.s.},\ t = 0, \dots, T, \]
which is equivalent to having equalities in (7.2). Moreover, there is an optimal solution $x \in \mathcal{N}$ such that $x_t \perp N_t$ for every $t = 0, \dots, T$.

Proof. As noted above, a recursive application of Lemmas 7.1 and 7.2 implies that the functions $h_t$ and $\tilde h_t$ are well-defined normal integrands. Given an $x \in \mathcal{N}$, the law of iterated expectations gives
\[ Eh_t(x^t(\omega), \omega) \ge E\tilde h_{t-1}(x^{t-1}(\omega), \omega) = Eh_{t-1}(x^{t-1}(\omega), \omega), \quad t = 1, \dots, T. \]
Thus,
\[ Eh(x(\omega), \omega) = Eh_T(x^T(\omega), \omega) \ge \dots \ge Eh_0(x_0(\omega), \omega) \ge E\inf_{x_0 \in \mathbb{R}^{n_0}} h_0(x_0, \omega), \]
where the inequalities hold as equalities if and only if
\[ h_t(x^t(\omega), \omega) = \tilde h_{t-1}(x^{t-1}(\omega), \omega) \quad P\text{-a.s.},\ t = 0, \dots, T. \]
The existence of such an $x \in \mathcal{N}$ with $x_t \perp N_t$ follows by applying Lemma 7.2 recursively for $t = 0, \dots, T$.

When the normal integrand $h$ has a separable structure, the dynamic programming equations (SDP) can be written in a more familiar form.

Corollary 7.4 (Bellman equations). Assume that
\[ h(x, \omega) = \sum_{t=0}^T k_t(x_{t-1}, x_t, \omega) \]
for some fixed initial state $x_{-1}$ and $\mathcal{F}_t$-measurable normal integrands $k_t$ with integrable lower bounds. Consider the functions $V_t : \mathbb{R}^{n_t} \times \Omega \to \overline{\mathbb{R}}$ given by
\[
\begin{aligned}
V_T(x_T, \omega) &:= 0, \\
\tilde V_{t-1}(x_{t-1}, \omega) &:= \inf_{x_t \in \mathbb{R}^{n_t}}\{k_t(x_{t-1}, x_t, \omega) + V_t(x_t, \omega)\}, \\
V_{t-1} &:= E_{t-1}\tilde V_{t-1},
\end{aligned} \tag{7.3}
\]
and assume that the set-valued mappings
\[ N_t(\omega) := \{x_t \in \mathbb{R}^{n_t} \mid k_t^\infty(0, x_t, \omega) + V_t^\infty(x_t, \omega) \le 0\} \]
are linear-valued for each $t = T, \dots, 0$. The functions $V_t$ are then well-defined normal integrands and we have, for every $x \in \mathcal{N}$,
\[ E\Big[\sum_{s=0}^t k_s(x_{s-1}(\omega), x_s(\omega), \omega) + V_t(x_t(\omega), \omega)\Big] \ge \inf\,(P_u) \quad t = 0, \dots, T. \tag{7.4} \]
Optimal solutions $x \in \mathcal{N}$ exist and they are characterized by the condition
\[ x_t(\omega) \in \operatorname*{argmin}_{x_t \in \mathbb{R}^{n_t}}\{k_t(x_{t-1}(\omega), x_t, \omega) + V_t(x_t, \omega)\} \quad P\text{-a.s.},\ t = 0, \dots, T, \]
which is equivalent to having equalities in (7.4). Moreover, there is an optimal solution $x \in \mathcal{N}$ such that $x_t \perp N_t$ for every $t = 0, \dots, T$.

Proof. Exercise.

The following example covers both convex stochastic control and the classical linear programming formulations.

Example 7.5. Let
\[ k_t(x_{t-1}, x_t, \omega) := \begin{cases} E_t c_t(x_t, \omega) & \text{if } A_t(\omega)x_t + B_t(\omega)x_{t-1} = b_t(\omega), \\ +\infty & \text{otherwise}, \end{cases} \]
where $A_t$ is deterministic, $B_t$ and $b_t$ are $\mathcal{F}_t$-measurable and independent of $\mathcal{F}_{t-1}$, and $E_t c_t = Ec_t$. A simple induction argument gives
\[ \tilde V_{t-1}(x_{t-1}, \omega) = v_t(b_t(\omega) - B_t(\omega)x_{t-1}), \]
where the $v_t$ are defined recursively by $v_{T+1} := 0$ and
\[ v_t(z) := \inf_{x_t \in \mathbb{R}^{n_t}}\{E[c_t(x_t, \omega) + v_{t+1}(b_{t+1}(\omega) - B_{t+1}(\omega)x_t)] \mid A_t x_t = z\}. \tag{7.5} \]
When $x_t = (X_t, U_t)$ and $A_t x_t = X_t$, we obtain the stochastic control model, while
\[ c_t(x_t, \omega) = d_t(\omega) \cdot x_t + \delta_{\mathbb{R}^{n_t}_+}(x_t) \]
gives the classical LP model.

Example 7.6 (Stochastic control). The problem
\[
\begin{aligned}
&\text{minimize} && E\Big[\sum_{t=0}^{T-1} L_t(X_t, U_t, W_{t+1}) + K(X_T)\Big] \\
&\text{subject to} && X_{t+1} = f_t(X_t, U_t, W_{t+1}) \quad \text{over } X, U \in \mathcal{N},
\end{aligned}
\]
fits the above with $x_t = (X_t, U_t)$,
\[ k_T(x_{T-1}, x_T, \omega) := \begin{cases} K(X_T) & \text{if } X_T = f_{T-1}(X_{T-1}, U_{T-1}, W_T), \\ +\infty & \text{otherwise}, \end{cases} \]
and
\[ k_t(x_{t-1}, x_t, \omega) := \begin{cases} E_t L_t(X_t, U_t, W_{t+1}) & \text{if } X_t = f_{t-1}(X_{t-1}, U_{t-1}, W_t), \\ +\infty & \text{otherwise}. \end{cases} \]
It is seen by backward induction that, in this case,
\[ \tilde V_{t-1}(x_{t-1}, \omega) = v_t(f_{t-1}(X_{t-1}, U_{t-1}, W_t)), \]
where the functions $v_t$ are defined recursively by $v_T := K$ and
\[ v_t(X) = \inf_U E\{L_t(X, U, W_{t+1}) + v_{t+1}(f_t(X, U, W_{t+1}))\}. \]
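To make the recursion (7.3) concrete, here is a minimal brute-force sketch for a toy problem. The stage cost, grid, and shock law below are assumptions made purely for illustration (they correspond to no example in the notes); with i.i.d. shocks, each $V_t$ is a deterministic function of $x_t$, so it can be tabulated on a grid.

```python
import numpy as np

# Brute-force Bellman recursion (7.3) for an assumed toy problem with
# scalar decisions on a grid and i.i.d. shocks W_t.
T = 3
x_grid = np.linspace(-2.0, 2.0, 81)
shocks, probs = np.array([-1.0, 1.0]), np.array([0.5, 0.5])

def k(t, x_prev, x, w):
    # assumed stage cost: penalize both movement and distance to the shock
    return (x - x_prev) ** 2 + (x - w) ** 2

V = np.zeros(len(x_grid))                  # V_T = 0 on the grid
for t in range(T, -1, -1):
    V_new = np.zeros(len(x_grid))
    for i, x_prev in enumerate(x_grid):
        # Vtilde_{t-1}(x_prev, w) = min_x { k_t(x_prev, x, w) + V_t(x) }
        vt = [np.min(k(t, x_prev, x_grid, w) + V) for w in shocks]
        # V_{t-1} = E_{t-1} Vtilde_{t-1}; with i.i.d. shocks, a plain mean
        V_new[i] = probs @ np.array(vt)
    V = V_new
print(V[len(x_grid) // 2])  # optimal value for the initial state x_{-1} = 0
```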

Exercise. Let $s = (s_t)_{t=0}^T$ be the price process of a binomial model with $T = 4$, $s_0 = 1$ and $s_{t+1} = s_t\eta_{t+1}$ for an i.i.d. $(\eta_t)_{t=1}^T$ with $P(\eta = 2) = 1/3 = 1 - P(\eta = 1/2)$. Let $R_t := (K - s_t)^+$ for $K = 2$ and consider
\[
\begin{aligned}
&\text{maximize} && E\sum_{t=0}^T R_t \Delta x_t \quad \text{over } x \in \mathcal{N} \\
&\text{subject to} && x_t \ge 0\ \forall t, \quad x_T = 1 \text{ a.s.},
\end{aligned}
\]
where $x_{-1} = 0$. This is a convex relaxation of the optimal stopping problem (the problem of exercising an American option)
\[
\begin{aligned}
&\text{maximize} && E\sum_{t=0}^T R_t \Delta x_t \quad \text{over } x \in \mathcal{N} \\
&\text{subject to} && x_t \in \{0, 1\}\ \forall t, \quad x_T = 1 \text{ a.s.}
\end{aligned}
\]
It can be shown (not part of the exercise) that there is always an optimal solution $x$ of the convex relaxation that satisfies $x_t \in \{0, 1\}$, i.e., it is also a solution of the optimal stopping problem. Write down the functions $k_t$ and solve the Bellman equations under the (nonconvex) constraint $x_t \in \{0, 1\}$ for all $t$. What is the optimal solution?

Exercise. Let $(S_t)_{t=0}^T$ be an adapted process (the price process) and consider
\[ \text{minimize} \quad E\exp\Big(v_0 + \sum_{t=0}^{T-1} z_t \cdot \Delta S_{t+1}\Big) \quad \text{over } z \in \mathcal{N}. \]
This is the exponential utility maximization problem for terminal wealth (with obvious changes of signs). Write the problem in the stochastic control format with $X$ being the wealth process, $U$ the process describing the amounts of money invested in the assets $S$ (i.e., $U^i = z^i S^i$) and $W_t := S_t/S_{t-1}$ being the return process. When the $(W_t)$ are i.i.d., what can you say about the optimal $(U_t)$?
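For the first exercise, the following sketch can be used to check a hand computation. It assumes the standard Snell-envelope form $V_t = \max(R_t, E_t V_{t+1})$ of the Bellman equation under the constraint $x_t \in \{0,1\}$, evaluated on the recombining binomial tree.

```python
# Optimal stopping recursion for the exercise's binomial model:
# T = 4, s_0 = 1, up factor 2 w.p. 1/3, down factor 1/2 w.p. 2/3, K = 2,
# reward R_t = (K - s_t)^+.  V_t(s) = max(R_t(s), E[V_{t+1}(s * eta)]).
T, K, p_up = 4, 2.0, 1.0 / 3.0

def s(t, j):                      # price after j up-moves out of t steps
    return 2.0 ** j * 0.5 ** (t - j)

V = [max(K - s(T, j), 0.0) for j in range(T + 1)]   # V_T = R_T
for t in range(T - 1, -1, -1):
    cont = [p_up * V[j + 1] + (1 - p_up) * V[j] for j in range(t + 1)]
    V = [max(max(K - s(t, j), 0.0), cont[j]) for j in range(t + 1)]
print("value:", V[0])  # optimal expected payoff from the initial node
```

Since $s$ is a martingale here and $s \mapsto (K - s)^+$ is convex, the continuation value dominates immediate exercise, which the printed value reflects.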

Consider Example 7.6 and assume that $U$ is the portfolio process (in terms of wealth invested), $W$ the return process, $X$ the wealth process,
\[ L_t(X_t, U_t, W_t) = \delta_{\{0\}}\Big(\sum_{j \in J} U^j - X_t\Big), \qquad f_t(X_t, U_t, W_{t+1}) = W_{t+1} \cdot U_t, \]
and $K$ a loss function (a negative utility). We get
\[ v_t(X) = \inf_U\Big\{Ev_{t+1}(W_{t+1} \cdot U) \;\Big|\; \sum_{j \in J} U^j = X\Big\}. \tag{7.6} \]
Using the constraint to substitute out $U^0$, the recursion becomes
\[ v_t(X) = \inf_U Ev_{t+1}\Big(W^0_{t+1}X + \sum_{j \in J\setminus\{0\}}(W^j_{t+1} - W^0_{t+1})U^j\Big). \]
If $W^0 \equiv 1$ and $v_{t+1}(X) = c_{t+1}e^{-X}$, this becomes
\[ v_t(X) = c_{t+1}e^{-X}\inf_U E\exp\Big(-\sum_{j \in J\setminus\{0\}}(W^j_{t+1} - 1)U^j\Big). \]
If $W$ is an i.i.d. sequence, the last term as well as the argmin are constants, which means that it is optimal to invest constant amounts of cash in the risky assets.

The rest of this section is devoted to the study of the linearity condition in Theorem 7.3. Recall that a $\mathcal{G}$-measurable selector of an $\mathbb{R}^n$-valued set-valued mapping $C$ is a $\mathcal{G}$-measurable function $x$ such that $x(\omega) \in C(\omega)$ almost surely.

Lemma 7.7. Let $\mathcal{G} \subseteq \mathcal{F}$ be a sigma-algebra and assume that $h$ is an $\mathcal{F}$-normal integrand with an integrable lower bound. If there is an $\bar x \in L^0(\Omega, \mathcal{G}, P; \mathbb{R}^n)$ such that $Eh(\bar x(\omega), \omega)$ is finite, then $(E^{\mathcal{G}}h)^\infty = E^{\mathcal{G}}h^\infty$ and the level sets
\[ \operatorname{lev}_{\le 0} h^\infty(\omega) = \{x \in \mathbb{R}^n \mid h^\infty(x, \omega) \le 0\}, \qquad \operatorname{lev}_{\le 0}(E^{\mathcal{G}}h)^\infty(\omega) = \{x \in \mathbb{R}^n \mid (E^{\mathcal{G}}h)^\infty(x, \omega) \le 0\} \]
have the same $\mathcal{G}$-measurable selectors.

Proof. We know that $h^\infty$ is a well-defined $\mathcal{F}$-normal integrand. Moreover, the lower bound on $h$ implies that $h^\infty$ is nonnegative. By Lemma 7.1, $E^{\mathcal{G}}h$ and $E^{\mathcal{G}}h^\infty$ are thus well defined. To show that the latter is the recession function of the former, let $x \in L^0(\Omega, \mathcal{G}, P; \mathbb{R}^n)$ and $A \in \mathcal{G}$. We know that
\[ \lambda \mapsto \frac{h(\bar x(\omega) + \lambda x(\omega), \omega) - h(\bar x(\omega), \omega)}{\lambda} \]
is increasing for every $\omega$. The lower bound on $h$ and the integrability of $h(\bar x(\cdot), \cdot)$ thus imply that, for $\lambda \ge 1$, the quotients are minorized by a fixed integrable function. The monotone convergence theorem then gives, for every $A \in \mathcal{G}$,
\[
\begin{aligned}
E[1_A h^\infty(x)] &= E\big[1_A \lim_{\lambda \to \infty}(h(\bar x + \lambda x) - h(\bar x))/\lambda\big] \\
&= \lim_{\lambda \to \infty} E\big[1_A(h(\bar x + \lambda x) - h(\bar x))/\lambda\big] \\
&= \lim_{\lambda \to \infty} E\big[1_A((E^{\mathcal{G}}h)(\bar x + \lambda x) - (E^{\mathcal{G}}h)(\bar x))/\lambda\big] \\
&= E\big[1_A \lim_{\lambda \to \infty}((E^{\mathcal{G}}h)(\bar x + \lambda x) - (E^{\mathcal{G}}h)(\bar x))/\lambda\big] \\
&= E[1_A(E^{\mathcal{G}}h)^\infty(x)],
\end{aligned}
\]
which means that $(E^{\mathcal{G}}h)^\infty$ is the conditional expectation of $h^\infty$.

To prove the last claim, let $x \in L^0(\Omega, \mathcal{G}, P; \mathbb{R}^n)$. By the first claim and the definition of a conditional integrand,
\[ (E^{\mathcal{G}}h)^\infty(x(\cdot), \cdot) = E^{\mathcal{G}}[h^\infty(x(\cdot), \cdot)]. \]
Since $h^\infty \ge 0$, we have $h^\infty(x(\omega), \omega) \le 0$ almost surely if and only if $E^{\mathcal{G}}[h^\infty(x(\cdot), \cdot)] \le 0$ almost surely.

The following result shows that the linearity condition of Theorem 7.3 can be stated in terms of the original normal integrand $h$ directly. In the proof, we will denote the set of $\mathcal{G}$-measurable selectors of a set-valued mapping $C$ by $L^0(\mathcal{G}; C)$. We will also use the fact that if $C$ is closed-valued and $\mathcal{G}$-measurable, then it is almost surely linear-valued if and only if the set of its measurable selectors is a linear space. This follows by considering a Castaing representation of $C$.

Lemma 7.8. Assume that $h$ has an integrable lower bound and that $Eh(\bar x(\omega), \omega) < \infty$ for some $\bar x \in \mathcal{N}$. Then $h_t$ is well defined and $N_t$ is linear-valued for $t = T, \dots, 0$ if and only if
\[ L := \{x \in \mathcal{N} \mid h^\infty(x(\omega), \omega) \le 0 \text{ a.s.}\} \]
is a linear space. If $x \in L$ is such that $x^{t-1} = 0$, then $x_t \in N_t$ almost surely.

Proof. Redefining $h(x, \omega) := h(x + \bar x(\omega), \omega)$, we may assume that $\bar x = 0$. Indeed, such a translation amounts to translating the functions $h_t$ and $\tilde h_t$ accordingly and it does not affect the recession functions $h_t^\infty$ and $\tilde h_t^\infty$.

We proceed by induction on $T$. When $T = 0$, Lemma 7.7 gives
\[ L = \{x \in \mathcal{N} \mid h_T^\infty(x(\omega), \omega) \le 0 \text{ a.s.}\} = L^0(\mathcal{F}_T; N_T). \]
Since $N_T$ is $\mathcal{F}_T$-measurable, the linearity of $L$ is equivalent to $N_T$ being linear-valued.

Let now $T$ be arbitrary and assume that the claim holds for every $(T-1)$-period model. If $L$ is linear, then
\[ L' := \{x \in \mathcal{N} \mid x_0 = 0,\ h^\infty(x(\omega), \omega) \le 0 \text{ a.s.}\} \]
is linear as well. Applying the induction hypothesis to the $(T-1)$-period model obtained by fixing $x_0 \equiv 0$, we get that $N_t$ is linear-valued for $t = T, \dots, 1$. Applying Lemmas 7.1 and 7.2 backwards for $s = T, \dots, 1$, we then see that $h_0$ is well defined. Lemmas 7.7 and 7.2 give
\[
\begin{aligned}
L^0(\mathcal{F}_0; N_0) &= \{x_0 \in L^0(\mathcal{F}_0) \mid h_0^\infty(x_0(\omega), \omega) \le 0 \text{ a.s.}\} \\
&= \{x_0 \in L^0(\mathcal{F}_0) \mid \tilde h_0^\infty(x_0(\omega), \omega) \le 0 \text{ a.s.}\} \\
&= \{x_0 \in L^0(\mathcal{F}_0) \mid \inf_{x_1} h_1^\infty(x_0(\omega), x_1, \omega) \le 0 \text{ a.s.}\} \\
&= \{x_0 \in L^0(\mathcal{F}_0) \mid \exists \bar x \in \mathcal{N} : \bar x_0 = x_0,\ h_1^\infty(\bar x^1(\omega), \omega) \le 0 \text{ a.s.}\},
\end{aligned}
\]
where the last equality follows by applying the last part of Lemma 7.2 to the normal integrand $h^\infty$. Repeating the argument for $t = 1, \dots, T$, we get
\[
\begin{aligned}
L^0(\mathcal{F}_0; N_0) &= \{x_0 \in L^0(\mathcal{F}_0) \mid \exists \bar x \in \mathcal{N} : \bar x_0 = x_0,\ h_T^\infty(\bar x(\omega), \omega) \le 0 \text{ a.s.}\} \\
&= \{x_0 \in L^0(\mathcal{F}_0) \mid \exists \bar x \in \mathcal{N} : \bar x_0 = x_0,\ h^\infty(\bar x(\omega), \omega) \le 0 \text{ a.s.}\} \\
&= \{x_0 \in L^0(\mathcal{F}_0) \mid \exists \bar x \in L : \bar x_0 = x_0\}. 
\end{aligned} \tag{7.7}
\]
The linearity of $L$ thus implies that of $L^0(\mathcal{F}_0; N_0)$, which is equivalent to $N_0$ being linear-valued.


More information

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version

Convex Optimization Theory. Chapter 5 Exercises and Solutions: Extended Version Convex Optimization Theory Chapter 5 Exercises and Solutions: Extended Version Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts http://www.athenasc.com

More information

Optimality Conditions for Nonsmooth Convex Optimization

Optimality Conditions for Nonsmooth Convex Optimization Optimality Conditions for Nonsmooth Convex Optimization Sangkyun Lee Oct 22, 2014 Let us consider a convex function f : R n R, where R is the extended real field, R := R {, + }, which is proper (f never

More information

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE

LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE LECTURE 25: REVIEW/EPILOGUE LECTURE OUTLINE CONVEX ANALYSIS AND DUALITY Basic concepts of convex analysis Basic concepts of convex optimization Geometric duality framework - MC/MC Constrained optimization

More information

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε

(convex combination!). Use convexity of f and multiply by the common denominator to get. Interchanging the role of x and y, we obtain that f is ( 2M ε 1. Continuity of convex functions in normed spaces In this chapter, we consider continuity properties of real-valued convex functions defined on open convex sets in normed spaces. Recall that every infinitedimensional

More information

Constraint qualifications for convex inequality systems with applications in constrained optimization

Constraint qualifications for convex inequality systems with applications in constrained optimization Constraint qualifications for convex inequality systems with applications in constrained optimization Chong Li, K. F. Ng and T. K. Pong Abstract. For an inequality system defined by an infinite family

More information

Chapter 2: Preliminaries and elements of convex analysis

Chapter 2: Preliminaries and elements of convex analysis Chapter 2: Preliminaries and elements of convex analysis Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-14-15.shtml Academic year 2014-15

More information

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping.

Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. Minimization Contents: 1. Minimization. 2. The theorem of Lions-Stampacchia for variational inequalities. 3. Γ -Convergence. 4. Duality mapping. 1 Minimization A Topological Result. Let S be a topological

More information

CHAPTER I THE RIESZ REPRESENTATION THEOREM

CHAPTER I THE RIESZ REPRESENTATION THEOREM CHAPTER I THE RIESZ REPRESENTATION THEOREM We begin our study by identifying certain special kinds of linear functionals on certain special vector spaces of functions. We describe these linear functionals

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING

UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING UTILITY OPTIMIZATION IN A FINITE SCENARIO SETTING J. TEICHMANN Abstract. We introduce the main concepts of duality theory for utility optimization in a setting of finitely many economic scenarios. 1. Utility

More information

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES

EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES EQUIVALENCE OF TOPOLOGIES AND BOREL FIELDS FOR COUNTABLY-HILBERT SPACES JEREMY J. BECNEL Abstract. We examine the main topologies wea, strong, and inductive placed on the dual of a countably-normed space

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Notes on Distributions

Notes on Distributions Notes on Distributions Functional Analysis 1 Locally Convex Spaces Definition 1. A vector space (over R or C) is said to be a topological vector space (TVS) if it is a Hausdorff topological space and the

More information

g 2 (x) (1/3)M 1 = (1/3)(2/3)M.

g 2 (x) (1/3)M 1 = (1/3)(2/3)M. COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is

More information

Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation

Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation Translative Sets and Functions and their Applications to Risk Measure Theory and Nonlinear Separation Andreas H. Hamel Abstract Recently defined concepts such as nonlinear separation functionals due to

More information

Extreme points of compact convex sets

Extreme points of compact convex sets Extreme points of compact convex sets In this chapter, we are going to show that compact convex sets are determined by a proper subset, the set of its extreme points. Let us start with the main definition.

More information

Tools from Lebesgue integration

Tools from Lebesgue integration Tools from Lebesgue integration E.P. van den Ban Fall 2005 Introduction In these notes we describe some of the basic tools from the theory of Lebesgue integration. Definitions and results will be given

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45

Division of the Humanities and Social Sciences. Supergradients. KC Border Fall 2001 v ::15.45 Division of the Humanities and Social Sciences Supergradients KC Border Fall 2001 1 The supergradient of a concave function There is a useful way to characterize the concavity of differentiable functions.

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

REAL AND COMPLEX ANALYSIS

REAL AND COMPLEX ANALYSIS REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any

More information

Real Analysis Notes. Thomas Goller

Real Analysis Notes. Thomas Goller Real Analysis Notes Thomas Goller September 4, 2011 Contents 1 Abstract Measure Spaces 2 1.1 Basic Definitions........................... 2 1.2 Measurable Functions........................ 2 1.3 Integration..............................

More information

SYMBOLIC CONVEX ANALYSIS

SYMBOLIC CONVEX ANALYSIS SYMBOLIC CONVEX ANALYSIS by Chris Hamilton B.Sc., Okanagan University College, 2001 a thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the Department of

More information

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient

Shiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 4 Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 2 4.1. Subgradients definition subgradient calculus duality and optimality conditions Shiqian

More information

USING FUNCTIONAL ANALYSIS AND SOBOLEV SPACES TO SOLVE POISSON S EQUATION

USING FUNCTIONAL ANALYSIS AND SOBOLEV SPACES TO SOLVE POISSON S EQUATION USING FUNCTIONAL ANALYSIS AND SOBOLEV SPACES TO SOLVE POISSON S EQUATION YI WANG Abstract. We study Banach and Hilbert spaces with an eye towards defining weak solutions to elliptic PDE. Using Lax-Milgram

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

BASICS OF CONVEX ANALYSIS

BASICS OF CONVEX ANALYSIS BASICS OF CONVEX ANALYSIS MARKUS GRASMAIR 1. Main Definitions We start with providing the central definitions of convex functions and convex sets. Definition 1. A function f : R n R + } is called convex,

More information

Set, functions and Euclidean space. Seungjin Han

Set, functions and Euclidean space. Seungjin Han Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,

More information

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus

Subgradient. Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes. definition. subgradient calculus 1/41 Subgradient Acknowledgement: this slides is based on Prof. Lieven Vandenberghes lecture notes definition subgradient calculus duality and optimality conditions directional derivative Basic inequality

More information

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University

Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University Lecture Notes in Advanced Calculus 1 (80315) Raz Kupferman Institute of Mathematics The Hebrew University February 7, 2007 2 Contents 1 Metric Spaces 1 1.1 Basic definitions...........................

More information

A NICE PROOF OF FARKAS LEMMA

A NICE PROOF OF FARKAS LEMMA A NICE PROOF OF FARKAS LEMMA DANIEL VICTOR TAUSK Abstract. The goal of this short note is to present a nice proof of Farkas Lemma which states that if C is the convex cone spanned by a finite set and if

More information

The Subdifferential of Convex Deviation Measures and Risk Functions

The Subdifferential of Convex Deviation Measures and Risk Functions The Subdifferential of Convex Deviation Measures and Risk Functions Nicole Lorenz Gert Wanka In this paper we give subdifferential formulas of some convex deviation measures using their conjugate functions

More information

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions

A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions A Unified Analysis of Nonconvex Optimization Duality and Penalty Methods with General Augmenting Functions Angelia Nedić and Asuman Ozdaglar April 16, 2006 Abstract In this paper, we study a unifying framework

More information

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion

Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Self-equilibrated Functions in Dual Vector Spaces: a Boundedness Criterion Michel Théra LACO, UMR-CNRS 6090, Université de Limoges michel.thera@unilim.fr reporting joint work with E. Ernst and M. Volle

More information

Real Analysis, 2nd Edition, G.B.Folland Elements of Functional Analysis

Real Analysis, 2nd Edition, G.B.Folland Elements of Functional Analysis Real Analysis, 2nd Edition, G.B.Folland Chapter 5 Elements of Functional Analysis Yung-Hsiang Huang 5.1 Normed Vector Spaces 1. Note for any x, y X and a, b K, x+y x + y and by ax b y x + b a x. 2. It

More information

Integral Jensen inequality

Integral Jensen inequality Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a

More information

arxiv: v2 [q-fin.mf] 10 May 2018

arxiv: v2 [q-fin.mf] 10 May 2018 Robust Utility Maximization in Discrete-Time Markets with Friction Ariel Neufeld Mario Šikić May 11, 2018 arxiv:1610.09230v2 [q-fin.mf] 10 May 2018 Abstract We study a robust stochastic optimization problem

More information

Introduction to Convex and Quasiconvex Analysis

Introduction to Convex and Quasiconvex Analysis Introduction to Convex and Quasiconvex Analysis J.B.G.Frenk Econometric Institute, Erasmus University, Rotterdam G.Kassay Faculty of Mathematics, Babes Bolyai University, Cluj August 27, 2001 Abstract

More information

Functional Analysis I

Functional Analysis I Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker

More information

Calculus of Variations. Final Examination

Calculus of Variations. Final Examination Université Paris-Saclay M AMS and Optimization January 18th, 018 Calculus of Variations Final Examination Duration : 3h ; all kind of paper documents (notes, books...) are authorized. The total score of

More information

Math 341: Convex Geometry. Xi Chen

Math 341: Convex Geometry. Xi Chen Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry

More information

Lebesgue Integration: A non-rigorous introduction. What is wrong with Riemann integration?

Lebesgue Integration: A non-rigorous introduction. What is wrong with Riemann integration? Lebesgue Integration: A non-rigorous introduction What is wrong with Riemann integration? xample. Let f(x) = { 0 for x Q 1 for x / Q. The upper integral is 1, while the lower integral is 0. Yet, the function

More information

Exercises: Brunn, Minkowski and convex pie

Exercises: Brunn, Minkowski and convex pie Lecture 1 Exercises: Brunn, Minkowski and convex pie Consider the following problem: 1.1 Playing a convex pie Consider the following game with two players - you and me. I am cooking a pie, which should

More information

8. Conjugate functions

8. Conjugate functions L. Vandenberghe EE236C (Spring 2013-14) 8. Conjugate functions closed functions conjugate function 8-1 Closed set a set C is closed if it contains its boundary: x k C, x k x = x C operations that preserve

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION

ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION ON GENERALIZED-CONVEX CONSTRAINED MULTI-OBJECTIVE OPTIMIZATION CHRISTIAN GÜNTHER AND CHRISTIANE TAMMER Abstract. In this paper, we consider multi-objective optimization problems involving not necessarily

More information

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space

PROBLEMS. (b) (Polarization Identity) Show that in any inner product space 1 Professor Carl Cowen Math 54600 Fall 09 PROBLEMS 1. (Geometry in Inner Product Spaces) (a) (Parallelogram Law) Show that in any inner product space x + y 2 + x y 2 = 2( x 2 + y 2 ). (b) (Polarization

More information

Chapter 2 Metric Spaces

Chapter 2 Metric Spaces Chapter 2 Metric Spaces The purpose of this chapter is to present a summary of some basic properties of metric and topological spaces that play an important role in the main body of the book. 2.1 Metrics

More information

Convex Analysis and Economic Theory AY Elementary properties of convex functions

Convex Analysis and Economic Theory AY Elementary properties of convex functions Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory AY 2018 2019 Topic 6: Convex functions I 6.1 Elementary properties of convex functions We may occasionally

More information

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ.

Convexity in R n. The following lemma will be needed in a while. Lemma 1 Let x E, u R n. If τ I(x, u), τ 0, define. f(x + τu) f(x). τ. Convexity in R n Let E be a convex subset of R n. A function f : E (, ] is convex iff f(tx + (1 t)y) (1 t)f(x) + tf(y) x, y E, t [0, 1]. A similar definition holds in any vector space. A topology is needed

More information

Lecture 1: Background on Convex Analysis

Lecture 1: Background on Convex Analysis Lecture 1: Background on Convex Analysis John Duchi PCMI 2016 Outline I Convex sets 1.1 Definitions and examples 2.2 Basic properties 3.3 Projections onto convex sets 4.4 Separating and supporting hyperplanes

More information

The Asymptotic Theory of Transaction Costs

The Asymptotic Theory of Transaction Costs The Asymptotic Theory of Transaction Costs Lecture Notes by Walter Schachermayer Nachdiplom-Vorlesung, ETH Zürich, WS 15/16 1 Models on Finite Probability Spaces In this section we consider a stock price

More information

Lebesgue Integration on R n

Lebesgue Integration on R n Lebesgue Integration on R n The treatment here is based loosely on that of Jones, Lebesgue Integration on Euclidean Space We give an overview from the perspective of a user of the theory Riemann integration

More information

Some Background Math Notes on Limsups, Sets, and Convexity

Some Background Math Notes on Limsups, Sets, and Convexity EE599 STOCHASTIC NETWORK OPTIMIZATION, MICHAEL J. NEELY, FALL 2008 1 Some Background Math Notes on Limsups, Sets, and Convexity I. LIMITS Let f(t) be a real valued function of time. Suppose f(t) converges

More information

Subdifferential representation of convex functions: refinements and applications

Subdifferential representation of convex functions: refinements and applications Subdifferential representation of convex functions: refinements and applications Joël Benoist & Aris Daniilidis Abstract Every lower semicontinuous convex function can be represented through its subdifferential

More information

Lectures on Analysis John Roe

Lectures on Analysis John Roe Lectures on Analysis John Roe 2005 2009 1 Lecture 1 About Functional Analysis The key objects of study in functional analysis are various kinds of topological vector spaces. The simplest of these are the

More information

Nonlinear Programming 3rd Edition. Theoretical Solutions Manual Chapter 6

Nonlinear Programming 3rd Edition. Theoretical Solutions Manual Chapter 6 Nonlinear Programming 3rd Edition Theoretical Solutions Manual Chapter 6 Dimitri P. Bertsekas Massachusetts Institute of Technology Athena Scientific, Belmont, Massachusetts 1 NOTE This manual contains

More information

2. Function spaces and approximation

2. Function spaces and approximation 2.1 2. Function spaces and approximation 2.1. The space of test functions. Notation and prerequisites are collected in Appendix A. Let Ω be an open subset of R n. The space C0 (Ω), consisting of the C

More information

Math 209B Homework 2

Math 209B Homework 2 Math 29B Homework 2 Edward Burkard Note: All vector spaces are over the field F = R or C 4.6. Two Compactness Theorems. 4. Point Set Topology Exercise 6 The product of countably many sequentally compact

More information

Lectures on Geometry

Lectures on Geometry January 4, 2001 Lectures on Geometry Christer O. Kiselman Contents: 1. Introduction 2. Closure operators and Galois correspondences 3. Convex sets and functions 4. Infimal convolution 5. Convex duality:

More information

Key words. Calculus of variations, convex duality, Hamiltonian conditions, impulse control

Key words. Calculus of variations, convex duality, Hamiltonian conditions, impulse control DUALITY IN CONVEX PROBLEMS OF BOLZA OVER FUNCTIONS OF BOUNDED VARIATION TEEMU PENNANEN AND ARI-PEKKA PERKKIÖ Abstract. This paper studies fully convex problems of Bolza in the conjugate duality framework

More information

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION

THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION THE UNIQUE MINIMAL DUAL REPRESENTATION OF A CONVEX FUNCTION HALUK ERGIN AND TODD SARVER Abstract. Suppose (i) X is a separable Banach space, (ii) C is a convex subset of X that is a Baire space (when endowed

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

3 Integration and Expectation

3 Integration and Expectation 3 Integration and Expectation 3.1 Construction of the Lebesgue Integral Let (, F, µ) be a measure space (not necessarily a probability space). Our objective will be to define the Lebesgue integral R fdµ

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

Victoria Martín-Márquez

Victoria Martín-Márquez A NEW APPROACH FOR THE CONVEX FEASIBILITY PROBLEM VIA MONOTROPIC PROGRAMMING Victoria Martín-Márquez Dep. of Mathematical Analysis University of Seville Spain XIII Encuentro Red de Análisis Funcional y

More information

3.1 Convexity notions for functions and basic properties

3.1 Convexity notions for functions and basic properties 3.1 Convexity notions for functions and basic properties We start the chapter with the basic definition of a convex function. Definition 3.1.1 (Convex function) A function f : E R is said to be convex

More information

s P = f(ξ n )(x i x i 1 ). i=1

s P = f(ξ n )(x i x i 1 ). i=1 Compactness and total boundedness via nets The aim of this chapter is to define the notion of a net (generalized sequence) and to characterize compactness and total boundedness by this important topological

More information

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

2 (Bonus). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due 9/5). Prove that every countable set A is measurable and µ(a) = 0. 2 (Bonus). Let A consist of points (x, y) such that either x or y is

More information

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries

Chapter 1. Measure Spaces. 1.1 Algebras and σ algebras of sets Notation and preliminaries Chapter 1 Measure Spaces 1.1 Algebras and σ algebras of sets 1.1.1 Notation and preliminaries We shall denote by X a nonempty set, by P(X) the set of all parts (i.e., subsets) of X, and by the empty set.

More information

In English, this means that if we travel on a straight line between any two points in C, then we never leave C.

In English, this means that if we travel on a straight line between any two points in C, then we never leave C. Convex sets In this section, we will be introduced to some of the mathematical fundamentals of convex sets. In order to motivate some of the definitions, we will look at the closest point problem from

More information

Convex Analysis Background

Convex Analysis Background Convex Analysis Background John C. Duchi Stanford University Park City Mathematics Institute 206 Abstract In this set of notes, we will outline several standard facts from convex analysis, the study of

More information

Subdifferentiability and the Duality Gap

Subdifferentiability and the Duality Gap Subdifferentiability and the Duality Gap Neil E. Gretsky (neg@math.ucr.edu) Department of Mathematics, University of California, Riverside Joseph M. Ostroy (ostroy@econ.ucla.edu) Department of Economics,

More information

The local equicontinuity of a maximal monotone operator

The local equicontinuity of a maximal monotone operator arxiv:1410.3328v2 [math.fa] 3 Nov 2014 The local equicontinuity of a maximal monotone operator M.D. Voisei Abstract The local equicontinuity of an operator T : X X with proper Fitzpatrick function ϕ T

More information