Bounds for the stop loss premium for unbounded risks under the variance constraints


V. Bentkus [1,2]

Abstract. Let $f$ be a function with convex derivative $f'$. We consider the maximization problem $\max Ef(X)$ assuming that $X$ satisfies the stochastic boundedness condition $X \le_{st} Z$ with some $Z$ with known distribution, say $\nu = \mathcal{L}(Z)$, as well as that the mean $m = EX$ and the variance $\sigma^2 = \operatorname{var} X$ are known. It turns out that the maximizers are independent of the function $f$ and can be constructively described in terms of $m$, $\sigma^2$, $\nu$. In particular, the description of the maximizers allows us to find the maximal stop loss premium $E(X-h)_+^2$. Similar results hold for symmetric $X$. Using the results, one can consider stop loss premiums for aggregated risks of the form $X_1 + \cdots + X_n$ assuming that the sum has a super-martingale structure. The basic result can be applied to derive tight bounds for the tail probabilities $P\{X_1 + \cdots + X_n \ge x\}$, which generalize and improve the classical inequalities for sums of bounded random variables.

Key words and phrases: bounds for stop loss premium, unbounded risks, heavy tails, bounds for tail probabilities, variance, convex ordering, s-convex functions and orderings, Hoeffding inequalities.

A.M.S. classification (1991): 60E15.

1 Introduction

We are interested in finding upper bounds for the stop loss premium
$$ \pi_\alpha(X, h) = E(X - h)_+^\alpha, \qquad \alpha \ge 0, $$

[1] Department of Probability Theory and Statistics, Institute of Mathematics and Informatics, Akademijos str. 4, LT Vilnius, Lithuania.
[2] The research was partially supported by CRC 701 "Spectral Structures and Topological Methods in Mathematics", Bielefeld University.

where the risk $X$ is interpreted as a random variable and $h \in \mathbb{R}$ is a non-random retention limit. Here $z_+ = \max\{0, z\}$ and $z_+^\alpha = (z_+)^\alpha$. We define $\pi_0(X,h) = P\{X \ge h\}$. In this paper we consider only the case $\alpha \ge 1$.

There is a great number of papers where this question was addressed; let us mention Pinelis 1998, 1999, 2006, 2007a,b and Bentkus 2002, 2004, 2008a,b, 2009 for results with a probabilistic background, where the most complicated case $\alpha = 0$ is of primary interest. In these papers results with $\alpha \ge 1$ are used to derive bounds in the case $\alpha = 0$. In finance and insurance mathematics bounds for the stop loss premium (with $\alpha \ge 1$) were obtained by De Vylder and Goovaerts 1983, Kaas and Goovaerts 1986, Jansen et al. 1986, Denuit et al. 2005, etc. See Hürlimann 2008 for a rather complete review of the related results.

Often the problem is stated as follows. Assume that $X$ is bounded, $a \le X \le b$ with some known non-random $a < b$. Assume further that certain moments of $X$ are known, for example its mean $m = EX$ and/or the variance $\operatorname{var} X$, etc. Maximize the stop loss premium $\pi_\alpha(X,h)$. In the case $\alpha \ge 1$ the maximizers usually have discrete distributions with several non-zero atoms.

Such a formulation of the problem has some not always desirable features. First of all, the maximizers, having discrete distributions with several non-zero atoms, constitute a class with a quite poor structure, which can make it difficult to construct sufficiently flexible models for simulating financial or economic phenomena. Secondly, it is difficult to integrate into such models cases where $X$ is unbounded and has heavy tails.

In this article we continue to study a class of models able to include the case of unbounded and heavy tailed $X$ (see B 2008a,b, 2009, where the simpler case of bounds without variance constraints is considered). The models seem to be sufficiently flexible since they are parameterized by moments and certain distribution functions.
A nice feature of the models is that as a particular case they include the classical case $a \le X \le b$ of bounded $X$. Finally, the models are solvable in the sense that we can provide a constructive description of the maximizers, and hence of the maximal stop loss premium.

Another motivation leading to the models is to obtain sufficiently sharp inequalities for tail probabilities of sums of unbounded random variables, comparable in quality with the classical inequalities for bounded random variables (see, for example, Hoeffding 1963) or even with the inequalities in Pinelis 1998, 1999, 2006, 2007a,b and B 2002, 2004, 2008a,b. While proving inequalities for tail probabilities of sums of random variables it is of interest to maximize the expectation $Ef(S_n)$, where $f$ runs over a given family of functions, and the summands $X_k$ of the sum $S_n = X_1 + \cdots + X_n$ satisfy some moment, (stochastic) boundedness, and structure (like symmetry, unimodality, etc.) restrictions. The families $f_h(t) = \exp\{ht\}$, $h > 0$, and $f_h(t) = (t-h)_+^\alpha$, $h \in \mathbb{R}$, with $\alpha \ge 0$ are of interest (see Hoeffding 1963, Talagrand 1995, Pinelis, B, BKZ 2006 among others). An important aspect is that the maximizers have to be independent of $f$ from the family under consideration (in the case of the stop loss premium, independent of the retention limit), like in Theorems 1, 5, 8, 10, 11 below. This is the reason why under the variance constraints we have to assume that $\alpha \ge 2$.

One can interpret the stochastic boundedness conditions as a natural extension of the standard boundedness conditions. To see this, let us reformulate the boundedness condition $a \le X \le b$ as follows. Let $Y$, respectively $Z$, be a random variable such that $P\{Y = a\} = 1$, respectively $P\{Z = b\} = 1$. Then we can represent the condition $a \le X \le b$ as
$$ Y \le_{st} X \le_{st} Z, \tag{1} $$
where $Y \le_{st} X$ means that $X$ dominates $Y$ in the stochastic order, that is, $P\{Y \ge x\} \le P\{X \ge x\}$ for all $x \in \mathbb{R}$.

Let us briefly describe results from B 2009 obtained without the variance constraints. Let $f$ be a convex function. Consider the maximization problem
$$ \max Ef(X) \tag{2} $$
assuming that $X$ satisfies the stochastic boundedness condition (1) with some $Y$ and $Z$ with known distributions, say $\mu = \mathcal{L}(Y)$ and $\nu = \mathcal{L}(Z)$, as well as that the mean $m = EX$ is known. It turns out that the maximizers are independent of the function $f$ and can be constructively described in terms of $m$, $\mu$, $\nu$. In particular, the description of the maximizers allows us to find the maximal stop loss premium. Using this information, one can obtain bounds for the stop loss premiums for aggregated risks of the form $X_1 + \cdots + X_n$ assuming that the sum has a super-martingale structure. The bounds imply tight bounds for the tail probabilities $P\{X_1 + \cdots + X_n \ge x\}$, which generalize and improve the classical inequalities (see, for example, Hoeffding 1963), and extend to the case of unbounded random variables the inequalities from B 2002, 2004, 2008a,b. The approach in B 2009 is related to that in B 2008a,b, where the special case $0 \le X \le_{st} Z$ was considered.

In this paper we consider counterparts of (2) under additional variance (respectively variance and symmetry) constraints for some classes of $s$-convex functions $f$. It turns out that the two-sided stochastic boundedness condition $Y \le_{st} X \le_{st} Z$ can be relaxed to $X \le_{st} Z$.

Henceforth numbers $p, q \ge 0$ satisfy $p + q = 1$. We say positive, increasing, etc., instead of non-negative, non-decreasing, etc.
In cases where the strict inequalities hold, we say strictly positive, etc. We write $x_+ = \max\{0, x\}$ and $x_+^s = (x_+)^s$. Furthermore, $\int$ stands for $\int_{\mathbb{R}}$. By $\delta_a$ we denote the Dirac measure concentrated at a point $a$, and $\delta = \delta_0$. We assume that all functions and sets under consideration are measurable. We write $I\{A\}$ for the indicator function of an event $A$.

Let us pass to a description of the stochastic boundedness condition (1) and of the maximizers in the optimization problem (2). Assume that $Y \le_{st} Z$. Then the set $D = \{X : Y \le_{st} X \le_{st} Z\}$ is nonempty since, for example, $Z \in D$. Assuming that $E|Y| < \infty$, $E|Z| < \infty$,

we write
$$ M = EY, \qquad N = EZ. $$
It is clear that $M \le N$.

We have to introduce several definitions related to the distributions $\mu = \mathcal{L}(Y)$ and $\nu = \mathcal{L}(Z)$. With $0 \le p \le 1$ and $\nu = \mathcal{L}(Z)$ we associate a positive measure, say $\nu^{[p]}$, having total mass $p = \nu^{[p]}(\mathbb{R})$ and such that $\nu$ and $\nu^{[p]}$ coincide in a maximal possible neighborhood of $+\infty$. If $G_\nu(t) = \nu([t,\infty))$ is the survival function of $\nu$, then we define the survival function $G_\nu^{[p]}(t) = \nu^{[p]}([t,\infty))$ setting $G_\nu^{[p]}(t) = \min\{p, G_\nu(t)\}$. In a more detailed way, if there exists $b$ such that $\nu(I) = p$, where $I = [b,\infty)$ or $I = (b,\infty)$, then we define $\nu^{[p]}$ setting $\nu^{[p]}(A) = \nu(I \cap A)$ for (measurable) $A \subset \mathbb{R}$. If such $b$ does not exist, then there exists $b$ such that $\nu((b,\infty)) < p < \nu([b,\infty))$, which means that $\nu$ has an atom at the point $b$. Then we define $\nu^{[p]}(A) = \varepsilon\nu(A \cap \{b\}) + \nu(A \cap (b,\infty))$, choosing $\varepsilon$ such that $\nu^{[p]}(\mathbb{R}) = p$. Note that in general the point $b$ from the definition of $\nu^{[p]}$ is not defined in a unique way.

With $0 \le q \le 1$ and $\mu = \mathcal{L}(Y)$ we associate a positive measure, say $\mu^{\{q\}}$, having total mass $q = \mu^{\{q\}}(\mathbb{R})$ and such that $\mu$ and $\mu^{\{q\}}$ coincide in a maximal possible neighborhood of $-\infty$. The definition is similar to that of $\nu^{[p]}$, exchanging $+\infty$ by $-\infty$. If $G_\mu(t) = \mu([t,\infty))$ is the survival function of $\mu$, then we define the survival function $G_\mu^{\{q\}}(t) = \mu^{\{q\}}([t,\infty))$ setting $G_\mu^{\{q\}}(t) = \max\{0, G_\mu(t) - p\}$. For example, if there exists $a$ such that $\mu(I) = q$, where $I = (-\infty, a)$ or $I = (-\infty, a]$, then we define $\mu^{\{q\}}$ setting $\mu^{\{q\}}(A) = \mu(I \cap A)$ for $A \subset \mathbb{R}$. If such $a$ does not exist, then we correct the definition of $\mu^{\{q\}}$ by splitting the atom of $\mu$ at the point $a$. Note that in general $a$ is not defined in a unique way.

Recalling that $p + q = 1$, consider the probability distribution $\lambda^{[p]} = \mu^{\{q\}} + \nu^{[p]}$. Note that the survival function $G_{\lambda^{[p]}}(t) = \lambda^{[p]}([t,\infty))$ of $\lambda^{[p]}$ is given by $G_{\lambda^{[p]}} = G_\mu^{\{q\}} + G_\nu^{[p]}$. Furthermore, the relations
$$ G_{\lambda^{[p]}} = \max\big\{G_\mu, \min\{p, G_\nu\}\big\} = \min\big\{G_\nu, \max\{p, G_\mu\}\big\} $$
are obvious. One can check that the function
$$ p \mapsto m_p \stackrel{\mathrm{def}}{=} \int t\,\lambda^{[p]}(dt) $$

is a continuous increasing function of $0 \le p \le 1$ such that $m_0 = M$ and $m_1 = N$. Hence, for any $m \in [M, N]$ there exists $p = p_m$ such that $m = m_p$. By $\eta = \eta(m, \mu, \nu)$ we denote a random variable having the distribution $\mathcal{L}(\eta) = \lambda^{[p_m]}$. It is clear that $\eta \in D_m$, where $D_m = \{X \in D : EX = m\}$. Note that in general $p = p_m$, $a = a_m$ and $b = b_m$ are not defined in a unique way. Since their concrete definition is immaterial for our purposes, one can define them in an arbitrary convenient way. Henceforth we write $p, q, a, b$ instead of $p_m, q_m, a_m, b_m$.

The following bound for the stop loss premium, under an additional condition of technical type, was obtained in B 2009.

Theorem 1. Assume that $E|f(Y)| < \infty$ and $E|f(Z)| < \infty$. If $f$ is convex, then
$$ \max_{X \in D_m} Ef(X) = Ef(\eta), \tag{3} $$
where $\eta = \eta(m, \mu, \nu)$ is the random variable defined above, and $D_m = \{X \in D : EX = m\}$. If $f$ is convex and increasing, then
$$ \max_{t \le m}\, \max_{X \in D_t} Ef(X) = Ef(\eta). \tag{4} $$

Proofs of all non-obvious statements are given in Section 2. Choosing $f(t) = (t-h)_+$, Theorem 1 provides a sharp upper bound for the stop loss premium $E(X-h)_+$ for $X \in D_t$ with $t \le m$.

Example 2. To make the definitions more transparent, we assume for a while that the distributions of $Y$ and $Z$ have densities, say $u$ and $v$, such that: $u$ is a strictly increasing (and therefore strictly positive) continuous function on an interval $(-\infty, A]$, and $u(t) = 0$ for $t > A$; the function $v$ is a strictly decreasing (and therefore strictly positive) continuous function on an interval $[B, \infty)$, where $B \ge A$, and $v(t) = 0$ for $t < B$. Then, for given $m$, the probability distribution $\lambda^{[p]}$ of $\eta = \eta(m, \mu, \nu)$ is absolutely continuous and has the density
$$ t \mapsto u(t)\,I\{t \le a\} + v(t)\,I\{t \ge b\}. $$
The related numbers $p, q, a, b$ are defined in a unique way via the equations
$$ p + q = 1, \qquad q = \int_{-\infty}^{a} u(t)\,dt, \qquad p = \int_{b}^{\infty} v(t)\,dt, \tag{5} $$
and
$$ m = \int_{-\infty}^{a} t\,u(t)\,dt + \int_{b}^{\infty} t\,v(t)\,dt. \tag{6} $$
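The construction of $\nu^{[p]}$, taking mass $p$ from the right tail of $\nu$ and splitting an atom at the cut point $b$ when necessary, can be sketched for a discrete $\nu$ as follows; this is a minimal illustration, and the atoms and masses used are arbitrary, not taken from the paper.

```python
# Sketch of the upper-truncation measure nu^[p] for a discrete nu: keep total
# mass p, agreeing with nu on a maximal neighborhood of +infinity, and split
# the atom at the cut point b when the tail masses do not sum exactly to p.

def upper_truncation(atoms, masses, p):
    """Return (atoms', masses') of nu^[p]: mass p taken from the right tail."""
    pairs = sorted(zip(atoms, masses), key=lambda t: -t[0])  # scan downwards
    out, remaining = [], p
    for x, w in pairs:
        take = min(w, remaining)       # splits the atom at b when w > remaining
        if take > 0:
            out.append((x, take))
            remaining -= take
        if remaining <= 0:
            break
    out.reverse()
    return [x for x, _ in out], [w for _, w in out]

# nu = 0.5*delta_0 + 0.3*delta_1 + 0.2*delta_3; take tail mass p = 0.35,
# so the atom at b = 1 must be split (only 0.15 of its mass 0.3 is kept)
xs, ws = upper_truncation([0.0, 1.0, 3.0], [0.5, 0.3, 0.2], 0.35)
print(xs, ws)
```

The same routine, applied to $-A$ instead of $A$, produces the lower-truncation measure $\mu^{\{q\}}$.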

Example 3. Another simple example is the case where $-Z \le_{st} X \le_{st} Z$ and $EX = 0$, with some $Z \ge 0$. Note that symmetric $X$ such that $X \le_{st} Z$ satisfy these conditions. Then $\eta$ has the distribution $\mathcal{L}(\eta)(A) = \nu^{[p]}(A) + \nu^{[p]}(-A)$ with $p = 1/2$.

Let us turn to situations where the variance $\operatorname{var} X$ is known. We write $V(m, \sigma^2, \nu)$ for the set of random variables such that
$$ EX = m, \qquad \operatorname{var} X = \sigma^2, \qquad X \le_{st} Z. $$
If instead of $EX = m$ the inequality $EX \le m$ holds, then we write $V(\le m, \sigma^2, \nu)$ instead of $V(m, \sigma^2, \nu)$, etc. Define $\eta = \eta(m, \sigma^2, \nu)$ as a random variable with the distribution $\mathcal{L}(\eta) = q\delta_a + \nu^{[p]}$ such that
$$ \operatorname{var} \eta = \sigma^2, \qquad E\eta = m. \tag{7} $$
To ensure the existence of $\eta$ we impose the conditions
$$ Z \ge 0, \qquad 0 < EZ^2 < \infty, \qquad m \ge 0, \qquad \sigma^2 \ge 0. \tag{8} $$

Lemma 4. Assume that (8) holds. Then the random variable $\eta = \eta(m, \sigma^2, \nu)$ is well defined and $\eta \in V(m, \sigma^2, \nu)$.

Recall that $f$ is $s$-convex if the $(s-2)$-th derivative $f^{(s-2)}$ is convex; see Section 3 for definitions and properties of $s$-convex functions.

Theorem 5. Assume that (8) is fulfilled. Let $X \in V(m, \sigma^2, \nu)$. Then
$$ E(X-h)_+^2 \le E(\eta-h)_+^2 \quad \text{for } h \in \mathbb{R}, \tag{9} $$
where $\eta = \eta(m, \sigma^2, \nu)$. Furthermore, if $f$ is 3-convex (that is, $f'$ is convex), $E|f(Z)| < \infty$, and $E|f(X)| < \infty$, then
$$ Ef(X) \le Ef(\eta). \tag{10} $$

Example 6. Let $b > 0$ be a non-random constant, and $Z = b$. Then (9) and (10) hold for $X \le b$ such that $EX = m \ge 0$ and $\operatorname{var} X = \sigma^2$. The random variable $\eta$ is a Bernoulli random variable which assumes the values $b$ and $a = m - \sigma^2/(b-m)$ with probabilities
$$ p = \frac{\sigma^2}{\sigma^2 + (b-m)^2} \qquad \text{and} \qquad q = \frac{(b-m)^2}{\sigma^2 + (b-m)^2} $$

respectively. As special cases, Theorem 5 yields the classical inequalities for bounded random variables. Let us recall the related known results. Let $m = 0$. For $f(t) = \exp\{ht\}$, $h \ge 0$, inequality (10) was established by Hoeffding 1963. In the case of $f(t) = (t-h)_+^2$ with $h \in \mathbb{R}$, inequality (10) is given in B 2004.

Example 7. Assume that $Z \ge 0$ has a density, say $u$, such that $u$ is a strictly decreasing positive continuous function on $[B, \infty)$ for some $B \ge 0$, and $u(t) = 0$ for $t < B$. Let $\int t^2 u(t)\,dt < \infty$. Then, for given $m \ge 0$ and $\sigma^2 \ge 0$, the parameters $p$, $q = 1-p$, $a$, $b$ from the definition of $\eta$ are defined in a unique way via the system of equations
$$ p = \int_b^\infty u(t)\,dt, \qquad m = aq + \int_b^\infty t\,u(t)\,dt, \qquad (a-m)^2 q + \int_b^\infty (t-m)^2 u(t)\,dt = \sigma^2. $$
For example, in the special case of $m = 0$ and the Pareto distribution $P\{Z \ge t\} = t^{-3}$ for $t \ge 1$, one can express $p = b^{-3}$, $a = -3b/(2b^3 - 2)$, where $b$ is the unique solution of the equation $12b^3 - 3 = 4\sigma^2(b^4 - b)$.

One can apply Theorem 5 in cases where the means and variances of $X$ and $\eta$ coincide. However, often only some upper bounds for $EX$ and $\operatorname{var} X$ are available. The next theorem is applicable in such situations. The price to pay is that one has to restrict somewhat the class of the functions $f$.

Theorem 8. Assume that (8) is fulfilled. Let $X \in V(\le 0, \le \sigma^2, \nu)$. Then
$$ E(X-h)_+^2 \le E(\eta-h)_+^2 \quad \text{for } h \in \mathbb{R}, \tag{11} $$
where $\eta = \eta(0, \sigma^2, \nu)$. If $f$ is convex increasing and 3-convex (that is, $f$ and $f'$ are convex and $f' \ge 0$), $E|f(Z)| < \infty$, and $E|f(X)| < \infty$, then
$$ Ef(X) \le Ef(\eta). \tag{12} $$

Let us pass to the case of symmetric $X$ with known variance. We write $S(\sigma^2, \nu)$ for the set of symmetric random variables such that
$$ \operatorname{var} X = \sigma^2, \qquad X \le_{st} Z. $$
If instead of $\operatorname{var} X = \sigma^2$ the inequality $\operatorname{var} X \le \sigma^2$ holds, then we write $S(\le \sigma^2, \nu)$ instead of $S(\sigma^2, \nu)$. Recalling that $\nu = \mathcal{L}(Z)$, and that the measure $\nu^{[p]}$ is defined above, introduce the measure $\mu^{[p]}$ such that $\mu^{[p]}(A) = \nu^{[p]}(-A)$ for $A \subset \mathbb{R}$. Define $\eta = \eta(\sigma^2, \nu)$ as a random variable with the distribution $\mathcal{L}(\eta) = \mu^{[p]} + r\delta + \nu^{[p]}$ such that
$$ r = 1 - 2p, \qquad \operatorname{var} \eta = \sigma^2, \tag{13} $$

where $\delta$ is the Dirac measure concentrated at 0. To ensure the existence of $\eta$ we impose the conditions
$$ Z \ge 0, \qquad 0 < EZ^2 < \infty, \qquad 0 \le \sigma^2 \le 2\int t^2\,\nu^{[1/2]}(dt). \tag{14} $$

Lemma 9. Assume that (14) holds. Then the symmetric random variable $\eta = \eta(\sigma^2, \nu)$ is well defined and $\eta \in S(\sigma^2, \nu)$.

Theorem 10. Assume that (14) is fulfilled. Let $X \in S(\sigma^2, \nu)$. Then
$$ E(X-h)_+^3 \le E(\eta-h)_+^3 \quad \text{for } h \in \mathbb{R}, \tag{15} $$
where $\eta = \eta(\sigma^2, \nu)$. Furthermore, if $f$ is 4-convex (that is, $f''$ is convex) and $E|f(Z)| < \infty$, then
$$ Ef(X) \le Ef(\eta). \tag{16} $$

Theorem 11. Assume that (14) is fulfilled. Let $X \in S(\le \sigma^2, \nu)$. Then
$$ E(X-h)_+^3 \le E(\eta-h)_+^3 \quad \text{for } h \in \mathbb{R}, \tag{17} $$
where $\eta = \eta(\sigma^2, \nu)$. If $f$ is convex, 4-convex, and $f''$ is increasing (that is, $f''$ is convex, positive and increasing), and $E|f(Z)| < \infty$, then
$$ Ef(X) \le Ef(\eta). \tag{18} $$

2 Proofs

Proof of Theorem 1. In B 2009 this theorem is proved under the additional assumption that there exists $y \in \mathbb{R}$ such that $Y \le y \le Z$. However, an inspection of the proof shows that this assumption is superfluous.

We write
$$ m_p = \int t\,\nu^{[p]}(dt), \qquad \sigma_p^2 = \int t^2\,\nu^{[p]}(dt). \tag{19} $$
Then
$$ \int (t-m)^2\,\nu^{[p]}(dt) = \sigma_p^2 - 2m\,m_p + p\,m^2. \tag{20} $$
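Returning to Example 7, the defining equation $12b^3 - 3 = 4\sigma^2(b^4 - b)$ for the Pareto tail $P\{Z \ge t\} = t^{-3}$ can be solved numerically; the sketch below uses plain bisection and then checks, via the closed-form moment expressions of the example, that the resulting $(p, a)$ reproduce mean $0$ and variance $\sigma^2$. The value of $\sigma^2$ is an arbitrary illustration.

```python
# Example 7 with m = 0 and Pareto tail P{Z >= t} = t^{-3} (t >= 1):
# solve 12 b^3 - 3 = 4 sigma^2 (b^4 - b) for b > 1 by bisection.

def solve_b(sigma2, lo=1.0 + 1e-9, hi=1e6, iters=200):
    f = lambda b: 12 * b**3 - 3 - 4 * sigma2 * (b**4 - b)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

sigma2 = 1.2
b = solve_b(sigma2)
p = b**-3                      # p = int_b^inf 3 t^-4 dt
q = 1 - p
a = -3 * b / (2 * b**3 - 2)    # a = -3b / (2b^3 - 2)
mean = a * q + 1.5 * b**-2     # a q + int_b^inf t * 3 t^-4 dt
var = a * a * q + 3 / b        # a^2 q + int_b^inf t^2 * 3 t^-4 dt
print(b, p, a, mean, var)
```

By construction the mean vanishes for every $b$, while the variance matches $\sigma^2$ only at the root, which is what the bisection enforces.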

Proof of Lemma 4. Since $\eta = \eta(m, \sigma^2, \nu)$ has the distribution $\mathcal{L}(\eta) = q\delta_a + \nu^{[p]}$, we can rewrite the conditions $\operatorname{var}\eta = \sigma^2$ and $E\eta = m$ as
$$ (a-m)^2 q + \sigma_p^2 - 2m_p m + p m^2 = \sigma^2, \qquad aq + m_p = m. \tag{21} $$
It suffices to check that there exist $p$ and $a$ satisfying the equations (21) such that $0 < p \le 1$ and $a \le m$. Using the second equation in (21), we can write $a = (m - m_p)/q$. Introduce the function
$$ u(p) = (a_p - m)^2 q + \int_{\mathbb{R}} (t-m)^2\,\nu^{[p]}(dt), \qquad 0 \le p \le 1, \tag{22} $$
where $a_p = (m - m_p)/q$. Note that $u$ is a continuous positive function of the argument $0 \le p < 1$ such that $u(0) = 0$ and $u(1) = \infty$. Indeed, if $p = 0$, then $q = 1$, $\nu^{[p]} = 0$, $m_p = 0$, $a_p = m$. Inserting these values in (22), we get $u(0) = 0$. If $p \to 1$, then $q \to 0$ and $m_p \to EZ > 0$ due to the assumptions $Z \ge 0$ and $EZ^2 > 0$. Hence, using $a_p = (m - m_p)/q$ and $m \ge 0$, we have
$$ (a_p - m)^2 q = (pm - m_p)^2/q \to \infty. $$

Let us show that the function $p \mapsto u(p)$ is strictly increasing. It is clear that the function $p \mapsto \int (t-m)^2\,\nu^{[p]}(dt)$ is increasing. Note that $(a_p - m)^2 q = (m_p - pm)^2/q$. The function $p \mapsto (m_p - pm)^2$ is increasing and strictly positive for $p > 0$, since we assume that $EZ^2 > 0$ and therefore $m_p > 0$ for $p > 0$. The function $p \mapsto 1/q$ is strictly increasing. We conclude that $u$ is strictly increasing.

Since the function $p \mapsto u(p)$ is a strictly increasing continuous positive function of the argument $0 \le p < 1$ such that $u(0) = 0$ and $u(1) = \infty$, the equation $u(p) = \sigma^2$ has a unique solution $p = p(m, \sigma^2)$. The knowledge of $p$ allows us to define $q = 1-p$ and $m_p$ as in (19). Setting $a = (m - m_p)/q$, we see that $\eta \in V(m, \sigma^2, \nu)$.

Proof of Theorem 5. We have to prove (9) and (10). Recall that $\mathcal{L}(\eta) = q\delta_a + \nu^{[p]}$, and that $b$ is a number such that $\nu((b,\infty)) \le p \le \nu([b,\infty))$. We can assume that $b \ge 0$ since $Z \ge 0$. The number $a$ satisfies $a \le m$, see the proof of Lemma 4. Both $X$ and $\eta$ have equal means $m$ and variances $\sigma^2$. If $\sigma^2 = 0$, then $\eta = m$, $X = m$, and the assertion of the theorem clearly holds. Therefore henceforth we assume that $\sigma^2 > 0$.

Let us prove (9), that is, that
$$ E(X-h)_+^2 \le E(\eta-h)_+^2 \quad \text{for } h \in \mathbb{R}. \tag{23} $$

We consider the cases i) $h \le a$, ii) $a < h < b$, iii) $h \ge b$ separately.

i) Using the obvious inequality $(t-h)_+^2 \le (t-h)^2$, and noting that $E(\eta-h)^2 = E(\eta-h)_+^2$ since $P\{\eta \in \{a\} \cup [b,\infty)\} = 1$ and $h \le a$, we have
$$ E(X-h)_+^2 \le E(X-h)^2 = \sigma^2 + (m-h)^2 = E(\eta-h)_+^2. $$

ii) Now $a < h < b$. The function
$$ u(t) = \max\{c(t-a)^2, (t-h)_+^2\}, \qquad c = (b-h)^2/(b-a)^2, $$
satisfies $(t-h)_+^2 \le u(t)$. It is elementary to check that $u(t) = c(t-a)^2$ for $t \le b$, and that $u(t) = (t-h)_+^2$ for $t \ge b$. Hence, we can write
$$ u(t) = c(t-a)^2 + v(t), \qquad v(t) = \big((t-h)^2 - c(t-a)^2\big)\,I\{t \ge b\}, $$
whence
$$ E(X-h)_+^2 \le Eu(X) = cE(X-a)^2 + Ev(X). \tag{24} $$
Using $m = EX = E\eta$ and $\sigma^2 = \operatorname{var} X = \operatorname{var}\eta$, we obtain
$$ cE(X-a)^2 = c\big(\sigma^2 + (m-a)^2\big) = cE(\eta-a)^2. \tag{25} $$
The function $v$ is increasing, $v(t) = 0$ for $t \le b$, and $X \le_{st} Z$. In view of $a < h$, both $(t-h)_+^2$ and $c(t-a)^2$ vanish at $t = a$. Hence
$$ Ev(X) \le Ev(Z) = \int \{(t-h)^2 - c(t-a)^2\}\,\nu^{[p]}(dt) = \int (t-h)^2\,\nu^{[p]}(dt) - c\int (t-a)^2\,\nu^{[p]}(dt) = E(\eta-h)_+^2 - cE(\eta-a)^2. \tag{26} $$
Combining (24)–(26), we get $E(X-h)_+^2 \le E(\eta-h)_+^2$.

iii) Now $h \ge b$. The function $t \mapsto (t-h)_+^2$ is increasing and $X \le_{st} Z$. Therefore
$$ E(X-h)_+^2 \le E(Z-h)_+^2 = \int (t-h)_+^2\,\nu^{[p]}(dt) = E(\eta-h)_+^2, $$
which completes the proof of (23) and (9).

Let us prove (10), that is, that $Ef(X) \le Ef(\eta)$ for 3-convex $f$. It suffices to show that
$$ E(h-X)_+^2 \ge E(h-\eta)_+^2 \quad \text{for } h \in \mathbb{R}. \tag{27} $$

Indeed, any 3-convex function $f$ can be represented as (see Section 3)
$$ f(x) = P(x) - \int_{(-\infty, x_0]} (h-x)_+^2\,\lambda(dh) + \int_{(x_0, \infty)} (x-h)_+^2\,\lambda(dh), \tag{28} $$
where $\lambda$ is a positive locally finite Borel measure on $\mathbb{R}$ (actually $2\lambda = f'''$ in the generalized sense), the point $x_0 \in \mathbb{R}$ is arbitrary, and $P(x) = c_2 x^2 + c_1 x + c_0$ is a quadratic polynomial whose coefficients can depend on $x_0$ and $f$. Using (23) and (27), the representation (28) clearly implies $Ef(X) \le Ef(\eta)$.

It remains to prove (27). Using the obvious decomposition $(h-t)_+^2 = (t-h)^2 - (t-h)_+^2$, inequality (27) is equivalent to
$$ E(X-h)^2 - E(X-h)_+^2 \ge E(\eta-h)^2 - E(\eta-h)_+^2. \tag{29} $$
Since $EX = E\eta$ and $\operatorname{var} X = \operatorname{var}\eta$, it follows that $E(X-h)^2 = E(\eta-h)^2$. Hence, (29) is equivalent to $E(X-h)_+^2 \le E(\eta-h)_+^2$, which we have proved earlier as (23).

Proof of Theorem 8. We have to prove (11) and (12). Write
$$ \eta = \eta(m, \sigma^2, \nu), \qquad \xi = \eta(0, \sigma^2, \nu), \qquad \zeta = \eta(0, s^2, \nu), \qquad \vartheta = \eta(m, s^2, \nu). $$
In view of Theorem 5, instead of (11) and (12) it suffices to prove that for $m \le 0$
$$ E(\eta-h)_+^2 \le E(\xi-h)_+^2, \tag{30} $$
that for $s^2 \le \sigma^2$
$$ E(\zeta-h)_+^2 \le E(\xi-h)_+^2, \tag{31} $$
and that for $m \le 0$ and $s^2 \le \sigma^2$
$$ Ef(\vartheta) \le Ef(\xi) \tag{32} $$
if $f$ is convex increasing and 3-convex.

Let us turn to the proof of (30). During the proof $\sigma^2$ is fixed, since we are only interested in the dependence of the parameters on $m \le 0$, which is reflected in the notation by supplying the parameters with the index $m$; for example, $p = p_m$, $a = a_m$. The function $u$ defined by (22), as a function of $m$, has a negative partial derivative with respect to $m$. Indeed,
$$ \partial_m u = 2(pm - m_p)/q, \qquad 0 \le p \le 1, \tag{33} $$
and $\partial_m u \le 0$ since $m \le 0$ and $m_p \ge 0$. Hence this function is a decreasing function of $m$. Recall that we define $p = p_m$ as the unique solution of the equation $u = \sigma^2$. Therefore it is clear that $p_m \le p_0$. Hence, from the definition of the measure $\nu^{[p]}$, it is clear that $\nu^{[p_m]} \le_{st} \nu^{[p_0]}$.
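The bound of Theorem 5, just proved, can be confirmed numerically in the bounded setting of Example 6. The sketch below is an arbitrary illustration: the three-point distribution of $X$ is not from the paper, it is merely chosen to satisfy $X \le b$ with the same mean and variance as $\eta$.

```python
# Sanity check of Theorem 5 with Z = b (Example 6): a discrete X <= b with
# mean m and variance sigma^2 must satisfy E(X-h)_+^2 <= E(eta-h)_+^2 for
# every retention limit h, where eta is the two-point maximizer of Example 6.

m, sigma2, b = 0.5, 2.0, 3.0
# X on {-1, 1, 3}: weights solved by hand so that EX = m, var X = sigma2
X = [(-1.0, 0.40625), (1.0, 0.4375), (3.0, 0.15625)]
# eta on {a, b} from Example 6
d = b - m
a = m - sigma2 / d
p = sigma2 / (sigma2 + d * d)
eta = [(a, 1 - p), (b, p)]

def premium(dist, h):
    """Stop loss premium E(V - h)_+^2 of a discrete distribution."""
    return sum(w * max(v - h, 0.0) ** 2 for v, w in dist)

hs = [-2 + 0.1 * k for k in range(61)]
assert all(premium(X, h) <= premium(eta, h) + 1e-12 for h in hs)
print("max gap:", max(premium(eta, h) - premium(X, h) for h in hs))
```

For $h$ below the support of both distributions the two premiums coincide (both equal $\sigma^2 + (m-h)^2$), which matches case i) of the proof.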

While proving (30) it is convenient to consider the cases i) $h \le \min\{a_m, a_0\}$, ii) $a_m \le h \le a_0$, iii) $a_0 \le h \le a_m$, iv) $h \ge \max\{a_m, a_0\}$ separately. Note that $a_m \le 0 \le b_m$ and $a_0 \le 0 \le b_0$.

i) Now $h \le a_m$. Obviously $a_m \le m$. Hence $m - h \ge 0$ and $(m-h)^2 \le h^2$ since $h \le 0$. Therefore, using in addition $h \le a_0$, we get
$$ E(\eta-h)_+^2 = E(\eta-h)^2 = \sigma^2 + (m-h)^2 \le \sigma^2 + h^2 = E(\xi-h)^2 = E(\xi-h)_+^2. $$

ii) Using $a_m \le h$ and $\nu^{[p_m]} \le_{st} \nu^{[p_0]}$, we obtain
$$ E(\eta-h)_+^2 = \int (t-h)_+^2\,\nu^{[p_m]}(dt) \le \int (t-h)_+^2\,\nu^{[p_0]}(dt) \le (a_0-h)^2 q_0 + \int (t-h)_+^2\,\nu^{[p_0]}(dt) = E(\xi-h)_+^2. $$

iii) Now $a_0 \le h \le a_m$. The function
$$ u(t) = \max\{c(t-a_0)^2, (t-h)_+^2\}, \qquad c = (b_0-h)^2/(b_0-a_0)^2, $$
satisfies $(t-h)_+^2 \le u(t)$. It is easy to verify that $u(t) = c(t-a_0)^2$ for $t \le b_0$, and $u(t) = (t-h)_+^2$ for $t \ge b_0$. Writing
$$ u(t) = c(t-a_0)^2 + v(t) $$
with
$$ v(t) = w(t)\,I\{t \ge b_0\}, \qquad w(t) = (t-h)^2 - c(t-a_0)^2, $$
we have
$$ E(\eta-h)_+^2 \le Eu(\eta) = cE(\eta-a_0)^2 + Ev(\eta). $$
Note that $a_0 \le a_m \le m$. Hence $m - a_0 \ge 0$, and $(m-a_0)^2 \le a_0^2$ since $a_0 \le 0$ and $m \le 0$. Thus we have
$$ cE(\eta-a_0)^2 = c\big(\sigma^2 + (m-a_0)^2\big) \le c\big(\sigma^2 + a_0^2\big) = cE(\xi-a_0)^2. $$
The function $v$ is increasing and $\nu^{[p_m]} \le_{st} \nu^{[p_0]}$. Using $v(t) = 0$ for $t \le b_0$, observing that $P\{\xi \in \{a_0\} \cup [b_0,\infty)\} = 1$, recalling that $v(t) = w(t)I\{t \ge b_0\}$ and
$$ w(\xi)\,I\{\xi = a_0\} = (\xi-h)^2\,I\{\xi = a_0\} \ge 0, $$

we get
$$ Ev(\eta) = \int v(t)\,\nu^{[p_m]}(dt) \le \int v(t)\,\nu^{[p_0]}(dt) = Ev(\xi) = Ew(\xi)I\{\xi \ge b_0\} \le Ew(\xi)I\{\xi \ge b_0\} + Ew(\xi)I\{\xi = a_0\} = Ew(\xi) = E(\xi-h)^2 - cE(\xi-a_0)^2. $$
Combining the bounds, we obtain $E(\eta-h)_+^2 \le E(\xi-h)^2$, concluding the proof of (30) in the case iii).

iv) Now $h \ge \max\{a_m, a_0\}$. The function $t \mapsto (t-h)_+^2$ is increasing and $\nu^{[p_m]} \le_{st} \nu^{[p_0]}$. Therefore
$$ E(\eta-h)_+^2 = \int (t-h)_+^2\,\nu^{[p_m]}(dt) \le \int (t-h)_+^2\,\nu^{[p_0]}(dt) = E(\xi-h)_+^2. $$
The proof of (30) is completed.

Let us prove (31). During the proof we are interested in the dependence of the parameters on the variances $s^2$ and $\sigma^2$, which is reflected in the notation by supplying the parameters with the index $s$ or $\sigma$ respectively; for example, $p = p_\sigma$, $a = a_\sigma$. Since now $m = 0$, we can rewrite the equations $\operatorname{var}\eta = \sigma^2$, $E\eta = 0$ defining the distribution of $\eta = \eta(0, \sigma^2, \nu)$ as
$$ a^2 q + \sigma_p^2 = \sigma^2, \qquad aq + m_p = 0. \tag{34} $$
It is clear from (34) that $p_\sigma$ is an increasing function of $\sigma$. Hence $\nu^{[p_s]} \le_{st} \nu^{[p_\sigma]}$. It is clear as well that $a_\sigma \le a_s$.

While proving (31) it is convenient to consider the cases i) $h \le a_\sigma$, ii) $a_\sigma \le h \le a_s$, iii) $h \ge a_s$ separately.

i) We have
$$ E(\zeta-h)_+^2 = E(\zeta-h)^2 = s^2 + h^2 \le \sigma^2 + h^2 = E(\xi-h)^2 = E(\xi-h)_+^2. $$

ii) Now $a_\sigma \le h \le a_s$. The function
$$ u(t) = \max\{c(t-a_\sigma)^2, (t-h)_+^2\}, \qquad c = (b_\sigma-h)^2/(b_\sigma-a_\sigma)^2, $$

satisfies $(t-h)_+^2 \le u(t)$. Writing
$$ u(t) = c(t-a_\sigma)^2 + v(t), \qquad v(t) = w(t)\,I\{t \ge b_\sigma\}, \qquad w(t) = (t-h)^2 - c(t-a_\sigma)^2, $$
we have
$$ E(\zeta-h)_+^2 \le Eu(\zeta) = cE(\zeta-a_\sigma)^2 + Ev(\zeta). $$
Using $s^2 \le \sigma^2$, we have
$$ cE(\zeta-a_\sigma)^2 = c\big(s^2 + a_\sigma^2\big) \le c\big(\sigma^2 + a_\sigma^2\big) = cE(\xi-a_\sigma)^2. $$
The function $v$ is increasing and $\nu^{[p_s]} \le_{st} \nu^{[p_\sigma]}$. Hence
$$ Ev(\zeta) = \int v(t)\,\nu^{[p_s]}(dt) \le \int v(t)\,\nu^{[p_\sigma]}(dt) = Ev(\xi) = Ew(\xi)I\{\xi \ge b_\sigma\} \le Ew(\xi)I\{\xi \ge b_\sigma\} + Ew(\xi)I\{\xi = a_\sigma\} = Ew(\xi) = E(\xi-h)^2 - cE(\xi-a_\sigma)^2. $$
Combining the bounds, we get $E(\zeta-h)_+^2 \le E(\xi-h)^2$.

iii) The function $t \mapsto (t-h)_+^2$ is increasing and $\nu^{[p_s]} \le_{st} \nu^{[p_\sigma]}$. Hence we have
$$ E(\zeta-h)_+^2 = \int (t-h)_+^2\,\nu^{[p_s]}(dt) \le \int (t-h)_+^2\,\nu^{[p_\sigma]}(dt) = E(\xi-h)_+^2, $$
completing the proof of (31).

Let us prove (32), that is, that $Ef(\vartheta) \le Ef(\xi)$, assuming that $f$ is convex and increasing and has a convex derivative $f'$. The outline of the proof is as follows. Combining (30) and (31), the inequality (32) holds for the semi-quadratic $f(t) = (t-h)_+^2$. For general $f$ we prove (32) by approximating $f$ by functions which can be represented as mixtures of semi-quadratic functions.

Let $k = 1, 2, \dots$. Introduce the functions
$$ f_k(t) = f(t) \ \text{for } t \ge -k, \qquad f_k(t) = f(-k) + f'(-k)(t+k) \ \text{for } t \le -k. $$
The function $t \mapsto f(-k) + f'(-k)(t+k)$ is tangent to $f$ at $t = -k$. Since $f$ is convex, it follows that $f_k \le f$ and $\lim_{k\to\infty} f_k = f$. A bit later we show that $f_k$ can be represented as
$$ f_k(x) = P(x) + \int_{(-2k, \infty)} (x-h)_+^2\,\lambda_k(dh) \tag{35} $$

with $P(x) = f'(-k)(x+k) + f(-k)$ and a positive locally finite Borel measure $\lambda_k$. Using (35), it follows that $Ef_k(\vartheta) \le Ef_k(\xi)$. To see this, it suffices to use $E(\vartheta-h)_+^2 \le E(\xi-h)_+^2$ and to note that
$$ EP(\vartheta) = (m+k)f'(-k) + f(-k) \le kf'(-k) + f(-k) = EP(\xi), $$
since $m \le 0$ and $f'(-k) \ge 0$ by our assumption that $f$ is increasing. Since $f_k \le f$, the inequality $Ef_k(\vartheta) \le Ef_k(\xi)$ implies that $Ef_k(\vartheta) \le Ef(\xi)$. Passing now to the limit as $k \to \infty$ and using $\lim_{k\to\infty} f_k = f$, we derive $Ef(\vartheta) \le Ef(\xi)$.

In order to complete the proof of (32) we have to derive (35). It is clear that the derivative $f_k'$ is convex, since $f'$ is convex and $f_k'(t) = f'(t)$ for $t \ge -k$, $f_k'(t) = f'(-k)$ for $t \le -k$. We conclude that $f_k$ is 3-convex. Furthermore, $f_k''(t) = f_k'''(t) = 0$ for $t < -k$. Being 3-convex, $f_k$ has an integral representation of type (52) given in Section 3. Choosing $x_0 = -2k$, this representation takes the form of (35) with some $P(x) = c_2 x^2 + c_1 x + c_0$. Note that in our particular case the integral involving $(h-x)_+^2$ in (52) vanishes because $f_k'''(t) = 0$ for $t < -k$. To define the coefficients $c_2, c_1, c_0$, note that for $x < -2k$ the integral in (35) vanishes since $(x-h)_+ = 0$ for $x < -2k < h$. Hence $f_k(x) = P(x)$ for $x < -2k$. Recalling that $f_k(x) = f(-k) + f'(-k)(x+k)$ for $x \le -k$, it follows that
$$ c_2 x^2 + c_1 x + c_0 = f(-k) + f'(-k)(x+k) \quad \text{for } x < -2k, $$
whence $c_2 = 0$, $c_1 = f'(-k)$, $c_0 = f(-k) + f'(-k)k$.

Let us turn to the proofs related to symmetric random variables.

Proof of Lemma 9. Note that $X \le_{st} Z$ together with the symmetry assumption implies that $-Z \le_{st} X$. For $X$ such that $-Z \le_{st} X \le_{st} Z$ and $EX = 0$, Theorem 1 yields
$$ EX^2 \le \sigma^2_{\max}, \qquad \sigma^2_{\max} \stackrel{\mathrm{def}}{=} 2\int t^2\,\nu^{[1/2]}(dt). $$
In order to establish the existence of the symmetric $\eta = \eta(\sigma^2, \nu)$ with $\mathcal{L}(\eta) = \mu^{[p]} + r\delta + \nu^{[p]}$ and $E\eta^2 = \sigma^2$, we rewrite the condition $E\eta^2 = \sigma^2$ as
$$ u(p) \stackrel{\mathrm{def}}{=} 2\int t^2\,\nu^{[p]}(dt) = \sigma^2. $$
The function $p \mapsto u(p)$ is increasing and continuous, $u(0) = 0$ and $u(1/2) = \sigma^2_{\max}$. Since $0 \le \sigma^2 \le \sigma^2_{\max}$, there exists $0 \le p \le 1/2$ such that $u(p) = \sigma^2$.
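The existence argument of Lemma 9 (solving $u(p) = \sigma^2$ for the tail mass $p$) can be sketched numerically for a discrete $\nu$; the atoms of $Z$ below are an arbitrary illustration, and the tail truncation is re-implemented inline to keep the sketch self-contained.

```python
# Sketch of the symmetric maximizer eta(sigma^2, nu) of (13) for discrete
# nu = L(Z), Z >= 0: solve 2 * int t^2 nu^[p](dt) = sigma^2 by bisection over
# p in [0, 1/2], then assemble L(eta) = mu^[p] + r*delta_0 + nu^[p], r = 1-2p.

atoms = [(1.0, 0.5), (2.0, 0.3), (4.0, 0.2)]      # discrete nu, Z >= 0

def tail_measure(p):
    """nu^[p]: mass p taken from the right tail of nu (atom split if needed)."""
    out, rem = [], p
    for x, w in sorted(atoms, key=lambda t: -t[0]):
        take = min(w, rem)
        out.append((x, take))
        rem -= take
        if rem <= 1e-15:
            break
    return out

def second_moment(meas):
    return sum(w * x * x for x, w in meas)

sigma2 = 1.5
assert sigma2 <= 2 * second_moment(tail_measure(0.5))   # condition (14)
lo, hi = 0.0, 0.5
for _ in range(200):                                    # bisection on u(p)=sigma^2
    mid = 0.5 * (lo + hi)
    if 2 * second_moment(tail_measure(mid)) < sigma2:
        lo = mid
    else:
        hi = mid
p = 0.5 * (lo + hi)
right = tail_measure(p)
eta = [(-x, w) for x, w in right] + [(0.0, 1 - 2 * p)] + right
var = sum(w * x * x for x, w in eta)                    # mean is 0 by symmetry
print(p, var)
```

For this $\nu$ and $\sigma^2 = 1.5$ the tail atom at $4$ alone suffices, so the solution is $p = \sigma^2/32$ in closed form, which the bisection reproduces.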
Note that in cases where $P\{Z = 0\} > 0$ the number $p$ is not necessarily defined in a unique way.

Proof of Theorem 10. We have to prove (15) and (16).

Let us prove (15), that is, that $E(X-h)_+^3 \le E(\eta-h)_+^3$. It is convenient to reformulate this inequality as an inequality for positive random variables. Namely, consider the inequality
$$ Ef_h(Y) \le Ef_h(\xi) \quad \text{for } h \in \mathbb{R}, \tag{36} $$
where $f_h(t) = \big[(t-h)_+^3 + (-t-h)_+^3\big]/2$, a random variable $Y$ satisfies
$$ Y \ge 0, \qquad EY^2 = \sigma^2, \qquad P\{Y \ge x\} \le 2\nu([x,\infty)), \tag{37} $$
and $\xi$ is a positive random variable such that $\mathcal{L}(\xi) = r\delta + 2\nu^{[p]}$. Actually, one can easily show that (15) is equivalent to (36), using the fact that the distributions of the symmetric random variables $X$ and $\eta$ can be recovered in a unique way from the distributions of $|X|$ and $|\eta|$. Hence, instead of (15) it suffices to prove (36).

In the proof of (36) we use a point $b \ge 0$ such that $\nu((b,\infty)) \le p \le \nu([b,\infty))$, and the inequality
$$ Ev(Y) \le Ev(\xi), \tag{38} $$
which holds for increasing differentiable functions $v : \mathbb{R} \to \mathbb{R}$ such that $v(t) = 0$ for $t \le b$. Integrating by parts and using the stochastic boundedness condition (37), inequality (38) is equivalent to the obvious
$$ \int_{(0,\infty)} P\{Y \ge t\}\,v'(t)\,dt \le \int_{(0,\infty)} 2\nu([t,\infty))\,v'(t)\,dt. $$

While proving (36), we consider the cases i) $h \le -b$, ii) $-b < h < 0$, iii) $0 \le h \le b$, iv) $h \ge b$ separately.

i) Now $h \le -b$, and
$$ f_h(t) = u(t) \ \text{for } 0 \le t \le -h, \qquad u(t) \stackrel{\mathrm{def}}{=} -3ht^2 - h^3, $$
and
$$ f_h(t) = (t-h)^3/2 \ \text{for } t \ge -h. $$
We can write
$$ f_h(t) = u(t) + v(t), \qquad v(t) = 2^{-1}\big((t-h)^3 - 2u(t)\big)\,I\{t \ge -h\}. $$
It is easy to check that the function $v$ is increasing. Using (38) we have $Ev(Y) \le Ev(\xi)$. Since $EY^2 = E\xi^2$, it follows that $Eu(Y) = Eu(\xi)$. Therefore
$$ Ef_h(Y) = Eu(Y) + Ev(Y) \le Eu(\xi) + Ev(\xi) = Ef_h(\xi). $$

ii) Now $-b < h < 0$, and
$$ f_h(t) = u(t) \ \text{for } 0 \le t \le -h, \qquad u(t) \stackrel{\mathrm{def}}{=} -3ht^2 - h^3, \qquad f_h(t) = (t-h)^3/2 \ \text{for } t \ge -h. $$
Introduce the function
$$ g(t) = ct^2 - h^3, \qquad c \stackrel{\mathrm{def}}{=} \big(h^3 + (b-h)^3/2\big)/b^2. $$
The function $g$ satisfies $g(0) = u(0)$ and $g(b) = (b-h)^3/2$. Introduce the function $w(t) = g(t) - f_h(t)$. Then $w(0) = w(b) = 0$. Let us prove that
$$ w(t) \ge 0 \ \text{for } 0 \le t \le b, \tag{39} $$
and
$$ w(t) \le 0, \quad w'(t) \le 0 \ \text{for } t \ge b. \tag{40} $$

In the proof of (39) we consider the cases a) $0 \le t \le -h$ and b) $-h \le t \le b$ separately.

a) Now $w(t) = (c + 3h)t^2$, and it suffices to show that $c + 3h \ge 0$. Elementary transformations show that $c + 3h \ge 0$ is equivalent to $b + h \ge 0$, which holds since we assume that $-h < b$.

b) In this case $w(t) = ct^2 - h^3 - (t-h)^3/2$ and $-h \le t \le b$. Elementary transformations show that $w(-h) = h^2(b+h)^3/(2b^2)$, and $w(-h) > 0$. It is clear as well that $w(b) = 0$. We note that $w'$ is a concave quadratic function since $w''' = -3$. Using the obvious representation $w'(-h) = -h(b+h)^3/b^2 > 0$, we conclude that $w'$ has two roots, say $t_1, t_2$, such that $t_1 < -h < t_2$. If $t_2 \ge b$, then $w'(t) \ge 0$ for $-h \le t \le b$, the function $w$ is increasing on $[-h, b]$, and we conclude that $w(t) \ge 0$ for $-h \le t \le b$ since $w(-h) > 0$. If $t_2 < b$, then $w'$ is positive on $[-h, t_2]$ and negative on $[t_2, b]$. Hence $w$ is increasing on $[-h, t_2]$ and decreasing on $[t_2, b]$. We conclude that $w$ attains its minimal value at the endpoints of the interval $[-h, b]$. Since $w(-h) > 0$ and $w(b) = 0$, we conclude that $w$ is positive on $[-h, b]$. The proof of (39) is completed.

Let us prove (40). Now $w(t) = ct^2 - h^3 - (t-h)^3/2$ and $t \ge b$. As is proved in b), we have $w(b) = 0$. Hence it suffices to show that $w'(t) \le 0$ for $t \ge b$. We have $w'(t) = 2ct - 3(t-h)^2/2$. Since $w''' = -3$, the function $w'$ is concave. Therefore, in order to prove that the inequality $w'(t) \le 0$ holds, it suffices to check that c) $w'(b) \le 0$, and d) $w''(b) \le 0$.

c) We have $w'(b) = 2cb - 3(b-h)^2/2$. Using the definition of $c$, we can write $w'(b) = s(h)/b$ with
$$ s(h) = 2h^3 - (h-b)^2(h + b/2). $$

The inequality $w'(b) \le 0$ is equivalent to $s(h) \le 0$ for $-b \le h \le 0$. We note that $s(-b) = 0$ and $s'(h) = 3h(h+b) \le 0$ for $-b \le h \le 0$. Hence $s(h) \le 0$, and $w'(b) \le 0$.

d) We have $w''(t) = 2c - 3(t-h)$ and
$$ w''(b) = 2c - 3(b-h) = s(h)/b^2, \qquad s(h) \stackrel{\mathrm{def}}{=} h^3 - 2b^3 + 3bh^2. $$
Thus, the inequality $w''(b) \le 0$ is equivalent to $s(h) \le 0$ for $-b \le h \le 0$. Since $s(-b) = 0$ and $s(0) = -2b^3 < 0$, and $s$ is convex (noting that $s''(h) = 6(b+h) \ge 0$), it follows that $s(h) \le 0$ for all $-b \le h \le 0$.

Introduce the function
$$ F(t) = \max\{g(t), f_h(t)\}, \qquad t \ge 0. $$
In view of (39) and (40) we have
$$ F(t) = g(t) \ \text{for } 0 \le t \le b, \qquad F(t) = f_h(t) \ \text{for } t \ge b. $$
We can decompose $F$ as follows:
$$ F(t) = g(t) + v(t), \qquad v(t) = \big(f_h(t) - g(t)\big)\,I\{t \ge b\} = -w(t)\,I\{t \ge b\}. \tag{41} $$
Using (40) it is clear that $v$ is an increasing positive function such that $v(t) = 0$ for $t \le b$. By (38) we have $Ev(Y) \le Ev(\xi)$. Since $EY^2 = E\xi^2$, it follows that $Eg(Y) = Eg(\xi)$. Therefore
$$ Ef_h(Y) \le EF(Y) = Eg(Y) + Ev(Y) \le Eg(\xi) + Ev(\xi). $$
Noting that $f_h(0) = g(0)$ and that $\mathcal{L}(\xi) = r\delta + 2\nu^{[p]}$, we obtain
$$ Ev(\xi) = \int_{[b,\infty)} \big[f_h(t) - g(t)\big]\,2\nu^{[p]}(dt) = Ef_h(\xi) - rf_h(0) - \Big(Eg(\xi) - rg(0)\Big) = Ef_h(\xi) - Eg(\xi). \tag{42} $$
Combining the inequalities, we derive $Ef_h(Y) \le Ef_h(\xi)$.

iii) The proof is similar to the proof in the case ii); therefore we shall omit technical calculations. Now $0 \le h \le b$, and
$$ f_h(t) = 0 \ \text{for } 0 \le t \le h, \qquad f_h(t) = (t-h)^3/2 \ \text{for } t \ge h. $$
Introduce the function
$$ g(t) = ct^2, \qquad c \stackrel{\mathrm{def}}{=} (b-h)^3/(2b^2). $$

The function $g$ satisfies $g(0) = 0$ and $g(b) = f_h(b) = (b-h)^3/2$. Let
$$ F(t) = \max\{g(t), f_h(t)\}, \qquad t \ge 0. $$
Then
$$ F(t) = g(t) \ \text{for } 0 \le t \le b, \qquad F(t) = f_h(t) \ \text{for } t \ge b. $$
Now we can argue similarly to (41)–(42).

iv) Now $h \ge b$. Using the stochastic domination condition (37), we obtain
$$ Ef_h(Y) \le \int_{(h,\infty)} f_h(t)\,2\nu(dt) = Ef_h(\xi), $$
completing the proof of (15).

Let us prove (16), that is, that $Ef(X) \le Ef(\eta)$ for 4-convex $f$. It suffices to show that
$$ E(h-X)_+^3 \le E(h-\eta)_+^3 \quad \text{for } h \in \mathbb{R}. \tag{43} $$
Indeed, any 4-convex function $f$ can be represented as (see Section 3)
$$ f(x) = P(x) + \int_{(-\infty, x_0]} (h-x)_+^3\,\lambda(dh) + \int_{(x_0, \infty)} (x-h)_+^3\,\lambda(dh), \tag{44} $$
where $\lambda$ is a positive locally finite Borel measure on $\mathbb{R}$ (actually $6\lambda = f^{(4)}$ in the generalized sense), the point $x_0 \in \mathbb{R}$ is arbitrary, and $P(x) = c_3x^3 + c_2x^2 + c_1x + c_0$ is a cubic polynomial whose coefficients can depend on $x_0$ and $f$. Using (15) and (43), and noting that $EP(X) = EP(\eta)$ due to $EX^2 = E\eta^2$ and the symmetry of $X$ and $\eta$, the representation (44) clearly implies $Ef(X) \le Ef(\eta)$.

It remains to prove (43). Using the obvious decomposition $(h-t)_+^3 = (t-h)_+^3 - (t-h)^3$, inequality (43) is equivalent to
$$ E(X-h)_+^3 - E(X-h)^3 \le E(\eta-h)_+^3 - E(\eta-h)^3. \tag{45} $$
Since $X$ and $\eta$ are symmetric and $EX^2 = E\eta^2$, it follows that $E(X-h)^3 = E(\eta-h)^3$. Hence, (45) is equivalent to $E(X-h)_+^3 \le E(\eta-h)_+^3$, which we have proved earlier as (15).

Proof of Theorem 11. We have to prove (17) and (18). During the proof we write $\eta = \eta(\sigma^2, \nu)$, $\xi = \eta(s^2, \nu)$, $s^2 \le \sigma^2$. In view of Theorem 10, instead of (17) and (18) it suffices to prove that
$$ E(\xi-h)_+^3 \le E(\eta-h)_+^3 \quad \text{for } h \in \mathbb{R}, \tag{46} $$

and that

E f(ξ) ≤ E f(η), (47)

assuming that f is convex, positive and increasing.

Let us prove (46). Using the symmetry of ξ and η we can rewrite (46) as

E f_h(|ξ|) ≤ E f_h(|η|) with 2f_h(t) = (t − h)³₊ + (−t − h)³₊. (48)

In order to prove (48) it suffices to note that the function t ↦ f_h(t) : [0, ∞) → R is increasing and that |ξ| ≤_st |η| (we omit the formal checking of these simple facts).

Let us prove (47). Let k = 1, 2, .... Introduce the functions

f_k(t) = f(t) for t ≥ −k, f_k(t) = P(t) for t ≤ −k, where P(t) = f(−k) + f′(−k)(t + k) + f″(−k)(t + k)²/2. (49)

It is clear that lim_{k→∞} f_k = f. One can check that f_k ≤ f. A bit later we show that f_k can be represented as

f_k(x) = P(x) + ∫_(−2k,∞) (x − h)³₊ λ_k(dh), (50)

with P defined by (49), and a positive locally finite Borel measure λ_k. Using (50) it follows that E f_k(ξ) ≤ E f_k(η). To see this, it suffices to use E (ξ − h)³₊ ≤ E (η − h)³₊ and to note that E P(ξ) ≤ E P(η). The latter inequality can easily be checked using the symmetry of ξ and η, applying E ξ² ≤ E η², and noting that f″(−k) ≥ 0 since we assume that f is convex. Using f_k ≤ f, the inequality E f_k(ξ) ≤ E f_k(η) implies that E f_k(ξ) ≤ E f(η). Passing now to the limit as k → ∞ and using lim_{k→∞} f_k = f, we derive E f(ξ) ≤ E f(η).

In order to complete the proof of (47) we have to derive (50). It is clear that f_k is convex, since f is convex increasing and f_k″(t) = f″(t) for t ≥ −k, f_k″(t) = f″(−k) for t ≤ −k. We conclude that f_k is 4-convex. Furthermore, f_k‴(t) = f_k⁽⁴⁾(t) = 0 for t < −k. Being 4-convex, f_k has an integral representation of type (52) given in Section 3. Choosing x₀ = −2k, this representation takes the form of (50) with some P(t) = c₃t³ + c₂t² + c₁t + c₀. Note that in our particular case the integral involving (h − x)³₊ in (52) vanishes because f_k⁽⁴⁾(t) = 0 for t < −k. To determine the coefficients c₃, c₂, c₁, c₀, note that for x < −2k the integral in (50) vanishes since (x − h)³₊ = 0 for x < −2k < h. Hence f_k(x) = P(x) for x < −2k.
Therefore we can conclude that P is the polynomial given by (49).
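As a quick numerical sanity check (our own illustration, not part of the original argument), the cubic identity (h − t)³₊ = (t − h)³₊ − (t − h)³, which converts (43) into (45) in the proof above, can be verified on a grid of points; the helper names below are ours:

```python
# Check the identity (h - t)_+^3 = (t - h)_+^3 - (t - h)^3 on both sides
# of the kink t = h.  Here z_+ denotes max(0, z).

def pos_part(z):
    """The positive part z_+ = max(0, z)."""
    return max(0.0, z)

def check_identity(t, h):
    """True if the cubic decomposition holds at (t, h) up to rounding."""
    left = pos_part(h - t) ** 3
    right = pos_part(t - h) ** 3 - (t - h) ** 3
    return abs(left - right) < 1e-9

# A small grid of (t, h) pairs covering t < h, t = h and t > h.
points = [(t / 10.0, h / 10.0) for t in range(-30, 31, 7) for h in range(-30, 31, 7)]
assert all(check_identity(t, h) for t, h in points)
print("identity holds on all", len(points), "test points")
```

The check exercises both regimes: for t ≥ h the right-hand side cancels to 0, and for t < h it reduces to (h − t)³, exactly as the proof uses.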

3 Higher order convex functions

Let s ≥ 2 be an integer. We say that f : R → R is s-convex if f is s − 2 times continuously differentiable and f⁽ˢ⁻²⁾ is a convex function. According to this definition, 2-convex functions are just convex ones. For example, the function eᵗ is s-convex for all s ≥ 2, and the function e⁻ᵗ is s-convex iff s is an even integer. See Pečarić 1992 for alternative definitions and a discussion related to convexity of higher order.

We are going to give some useful representations of s-convex functions. Let us introduce the related notation. Let M_loc(J) stand for the class of signed Borel measures λ on an interval J ⊂ R having finite variation on compact subsets, that is, sup_{B ⊂ A} |λ(B)| < ∞ for all compact A ⊂ J. The class of positive λ ∈ M_loc(J) is denoted M_loc⁺(J). We abbreviate M_loc = M_loc(R) and M_loc⁺ = M_loc⁺(R).

Let us recall that a measure λ ∈ M_loc(J) is the generalized (or Sobolev) derivative of μ ∈ M_loc(J) if for all ϕ ∈ C₀^∞(J)

∫ ϕ(t) λ(dt) = − ∫ ϕ′(t) μ(dt).

Here, as usual, C₀^∞(J) stands for the class of all infinitely differentiable functions with compact support in J. We write λ = μ′. We identify absolutely continuous measures with their densities with respect to the Lebesgue measure. In particular, the notation λ = f′ means that λ is the generalized derivative of the measure f(t) dt. Note that the density f as a function is defined almost everywhere. Henceforth, if it exists, we always choose a continuous or right continuous version of f.

Now we can give another definition of s-convexity. Namely, by definition a function f is s-convex (s = 1, 2, ...) if its generalized derivative f⁽ˢ⁾ is a positive measure, that is, f⁽ˢ⁾ ∈ M_loc⁺(R). We omit a rather standard proof that the two definitions of s-convex functions are equivalent for s ≥ 2. Note that 1-convex functions are increasing ones, 2-convex functions are convex ones, etc.
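The two exponential examples above can be made concrete: for smooth f, s-convexity amounts to f⁽ˢ⁾(t) ≥ 0 for all t, and for f(t) = e^{ct} the s-th derivative is cˢ e^{ct}, so only the sign of cˢ matters. A minimal sketch (the helper is ours, not from the paper):

```python
# For f(t) = exp(c*t) the s-th derivative is c**s * exp(c*t); since the
# exponential factor is always positive, the sign of the derivative is
# the sign of c**s.

def exp_deriv_sign(c, s):
    """Sign (+1 or -1) of the s-th derivative of t -> exp(c*t)."""
    return 1 if c ** s > 0 else -1

# f(t) = e^t (c = 1): the s-th derivative is positive for every s,
# so e^t is s-convex for all s >= 2.
assert all(exp_deriv_sign(1, s) == 1 for s in range(1, 10))

# f(t) = e^{-t} (c = -1): the s-th derivative is (-1)**s * e^{-t},
# positive iff s is even, so e^{-t} is s-convex iff s is even.
assert all(exp_deriv_sign(-1, s) == (1 if s % 2 == 0 else -1) for s in range(1, 10))
print("e^t is s-convex for all s; e^{-t} only for even s")
```

This matches the claim in the text: 1-convexity (increasing) fails for e^{-t}, 2-convexity (convex) holds, 3-convexity fails, and so on.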
If f is convex and x₀ ∈ R then there exist a positive Borel measure λ ∈ M_loc⁺ and a linear function P₁ such that

f(x) = P₁(x) + ∫_(−∞,x₀] (h − x)₊ λ(dh) + ∫_(x₀,∞) (x − h)₊ λ(dh). (51)

Naturally, P₁ in (51) can depend on f and x₀. As λ one can take λ = f⁽²⁾. Several representations close to (51) are given in B 2008b.

The representation (51) has a counterpart for s-convex f. Namely, if f is s-convex and x₀ ∈ R then, with λ = f⁽ˢ⁾/(s − 1)! and a polynomial P_{s−1} = a_{s−1}x^{s−1} + ... + a₀ of degree at most s − 1, we have

f(x) = P_{s−1}(x) + (−1)ˢ ∫_(−∞,x₀] (h − x)₊^{s−1} λ(dh) + ∫_(x₀,∞) (x − h)₊^{s−1} λ(dh), (52)

where P_{s−1} in (52) can depend on f, s and x₀. By our agreement we consider (if it exists) the continuous version of f. Therefore for s ≥ 2 the equality (52) holds for all x ∈ R. See (53) for a counterpart of (52) in the case s = 1.

To prove (52) one can proceed as follows. Let F stand for the right hand side of (52). Differentiating the function F s times, we see that F⁽ˢ⁾ = f⁽ˢ⁾. Therefore the difference F − f is a polynomial of degree at most s − 1. Hence, changing, if necessary, the coefficients of P_{s−1}, we get (52). In the proof the (s − 1)-th and s-th derivatives have to be understood in the generalized sense. We omit a detailed exposition of the technicalities related to the proof of (52).

In the special case s = 1 the representation (52) can be written as

f(x) = P₀ − ∫_(−∞,x₀] I{h > x} λ(dh) + ∫_(x₀,∞) I{x ≥ h} λ(dh), (53)

where the constant P₀ in (53) can depend on f and x₀.

References

[1] V. Bentkus, A remark on the inequalities of Bernstein, Prokhorov, Bennett, Hoeffding, and Talagrand, Lith. Math. J., 42(3), 2002.

[2] V. Bentkus, On Hoeffding's inequalities, Ann. Probab., 32(2), 2004.

[3] V. Bentkus, An extension of an inequality of Hoeffding to unbounded random variables, Lith. Math. J., 48(3), 2008a.

[4] V. Bentkus, Addendum to: An extension of an inequality of Hoeffding to unbounded random variables, Lith. Math. J., 48(2), 2008b.

[5] V. Bentkus, On stop loss premium for unbounded risks, manuscript, 2009.

[6] V. Bentkus, N. Kalosha, and M. van Zuijlen, On domination of tail probabilities of (super)martingales: explicit bounds, Lith. Math. J., 46(1):1–43, 2006.

[7] M. Denuit, J. Dhaene, M. Goovaerts, and R. Kaas, Actuarial Theory for Dependent Risks: Measures, Orders, and Models, J. Wiley, New York, 2005.

[8] F. De Vylder and M. Goovaerts, Best bounds on the stop-loss premium in case of known range, expectation, variance and mode of the risk, Insurance Math. Econom., 2(4), 1983.

[9] W. Hoeffding, Probability inequalities for sums of bounded random variables, J. Am. Statist. Assoc., 58:13–30, 1963.

[10] W. Hürlimann, Extremal moment methods and stochastic orders: application in actuarial science. Chapters I, II and III, Bol. Asoc. Mat. Venez., 15:5–110, 2008.

[11] K. Jansen, J. Haezendonck, and M. J. Goovaerts, Upper bounds on stop-loss premiums in case of known moments up to the fourth order, Insurance Math. Econom., 5(4), 1986.

[12] R. Kaas and M. J. Goovaerts, Extremal values of stop-loss premiums under moment constraints, Insurance Math. Econom., 5(4), 1986.

[13] C. McDiarmid, On the method of bounded differences, Surveys in combinatorics, 1989 (Norwich 1989), London Math. Soc. Lecture Note Ser., 141, 1989.

[14] I. Pinelis, Optimal tail comparison based on comparison of moments, High dimensional probability (Oberwolfach, 1996), Progr. Probab., 43, 1998.

[15] I. Pinelis, Fractional sums and integrals of r-concave tails and applications to comparison probability inequalities, Advances in stochastic inequalities (Atlanta, GA, 1997), Contemp. Math., 234, Am. Math. Soc., Providence, RI, 1999.

[16] I. Pinelis, On normal domination of (super)martingales, Electron. J. Probab., 11(39), 2006.

[17] I. Pinelis, Inequalities for sums of asymmetric random variables, with applications, Probab. Theory Related Fields, 139(3–4), 2007a.

[18] I. Pinelis, Toward the best constant factor for the Rademacher–Gaussian tail comparison, ESAIM Probab. Stat., 11, 2007b.

[19] M. Talagrand, The missing factor in Hoeffding's inequalities, Ann. Inst. H. Poincaré Probab. Statist., 31(4), 1995.

[20] A. Marshall and I. Olkin, Inequalities: Theory of Majorization and Its Applications, Academic Press, New York–San Francisco, 1979.

[21] M. Shaked and J. G. Shanthikumar, Stochastic Orders, Springer, New York, 2007.

Vidmantas Bentkus
Institute of Mathematics and Informatics
Akademijos str. 4
Vilnius, Lithuania
E-mail address: bentkus@ktl.mii.lt


More information

Zeros of lacunary random polynomials

Zeros of lacunary random polynomials Zeros of lacunary random polynomials Igor E. Pritsker Dedicated to Norm Levenberg on his 60th birthday Abstract We study the asymptotic distribution of zeros for the lacunary random polynomials. It is

More information

We denote the space of distributions on Ω by D ( Ω) 2.

We denote the space of distributions on Ω by D ( Ω) 2. Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study

More information

Tail Mutual Exclusivity and Tail- Var Lower Bounds

Tail Mutual Exclusivity and Tail- Var Lower Bounds Tail Mutual Exclusivity and Tail- Var Lower Bounds Ka Chun Cheung, Michel Denuit, Jan Dhaene AFI_15100 TAIL MUTUAL EXCLUSIVITY AND TAIL-VAR LOWER BOUNDS KA CHUN CHEUNG Department of Statistics and Actuarial

More information

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction

NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS. Shouchuan Hu Nikolas S. Papageorgiou. 1. Introduction Topological Methods in Nonlinear Analysis Journal of the Juliusz Schauder Center Volume 34, 29, 327 338 NONTRIVIAL SOLUTIONS FOR SUPERQUADRATIC NONAUTONOMOUS PERIODIC SYSTEMS Shouchuan Hu Nikolas S. Papageorgiou

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

Upper stop-loss bounds for sums of possibly dependent risks with given means and variances

Upper stop-loss bounds for sums of possibly dependent risks with given means and variances Statistics & Probability Letters 57 (00) 33 4 Upper stop-loss bounds for sums of possibly dependent risks with given means and variances Christian Genest a, Etienne Marceau b, Mhamed Mesoui c a Departement

More information

Stochastic orders: a brief introduction and Bruno s contributions. Franco Pellerey

Stochastic orders: a brief introduction and Bruno s contributions. Franco Pellerey Stochastic orders: a brief introduction and Bruno s contributions. Franco Pellerey Stochastic orders (comparisons) Among his main interests in research activity A field where his contributions are still

More information

Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor)

Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor) Dynkin (λ-) and π-systems; monotone classes of sets, and of functions with some examples of application (mainly of a probabilistic flavor) Matija Vidmar February 7, 2018 1 Dynkin and π-systems Some basic

More information

Pedantic Notes on the Risk Premium and Risk Aversion

Pedantic Notes on the Risk Premium and Risk Aversion Division of the Humanities and Social Sciences Pedantic Notes on the Risk Premium and Risk Aversion KC Border May 1996 1 Preliminaries The risk premium π u (w, Z) for an expected utility decision maker

More information

On Isoperimetric Functions of Probability Measures Having Log-Concave Densities with Respect to the Standard Normal Law

On Isoperimetric Functions of Probability Measures Having Log-Concave Densities with Respect to the Standard Normal Law On Isoerimetric Functions of Probability Measures Having Log-Concave Densities with Resect to the Standard Normal Law Sergey G. Bobkov Abstract Isoerimetric inequalities are discussed for one-dimensional

More information

Closest Moment Estimation under General Conditions

Closest Moment Estimation under General Conditions Closest Moment Estimation under General Conditions Chirok Han and Robert de Jong January 28, 2002 Abstract This paper considers Closest Moment (CM) estimation with a general distance function, and avoids

More information

On an Effective Solution of the Optimal Stopping Problem for Random Walks

On an Effective Solution of the Optimal Stopping Problem for Random Walks QUANTITATIVE FINANCE RESEARCH CENTRE QUANTITATIVE FINANCE RESEARCH CENTRE Research Paper 131 September 2004 On an Effective Solution of the Optimal Stopping Problem for Random Walks Alexander Novikov and

More information

Analysis II - few selective results

Analysis II - few selective results Analysis II - few selective results Michael Ruzhansky December 15, 2008 1 Analysis on the real line 1.1 Chapter: Functions continuous on a closed interval 1.1.1 Intermediate Value Theorem (IVT) Theorem

More information

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1

EC9A0: Pre-sessional Advanced Mathematics Course. Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 EC9A0: Pre-sessional Advanced Mathematics Course Lecture Notes: Unconstrained Optimisation By Pablo F. Beker 1 1 Infimum and Supremum Definition 1. Fix a set Y R. A number α R is an upper bound of Y if

More information

Concentration, self-bounding functions

Concentration, self-bounding functions Concentration, self-bounding functions S. Boucheron 1 and G. Lugosi 2 and P. Massart 3 1 Laboratoire de Probabilités et Modèles Aléatoires Université Paris-Diderot 2 Economics University Pompeu Fabra 3

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

Notes 1 : Measure-theoretic foundations I

Notes 1 : Measure-theoretic foundations I Notes 1 : Measure-theoretic foundations I Math 733-734: Theory of Probability Lecturer: Sebastien Roch References: [Wil91, Section 1.0-1.8, 2.1-2.3, 3.1-3.11], [Fel68, Sections 7.2, 8.1, 9.6], [Dur10,

More information

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints

Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Valid Inequalities and Restrictions for Stochastic Programming Problems with First Order Stochastic Dominance Constraints Nilay Noyan Andrzej Ruszczyński March 21, 2006 Abstract Stochastic dominance relations

More information

PCA sets and convexity

PCA sets and convexity F U N D A M E N T A MATHEMATICAE 163 (2000) PCA sets and convexity by Robert K a u f m a n (Urbana, IL) Abstract. Three sets occurring in functional analysis are shown to be of class PCA (also called Σ

More information

(6, 4) Is there arbitrage in this market? If so, find all arbitrages. If not, find all pricing kernels.

(6, 4) Is there arbitrage in this market? If so, find all arbitrages. If not, find all pricing kernels. Advanced Financial Models Example sheet - Michaelmas 208 Michael Tehranchi Problem. Consider a two-asset model with prices given by (P, P 2 ) (3, 9) /4 (4, 6) (6, 8) /4 /2 (6, 4) Is there arbitrage in

More information

MONOTONICITY OF RATIOS INVOLVING INCOMPLETE GAMMA FUNCTIONS WITH ACTUARIAL APPLICATIONS

MONOTONICITY OF RATIOS INVOLVING INCOMPLETE GAMMA FUNCTIONS WITH ACTUARIAL APPLICATIONS MONOTONICITY OF RATIOS INVOLVING INCOMPLETE GAMMA FUNCTIONS WITH ACTUARIAL APPLICATIONS EDWARD FURMAN Department of Mathematics and Statistics York University Toronto, Ontario M3J 1P3, Canada EMail: efurman@mathstat.yorku.ca

More information

A CLT FOR MULTI-DIMENSIONAL MARTINGALE DIFFERENCES IN A LEXICOGRAPHIC ORDER GUY COHEN. Dedicated to the memory of Mikhail Gordin

A CLT FOR MULTI-DIMENSIONAL MARTINGALE DIFFERENCES IN A LEXICOGRAPHIC ORDER GUY COHEN. Dedicated to the memory of Mikhail Gordin A CLT FOR MULTI-DIMENSIONAL MARTINGALE DIFFERENCES IN A LEXICOGRAPHIC ORDER GUY COHEN Dedicated to the memory of Mikhail Gordin Abstract. We prove a central limit theorem for a square-integrable ergodic

More information

Admin and Lecture 1: Recap of Measure Theory

Admin and Lecture 1: Recap of Measure Theory Admin and Lecture 1: Recap of Measure Theory David Aldous January 16, 2018 I don t use bcourses: Read web page (search Aldous 205B) Web page rather unorganized some topics done by Nike in 205A will post

More information

APPROXIMATING BOCHNER INTEGRALS BY RIEMANN SUMS

APPROXIMATING BOCHNER INTEGRALS BY RIEMANN SUMS APPROXIMATING BOCHNER INTEGRALS BY RIEMANN SUMS J.M.A.M. VAN NEERVEN Abstract. Let µ be a tight Borel measure on a metric space, let X be a Banach space, and let f : (, µ) X be Bochner integrable. We show

More information

Discrete uniform limit law for additive functions on shifted primes

Discrete uniform limit law for additive functions on shifted primes Nonlinear Analysis: Modelling and Control, Vol. 2, No. 4, 437 447 ISSN 392-53 http://dx.doi.org/0.5388/na.206.4. Discrete uniform it law for additive functions on shifted primes Gediminas Stepanauskas,

More information