Uniformly Accurate Quantile Bounds Via The Truncated Moment Generating Function: The Symmetric Case
Electronic Journal of Probability, Vol. 12 (2007), paper no. 47.

Uniformly Accurate Quantile Bounds Via The Truncated Moment Generating Function: The Symmetric Case

Michael J. Klass
University of California
Departments of Mathematics and Statistics
Berkeley, CA, USA
klass@stat.berkeley.edu

Krzysztof Nowicki
Lund University
Department of Statistics
Box 743, Lund, Sweden
krzysztof.nowicki@stat.lu.se

Abstract. Let X_1, X_2, ... be independent and symmetric random variables such that S_n = X_1 + ... + X_n converges a.s. to a finite-valued random variable S, and let S* = sup_{1<=n<inf} S_n (which is finite a.s.). We construct upper and lower bounds for s_y and s*_y, the upper 1/y-th quantiles of S and S*, respectively. Our approximations rely on an explicitly computable quantity q_y for which we prove that

    (1/2) q_{y/2} < s*_y < 2 q_{2y}   and   (1/2) q_{(y/4)(1+sqrt(1-8/y))} < s_y < 2 q_{2y}.

The right-hand sides hold for y >= 2 and the left-hand sides for y >= 94 and y >= 97, respectively. Although our results are derived primarily for symmetric random variables, they apply to non-negative variates and extend to an absolute value of a sum of independent but otherwise arbitrary random variables.

Key words: sum of independent random variables, tail distributions, tail probabilities, quantile approximation, Hoffmann-Jørgensen/Klass-Nowicki inequality. Supported by NSF Grant DMS
AMS 2000 Subject Classification: Primary 60G50, 60E15, 46E30; Secondary: 46B09. Submitted to EJP on September 2, 2006; final version accepted September 24, 2007.
1 Introduction

The classical problem of approximating the tail probability of a sum of independent random variables dates back at least to the famous central limit theorem of de Moivre (1730-33) for sums of independent identically distributed (i.i.d.) Bernoulli random variables. As time has passed, increasingly general central limit theorems have been established and stable limit theorems proved. All of these results were asymptotic and approximated the distribution of the sum only sufficiently near its center. With the advent of the upper and lower exponential inequalities of Kolmogoroff (1929), the possibility materialized of approximating the tail probabilities of a sum having a fixed, finite number of mean-zero, uniformly bounded random summands over a much broader range of values. In the interest of more exact approximation, Esscher (1932) recovered the error in the exponential bound by introducing a change of measure, thereby expressing the exact value of the tail probability in terms of an exponential upper bound times an expectation factor (which is a kind of Laplace transform). Cramér (1938) brought it to the attention of the probability community, showing that it can be effectively used to reduce the relative error in approximating small tail probabilities. However, in the absence of special conditions the expectation factor is not readily tractable: it involves a function of the original sum, transformed in such a way that the level to be exceeded by the original sum has been made equal, or at least close, to the mean of the sum of the transformed variable(s). Over the years various approaches have been used to contend with this expectation term. By slightly re-shifting the mean of the sum of i.i.d. non-negative random variables and applying the generalized mean value theorem, Jain and Pruitt (1987) obtained a quite explicit tail probability approximation. Given any n, any exceedance level z and any non-degenerate i.i.d.
random variables X_1, ..., X_n, Hahn and Klass (1997) showed that for some unknown, universal constant gamma > 0,

    gamma B^2 <= P(X_1 + ... + X_n >= z) <= 2B.    (1)

The quantity B was determined by constructing a certain common truncation level and applying the usual exponential upper bound to the probability that the sum of the underlying variables, each conditioned to remain below this truncation level, is at least z. The level was chosen so that the chance of any X_j reaching or exceeding that height roughly equalled the exponential upper bound then obtained. Employing a local probability approximation theorem of Hahn and Klass (1995), it was found that the expectation factor in the Esscher transform could be as small as the exponential bound term itself, but of no smaller order of magnitude, as indicated by the LHS of (1). By separately considering the contributions to S_n of those X_j at or below some truncation level t_n and those above it, the method of Hahn and Klass bears some resemblance to work of Nagaev (1965) and Fuk and Nagaev (1971). Other methods required more restrictive distributional assumptions and so are of less concern to us here. The general (non-i.i.d.) case has been elusive, we now believe, because there is no limit to how much smaller the reduction factor may be compared to the exponential bound (of even suitably truncated random variables).
To see this, let

    X_i = 1 w.p. p_i,  0 w.p. 1 - 2p_i,  -1 w.p. p_i,

and suppose p_1 >> p_2 >> ... >> p_n > p_{n+1} = 0. Then, for 1 <= k <= n,

    P(Sum_{j=1}^n X_j >= k) >= p_1 p_2 ... p_k Prod_{j=k+1}^n (1 - 2p_j).

Using exponential bounds, the bound for P(Sum_{j=1}^n X_j >= k) is

    inf_{t>0} e^{-tk} Prod_{j=1}^n E e^{tX_j},

and for P(Sum_{j=1}^n X_j > k) it is

    lim_{delta -> 0+} inf_{t>0} e^{-t(k+delta)} Prod_{j=1}^n E e^{tX_j},

which is, in fact, the same exponential factor. Therefore in this second case the reduction factor is even smaller than p_{k+1}, which could be smaller than P^alpha(Sum_{j=1}^n X_j >= k) for any alpha > 0 prescribed before construction of p_{k+1}. It is this additional reduction factor which the Esscher transform provides and which, we now see, can sometimes be far more accurate in identifying the order of magnitude of a tail probability than the exponential bound which it was thought to merely adjust a bit. Nor does the direct approach to tail probability approximation offer much hope, because calculation of convolutions becomes unwieldy with great rapidity. As a lingering alternative one could try to employ characteristic functions. They have three principal virtues: they exist for all random variables; they retain all the distributional information; and they readily handle sums of independent variables by converting a convolution to a product of marginal characteristic functions. Thus if S_n = X_1 + ... + X_n where the X_j are independent rv's, for any amount a with P(S_n = a) = 0 the inversion formula is

    P(S_n >= a) = lim_{b -> inf} lim_{T -> inf} (1/2 pi) Int_{-T}^{T} ((e^{-ita} - e^{-itb})/(it)) Prod_{j=1}^n E e^{itX_j} dt.    (2)

Clearly, the method becomes troublesome when applied to marginal distributions X_1, ..., X_n for which at least one of the successive limits converges arbitrarily slowly. In addition, formula (2) does not hold at any atom a of S_n, nor is it continuous there. Moreover, as we have already witnessed,
the percentage drop in the order of magnitude of the tail probability, as a moves just above a given atom, can be arbitrarily close to 100%. The very issues posed by both characteristic function inversion and change of measure have given rise to families of results: asymptotic results, large deviation results, moderate deviation results, steepest descent results, etc. Lacking are results which are explicit and non-asymptotic, applying to all fixed-n sums of independent real-valued random variables without restrictions, and covering essentially the entire range of the sum distribution without limiting or confining the order of magnitude of the tail probabilities. In 2001 Hitczenko and Montgomery-Smith, inspired by a reinvigorating approach of Latała (1997) to uniformly accurate p-norm approximation, showed that for any integer n, any exceedance level z, and any n independent random variables satisfying what they defined to be a Lévy condition, there exists a constructable function f(z) and a universal positive constant c > 1, whose magnitude depends only upon the Lévy-condition parameters, such that

    c^{-1} f(cz) <= P(Sum_{j=1}^n X_j > z) <= c f(c^{-1} z).    (3)

To obtain their results they employed an inequality due to Klass and Nowicki (2000) for sums of independent, symmetric random variables, which they extended to sums of arbitrary random variables at some cost of precision. Using a common truncation level for the X_j and a norm of the sum of truncated random variables, Hitczenko and Montgomery-Smith (1999) previously obtained approximations as in (3) for sums of independent random variables all of which were either symmetric or non-negative. In this paper we show that the constant c in (3) can be chosen to be at most 2, for a slightly different function f, if Sum_{j=1}^n X_j is replaced by S*_n = max_{1<=k<=n} S_k and if the X_j are symmetric. A slightly weaker result is obtained for S_n itself.
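The gap between the exponential (Chernoff) bound and the true tail in the earlier three-point example can be checked numerically. The sketch below uses illustrative values of p_i (not taken from the paper) and compares the exact closed tail P(Sum X_j >= k), the exact open tail P(Sum X_j > k), and the common exponential bound for both:

```python
import itertools, math

# Three-point variables as in the example: X_i = +1 w.p. p_i, 0 w.p. 1-2p_i, -1 w.p. p_i,
# with rapidly decreasing p_i (illustrative values, not from the paper).
p = [1e-1, 1e-3, 1e-6, 1e-9]
n, k = len(p), 2

def exact_tail(threshold, strict=False):
    # Exact tail probability by enumerating all 3^n outcomes.
    total = 0.0
    for vals in itertools.product([1, 0, -1], repeat=n):
        s = sum(vals)
        if (s > threshold) if strict else (s >= threshold):
            prob = 1.0
            for v, pi in zip(vals, p):
                prob *= pi if v != 0 else 1 - 2 * pi
            total += prob
    return total

exact_ge = exact_tail(k)              # P(sum >= k)
exact_gt = exact_tail(k, strict=True) # P(sum > k): same exponential bound applies

# Chernoff bound inf_{t>0} e^{-tk} prod_j E e^{t X_j}, with E e^{tX_j} = 1 - 2p_j + 2p_j cosh t.
def chernoff(t):
    out = math.exp(-t * k)
    for pi in p:
        out *= 1 - 2 * pi + 2 * pi * math.cosh(t)
    return out

bound = min(chernoff(i / 100) for i in range(1, 3000))  # crude grid search over t

print(exact_ge, exact_gt, bound)
```

The bound tracks P(sum >= k) reasonably well here, but overshoots P(sum > k) by a factor of order 1/p_{k+1}, which is the "reduction factor" discussed in the text.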
We obtain our function as the solution to a functional equation involving the moment generating function of the sum of truncated random variables. The accuracy of our results depends primarily upon an inequality pertaining to the number of event recurrences, as given in Klass and Nowicki (2003), a corollary of which extends the tail probability inequality of Klass and Nowicki (2000) in an optimal way from independent symmetric random elements to arbitrary independent random elements. In effect, our approach has involved the recognition that uniformly good approximation of the tail probability of S_n was impossible. However, if we switch attention to upper-quantile approximation, then uniformly good approximation becomes feasible, at least for sums of independent symmetric random variables. In this endeavor we have been able to obtain adequate precision from moment generating function information for sums of truncated rv's, obviating the need to entertain the added complexity of transform methods despite their potentially greater exactitude. Although our results are derived for symmetric random variables, they apply to non-negative variates and have some extension to a sum of independent but otherwise arbitrary random variables via the concentration function C_Z(y), where

    C_Z(y) = inf{c >= 0 : P(|Z - b| <= c) >= 1/y for some b in R}   (for y > 1),    (4)
by making use of the fact that the concentration function for any random variable Z possesses a natural approximation based on the symmetric random variable Z - Z', where Z' is an independent copy of Z, due to the inequalities

    (1/2) s_{Z-Z', y/2} <= C_Z(y) <= s_{Z-Z', y}.    (5)

Thus, to obtain reasonably accurate concentration function and tail probability approximations for sums of independent random variables, it is sufficient to obtain explicit upper and lower bounds for the quantiles of Sum_{j=1}^n X_j when the X_j are independent and symmetric. These results can be extended to Banach space settings.

2 Tail probability approximations

Let X_1, X_2, ... be independent, symmetric random variables such that, letting S_n = X_1 + ... + X_n, lim_n S_n = S a.s. and sup_n S_n = S* < inf a.s. We introduce the upper 1/y-th quantile s_y satisfying

    s_y = sup{s : P(S >= s) >= 1/y}.    (6)

For sums of random variables that behave like a normal random variable, s_y is slowly varying. For sums which behave like a stable random variable with parameter alpha < 2, s_y grows at most polynomially fast. However, if an X_j has a sufficiently heavy tail, then s_y can grow arbitrarily rapidly. Analogously we define

    s*_y = sup{s : P(S* >= s) >= 1/y}.    (7)

To approximate s_y and s*_y in the symmetric case, to which this paper is restricted, we utilize the magnitude t_y, the upper 1/y-th quantile of the maximum of the X_j, defined as

    t_y = sup{t : P(Union_j {X_j >= t}) >= 1/y}.    (8)

This quantity allows us to rewrite each X_j as the sum of two quantities: a quantity of relatively large absolute value, (|X_j| - t_y)^+ sgn X_j, and one of more typical size, (|X_j| ^ t_y) sgn X_j. Take any y > 1 and let

    X_{j,y} = (|X_j| ^ t_y) sgn X_j,    (9)
    S_{j,y} = X_{1,y} + ... + X_{j,y},    (10)
    S_y = lim_n S_{n,y} a.s.,    (11)
    S*_y = sup_{1<=n<inf} S_{n,y},    (12)
    s~_y = sup{s : P(S_y >= s) >= 1/y},    (13)
    s~*_y = sup{s : P(S*_y >= s) >= 1/y}    (14)

(the tilde marks upper quantiles of the truncated sums). Since t_y can be computed directly from the marginal distributions of the X_j's, we regard t_y as computable. At the very least, the tail probability functions defined in (6), (7), (13) and (14) require knowledge of convolutions. Generally speaking, we regard such quantities as inherently difficult to compute. It is the object of this paper to construct good approximations to them. To simplify this task it is useful to compare them with one another. Notice that for y > 1

    s_y <= s*_y,  s~_y <= s~*_y  and  s*_y <= s_{2y}.    (15)

The first two inequalities are trivial; the last one follows from Lévy's inequality. Due to the effect of truncation, one may think that s~_y never exceeds s_y. The example below provides a case to the contrary. In fact, s~_y / s_y can equal +inf.

Example 2.1. Let, for y > 2,

    X_1 = 2 w.p. 1/(2y),  0 w.p. 1 - 1/y,  -2 w.p. 1/(2y),
    X_2 = 1 w.p. 1/(2y),  0 w.p. 1 - 1/y,  -1 w.p. 1/(2y),

and, for 0 < eps < 1,

    X_3 = eps w.p. (1 - delta)/2,  0 w.p. delta,  -eps w.p. (1 - delta)/2.

Then t_y = 1 and P(S <= 0) > 1/2. For simplicity set p = 1 - 1/y. Then

    P(S > 0) = P(S >= eps) = P(X_1 = 2) + P(X_1 = 0, X_2 = 1) + P(X_1 = X_2 = 0, X_3 = eps)
             = (1 - p)/2 + p(1 - p)/2 + p^2 (1 - delta)/2 = (1 - p^2 delta)/2.

There exists 0 < delta_1 < 1 such that (1 - p^2 delta_1)/2 = 1/y. To see this, note that when delta = 0 the quantity equals 1/2 > 1/y, and when delta = 1 it equals (1 - p^2)/2 < 1/y. Hence, when delta_1 < delta < 1, s_y = 0. It also follows that there exists a unique delta_2 with delta_1 < delta_2 < 1 such that

    (1 - p^2 delta_2)/2 + ((1 - p)^2 / 4)((1 - delta_2)/2) = 1/y.
Hence, for delta_1 < delta < delta_2, when the random variables are truncated at t_y = 1:

    P(S_y >= eps) = P(X_{1,y} = 1) + P(X_{1,y} = 0, X_{2,y} = 1) + P(X_{1,y} = X_{2,y} = 0, X_3 = eps) + P(X_{1,y} = 1, X_{2,y} = -1, X_3 = eps)
                  = (1 - p^2 delta)/2 + ((1 - p)^2 / 4)((1 - delta)/2) >= 1/y.

So s~_y >= eps for delta_1 < delta < delta_2. Consequently, s~_y / s_y = inf.

Nevertheless, reducing y by a factor u_y allows us to compare these quantities, as the following lemma describes.

Lemma 2.1. Suppose y >= 4 and let u_y = (y/2)(1 - sqrt(1 - 4/y)). Then for independent symmetric X_j's,

    s~*_{y/u_y} <= s*_y <= s~*_{2y}.    (16)

Thus, for y >= 2,

    s~*_y <= s*_{y^2/(y-1)}.    (17)

Proof: To verify the RHS of (16), suppose there exists s with s~*_{2y} < s < s*_y. We have

    1/y <= P(S* >= s) <= P(S*_{2y} >= s, Inter_j {X_j <= t_{2y}}) + P(Union_j {X_j > t_{2y}}) < 1/(2y) + 1/(2y) = 1/y,

which gives a contradiction. Hence s*_y <= s~*_{2y}. To prove the LHS of (16) by contradiction, suppose there exists s with s*_y < s < s~*_{y/u_y}. Let A_y = Inter_j {X_j >= -t_{y/u_y}}.
We have

    u_y/y <= P(S*_{y/u_y} >= s) <= P(S*_{y/u_y} >= s | A_y)

(because each of the marginal distributions (|X_j| ^ t_{y/u_y}) sgn(X_j) is made stochastically larger by conditioning on A_y while preserving independence)

    <= P(S* >= s | A_y) = P(S* >= s, A_y) / P(A_y) <= P(S* >= s) / (1 - P(A_y^c)) < (1/y) / (1 - u_y/y).

However, by the quadratic formula, u_y/y - (u_y/y)^2 = 1/y, so the right-hand side equals u_y/y, which gives a contradiction. Reparametrizing the middle of (16) gives (17).

Remark 2.1. Removing the stars in the proof of (16) and (17) establishes two new chains of inequalities:

    s~_{y/u_y} <= s_y <= s~_{2y},  y >= 4,    (18)

and

    s~_y <= s_{y^2/(y-1)},  y >= 2.    (19)

Lemma 2.2. Take any y > 1. Then

    t_y <= s*_{2y} ^ s~*_{2y}    (20)

and

    t_y <= s_{4y^2/(2y-1)} ^ s~_{4y^2/(2y-1)}.    (21)

These inequalities are essentially best possible, as will be shown by examples below.

Proof: To prove (20) we first show that t_y <= s*_{2y}. Let

    tau = last k < inf such that |X_k| >= t_y  (tau = inf if no such k exists).
Conditional on tau = k, S_k is a symmetric random variable. Hence

    P(S* >= t_y) >= Sum_{k=1}^inf P(S* >= t_y, tau = k) >= Sum_{k=1}^inf P(S_k >= t_y, tau = k)
                 >= (1/2) Sum_{k=1}^inf P(tau = k)   (by conditional symmetry)
                 = (1/2) P(Union_{k=1}^inf {|X_k| >= t_y}) >= 1/(2y).

Thus s*_{2y} >= t_y. The other inequality is proved similarly.

To prove (21) let

    tau = first k < inf such that |X_k| >= t_y  (tau = inf if no such k exists).

The RHS of (21) is non-negative. Hence we may suppose t_y > 0. Then

    P(S >= t_y) >= Sum_{k=1}^inf P(S >= t_y, tau = k, X_k >= t_y)
                >= Sum_{k=1}^inf P( Sum_{j=1}^{k-1} X_j I(|X_j| < t_y) + Sum_{j=k+1}^inf X_j >= 0, tau = k, X_k >= t_y )
                >= (1/2) Sum_{k=1}^inf P(tau = k, X_k >= t_y) = (1/4) P(tau < inf).

To lower-bound P(tau < inf) let P+_{j,y} = P(X_j >= t_y). Notice that since X_j is symmetric and t_y > 0 we must have P+_{j,y} <= 1/2. Given P(Union_j {X_j >= t_y}) >= 1/y we have

    1 - Prod_j (1 - P+_{j,y}) = P(Union_j {X_j >= t_y}) >= 1/y,
so Prod_j (1 - P+_{j,y}) <= 1 - 1/y. Since 0 <= 1 - 2P+_{j,y} <= (1 - P+_{j,y})^2,

    Prod_j (1 - 2P+_{j,y}) <= (Prod_j (1 - P+_{j,y}))^2 <= (1 - 1/y)^2.

Consequently,

    P(tau < inf) = 1 - Prod_j (1 - 2P+_{j,y}) >= 1 - (1 - 1/y)^2 = (2y - 1)/y^2,

so that P(S >= t_y) >= (2y - 1)/(4y^2). Therefore t_y <= s_{4y^2/(2y-1)}. By essentially the same reasoning, t_y <= s~_{4y^2/(2y-1)}.

(20) is best possible for y > 4/3, as the following example demonstrates.

Example 2.2. For any y > 4/3 and y < z < 2y we can have s*_z < t_y and s~*_z < t_y. Let X_1 be uniform on (-a, a) for any 0 < a <= 1. For j = 2, 3 let

    X_j = 1 w.p. q,  0 w.p. 1 - 2q,  -1 w.p. q,  where q = 1 - sqrt(1 - 1/y),

and let X_j = 0 for j >= 4. Then P(Union_j {X_j >= 1}) = 1 - (1 - q)^2 = 1/y, so t_y = 1. If s*_z >= t_y or s~*_z >= t_y, then

    1/z <= P(max_{1<=k<=3} Sum_{j=1}^k X_j >= 1)
         = P(X_1 >= 0, X_2 = 1) + P(X_1 >= 0, X_2 = 0, X_3 = 1) + P(X_1 < 0, X_2 = 1, X_3 = 1)
         = (1/2) q (1 + (1 - 2q) + q) = (1/2) q (2 - q) = 1/(2y),

which gives a contradiction. Next we show that (21) is also essentially best possible.
Example 2.3. For any fixed 0 < eps < 1 there exists 1 << y_eps < inf such that for each y >= y_eps we may have s_{4y^2/(2y-1+eps)} < t_y. Fix n large. Let X_1 be uniform on (-a, a) for any 0 < a <= 1. For j = 2, 3, ..., n+1 let

    X_j = 1 w.p. q_n,  0 w.p. 1 - 2q_n,  -1 w.p. q_n,  where (1 - q_n)^n = 1 - 1/y,

and let X_j = 0 for j > n+1. Then

    P(Union_j {X_j >= 1}) = 1 - (1 - q_n)^n = 1/y,

so t_z = 1 for z >= y. Fix any eps > 0 and suppose that s_{4y^2/(2y-1+eps)} >= 1. Then

    (2y - 1 + eps)/(4y^2) <= P(Sum_{j=1}^{n+1} X_j >= 1)
        = P(X_1 >= 0) Sum_{j=2}^{n+1} P(X_j = 1) Prod_{2<=i<=n+1, i != j} P(X_i = 0) + (terms with at least two of X_2, ..., X_{n+1} nonzero).

As n -> inf the latter quantity tends to

    (1/2)(-ln(1 - 1/y))(1 - 1/y)^2 + (1/2)(ln(1 - 1/y))^2 (1 - 1/y)^2 + O(1/y^3)
        = (2y - 1)/(4y^2) + O(1/y^3) < (2y - 1 + eps/2)/(4y^2)

for all y sufficiently large, which gives a contradiction.

Moreover, as the next example demonstrates, t_y divided by the upper 1/y-th quantile of each of the above variables may be unbounded. It also shows that s_y and s~_y (as well as s*_y and s~*_y) can be zero for arbitrarily large values of y. (Note: these quantities are certainly non-negative for y >= 2.)
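The quantities introduced in (6)-(14) are straightforward to approximate numerically. The sketch below (with illustrative marginals, not taken from the paper) computes t_y exactly from the marginal distributions, taking the exceedance events one-sided as in (8), and then estimates the four upper 1/y-th quantiles by Monte Carlo:

```python
import random

# Illustrative marginals: X_j = +v_j w.p. p_j, 0 w.p. 1-2p_j, -v_j w.p. p_j.
v = [4.0, 2.0, 1.0]
p = [0.01, 0.05, 0.10]
y = 8.0

# t_y = sup{t : P(union_j {X_j >= t}) >= 1/y}; it suffices to scan the jump points v_j.
def union_prob(t):
    prob_none = 1.0
    for vj, pj in zip(v, p):
        if vj >= t:
            prob_none *= 1 - pj
    return 1 - prob_none

t_y = max([t for t in v if union_prob(t) >= 1 / y], default=0.0)

random.seed(0)
def sample():
    xs = [random.choices([vj, 0.0, -vj], [pj, 1 - 2 * pj, pj])[0] for vj, pj in zip(v, p)]
    trunc = [max(-t_y, min(t_y, x)) for x in xs]   # (|x| ^ t_y) sgn x, as in (9)
    def partials(zs):
        s, out = 0.0, []
        for z in zs:
            s += z
            out.append(s)
        return out
    ps, pt = partials(xs), partials(trunc)
    return ps[-1], max(ps), pt[-1], max(pt)       # S, S*, S_y, S*_y

draws = [sample() for _ in range(200000)]

def upper_quantile(vals):  # empirical version of sup{s : P(V >= s) >= 1/y}
    vs = sorted(vals, reverse=True)
    return vs[int(len(vs) / y) - 1]

s_y, s_star_y, s_t_y, s_t_star_y = (upper_quantile([d[i] for d in draws]) for i in range(4))
print(t_y, s_y, s_star_y, s_t_y, s_t_star_y)
```

Since the running maximum dominates the final sum pathwise, the empirical quantiles automatically satisfy the first two orderings in (15).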
Example 2.4. Take any y >= 4/3. Let X_1 and X_2 be i.i.d. with

    X_1 = 1 w.p. q,  0 w.p. 1 - 2q,  -1 w.p. q,  where q = 1 - sqrt(1 - 1/y).

Also, set 0 = X_3 = X_4 = .... Then

    0 = P(Union_j {X_j > 1}) < P(Union_j {X_j >= 1}) = 1 - P^2(X_1 < 1) = 1/y.

Therefore t_y = 1. Observe that

    P(S > 0) = P(S >= 1) = P(X_1 = 1) + P(X_1 = 0, X_2 = 1) = q(2 - 2q).

This quantity is less than q(2 - q) = 1/y. Therefore s_y = s~_y = 0. Also, P(S* >= s) = q(2 - 2q) < 1/y for every 0 < s <= 1, so s*_y = s~*_y = 0.

We want to approximate s_y and s*_y based on the behavior of the marginal distributions of variables whose sum is S. Suppose we attempt to construct our approximation by means of a quantity q_y which involves some reparametrization of the moment generating function of a truncated version of S. Inspired by Luxemburg's approach to constructing norms for functions in Orlicz space, and affording ourselves as much latitude as possible, we temporarily introduce arbitrary positive functions f_1(y) and f_2(y), defining q_y as the real satisfying

    q_y = sup{q > 0 : E (f_1(y))^{S*_y / q} >= f_2(y)}.    (22)

To avoid triviality we also require that S*_y be non-constant. Since S_y is symmetric, E (f_1(y))^{S_y/q} = E (f_1(y))^{-S_y/q} > 1. Therefore we may assume that f_1(y) > 1 and f_2(y) > 1. Given f_1(y) and a constant c > 0, we want to choose f_2(y) so that s~*_y < c q_y and q_y is as small as possible. Notice that a sum of independent, symmetric, uniformly bounded rv's converges a.s. iff its variance is finite. Consequently, by Lemma 3.1 to follow, h(q) < inf, where h(q) = E (f_1(y))^{S*_y/q}. Clearly, h(q) is a strictly decreasing continuous function of q > 0 with range (1, inf) (see Remark 3.1). Hence, there is a unique q_y such that

    E (f_1(y))^{S*_y / q_y} = f_2(y).    (23)

Ideally, we would like to choose f_2(y) so that s~*_y = c q_y. But since we cannot directly compare s~*_y and q_y, we must content ourselves with selecting a value of f_2(y) in (1, y^{-1}(f_1(y))^c], because for these values we can demonstrate that s~*_y < c q_y. Since q_y decreases as f_2(y) increases, we will set f_2(y) = y^{-1}(f_1(y))^c.
This gives the sharpest inequality our method of proof can provide.
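Because h(q) = E f_1(y)^{S*_y/q} is continuous and strictly decreasing with range (1, inf), the functional equation (23) can be solved by bisection whenever the expectation is computable. A sketch in a toy, exactly enumerable case (all values illustrative, not from the paper):

```python
import itertools

# Toy model: three symmetric three-point variables, regarded as already truncated;
# S*_y is the maximum partial sum. Values below are illustrative only.
v = [1.0, 1.0, 0.5]
p = [0.05, 0.10, 0.20]
y, f1, c = 8.0, 20.0, 1.0          # require f1^c / y > 1
f2 = f1 ** c / y                    # f_2(y) = f_1(y)^c / y

def h(q):
    # h(q) = E f1^{S*_y/q}, by exact enumeration of the 3^3 outcomes
    total = 0.0
    for signs in itertools.product([1, 0, -1], repeat=3):
        prob, s, smax = 1.0, 0.0, float("-inf")
        for sg, vj, pj in zip(signs, v, p):
            prob *= pj if sg != 0 else 1 - 2 * pj
            s += sg * vj
            smax = max(smax, s)
        total += prob * f1 ** (smax / q)
    return total

# h is strictly decreasing in q, so the root of h(q) = f2 is found by bisection.
lo, hi = 0.2, 50.0                  # h(lo) > f2 > h(hi) for these parameters
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if h(mid) > f2 else (lo, mid)
q_y = (lo + hi) / 2
print(q_y)
```

In realistic cases the expectation would itself be approximated (for instance by Monte Carlo over the truncated summands), with the same monotone root-finding applied.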
Lemma 2.3. Fix any y > 1, any f_1(y) > 1, and any c > 0 such that (f_1(y))^c / y > 1. Define q_y according to (22) with f_2(y) = y^{-1}(f_1(y))^c. Then, if t_y != 0,

    s~*_y < c q_y.    (24)

Proof: Since t_y != 0, S*_y is non-constant and so q_y > 0. To prove (24) suppose s~*_y >= c q_y. Let

    tau = first n such that S_{n,y} >= c q_y  (tau = inf if no such n exists).

We have

    1/y <= P(S*_y >= c q_y) = P(tau < inf) <= Sum_{n=1}^inf E[(f_1(y))^{(S_{n,y}/q_y) - c} I(tau = n)]
        <= Sum_{n=1}^inf E[(f_1(y))^{(S_y/q_y) - c} I(tau = n)]
        = E[(f_1(y))^{(S_y/q_y) - c} I(tau < inf)]
        < E (f_1(y))^{(S_y/q_y) - c}   (since P(tau = inf) > 0 and S_y is finite)
        <= (f_1(y))^{-c} E (f_1(y))^{S*_y/q_y} = (f_1(y))^{-c} f_2(y) = 1/y,    (25)

which gives a contradiction.

We can extend this inequality to the following theorem:

Theorem 2.4. Let f(y) > 1 and c > 0 be such that f^c(y) > y >= 2. Suppose t_y > 0. Let

    q_y = sup{q > 0 : E (f(y))^{S*_y/q} >= y^{-1} f^c(y)}.    (26)

Then

    max{s~*_y, t_y} < c q_y.    (27)

Proof: We already know that s~*_y < c q_y. Suppose t_y >= c q_y. Let X^_j = t_y (I(X_j >= t_y) - I(X_j <= -t_y)). There exist 0-1 valued random variables delta_j such that {X_i, delta_j : 1 <= i, j < inf} are independent and P(Union_j {delta_j X^_j = t_y}) = 1/y. Setting p_j = P(delta_j X^_j = t_y) (= P(delta_j X^_j > 0)), we obtain 1/y = 1 - Prod_j (1 - p_j).
Then let

    a = (f(y))^c - 2 + (f(y))^{-c} > 0

and notice that, for all y >= 2, 1 + a/y > f^c(y)/y. Moreover, Prod_j (1 - p_j) >= 1 - Sum_j p_j, so Sum_j p_j >= 1/y. Clearly,

    y^{-1}(f(y))^c = E (f(y))^{S*_y/q_y} >= E (f(y))^{c S*_y/t_y} >= Prod_j E (f(y))^{c X_{j,y}/t_y}
        >= Prod_j E (f(y))^{c delta_j X^_j / t_y}
        = Prod_j (1 + p_j ((f(y))^c - 2 + (f(y))^{-c}))
        = Prod_j (1 + a p_j) >= 1 + a Sum_j p_j >= 1 + a/y    (28)
        > f^c(y)/y   for all y >= 2.

This gives a contradiction. Hence t_y < c q_y.

Since E (f(y))^{S*_y/q_y} = E ((f(y))^c)^{S*_y/(c q_y)}, by redefining f(y) as (f(y))^c, q_y is redefined in a way which makes c = 1. Hence Theorem 2.4 can be restated as

Corollary 2.5. Take any y >= 2 such that S*_y is non-constant. Then

    max{s~*_y, t_y} <= inf_{z>y} {q_y(z) : E z^{S*_y/q_y(z)} = z/y},    (29)

with equality iff max{s~*_y, t_y} = ess sup S*_y.

Proof: (29) follows from (23) and (27). To get the details correct, note first that for any such y,

    lim_{z->inf} E z^{(S*_y/q) - 1} = inf   if 0 < q < ess sup S*_y,
                                   = P(S*_y = ess sup S*_y)   if q = ess sup S*_y,
                                   = 0   if q > ess sup S*_y.

Hence q_y(z) -> ess sup S*_y as z -> inf, and so

    inf_{z>y} {q_y(z) : E z^{S*_y/q_y(z)} = z/y} <= ess sup S*_y.    (30)

Therefore, if max{s~*_y, t_y} = ess sup S*_y we must have equality in (29). W.l.o.g. we may assume that

    max{s~*_y, t_y} < ess sup S*_y.    (31)

There exist monotonic z_n in (y, inf) such that

    q_y(z_n) -> inf_{z>y} {q_y(z) : E z^{S*_y/q_y(z)} = z/y} < inf.    (32)
Let z_inf = lim_n z_n. If z_inf = y then q_y(z_n) -> inf, which is impossible as indicated in (32). If z_inf = inf, then lim_n q_y(z_n) = ess sup S*_y, which gives strict inequality in (29) by application of (31). Finally, suppose z_inf in (y, inf). By dominated convergence the equation E z_n^{S*_y/q_y(z_n)} = z_n/y converges to the equation E z_inf^{S*_y/q_y(z_inf)} = z_inf/y. Hence

    inf_{z>y} {q_y(z) : E z^{S*_y/q_y(z)} = z/y} = q_y(z_inf) > max{s~*_y, t_y}

by Theorem 2.4.

The example below verifies that inequality (29) is sharp.

Example 2.5. Consider a sequence of probability distributions such that, for each n, X_{n1}, X_{n2}, ..., X_{nn} are i.i.d. and P(X_{nj} = 1/sqrt(n)) = P(X_{nj} = -1/sqrt(n)) = 0.5, j = 1, ..., n. Letting n tend to inf we find from (29) that

    s*_y <= inf_{z>y} {q_y(z) : E z^{B(1)/q_y(z)} = z/y},    (33)

where B(t) is standard Brownian motion and

    s*_y = sup{s : P(max_{t<=1} B(t) >= s) >= 1/y}.    (34)

As is well known,

    s*_y ~ sqrt(2 ln y)  as y -> inf.    (35)

Notice that

    E z^{B(1)/q_y(z)} = exp(ln^2(z) / (2 q_y(z)^2)),    (36)

whence

    q_y(z) = ln(z) / sqrt(2 ln(z/y)).    (37)

Noting that inf_{z>y} q_y(z) = q_y(y^2) = sqrt(2 ln y), we find that

    q_y(y^2) ~ s*_y  as y -> inf,    (38)

which implies that (29) is best possible.

Proceeding, we seek a lower bound of max{s~*_y, t_y} in terms of q_y.

Theorem 2.6. Take any y >= 47 such that S*_y is non-constant. Let f(y) = -1.5 / ln(1 - 2/y) and let

    q_y = sup{q > 0 : E (f(y))^{S*_y/q} >= y^{-1} f^2(y)}.    (39)
Then

    (1/2) q_y < max{s~*_y, t_y} < 2 q_y,    (40)

with the RHS holding for y >= 2. Moreover, if t_y/q_y -> 0 as y -> inf, then

    lim inf_y s*_{y^2/(y-1)} / q_y >= lim inf_y s~*_y / q_y >= 1.    (41)

Proof: For the moment let f(y) > 1 denote any real such that f^2(y) > y. Then 0 < q_y < inf satisfies (39) with equality. The RHS of (40) is contained in Theorem 2.4. Let

    tau = last j such that there exists i, 1 <= i < j, with S_{j,y} - S_{i,y} > s~*_y.

Then let tau_0 = last i < tau such that S_{tau,y} - S_{i,y} > s~*_y. Notice that

    P(Union_{1<=i<j<inf} {S_{j,y} - S_{i,y} > s~*_y}) = Sum_{j=2}^inf P(tau = j) = Sum_{j=2}^inf Sum_{i=1}^{j-1} P(tau_0 = i, tau = j)
        <= 2 Sum_{j=2}^inf Sum_{i=1}^{j-1} P(tau_0 = i, tau = j, S_{i,y} >= 0)
        <= 2 Sum_{j=2}^inf Sum_{i=1}^{j-1} P(tau_0 = i, tau = j, max_{k<inf} S_{k,y} > s~*_y)    (42)
        <= 2 P(max_{k<inf} S_{k,y} > s~*_y) <= 2/y.

Therefore, by (39) and (73) of Theorem 3.2,

    y^{-1}(f(y))^2 = E (f(y))^{S*_y/q_y} < (f(y))^{s~*_y/q_y} (1 - 2/y)^{1 - (f(y))^{(s~*_y + t_y)/q_y}}.    (43)

Let c_1 and c_2 satisfy t_y = c_1 q_y and s~*_y = c_2 q_y. From (43) it follows that

    (f(y))^{2 - c_2} (1 - 2/y)^{(f(y))^{c_1 + c_2}} < y - 2.    (44)

Suppose that the LHS of (40) fails. Then 0 < c_1 v c_2 <= 1/2. Hence

    (f(y))^{3/2} (1 - 2/y)^{f(y)} < y - 2.    (45)

Since (45) holds for all choices of f(y) with f^2(y) > y, putting

    f(y) = 3 / (2 ln(1 + 2/(y - 2)))
we must have

    (3 / (2e ln(1 + 2/(y - 2))))^{3/2} < y - 2.    (46)

This inequality is violated for y >= 47. Therefore c_1 v c_2 > 1/2 if y >= 47. As for (41), suppose c_1 -> 0 as y -> inf. If there exist eps > 0 and y_n -> inf such that c_2 for y_n (call it c_{2n}, and similarly let c_{1n} denote c_1 for y_n) is at most 1 - eps, then, since f(y_n) ~ 3y_n/4, the LHS of (44) is asymptotic to

    (3y_n/4)^{2 - c_{2n}} (1 - 2/y_n)^{(3y_n/4)^{c_{1n} + c_{2n}}} ~ (3y_n/4)^{2 - c_{2n}} >= (3y_n/4)^{1 + eps},    (47)

thereby contradicting (44). Thus the RHS of (41) holds, and its LHS follows by application of (17).

Remark 2.2. Letting c be any real exceeding 1.5, the method of proof of Theorem 2.6 also shows that if we define

    f(y) = (c - 1/2) / ln(1 + 2/(y - 2))   and   q_{y,c} = sup{q > 0 : E (f(y))^{S*_y/q} >= y^{-1} f^c(y)},    (48)

there exists y_c such that for y >= y_c with t_y > 0,

    (1/2) q_{y,c} < max{s~*_y, t_y} < c q_{y,c}.    (49)

Hence, as y -> inf, our upper and lower bounds for max{s~*_y, t_y} differ by a factor which can be made to converge to 3.

Theorem 2.6 implies the following bounds on s_v and s*_v for v related to y.

Corollary 2.7. Take any y >= 47 such that S*_y is non-constant. Then, with q_y as defined in (39),

    (1/2) s*_{y/2} < q_y < 2 s*_{2y}    (50)

and

    (1/2) s_{y/2} < q_y < 2 s_{2y^2/(y-1)},    (51)

with the LHS's holding for y >= 2.

Proof: To prove the LHS of (50) and (51) write

    2 q_y > s~*_y   (by Theorem 2.6)
          >= s*_{y/2}   (by the RHS of (16) in Lemma 2.1)
          >= s_{y/2}.
To prove the RHS of (50) and (51) we show that

    q_y < 2 (s*_{2y} ^ s_{2y^2/(y-1)}).    (52)

To do so, first recall that, by Theorem 2.6, q_y < 2 (s~*_y v t_y). By (17) and (15),

    s~*_y <= s*_{y^2/(y-1)} <= s*_{2y} ^ s_{2y^2/(y-1)}.

Finally,

    t_y <= s*_{2y} ^ s_{4y^2/(2y-1)}   (by combining the results of Lemma 2.2)
        <= s*_{2y} ^ s_{2y^2/(y-1)}   (since s_v is non-decreasing in v),

so (52), and consequently the RHS of (50) and (51), hold.

The LHS's of (50) and (51) are best possible in that c s*_{y/2} may exceed q_y as y -> inf whenever c > 1/2, since in Example 2.5 above s*_y ~ 2 q_y and s*_{y/2} ~ s*_y as y -> inf.

Corollary 2.7 can be restated as

Remark 2.3. Take any y >= 94. Then

    (1/2) q_{y/2} < s*_y < 2 q_{2y}.    (53)

The RHS of (53) is valid for y >= 2. Moreover, take any y >= 97. Then

    (1/2) q_{(y/4)(1 + sqrt(1 - 8/y))} < s_y < 2 q_{2y}.    (54)

The RHS of (54) is valid for y >= 2.

Remark 2.4. To approximate s_y we have set our truncation level at t_y. Somewhat more careful analysis might, especially in particular cases, use other truncation levels for upper and lower bounds of s_y and s*_y.

Remark 2.2 suggests that there is a sharpening of Theorem 2.6 which can be obtained if we allow c to vary with y. Our next result gives one such refinement by identifying the greatest lower bound which our approach permits, and then adjoining Corollary 2.7, which identifies the least such upper bound.

Theorem 2.8. For any y >= 4 there is a unique w_y > ln(y/(y-2)) such that

    (w_y / (e ln(y/(y-2))))^{w_y} = y - 2.    (55)
Let gamma_y satisfy

    (1 - gamma_y) / (2 gamma_y) = w_y    (56)

and let z_y > 0 satisfy

    z_y^{2 gamma_y} = (1 - gamma_y) / (2 gamma_y ln(y/(y-2))).    (57)

Then z_y > y for y >= 4. For z > y >= 4 let q_y(z) be the unique positive real satisfying

    E z^{S*_y / q_y(z)} = z/y.    (58)

Then

    gamma_y q_y(z_y) <= max{s~*_y, t_y} <= inf_{z>y} {q_y(z)} (<= q_y(z_y))    (59)

and

    lim_{y->inf} gamma_y = 1/3.    (60)

Proof: For y > 3 there is exactly one solution to equation (55). To see this, consider (w/(ea))^w for fixed a > 0. The log of this is convex in w, strictly decreasing for 0 < w <= a and strictly increasing for w >= a. Hence sup_{0<w<=a} (w/(ea))^w = 1 and sup_{w>=a} (w/(ea))^w = inf. Consequently, for b > 1 there is a unique w = w_b > a such that (w/(ea))^w = b. Now (55) holds with a = ln(y/(y-2)) and b = y - 2 > 1.

The RHS of (59) follows from Corollary 2.5. As for the LHS of (59), we employ the same notation and approach as used in the proof of Theorem 2.6. Next we show that z_y > y for y >= 4. To obtain a contradiction, suppose z_y <= y. Then, by (57), (56) and (55),

    y >= z_y = (y - 2)^{1/(1 - gamma_y)} exp(1/(2 gamma_y)) >= inf_{0<gamma<1} (y - 2)^{1/(1 - gamma)} exp(1/(2 gamma)).    (61)

To minimize (y - 2)^{1/(1 - gamma)} exp(1/(2 gamma)), set gamma = 1/(1 + sqrt(2 ln(y - 2))). Then, for y >= 4,

    inf_{0<gamma<1} (y - 2)^{1/(1 - gamma)} exp(1/(2 gamma)) = exp((1/2)(1 + sqrt(2 ln(y - 2)))^2)
        >= exp(1/2) exp(sqrt(ln 4)) (y - 2) > 3(y - 2) > y,    (62)

giving a contradiction. Hence q_y(z_y) exists. Put c_1 = t_y / q_y(z_y) and c_2 = s~*_y / q_y(z_y). Using Theorem 3.2 and arguing as in (43), but incorporating (58),

    z_y / y < z_y^{c_2} (1 - 2/y)^{1 - z_y^{c_1 + c_2}}    (63)

and consequently

    z_y^{1 - c_2} (1 - 2/y)^{z_y^{c_1 + c_2}} < y - 2.    (64)
To obtain a contradiction of (64), suppose

    max{s~*_y, t_y} <= gamma_y q_y(z_y).    (65)

Then max{c_1, c_2} <= gamma_y and

    z_y^{1 - c_2} (1 - 2/y)^{z_y^{c_1 + c_2}} >= z_y^{1 - gamma_y} (1 - 2/y)^{z_y^{2 gamma_y}} = y - 2,    (66)

where the last equality follows since, using (57), (56), and then (55),

    z_y^{1 - gamma_y} = ((1 - gamma_y) / (2 gamma_y ln(y/(y-2))))^{w_y} = (y - 2) exp(w_y)    (67)

and, by (57) and (56),

    (1 - 2/y)^{z_y^{2 gamma_y}} = exp(-w_y).    (68)

This gives the desired contradiction. (60) holds by direct calculation, using the definition of gamma_y found in (56) and the fact that w_y -> 1 as y -> inf.

The following corollary follows from the previous results.

Corollary 2.9.

    gamma_y q_y(z_y) <= s*_{2y} ^ s_{2y^2/(y-1)}    (69)

and

    s*_y <= inf_{z>2y} q_{2y}(z) <= q_{2y}(z_{2y}).    (70)

3 Appendix

Lemma 3.1. Let Y_1, Y_2, ... be independent, symmetric, uniformly bounded rv's such that sigma^2 = Sum_j E Y_j^2 is finite. Then, for all real t,

    E exp(t Sum_j Y_j) < inf.    (71)

Proof: See, for example, Theorem 1 in Prokhorov (1959).

Remark 3.1. For all p > 0 the submartingale exp(t Sum_{j=1}^n Y_j) converges almost surely and in L_p to exp(t Sum_j Y_j). Consequently, for all t > 0, E sup_n exp(t Sum_{j=1}^n Y_j) < inf. Therefore, by dominated convergence, t -> E exp(t Sum_j Y_j) is continuous and the limit has moments of all orders.
Theorem 3.2. Let X_1, X_2, ... be independent real-valued random variables such that, with S_n = X_1 + ... + X_n, S_n converges to a finite-valued random variable S almost surely. For 0 <= i < j < inf let S_{(i,j]} = S_j - S_i = X_{i+1} + ... + X_j, and for any real a_0 > 0 let

    lambda = P(Union_{0<=i<j<inf} {S_{(i,j]} > a_0}).

Further, suppose that the X_j's take values not exceeding a_1. Then, for every integer k >= 1 and all positive reals a_0 and a_1,

    P(sup_{1<=j<inf} S_j > k a_0 + (k - 1) a_1) <= P(N_{-ln(1-lambda)} >= k)    (72)

(with strict inequality for k >= 2), where N_gamma denotes a Poisson variable with parameter gamma > 0. Then, for every b > 0,

    E e^{b sup_{1<=j<inf} S_j} < e^{b a_0} (1 - lambda)^{1 - exp(b(a_0 + a_1))}.    (73)

Proof: (72) follows from Theorem 1 proved in Klass and Nowicki (2003). To prove (73), denote W = sup_{1<=j<inf} S_j. For k >= 2,

    P(N_{-ln(1-lambda)} >= k) > P(W > k a_0 + (k - 1) a_1) = P((W - a_0)/(a_0 + a_1) > k - 1).

For k = 1 we get equality. Letting W' = ceil((W - a_0)^+ / (a_0 + a_1)), W' is a non-negative integer-valued random variable which is stochastically smaller than N_{-ln(1-lambda)}. Since exp(b(a_0 + a_1)x) is an increasing function of x,

    E e^{bW} = e^{b a_0} E e^{b(W - a_0)} <= e^{b a_0} E e^{b(a_0 + a_1) W'}
             < e^{b a_0} E e^{b(a_0 + a_1) N_{-ln(1-lambda)}}
             = e^{b a_0} exp(-ln(1 - lambda)(e^{b(a_0 + a_1)} - 1)).

4 Acknowledgment

We would like to thank the referee for his thoughtful comments. They helped us to improve the presentation of the paper.
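Both parts of Theorem 3.2 can be checked exactly in a tiny example by enumeration. The sketch below (illustrative values, not from the paper) uses two ±1 steps, so that lambda, E e^{bW}, and the k = 1 case of (72) are all computable in closed form:

```python
import itertools, math

# Toy check of (72)-(73): X_1, X_2 = +/-1 each w.p. 1/2, W = sup_j S_j,
# a0 = 1.5, a1 = 1 (the steps never exceed a1).
a0, a1, b = 1.5, 1.0, 0.5
outcomes = list(itertools.product([1, -1], repeat=2))

def stats(xs):
    s1, s2 = xs[0], xs[0] + xs[1]
    w = max(s1, s2)
    incs = [s1, s2, xs[1]]          # all increments S_(i,j] for 0 <= i < j <= 2
    return w, max(incs)

lam = sum(1 for xs in outcomes if stats(xs)[1] > a0) / len(outcomes)      # lambda
lhs = sum(math.exp(b * stats(xs)[0]) for xs in outcomes) / len(outcomes)  # E e^{bW}
rhs = math.exp(b * a0) * (1 - lam) ** (1 - math.exp(b * (a0 + a1)))       # bound (73)

# k = 1 case of (72): P(W > a0) vs P(N_gamma >= 1) = 1 - (1 - lambda) = lambda
p_exceed = sum(1 for xs in outcomes if stats(xs)[0] > a0) / len(outcomes)
print(lam, lhs, rhs, p_exceed)
```

In this example lambda = 1/4, the bound (73) strictly dominates E e^{bW}, and (72) holds with equality at k = 1, matching the "strict inequality for k >= 2" qualification in the statement.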
References

[1] H. Cramér, On a new limit theorem in the theory of probability, in Colloquium on the Theory of Probability (1938), Hermann, Paris.

[2] F. Esscher, On the probability function in the collective theory of risk, Skand. Aktuarietidskr. 15 (1932).

[3] M. G. Hahn and M. J. Klass, Uniform local probability approximations: improvements on Berry-Esseen, Ann. Probab. 23 (1995), no. 1.

[4] M. G. Hahn and M. J. Klass, Approximation of partial sums of arbitrary i.i.d. random variables and the precision of the usual exponential upper bound, Ann. Probab. 25 (1997), no. 3.

[5] D. H. Fuk and S. V. Nagaev, Probabilistic inequalities for sums of independent random variables (Russian), Teor. Verojatnost. i Primenen. 16 (1971).

[6] P. Hitczenko and S. Montgomery-Smith, A note on sums of independent random variables, in Advances in Stochastic Inequalities (Atlanta, GA, 1997), 69-73, Contemp. Math. 234, Amer. Math. Soc., Providence, RI (1999).

[7] P. Hitczenko and S. Montgomery-Smith, Measuring the magnitude of sums of independent random variables, Ann. Probab. 29 (2001), no. 1.

[8] N. C. Jain and W. E. Pruitt, Lower tail probability estimates for subordinators and nondecreasing random walks, Ann. Probab. 15 (1987), no. 1.

[9] M. J. Klass, Toward a universal law of the iterated logarithm I, Z. Wahrsch. Verw. Gebiete 36 (1976), no. 2.

[10] M. J. Klass and K. Nowicki, An improvement of Hoffmann-Jørgensen's inequality, Ann. Probab. 28 (2000), no. 2.

[11] M. J. Klass and K. Nowicki, An optimal bound on the tail distribution of the number of recurrences of an event in product spaces, Probab. Theory Related Fields 126 (2003), no. 1.

[12] A. Kolmogoroff, Über das Gesetz des iterierten Logarithmus, Math. Ann. 101 (1929), no. 1.

[13] R. Latała, Estimation of moments of sums of independent real random variables, Ann. Probab. 25 (1997), no. 3.

[14] S. V. Nagaev, Some limit theorems for large deviations, Teor. Verojatnost. i Primenen. 10 (1965).

[15] Yu. V. Prokhorov, An extremal problem in probability theory, Theor. Probability Appl. 4 (1959).