EXPONENTIAL INEQUALITIES FOR SELF-NORMALIZED MARTINGALES WITH APPLICATIONS

The Annals of Applied Probability
2008, Vol. 18, No. 5
DOI: 10.1214/07-AAP506
Institute of Mathematical Statistics, 2008

EXPONENTIAL INEQUALITIES FOR SELF-NORMALIZED MARTINGALES WITH APPLICATIONS

BY BERNARD BERCU AND ABDERRAHMEN TOUATI

Université Bordeaux 1 and Faculté des Sciences de Bizerte

We propose several exponential inequalities for self-normalized martingales similar to those established by De la Peña. The keystone is the introduction of a new notion of random variable heavy on left or right. Applications associated with linear regressions, autoregressive and branching processes are also provided.

Received July 2007; revised December 2007.
AMS 2000 subject classifications. Primary 60E15, 60G42; secondary 60G15, 60J80.
Key words and phrases. Exponential inequalities, martingales, autoregressive processes, branching processes.

1. Introduction. Let $(M_n)$ be a locally square integrable real martingale adapted to a filtration $\mathbb{F}=(\mathcal{F}_n)$ with $M_0=0$. The predictable quadratic variation and the total quadratic variation of $(M_n)$ are respectively given by
$$\langle M\rangle_n=\sum_{k=1}^{n}E[\Delta M_k^2\mid\mathcal{F}_{k-1}]\qquad\text{and}\qquad[M]_n=\sum_{k=1}^{n}\Delta M_k^2,$$
where $\Delta M_n=M_n-M_{n-1}$. The celebrated Azuma–Hoeffding inequality [4, 16, 18] is as follows.

THEOREM 1.1 (Azuma–Hoeffding's inequality). Let $(M_n)$ be a locally square integrable real martingale such that, for each $1\le k\le n$, $a_k\le\Delta M_k\le b_k$ a.s. for some constants $a_k<b_k$. Then, for all $x\ge0$,
$$P(|M_n|\ge x)\le2\exp\left(-\frac{2x^2}{\sum_{k=1}^{n}(b_k-a_k)^2}\right).\tag{1.1}$$

Another result, which involves the predictable quadratic variation $\langle M\rangle_n$, is the so-called Freedman inequality [13].

THEOREM 1.2 (Freedman's inequality). Let $(M_n)$ be a locally square integrable real martingale such that, for each $1\le k\le n$, $\Delta M_k\le c$ a.s. for some constant $c>0$. Then, for all $x,y>0$,
$$P\big(M_n\ge x,\ \langle M\rangle_n\le y\big)\le\exp\left(-\frac{x^2}{2(y+cx)}\right).\tag{1.2}$$
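As a purely illustrative aside (not part of the original article), the following minimal Monte Carlo sketch checks Theorem 1.1 on the simplest bounded-increment martingale, a symmetric random walk with steps in $\{-1,+1\}$, for which $a_k=-1$, $b_k=1$ and (1.1) reduces to $P(|M_n|\ge x)\le2\exp(-x^2/(2n))$; the sample sizes and the value of $x$ are arbitrary choices of this sketch.

import numpy as np

rng = np.random.default_rng(0)
n, trials, x = 100, 200_000, 15.0

steps = rng.choice([-1.0, 1.0], size=(trials, n))   # increments with a_k = -1, b_k = +1
M_n = steps.sum(axis=1)                             # martingale at time n

empirical = np.mean(np.abs(M_n) >= x)
bound = 2.0 * np.exp(-2.0 * x**2 / (n * 2.0**2))    # 2 exp(-2x^2 / sum (b_k - a_k)^2)

print(f"empirical P(|M_n| >= {x}) = {empirical:.5f}")
print(f"Azuma-Hoeffding bound (1.1) = {bound:.5f}")

The empirical frequency should lie well below the bound, which is not tight for this walk.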

Over the last decade, an extensive study has been made to establish exponential inequalities for $(M_n)$ relaxing the boundedness assumption on its increments. On the one hand, under the standard Bernstein condition that, for $n\ge1$, $p\ge2$ and some constant $c>0$,
$$\sum_{k=1}^{n}E[|\Delta M_k|^p\mid\mathcal{F}_{k-1}]\le\frac{c^{p-2}p!}{2}\langle M\rangle_n,$$
Pinelis [21] and De la Peña [8] recover (1.2). Van de Geer [12] also proves (1.2), replacing $\langle M\rangle_n$ by a suitable increasing process. On the other hand, if $(M_n)$ is conditionally symmetric, which means that, for $n\ge1$, the conditional distribution of $\Delta M_n$ given $\mathcal{F}_{n-1}$ is symmetric, then De la Peña [8] establishes the following nice result.

THEOREM 1.3 (De la Peña's inequality). Let $(M_n)$ be a locally square integrable and conditionally symmetric real martingale. Then, for all $x,y>0$,
$$P\big(M_n\ge x,\ [M]_n\le y\big)\le\exp\left(-\frac{x^2}{2y}\right).\tag{1.3}$$

Some extensions of the above inequalities in a more general framework including discrete-time martingales can also be found in [9, 11], where the conditionally symmetric assumption is still required for (1.3). We also refer the reader to the recent survey of De la Peña, Klass and Lai [10]. By a careful reading of [8], one can see that (1.3) is a two-sided exponential inequality. More precisely, if $(M_n)$ is conditionally symmetric, then, for all $x,y>0$,
$$P\big(|M_n|\ge x,\ [M]_n\le y\big)\le2\exp\left(-\frac{x^2}{2y}\right).\tag{1.4}$$
By comparing (1.4) and (1.1), we are only halfway to Azuma–Hoeffding's inequality, which holds without the total quadratic variation $[M]_n$.

The purpose of this paper is to establish several exponential inequalities in the spirit of the original work of De la Peña [8]. In Section 2, we shall propose two-sided exponential inequalities involving $\langle M\rangle_n$ as well as $[M]_n$, without any assumption on the martingale $(M_n)$. Section 3 is devoted to the introduction of a new concept of random variables heavy on left or right. This notion is really useful if one is only interested in obtaining a one-sided exponential inequality for $(M_n)$. It also provides a clearer understanding of De la Peña's conditional symmetry assumption. We shall show in Section 4 that this new concept allows us to prove (1.3). As in [8], we shall also propose exponential inequalities for $M_n$ self-normalized by $[M]_n$ or $\langle M\rangle_n$. Section 5 is devoted to applications on linear regressions, autoregressive and branching processes. All technical proofs are postponed to the Appendix.

2. Two-sided exponential inequalities. This section is devoted to two-sided exponential inequalities involving $\langle M\rangle_n$ and $[M]_n$. We start with the following basic lemma.

LEMMA 2.1. Let $X$ be a square integrable random variable with mean zero and variance $\sigma^2>0$. For all $t\in\mathbb R$, denote
$$L(t)=E\left[\exp\left(tX-\frac{t^2}{2}X^2\right)\right].\tag{2.1}$$
Then, we have, for all $t\in\mathbb R$,
$$L(t)\le1+\frac{t^2}{2}\sigma^2.\tag{2.2}$$

PROOF. The proof is given in Appendix A.

Our first result, without any assumption on $(M_n)$, is as follows.

THEOREM 2.1. Let $(M_n)$ be a locally square integrable martingale. Then, for all $x,y>0$,
$$P\big(|M_n|\ge x,\ [M]_n+\langle M\rangle_n\le y\big)\le2\exp\left(-\frac{x^2}{2y}\right).\tag{2.3}$$

REMARK 2.1. A similar result for continuous-time locally square integrable martingales may be found in the first part of Proposition 4.2.3 of Barlow, Jacka and Yor [5].

For self-normalized martingales, we obtain the following result.

THEOREM 2.2. Let $(M_n)$ be a locally square integrable martingale. Then, for all $x,y>0$, $a\ge0$ and $b>0$,
$$P\left(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ \langle M\rangle_n\ge[M]_n+y\right)\le2\exp\left(-x^2\left(ab+\frac{b^2y}{2}\right)\right).\tag{2.4}$$
Moreover, we also have
$$P\left(\frac{|M_n|}{a+b\langle M\rangle_n}\ge x,\ [M]_n\le y\langle M\rangle_n\right)\le2\inf_{p>1}\left(E\left[\exp\left(-\frac{(p-1)x^2}{1+y}\left(ab+\frac{b^2}{2}\langle M\rangle_n\right)\right)\right]\right)^{1/p}.\tag{2.5}$$

PROOF. The proof is given in Appendix B.

REMARK 2.2. It is not hard to see that (2.4) and (2.5) also hold after exchanging the roles of $\langle M\rangle_n$ and $[M]_n$.

3. Random variables heavy on left or right. This section deals with our new notion of random variables heavy on left or right. It allows us to improve Lemma 2.1.

DEFINITION 3.1. We shall say that an integrable random variable $X$ is heavy on left if $E[X]=0$ and, for all $a>0$, $E[T_a(X)]\le0$, where
$$T_a(X)=\min(|X|,a)\,\operatorname{sign}(X)$$
is the truncated version of $X$. Moreover, $X$ is heavy on right if $-X$ is heavy on left.

REMARK 3.1. Let $F$ be the cumulative distribution function associated with $X$. A standard calculation leads to $E[T_a(X)]=-H(a)$, where $H$ is the function defined, for all $a>0$, by
$$H(a)=\int_0^a\big(F((-x)^-)-(1-F(x))\big)\,dx,$$
where $F(x^-)$ stands for the left limit of $F$ at point $x$. Consequently, $X$ is heavy on left if $E[X]=0$ and, for all $a>0$, $H(a)\ge0$. Moreover, $H$ is equal to zero at infinity, as
$$\lim_{a\to\infty}H(a)=-E[X]=0.$$
Furthermore, one can observe that a random variable $X$ is symmetric if and only if $X$ is heavy on left and on right.

The following lemma is the keystone of our one-sided exponential inequalities.

LEMMA 3.1. For a random variable $X$ and for all $t\in\mathbb R$, let
$$L(t)=E\left[\exp\left(tX-\frac{t^2}{2}X^2\right)\right].$$
1. If $X$ is heavy on left, then for all $t\ge0$, $L(t)\le1$.
2. If $X$ is heavy on right, then for all $t\le0$, $L(t)\le1$.
3. If $X$ is symmetric, then for all $t\in\mathbb R$, $L(t)\le1$.

PROOF. The proof is given in Appendix A.
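To make Definition 3.1 concrete, here is a small Monte Carlo sketch (not from the paper) that estimates $E[T_a(X)]$ for a few centered positive random variables; the exponential and Bernoulli choices anticipate the examples discussed below and are assumptions of this illustration only.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

def truncated_mean(x, a):
    """Monte Carlo estimate of E[T_a(X)] with T_a(X) = min(|X|, a) sign(X)."""
    return np.mean(np.minimum(np.abs(x), a) * np.sign(x))

samples = {
    "centered exponential E(1)": rng.exponential(1.0, n) - 1.0,
    "centered Bernoulli B(0.3)": (rng.random(n) < 0.3).astype(float) - 0.3,
    "centered Bernoulli B(0.7)": (rng.random(n) < 0.7).astype(float) - 0.7,
}
for name, x in samples.items():
    print(name, [round(truncated_mean(x, a), 4) for a in (0.1, 0.5, 1.0, 2.0)])
# Up to Monte Carlo error, the first two rows are nonpositive (heavy on left)
# and the last row is nonnegative (heavy on right).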

We shall now provide several examples of random variables heavy on left. More details concerning these examples may be found in Appendix F. We wish to point out that most positive random variables centered around their mean are heavy on left. As a matter of fact, let $Y$ be a positive integrable random variable with mean $m$ and denote $X=Y-m$.

Discrete random variables.
1. If $Y$ has a Bernoulli $\mathcal B(p)$ distribution with parameter $0<p<1$, then $X$ is heavy on left, heavy on right, or symmetric according as $p<1/2$, $p>1/2$, or $p=1/2$, respectively.
2. If $Y$ has a Geometric $\mathcal G(p)$ distribution with parameter $0<p<1$, then $X$ is always heavy on left.
3. If $Y$ has a Poisson $\mathcal P(\lambda)$ distribution with parameter $\lambda>0$, then $X$ is heavy on left as soon as
$$2\exp(-\lambda)\sum_{k=0}^{[\lambda]}\frac{\lambda^k}{k!}\ge1.$$
One can observe that this condition is always fulfilled if $\lambda$ is a positive integer; see Lemma 1 of [1].

Continuous random variables.
1. If $Y$ has an exponential $\mathcal E(\lambda)$ distribution with parameter $\lambda>0$, then $X$ is always heavy on left.
2. If $Y$ has a Gamma $\mathcal G(a,\lambda)$ distribution with parameters $a,\lambda>0$, then $X$ is always heavy on left.
3. If $Y$ has a Pareto distribution with parameters $a,\lambda>0$, that is, $Y=a\exp(Z)$ where $Z$ has an exponential $\mathcal E(\lambda)$ distribution, then $X$ is always heavy on left.
4. If $Y$ has a log-normal distribution with parameters $m\in\mathbb R$ and $\sigma^2>0$, that is, $Y=\exp(Z)$ where $Z$ has a Normal $\mathcal N(m,\sigma^2)$ distribution, then $X$ is always heavy on left.

4. One-sided exponential inequalities. Our next results are related to martingales heavy on left in the sense of the following definition.

DEFINITION 4.1. Let $(M_n)$ be a locally square integrable martingale adapted to a filtration $\mathbb F=(\mathcal F_n)$. We shall say that $(M_n)$ is heavy on left if all its increments are conditionally heavy on left; in other words, for all $n\ge1$ and for any $a>0$,
$$E[T_a(\Delta M_n)\mid\mathcal F_{n-1}]\le0.$$
Moreover, $(M_n)$ is heavy on right if $(-M_n)$ is heavy on left.

We shall recover Theorem 1.3 under the assumption that $(M_n)$ is heavy on left.

THEOREM 4.1. Let $(M_n)$ be a locally square integrable martingale heavy on left. Then, for all $x,y>0$,
$$P\big(M_n\ge x,\ [M]_n\le y\big)\le\exp\left(-\frac{x^2}{2y}\right).\tag{4.1}$$
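The following Monte Carlo sketch (again not from the paper) illustrates Theorem 4.1 with i.i.d. centered exponential $\mathcal E(1)$ increments, which are heavy on left by the examples above; the values of $n$, $x$ and $y$ are arbitrary choices of this sketch.

import numpy as np

rng = np.random.default_rng(4)
n, trials, x, y = 50, 200_000, 15.0, 60.0

increments = rng.exponential(1.0, size=(trials, n)) - 1.0   # heavy-on-left increments
M_n = increments.sum(axis=1)                                # martingale M_n
QV = (increments**2).sum(axis=1)                            # total quadratic variation [M]_n

empirical = np.mean((M_n >= x) & (QV <= y))
print("empirical P(M_n >= x, [M]_n <= y) =", empirical)
print("bound exp(-x^2/(2y))              =", np.exp(-x**2 / (2 * y)))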

For self-normalized martingales, our results are as follows.

THEOREM 4.2. Let $(M_n)$ be a locally square integrable martingale heavy on left. Then, for all $x>0$, $a\ge0$ and $b>0$,
$$P\left(\frac{M_n}{a+b[M]_n}\ge x\right)\le\inf_{p>1}\left(E\left[\exp\left(-(p-1)x^2\left(ab+\frac{b^2}{2}[M]_n\right)\right)\right]\right)^{1/p}\tag{4.2}$$
and, for all $y>0$,
$$P\left(\frac{M_n}{a+b[M]_n}\ge x,\ [M]_n\ge y\right)\le\exp\left(-x^2\left(ab+\frac{b^2y}{2}\right)\right).\tag{4.3}$$
Moreover, we also have
$$P\left(\frac{M_n}{a+b\langle M\rangle_n}\ge x,\ [M]_n\le y\langle M\rangle_n\right)\le\inf_{p>1}\left(E\left[\exp\left(-\frac{(p-1)x^2}{y}\left(ab+\frac{b^2}{2}\langle M\rangle_n\right)\right)\right]\right)^{1/p}.\tag{4.4}$$

PROOF. The proof is given in Appendix C.

REMARK 4.1. In the particular case $p=2$, Theorem 4.2 is due to De la Peña [8] under the conditional symmetry assumption on $(M_n)$. The only difference between (2.5) and (4.4) is that $1+y$ is replaced by $y$ in the upper bound of (4.4).

REMARK 4.2. A locally square integrable martingale $(M_n)$ is Gaussian if, for all $n\ge1$, the distribution of its increment $\Delta M_n$ given $\mathcal F_{n-1}$ is $\mathcal N(0,\Delta\langle M\rangle_n)$. Moreover, $(M_n)$ is called sub-Gaussian if there exists some constant $\alpha>0$ such that, for all $n\ge1$ and $t\in\mathbb R$,
$$E[\exp(t\,\Delta M_n)\mid\mathcal F_{n-1}]\le\exp\left(\frac{\alpha^2t^2}{2}\Delta\langle M\rangle_n\right).\tag{4.5}$$
It is well known that if the increments of $(M_n)$ are bounded or if $(M_n)$ is Gaussian, then $(M_n)$ is sub-Gaussian. In addition, if $(M_n)$ satisfies (4.5), then inequalities (4.1), (4.2) and (4.3) hold with appropriate upper bounds, replacing $[M]_n$ by $\langle M\rangle_n$ everywhere. For example, (4.2) can be rewritten as
$$P\left(\frac{M_n}{a+b\langle M\rangle_n}\ge x\right)\le\inf_{p>1}\left(E\left[\exp\left(-\frac{(p-1)x^2}{\alpha^2}\left(ab+\frac{b^2}{2}\langle M\rangle_n\right)\right)\right]\right)^{1/p}.\tag{4.6}$$

5. Applications.

5.1. Linear regressions. Consider the stochastic linear regression given, for all $n\ge0$, by
$$X_{n+1}=\theta\varphi_n+\varepsilon_{n+1},\tag{5.1}$$
where $X_n$, $\varphi_n$ and $\varepsilon_n$ are the observation, the regression variable and the driven noise, respectively. We assume that $(\varphi_n)$ is a sequence of independent and identically distributed random variables. We also assume that $(\varepsilon_n)$ is a sequence of identically distributed random variables with mean zero and variance $\sigma^2>0$. Moreover, we suppose that, for all $n\ge0$, the random variable $\varepsilon_{n+1}$ is independent of $\mathcal F_n$, where $\mathcal F_n=\sigma(\varphi_0,\varepsilon_1,\ldots,\varphi_{n-1},\varepsilon_n)$. In order to estimate the unknown parameter $\theta$, we make use of the least-squares estimator $\hat\theta_n$ given, for all $n\ge1$, by
$$\hat\theta_n=\frac{\sum_{k=1}^n\varphi_{k-1}X_k}{\sum_{k=1}^n\varphi_{k-1}^2}.\tag{5.2}$$
It immediately follows from (5.1) and (5.2) that
$$\hat\theta_n-\theta=\sigma^2\frac{M_n}{\langle M\rangle_n},\tag{5.3}$$
where
$$M_n=\sum_{k=1}^n\varphi_{k-1}\varepsilon_k\qquad\text{and}\qquad\langle M\rangle_n=\sigma^2\sum_{k=1}^n\varphi_{k-1}^2.$$
Let $H$ and $L$ be the cumulant generating functions of the sequences $(\varphi_n^2)$ and $(\varepsilon_n^2)$, respectively, given, for all $t\in\mathbb R$, by
$$H(t)=\log E[\exp(t\varphi_n^2)]\qquad\text{and}\qquad L(t)=\log E[\exp(t\varepsilon_n^2)].$$

COROLLARY 5.1. Assume that $L$ is finite on some interval $[0,c]$ with $c>0$ and denote by $I$ its Fenchel–Legendre transform on $[0,c]$,
$$I(x)=\sup_{0\le t\le c}\{xt-L(t)\}.$$
Then, for all $n\ge1$, $x>0$ and $y>0$, we have
$$P\big(|\hat\theta_n-\theta|\ge x\big)\le2\inf_{p>1}\exp\left(\frac{n}{p}H\left(-\frac{(p-1)x^2}{2\sigma^2(1+y)}\right)\right)+\exp\left(-nI\left(\frac{\sigma^2y}{n}\right)\right).\tag{5.4}$$

REMARK 5.1. Corollary 5.1 is also true if $(\varphi_n,\varepsilon_n)$ is a sequence of independent and identically distributed random vectors of $\mathbb R^2$ such that the marginal distribution of $\varepsilon_n$ is symmetric. By use of (4.4), inequality (5.4) holds replacing $1+y$ by $y$ in the argument of $H$.
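As an illustration only (not part of the paper), the next sketch simulates the regression model (5.1) with standard normal regressors and noise, an assumption of this sketch, checks the identity (5.3) numerically, and reports the empirical deviation probability that Corollary 5.1 bounds.

import numpy as np

rng = np.random.default_rng(2)
theta, sigma, n, trials, x = 0.5, 1.0, 200, 20_000, 0.2

phi = rng.normal(0.0, 1.0, size=(trials, n))            # regressors phi_0, ..., phi_{n-1}
eps = rng.normal(0.0, sigma, size=(trials, n))          # noise eps_1, ..., eps_n
X = theta * phi + eps                                   # observations X_1, ..., X_n

theta_hat = (phi * X).sum(axis=1) / (phi**2).sum(axis=1)      # estimator (5.2)

M_n = (phi * eps).sum(axis=1)                                 # martingale M_n
bracket = sigma**2 * (phi**2).sum(axis=1)                     # <M>_n
assert np.allclose(theta_hat - theta, sigma**2 * M_n / bracket)   # identity (5.3)

print("empirical P(|theta_hat - theta| >= x) =", np.mean(np.abs(theta_hat - theta) >= x))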

REMARK 5.2. As soon as the sequence $(\varepsilon_n)$ is bounded, the right-hand side of (5.4) vanishes, since we may directly compare $[M]_n$ with $\langle M\rangle_n$. For example, assume that $\varepsilon_n$ is distributed as a centered Bernoulli $\mathcal B(p)$ distribution with parameter $0<p<1$. If $r=\max(p,q)$ with $q=1-p$, we clearly have, for all $n\ge0$,
$$[M]_n\le\frac{r^2}{pq}\langle M\rangle_n.$$
Consequently, we immediately infer from (2.5) that, for all $n\ge1$ and $x>0$,
$$P\big(|\hat\theta_n-\theta|\ge x\big)\le2\exp\left(\frac{n}{2}H\left(-\frac{x^2}{4r^2}\right)\right).$$
Furthermore, assume that $\varphi_n$ is distributed as a normal $\mathcal N(0,\tau^2)$ distribution with variance $\tau^2>0$. Then, we deduce that, for all $n\ge1$ and $x>0$,
$$P\big(|\hat\theta_n-\theta|\ge x\big)\le2\exp\left(-\frac{n}{4}\log\left(1+\frac{\tau^2x^2}{2r^2}\right)\right).$$

PROOF OF COROLLARY 5.1. It follows from (2.5) that, for all $n\ge1$, $x>0$ and $y>0$,
$$P\big(|\hat\theta_n-\theta|\ge x\big)=P\left(|M_n|\ge\frac{x}{\sigma^2}\langle M\rangle_n\right)\le P_n(x,y)+Q_n(y),$$
where $Q_n(y)=P\big([M]_n>y\langle M\rangle_n\big)$ and
$$P_n(x,y)=2\inf_{p>1}\left(E\left[\exp\left(-\frac{(p-1)x^2}{2\sigma^4(1+y)}\langle M\rangle_n\right)\right]\right)^{1/p}=2\inf_{p>1}\exp\left(\frac{n}{p}H\left(-\frac{(p-1)x^2}{2\sigma^2(1+y)}\right)\right).$$
In addition, as $[M]_n/\langle M\rangle_n\le\sigma^{-2}\sum_{k=1}^n\varepsilon_k^2$, we have, for all $y>0$ and $0\le t\le c$,
$$Q_n(y)\le P\left(\sum_{k=1}^n\varepsilon_k^2>\sigma^2y\right)\le\exp(-\sigma^2ty)\,E\left[\exp\left(t\sum_{k=1}^n\varepsilon_k^2\right)\right]=\exp\big(-\sigma^2ty+nL(t)\big),$$
so that, taking the infimum over $t\in[0,c]$,
$$Q_n(y)\le\exp\left(-nI\left(\frac{\sigma^2y}{n}\right)\right),$$
which achieves the proof of Corollary 5.1.

5.2. Autoregressive processes. Consider the autoregressive process given, for all $n\ge0$, by
$$X_{n+1}=\theta X_n+\varepsilon_{n+1},\tag{5.5}$$

where $(X_n)$ and $(\varepsilon_n)$ are the observation and the driven noise, respectively. We assume that $(\varepsilon_n)$ is a sequence of independent and identically distributed random variables with $\mathcal N(0,\sigma^2)$ distribution where $\sigma^2>0$. The process is said to be stable if $|\theta|<1$, unstable if $|\theta|=1$ and explosive if $|\theta|>1$. We can estimate the unknown parameter $\theta$ by the least-squares or the Yule–Walker estimators given, for all $n\ge1$, by
$$\hat\theta_n=\frac{\sum_{k=1}^nX_{k-1}X_k}{\sum_{k=1}^nX_{k-1}^2}\qquad\text{and}\qquad\tilde\theta_n=\frac{\sum_{k=1}^nX_{k-1}X_k}{\sum_{k=0}^nX_k^2}.\tag{5.6}$$
It is well known that $\hat\theta_n$ and $\tilde\theta_n$ both converge almost surely to $\theta$ and their fluctuations can be found in [22]. In the stable case $|\theta|<1$, large deviation principles were established in [6]. More precisely, assume that $X_0$ is independent of $(\varepsilon_n)$ with $\mathcal N(0,\sigma^2/(1-\theta^2))$ distribution. Then $\tilde\theta_n$ satisfies a large deviation principle with good rate function
$$J(x)=\begin{cases}\dfrac{1}{2}\log\left(\dfrac{1+\theta^2-2\theta x}{1-x^2}\right),&\text{if }x\in\,]-1,1[,\\[6pt]+\infty,&\text{otherwise},\end{cases}$$
while the good rate function $I$ associated with $\hat\theta_n$ coincides with $J$ on an interval $[a,b]$ containing $\theta$ and admits a different explicit expression outside of $[a,b]$. It is only recently that sharp large deviation principles were established for the Yule–Walker estimator $\tilde\theta_n$ in the stable, unstable and explosive cases [7]. Much work remains to be done for the least-squares estimator $\hat\theta_n$. Our goal is to propose, whatever the value of $\theta$ is, a very simple exponential inequality for both $\hat\theta_n$ and $\tilde\theta_n$. For the sake of simplicity, we assume that $X_0$ is independent of $(\varepsilon_n)$ with $\mathcal N(0,\tau^2)$ distribution where $\tau^2\le\sigma^2$.

COROLLARY 5.2. For all $n\ge1$ and $x>0$, we have
$$P\big(|\hat\theta_n-\theta|\ge x\big)\le2\exp\left(-\frac{nx^2}{2(1+y_x)}\right),\tag{5.7}$$
where $y_x$ is the unique positive solution of the equation $h(y_x)=x^2$ and $h$ is the function $h(x)=(1+x)\log(1+x)-x$. Moreover, for all $n\ge1$ and $x>0$, we also have
$$P\big(|\tilde\theta_n-\theta|\ge x+|\theta|\big)\le2\exp\left(-\frac{nx^2}{2(1+y_x)}\right).\tag{5.8}$$
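Purely as an illustration (not from the paper), the sketch below simulates the stable AR(1) model (5.5) with $\sigma=1$ and $\tau=\sigma$, solves $h(y_x)=x^2$ numerically, and compares the bound (5.7) with the empirical tail of the least-squares estimator; the parameter values are arbitrary choices and scipy is used only for the root finding.

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(3)
theta, sigma, n, trials, x = 0.5, 1.0, 200, 20_000, 0.2

def h(y):
    return (1.0 + y) * np.log1p(y) - y                  # h(y) = (1+y)log(1+y) - y

y_x = brentq(lambda y: h(y) - x**2, 1e-12, 10.0)        # unique positive root of h(y) = x^2
bound = 2.0 * np.exp(-n * x**2 / (2.0 * (1.0 + y_x)))   # right-hand side of (5.7)

X = np.empty((trials, n + 1))
X[:, 0] = rng.normal(0.0, sigma, trials)                # X_0 ~ N(0, tau^2) with tau = sigma
eps = rng.normal(0.0, sigma, size=(trials, n))
for k in range(n):
    X[:, k + 1] = theta * X[:, k] + eps[:, k]           # model (5.5)

theta_hat = (X[:, :-1] * X[:, 1:]).sum(axis=1) / (X[:, :-1] ** 2).sum(axis=1)   # (5.6)

print(f"y_x = {y_x:.4f}, bound (5.7) = {bound:.5f}")
print("empirical P(|theta_hat - theta| >= x) =", np.mean(np.abs(theta_hat - theta) >= x))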

PROOF. The proof is given in Appendix D.

REMARK 5.3. Inequality (5.7) can be made very simple if $x$ is small enough. As a matter of fact, one can easily see that, for all $0<x<1$, $h(x)\ge x^2/4$. Consequently, it immediately follows from (5.7) that, for all $0<x<1/2$,
$$P\big(|\hat\theta_n-\theta|\ge x\big)\le2\exp\left(-\frac{nx^2}{2(1+2x)}\right).$$
Moreover, if $\theta>0$, we can deduce from (5.6) that, for all $x>0$,
$$P\big(\tilde\theta_n-\theta\ge x\big)\le\exp\left(-\frac{nx^2}{2(1+y_x)}\right).$$

5.3. Branching processes. Consider the Galton–Watson process starting from $X_0=1$ and given, for all $n\ge1$, by
$$X_n=\sum_{k=1}^{X_{n-1}}Y_{n,k},\tag{5.9}$$
where $(Y_{n,k})$ is a sequence of independent and identically distributed, nonnegative integer-valued random variables. The distribution of $(Y_{n,k})$, with finite mean $m$ and variance $\sigma^2$, is commonly called the offspring or reproduction distribution. Hereafter, we shall assume that $m>1$. In order to estimate the offspring mean $m$, we can make use of the Lotka–Nagaev or the Harris estimators given, for all $n\ge1$, by
$$\hat m_n=\frac{X_n}{X_{n-1}}\qquad\text{and}\qquad\tilde m_n=\frac{\sum_{k=1}^nX_k}{\sum_{k=1}^nX_{k-1}}.\tag{5.10}$$
Without loss of generality, we can suppose that the set of extinction of the process $(X_n)$ is negligible. Consequently, the Lotka–Nagaev estimator $\hat m_n$ is always well defined. It is well known that $\hat m_n$ and $\tilde m_n$ both converge almost surely to $m$ and their fluctuations are given in [3, 14, 15]. Moreover, the large deviation properties associated with $\hat m_n$ may be found in [2, 19, 20]. Our goal is now to establish, as in the previous sections, exponential inequalities for both $\hat m_n$ and $\tilde m_n$. Denote by $L$ the cumulant generating function associated with the centered offspring distribution given, for all $t\in\mathbb R$, by $L(t)=\log E[\exp(t(Y_{n,k}-m))]$.

COROLLARY 5.3. Assume that $L$ is finite on some interval $[-c,c]$ with $c>0$ and let $I$ be its Fenchel–Legendre transform,
$$I(x)=\sup_{-c\le t\le c}\{xt-L(t)\}.$$
Then, for all $n\ge1$ and $x>0$,
$$P\big(|\hat m_n-m|\ge x\big)\le2E\big[\exp\big(-J(x)X_{n-1}\big)\big],\tag{5.11}$$

where $J(x)=\min(I(x),I(-x))$. Moreover, we also have
$$P\big(|\hat m_n-m|\ge x\big)\le2\inf_{p>1}\big(E\big[\exp\big(-(p-1)J(x)X_{n-1}\big)\big]\big)^{1/p}.\tag{5.12}$$
In addition, if $S_n=\sum_{k=0}^nX_k$, we have, for all $n\ge1$ and $x>0$,
$$P\big(|\tilde m_n-m|\ge x\big)\le2\inf_{p>1}\big(E\big[\exp\big(-(p-1)J(x)S_{n-1}\big)\big]\big)^{1/p}.\tag{5.13}$$

PROOF. The proof is given in Appendix E.

REMARK 5.4. On the one hand, inequality (5.12) obviously holds for the Harris estimator $\tilde m_n$ since we always have $S_n\ge X_n$. On the other hand, in order to specify the right-hand side of (5.11), (5.12) or (5.13), it is necessary to find an upper bound, or to provide an explicit expression, for the moment generating function of $X_n$. One can easily carry out this calculation when the offspring distribution is the Geometric $\mathcal G(p)$ distribution with parameter $0<p<1$. As a matter of fact, in that particular case, the offspring mean $m=1/p$ and it follows from formula (7.3) of [15] that, for all $0<s<1$,
$$E[s^{X_n}]\le\frac{p^ns}{1-s}.$$
Consequently, for all $n\ge1$ and $x>0$, we obtain the simple inequality
$$P\big(|\hat m_n-m|\ge x\big)\le\frac{2p^n\exp(-J(x))}{p\big(1-\exp(-J(x))\big)}.$$
If the offspring distribution is not geometric, one can precisely estimate the moment generating function of $X_n$ using Theorem 1 of [3], which gives a good approximation of the distribution of $X_n$ based on the limiting distribution
$$W=\lim_{n\to\infty}\frac{X_n}{m^n}\qquad\text{a.s.}$$

APPENDIX A

This appendix is devoted to the proofs of Lemma 2.1 and Lemma 3.1. Lemma 2.1 immediately follows from Jensen's inequality. As a matter of fact, (2.1) implies that, for all $t\in\mathbb R$,
$$L(t)\ge\exp\left(E\left[tX-\frac{t^2}{2}X^2\right]\right)=\exp\left(-\frac{t^2}{2}\sigma^2\right).$$
Consequently, we obtain that, for all $t\in\mathbb R$,
$$L(t)\ge1-\frac{t^2}{2}\sigma^2.\tag{A.1}$$

Furthermore, for all $t\in\mathbb R$,
$$L(t)+L(-t)=2E\left[\exp\left(-\frac{t^2}{2}X^2\right)\cosh(tX)\right]\le2\tag{A.2}$$
by the well-known inequality $\cosh(x)\le\exp(x^2/2)$. Hence, we obtain from (A.1) together with (A.2) that, for all $t\in\mathbb R$,
$$L(t)\le2-L(-t)\le1+\frac{t^2}{2}\sigma^2.$$

Lemma 3.1 is much more difficult to prove. Let $f$ be the function defined, for all $x\in\mathbb R$, by
$$f(x)=\exp\left(x-\frac{x^2}{2}\right).$$
We clearly have $f'(x)=(1-x)f(x)$ and $f'(-x)=(1+x)f(-x)$. We shall also make use of the functions $a$ and $b$ defined, for all $x\in\mathbb R$, by $a(x)=f'(-x)$ and $b(x)=f'(-x)-f'(x)$. One can realize that, for all $x>0$, $0<a(x)<1$, $0<b(x)<2$ and $a'(x)<0$, as $a'(x)=-(2x+x^2)f(-x)$. After those simple preliminaries, we are in a position to prove Lemma 3.1. For all $t\in\mathbb R$,
$$L(t)=E\left[\exp\left(tX-\frac{t^2}{2}X^2\right)\right]=\int_{\mathbb R}f(tx)\,dF(x),$$
where $F$ is the distribution function associated with $X$. Integrating by parts, we have, for all $t\in\mathbb R$,
$$L(t)=-t\int_{\mathbb R}f'(tx)F(x)\,dx=-t\int_0^{+\infty}f'(-tx)F(-x)\,dx-t\int_0^{+\infty}f'(tx)F(x)\,dx.\tag{A.3}$$
Consequently, as
$$\int_0^{+\infty}tf'(tx)\,dx=\left[\exp\left(tx-\frac{t^2x^2}{2}\right)\right]_0^{+\infty}=-1,$$
we obtain from (A.3) that, for all $t\in\mathbb R$, $L(t)=1-tI(t)$, where
$$I(t)=\int_0^{+\infty}f'(-tx)F(-x)\,dx-\int_0^{+\infty}f'(tx)\big(1-F(x)\big)\,dx=A(t)+B(t)\tag{A.4}$$

with
$$A(t)=\int_0^{+\infty}a(tx)\big(F(-x)-(1-F(x))\big)\,dx,\qquad B(t)=\int_0^{+\infty}b(tx)\big(1-F(x)\big)\,dx.$$
First of all, assume that $X$ is heavy on left. Our goal is to show that, for all $t>0$, the integral $I(t)$ is nonnegative. We obviously have, for all $t>0$, $B(t)\ge0$ as $b(tx)>0$. In addition, for any $a>0$, let
$$H(a)=\int_0^a\big(F(-x)-(1-F(x))\big)\,dx.$$
Since $H'(a)=F(-a)-(1-F(a))$ almost everywhere, integrating once again by parts, we find that
$$A(t)=\int_0^{+\infty}a(tx)H'(x)\,dx=\big[a(tx)H(x)\big]_0^{+\infty}-t\int_0^{+\infty}a'(tx)H(x)\,dx=-t\int_0^{+\infty}a'(tx)H(x)\,dx,\tag{A.5}$$
as $H(0)=0$ and $H$ vanishes at infinity. Hereafter, as $X$ is heavy on left, $H(a)\ge0$ for all $a\ge0$. Moreover, we recall that, for all $x>0$, $a'(x)<0$. Hence, we immediately deduce from (A.5) that, for all $t>0$, $A(t)\ge0$. Consequently, relation (A.4) leads to $I(t)\ge0$ and $L(t)\le1$ for all $t>0$, which completes the proof of part 1 of Lemma 3.1.

Next, if $X$ is heavy on right, $-X$ is heavy on left. Hence, we immediately infer from (2.1) and part 1 of Lemma 3.1 that $L(t)\le1$ for all $t\le0$. Finally, part 3 of Lemma 3.1 follows from the conjunction of parts 1 and 2. Another straightforward way to prove part 3 is as follows. If $X$ is symmetric, we have, for all $t\in\mathbb R$,
$$L(t)=\int_{\mathbb R}f(tx)\,dF(x)=\int_0^{+\infty}\big(f(tx)+f(-tx)\big)\,dF(x)=2\int_0^{+\infty}\exp(-t^2x^2/2)\cosh(tx)\,dF(x)\le1$$
by the well-known inequality $\cosh(x)\le\exp(x^2/2)$.

APPENDIX B

In order to prove Theorems 2.1 and 2.2, we shall often make use of the following lemma.

LEMMA B.1. Let $(M_n)$ be a locally square integrable martingale. For all $t\in\mathbb R$ and $n\ge0$, denote
$$V_n(t)=\exp\left(tM_n-\frac{t^2}{2}\big([M]_n+\langle M\rangle_n\big)\right).$$

Then, for all $t\in\mathbb R$, $(V_n(t))$ is a positive supermartingale with $E[V_n(t)]\le1$.

PROOF. For all $t\in\mathbb R$ and $n\ge1$, we have
$$V_n(t)=V_{n-1}(t)\exp\left(t\,\Delta M_n-\frac{t^2}{2}\big(\Delta[M]_n+\Delta\langle M\rangle_n\big)\right),$$
where $\Delta M_n=M_n-M_{n-1}$, $\Delta[M]_n=\Delta M_n^2$ and $\Delta\langle M\rangle_n=E[\Delta M_n^2\mid\mathcal F_{n-1}]$. Hence, we deduce from Lemma 2.1 that, for all $t\in\mathbb R$,
$$E[V_n(t)\mid\mathcal F_{n-1}]\le V_{n-1}(t)\exp\left(-\frac{t^2}{2}\Delta\langle M\rangle_n\right)\left(1+\frac{t^2}{2}\Delta\langle M\rangle_n\right)\le V_{n-1}(t).$$
Consequently, for all $t\in\mathbb R$, $(V_n(t))$ is a positive supermartingale such that, for all $n\ge1$, $E[V_n(t)]\le E[V_{n-1}(t)]$, which implies that $E[V_n(t)]\le E[V_0(t)]=1$.

We are now in a position to prove Theorems 2.1 and 2.2, inspired by the original work of De la Peña [8]. First of all, denote $Z_n=[M]_n+\langle M\rangle_n$. For all $x,y>0$, let
$$A_n=\{|M_n|\ge x,\ Z_n\le y\}.$$
We have the decomposition $A_n=A_n^+\cup A_n^-$ where $A_n^+=\{M_n\ge x,\ Z_n\le y\}$ and $A_n^-=\{-M_n\ge x,\ Z_n\le y\}$. By Markov's and the Cauchy–Schwarz inequalities, we have, for all $t>0$,
$$P(A_n^+)\le E\left[\exp\left(\frac{t}{2}M_n-\frac{tx}{2}\right)\mathbf 1_{A_n^+}\right]\le E\left[\exp\left(\frac{t}{2}M_n-\frac{t^2}{4}Z_n\right)\exp\left(\frac{t^2}{4}Z_n-\frac{tx}{2}\right)\mathbf 1_{A_n^+}\right]\le\exp\left(\frac{t^2y}{4}-\frac{tx}{2}\right)\sqrt{E[V_n(t)]}\sqrt{P(A_n^+)}.$$
Hence, we deduce from Lemma B.1 that, for all $t>0$,
$$P(A_n^+)\le\exp\left(\frac{t^2y}{4}-\frac{tx}{2}\right)\sqrt{P(A_n^+)}.\tag{B.1}$$
Dividing both sides of (B.1) by $\sqrt{P(A_n^+)}$ and choosing the value $t=x/y$, we find that
$$P(A_n^+)\le\exp\left(-\frac{x^2}{2y}\right).$$
We also find the same upper bound for $P(A_n^-)$, which immediately leads to (2.3).

We next proceed to the proof of Theorem 2.2 in the special case $a=0$ and $b=1$, inasmuch as the proof for the general case follows exactly the same lines. For all $x,y>0$, let
$$B_n=\{|M_n|\ge x\langle M\rangle_n,\ \langle M\rangle_n-[M]_n\ge y\}=B_n^+\cup B_n^-,$$
where
$$B_n^+=\{M_n\ge x\langle M\rangle_n,\ \langle M\rangle_n-[M]_n\ge y\},\qquad B_n^-=\{-M_n\ge x\langle M\rangle_n,\ \langle M\rangle_n-[M]_n\ge y\}.$$
By Cauchy–Schwarz's inequality, we have, for all $t>0$,
$$P(B_n^+)\le E\left[\exp\left(\frac{t}{2}M_n-\frac{tx}{2}\langle M\rangle_n\right)\mathbf 1_{B_n^+}\right]\le E\left[\exp\left(\frac{t}{2}M_n-\frac{t^2}{4}Z_n\right)\exp\left(\frac{t}{4}(t-2x)\langle M\rangle_n+\frac{t^2}{4}[M]_n\right)\mathbf 1_{B_n^+}\right].\tag{B.2}$$
Consequently, we obtain from (B.2) with the particular choice $t=x$ that
$$P(B_n^+)\le\exp\left(-\frac{x^2y}{4}\right)\sqrt{P(B_n^+)}.\tag{B.3}$$
Therefore, if we divide both sides of (B.3) by $\sqrt{P(B_n^+)}$, we find that
$$P(B_n^+)\le\exp\left(-\frac{x^2y}{2}\right).$$
The same upper bound holds for $P(B_n^-)$, which clearly implies (2.4).

Furthermore, for all $x,y>0$, let
$$C_n=\{|M_n|\ge x\langle M\rangle_n,\ [M]_n\le y\langle M\rangle_n\}=C_n^+\cup C_n^-,$$
where
$$C_n^+=\{M_n\ge x\langle M\rangle_n,\ [M]_n\le y\langle M\rangle_n\},\qquad C_n^-=\{-M_n\ge x\langle M\rangle_n,\ [M]_n\le y\langle M\rangle_n\}.$$
By Hölder's inequality, we have, for all $t>0$ and $q>1$,
$$P(C_n^+)\le E\left[\exp\left(\frac{t}{q}M_n-\frac{tx}{q}\langle M\rangle_n\right)\mathbf 1_{C_n^+}\right]\le E\left[\exp\left(\frac{t}{q}M_n-\frac{t^2}{2q}Z_n\right)\exp\left(\frac{t}{2q}\big(t(1+y)-2x\big)\langle M\rangle_n\right)\mathbf 1_{C_n^+}\right]\le E\left[\exp\left(\frac{tp}{2q}\big(t(1+y)-2x\big)\langle M\rangle_n\right)\right]^{1/p}.\tag{B.4}$$

Consequently, as $p/q=p-1$, we can deduce from (B.4) and the particular choice $t=x/(1+y)$ that
$$P(C_n^+)\le\inf_{p>1}\left(E\left[\exp\left(-\frac{(p-1)x^2}{2(1+y)}\langle M\rangle_n\right)\right]\right)^{1/p}.$$
We also find the same upper bound for $P(C_n^-)$, which completes the proof of Theorem 2.2.

APPENDIX C

The proofs of Theorems 4.1 and 4.2 are based on the following lemma.

LEMMA C.1. Let $(M_n)$ be a locally square integrable martingale. For all $t\in\mathbb R$ and $n\ge0$, denote
$$W_n(t)=\exp\left(tM_n-\frac{t^2}{2}[M]_n\right).$$
1. If $(M_n)$ is heavy on left, then for all $t\ge0$, $(W_n(t))$ is a supermartingale with $E[W_n(t)]\le1$.
2. If $(M_n)$ is heavy on right, then for all $t\le0$, $(W_n(t))$ is a supermartingale with $E[W_n(t)]\le1$.
3. If $(M_n)$ is conditionally symmetric, then for all $t\in\mathbb R$, $(W_n(t))$ is a supermartingale with $E[W_n(t)]\le1$.

PROOF. Lemma C.1, part 3, is due to De la Peña [8], Lemma 6.1. Our approach is totally different as it mainly relies on Lemma 3.1. Assume that $(M_n)$ is heavy on left. For all $t\in\mathbb R$ and $n\ge1$, we have
$$W_n(t)=W_{n-1}(t)\exp\left(t\,\Delta M_n-\frac{t^2}{2}\Delta[M]_n\right),$$
where $\Delta[M]_n=\Delta M_n^2$. We infer from Lemma 3.1, part 1, that, for all $n\ge1$ and for all $t\ge0$,
$$E\left[\exp\left(t\,\Delta M_n-\frac{t^2}{2}\Delta M_n^2\right)\Big|\,\mathcal F_{n-1}\right]\le1.$$
Consequently, for all $t\ge0$, $(W_n(t))$ is a positive supermartingale such that, for all $n\ge1$, $E[W_n(t)]\le E[W_{n-1}(t)]$, which leads to $E[W_n(t)]\le E[W_0(t)]=1$. The rest of the proof is also a straightforward application of Lemma 3.1.

By use of Lemma C.1, the proof of Theorem 4.1 is quite analogous to that of Theorem 2.1 and is therefore left to the reader.

We shall now proceed to the proof of Theorem 4.2 in the special case $a=0$ and $b=1$. For all $x>0$, let
$$A_n=\{M_n\ge x[M]_n\}.$$
By Hölder's inequality, we have, for all $t>0$ and $q>1$,
$$P(A_n)\le E\left[\exp\left(\frac{t}{q}M_n-\frac{tx}{q}[M]_n\right)\mathbf 1_{A_n}\right]\le E\left[\exp\left(\frac{t}{q}M_n-\frac{t^2}{2q}[M]_n\right)\exp\left(\frac{t}{2q}(t-2x)[M]_n\right)\mathbf 1_{A_n}\right]\le E\left[\exp\left(\frac{tp}{2q}(t-2x)[M]_n\right)\right]^{1/p}E[W_n(t)]^{1/q}.\tag{C.1}$$
Since $(M_n)$ is heavy on left, it follows from Lemma C.1 that, for all $t\ge0$, $E[W_n(t)]\le1$. Consequently, as $p/q=p-1$, we can deduce from (C.1) and the particular choice $t=x$ that
$$P(A_n)\le\inf_{p>1}\left(E\left[\exp\left(-\frac{(p-1)x^2}{2}[M]_n\right)\right]\right)^{1/p}.$$
Furthermore, for all $x,y>0$, let $B_n=\{M_n\ge x[M]_n,\ [M]_n\ge y\}$. As before, we find that, for all $0<t<2x$,
$$P(B_n)\le E\left[\exp\left(\frac{t}{2}M_n-\frac{t^2}{4}[M]_n\right)\exp\left(\frac{t}{4}(t-2x)[M]_n\right)\mathbf 1_{B_n}\right]\le\exp\left(\frac{ty}{4}(t-2x)\right)E\left[\exp\left(\frac{t}{2}M_n-\frac{t^2}{4}[M]_n\right)\mathbf 1_{B_n}\right]\le\exp\left(\frac{ty}{4}(t-2x)\right)\sqrt{P(B_n)},$$
so that, choosing the value $t=x$,
$$P(B_n)\le\exp\left(-\frac{x^2y}{2}\right).$$
Finally, the last inequality of Theorem 4.2 is left to the reader as its proof follows exactly the same arguments as that of (4.3).

APPENDIX D

We shall now focus our attention on the proof of Corollary 5.2. It immediately follows from (5.5) together with (5.6) that, for all $n\ge1$,
$$\hat\theta_n-\theta=\sigma^2\frac{M_n}{\langle M\rangle_n},\tag{D.1}$$
where
$$M_n=\sum_{k=1}^nX_{k-1}\varepsilon_k\qquad\text{and}\qquad\langle M\rangle_n=\sigma^2\sum_{k=1}^nX_{k-1}^2.$$

The driven noise $(\varepsilon_n)$ is a sequence of independent and identically distributed random variables with $\mathcal N(0,\sigma^2)$ distribution. Consequently, for all $n\ge1$, the distribution of the increment $\Delta M_n=X_{n-1}\varepsilon_n$ given $\mathcal F_{n-1}$ is $\mathcal N(0,\sigma^2X_{n-1}^2)$, which implies that $(M_n)$ is a Gaussian martingale. Therefore, we infer from inequality (4.6) that, for all $n\ge1$ and $x>0$,
$$P\big(|\hat\theta_n-\theta|\ge x\big)=P\left(|M_n|\ge\frac{x}{\sigma^2}\langle M\rangle_n\right)=2P\left(M_n\ge\frac{x}{\sigma^2}\langle M\rangle_n\right)\le2\inf_{p>1}\left(E\left[\exp\left(-\frac{(p-1)x^2}{2\sigma^4}\langle M\rangle_n\right)\right]\right)^{1/p}.\tag{D.2}$$
A similar result may be found in [17, 23, 24]. We are now halfway to our goal and it remains to find a suitable upper bound for the right-hand side of (D.2). For all $t\in\mathbb R$ such that $1-2\sigma^2t>0$, if $\alpha=1/\sqrt{1-2\sigma^2t}$, we deduce from (5.5) that, for all $n\ge1$,
$$E[\exp(tX_n^2)\mid\mathcal F_{n-1}]=\exp(t\theta^2X_{n-1}^2)\,E[\exp(2\theta tX_{n-1}\varepsilon_n+t\varepsilon_n^2)\mid\mathcal F_{n-1}]=\exp(t\theta^2X_{n-1}^2)\,\frac{1}{\sigma\sqrt{2\pi}}\int_{\mathbb R}\exp\left(-\frac{x^2}{2\alpha^2\sigma^2}\right)\exp(2\theta tX_{n-1}x)\,dx.$$
Hence, if $\beta=2t\alpha\sigma\theta X_{n-1}$, we find via the change of variables $y=x/(\alpha\sigma)$ that
$$E[\exp(tX_n^2)\mid\mathcal F_{n-1}]=\frac{\alpha\exp(t\theta^2X_{n-1}^2)}{\sqrt{2\pi}}\int_{\mathbb R}\exp\left(-\frac{y^2}{2}+\beta y\right)dy=\alpha\exp\left(t\theta^2X_{n-1}^2+\frac{\beta^2}{2}\right)=\alpha\exp(t\alpha^2\theta^2X_{n-1}^2),$$
which implies that, for all $t<0$ and $n\ge1$,
$$E[\exp(tX_n^2)\mid\mathcal F_{n-1}]\le\alpha.\tag{D.3}$$
Furthermore, as $X_0$ is $\mathcal N(0,\tau^2)$ distributed with $\tau^2\le\sigma^2$, $E[\exp(tX_0^2)]\le\alpha$. It immediately follows from (D.3) together with the tower property of the conditional expectation that, for all $t<0$ and $n\ge1$,
$$E[\exp(t\langle M\rangle_n)]\le(1-2\sigma^4t)^{-n/2}.\tag{D.4}$$
Consequently, we deduce from the conjunction of (D.2) and (D.4), with the value $t=-(p-1)x^2/(2\sigma^4)$ and the change of variables $y=(p-1)x^2$, that, for all $x>0$ and $n\ge1$,
$$P\big(|\hat\theta_n-\theta|\ge x\big)\le2\inf_{y>0}\exp\left(-\frac{nx^2}{2}\,l(y)\right),$$
where the function $l$ is given by
$$l(y)=\frac{\log(1+y)}{x^2+y}.$$

We clearly have
$$l'(y)=\frac{x^2-h(y)}{(1+y)(x^2+y)^2},$$
where $h(y)=(1+y)\log(1+y)-y$. One can observe that the function $h$ is the Cramér transform of the centered Poisson distribution with parameter 1. Let $y_x$ be the unique positive solution of the equation $h(y_x)=x^2$. The value $y_x$ maximizes the function $l$ and this natural choice clearly leads to (5.7). Finally, it follows from (5.6) and (5.7) that, for all $x>0$ and $n\ge1$,
$$P\big(|\tilde\theta_n-\theta+\theta f_n|\ge x\big)\le2\exp\left(-\frac{nx^2}{2(1+y_x)}\right),\tag{D.5}$$
where the random variable $f_n$ satisfies $0\le f_n\le1$. Hence, (D.5) implies (5.8), which completes the proof of Corollary 5.2.

APPENDIX E

We shall now proceed to the proof of Corollary 5.3. We only focus our attention on the Harris estimator, inasmuch as the proof for the Lotka–Nagaev estimator follows essentially the same lines. First of all, relation (5.9) can be rewritten as
$$X_n=mX_{n-1}+\xi_n,\tag{E.1}$$
where $\xi_n=X_n-E[X_n\mid\mathcal F_{n-1}]$. Consequently, we obtain from (5.10) together with (E.1) that, for all $n\ge1$,
$$\tilde m_n-m=\frac{M_n}{S_{n-1}},\qquad\text{where }M_n=\sum_{k=1}^n\xi_k.$$
Moreover, for all $n\ge1$ and $|t|\le c$,
$$E[\exp(t\xi_n)\mid\mathcal F_{n-1}]=\exp\big(X_{n-1}L(t)\big),$$
which implies that
$$E\big[\exp\big(tM_n-L(t)S_{n-1}\big)\big]=1.\tag{E.2}$$
We are now in a position to prove (5.13). For all $x>0$, let $D_n=\{|\tilde m_n-m|\ge x\}$. We have the decomposition $D_n=D_n^+\cup D_n^-$ where $D_n^+=\{\tilde m_n-m\ge x\}$ and $D_n^-=\{\tilde m_n-m\le-x\}$. By Hölder's inequality together with (E.2), we have, for all $0\le t\le c$ and $q>1$,
$$P(D_n^+)\le E\left[\exp\left(\frac{t}{q}M_n-\frac{tx}{q}S_{n-1}\right)\mathbf 1_{D_n^+}\right]\le E\left[\exp\left(\frac{t}{q}M_n-\frac{L(t)}{q}S_{n-1}\right)\exp\left(\frac{L(t)-tx}{q}S_{n-1}\right)\mathbf 1_{D_n^+}\right]\le E\left[\exp\left(\frac{p}{q}\big(L(t)-tx\big)S_{n-1}\right)\right]^{1/p}.\tag{E.3}$$

Taking the infimum over the interval $[0,c]$, we infer from (E.3) that
$$P(D_n^+)\le\big(E\big[\exp\big(-(p-1)I(x)S_{n-1}\big)\big]\big)^{1/p}.\tag{E.4}$$
Along the same lines, we also find that
$$P(D_n^-)\le\big(E\big[\exp\big(-(p-1)I(-x)S_{n-1}\big)\big]\big)^{1/p}.\tag{E.5}$$
Finally, (5.13) immediately follows from (E.4) and (E.5).

APPENDIX F

This appendix is devoted to some justifications about the examples of random variables heavy on left or right. Consider an integrable random variable $X$ with zero mean and denote by $F$ its cumulative distribution function. Let $H$ be the function defined, for all $a>0$, by
$$H(a)=\int_0^a\big(F(-x)-(1-F(x))\big)\,dx.$$
We already saw that $X$ is heavy on left if, for all $a>0$, $H(a)\ge0$, while $X$ is heavy on right if, for all $a>0$, $H(a)\le0$. Let $Y$ be a positive integrable random variable with mean $m$ and denote $X=Y-m$.

Discrete random variables. Assume that $Y$ is a discrete random variable taking its values in $\mathbb N$. For all $n\ge0$, let
$$s_n=\sum_{k=0}^nP(Y=k).$$
After some straightforward calculations, we obtain that, for all $a>0$,
$$H(a)=-a+\sum_{k=[m-a]}^{[m+a]}s_k-s_{[m+a]}+\{m+a\}s_{[m+a]}-\{m-a\}s_{[m-a]},$$
where, for all $x\in\mathbb R$, $[x]$ stands for the integer part of $x$, its fractional part $\{x\}$ is given by $\{x\}=x-[x]$ and, of course, $s_n=0$ for all $n<0$.

Continuous random variables. Assume that $Y$ is a real random variable absolutely continuous with respect to the Lebesgue measure. Denote by $g$ its probability density function. It is not hard to see that, for all $a>0$,
$$H(a)=-a+2a\int_0^{a_m}g(x)\,dx+\int_{a_m}^{m+a}(m+a-x)g(x)\,dx,$$
where $a_m=\max(m-a,0)$. Consequently, in order to check that $X$ is heavy on left, it is only necessary to show that, for all $a>0$, $H(a)\ge0$.
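As a small numerical cross-check (not from the paper), the sketch below evaluates the discrete closed form for $H(a)$ above for $Y\sim\mathcal P(2)$, assuming the floor convention for the integer part $[x]$, and compares it with a direct discretization of the defining integral; both columns should agree and be nonnegative, since the Poisson condition of Section 3 holds for $\lambda=2$.

import math
import numpy as np
from scipy.stats import poisson

lam = m = 2.0                           # Poisson mean, so X = Y - m is centered

def s(k):
    return poisson.cdf(k, lam) if k >= 0 else 0.0       # s_k = P(Y <= k)

def H_formula(a):
    lo, hi = math.floor(m - a), math.floor(m + a)
    total = sum(s(k) for k in range(lo, hi + 1))
    return (-a + total - s(hi)
            + (m + a - hi) * s(hi) - (m - a - lo) * s(lo))

def H_direct(a, grid=20_000):
    x = (np.arange(grid) + 0.5) * a / grid               # midpoint rule on [0, a]
    integrand = poisson.cdf(np.floor(m - x), lam) - (1.0 - poisson.cdf(np.floor(m + x), lam))
    return integrand.mean() * a

for a in (0.5, 1.0, 2.5, 5.0):
    print(f"a = {a}: formula = {H_formula(a):.5f}, direct = {H_direct(a):.5f}")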

Acknowledgments. The authors would like to thank V. De la Peña, T. L. Lai and M. Ledoux for fruitful discussions.

REFERENCES

[1] ADELL, J. A. and JODRÁ, P. (2005). The median of the Poisson distribution. Metrika.
[2] ATHREYA, K. B. (1994). Large deviation rates for branching processes. I. Single type case. Ann. Appl. Probab.
[3] ATHREYA, K. B. and NEY, P. E. (1972). Branching Processes. Springer, Berlin.
[4] AZUMA, K. (1967). Weighted sums of certain dependent random variables. Tôhoku Math. J.
[5] BARLOW, M. T., JACKA, S. D. and YOR, M. (1986). Inequalities for a pair of processes stopped at a random time. Proc. London Math. Soc.
[6] BERCU, B., GAMBOA, F. and ROUAULT, A. (1997). Large deviations for quadratic forms of stationary Gaussian processes. Stochastic Process. Appl.
[7] BERCU, B. (2001). On large deviations in the Gaussian autoregressive process: Stable, unstable and explosive cases. Bernoulli.
[8] DE LA PEÑA, V. H. (1999). A general class of exponential inequalities for martingales and ratios. Ann. Probab.
[9] DE LA PEÑA, V. H., KLASS, M. J. and LAI, T. L. (2004). Self-normalized processes: Exponential inequalities, moment bounds and iterated logarithm laws. Ann. Probab.
[10] DE LA PEÑA, V. H., KLASS, M. J. and LAI, T. L. (2007). Pseudo-maximization and self-normalized processes. Probab. Surveys.
[11] DZHAPARIDZE, K. and VAN ZANTEN, J. H. (2001). On Bernstein-type inequalities for martingales. Stochastic Process. Appl.
[12] VAN DE GEER, S. (1995). Exponential inequalities for martingales, with application to maximum likelihood estimation for counting processes. Ann. Statist.
[13] FREEDMAN, D. A. (1975). On tail probabilities for martingales. Ann. Probab.
[14] GUTTORP, P. (1991). Statistical Inference for Branching Processes. Wiley, New York.
[15] HARRIS, T. E. (1963). The Theory of Branching Processes. Springer, Berlin.
[16] HOEFFDING, W. (1963). Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc.
[17] LIPTSER, R. and SPOKOINY, V. (2000). Deviation probability bound for martingales with applications to statistical estimation. Statist. Probab. Lett.
[18] MCDIARMID, C. (1998). Concentration. In Probabilistic Methods for Algorithmic Discrete Mathematics (M. Habib, C. McDiarmid, J. Ramirez-Alfonsin and B. Reed, eds.). Springer, Berlin.
[19] NEY, P. E. and VIDYASHANKAR, A. N. (2003). Harmonic moments and large deviation rates for supercritical branching processes. Ann. Appl. Probab.
[20] NEY, P. E. and VIDYASHANKAR, A. N. (2004). Local limit theory and large deviations for supercritical branching processes. Ann. Appl. Probab.
[21] PINELIS, I. (1994). Optimum bounds for the distributions of martingales in Banach spaces. Ann. Probab.
[22] WHITE, J. S. (1958). The limiting distribution of the serial correlation coefficient in the explosive case. Ann. Math. Statist.

[23] WORMS, J. (1999). Moderate deviations for stable Markov chains and regression models. Electron. J. Probab.
[24] WORMS, J. (2001). Large and moderate deviations upper bounds for the Gaussian autoregressive process. Statist. Probab. Lett.

INSTITUT DE MATHÉMATIQUES DE BORDEAUX
UNIVERSITÉ BORDEAUX 1
351 COURS DE LA LIBÉRATION
33405 TALENCE CEDEX
FRANCE
E-MAIL: Bernard.Bercu@math.u-bordeaux1.fr

DÉPARTEMENT DE MATHÉMATIQUES
FACULTÉ DES SCIENCES DE BIZERTE
7021 ZARZOUNA
TUNISIE
E-MAIL: Abder.Touati@fsb.rnu.tn


More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES

MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES J. Korean Math. Soc. 47 1, No., pp. 63 75 DOI 1.4134/JKMS.1.47..63 MOMENT CONVERGENCE RATES OF LIL FOR NEGATIVELY ASSOCIATED SEQUENCES Ke-Ang Fu Li-Hua Hu Abstract. Let X n ; n 1 be a strictly stationary

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes

The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes The Stein and Chen-Stein methods for functionals of non-symmetric Bernoulli processes Nicolas Privault Giovanni Luca Torrisi Abstract Based on a new multiplication formula for discrete multiple stochastic

More information

Hoeffding s inequality for supermartingales

Hoeffding s inequality for supermartingales Hoeffding s inequality for supermartingales Xiequan Fan, Ion Grama, Quansheng Liu To cite this version: Xiequan Fan, Ion Grama, Quansheng Liu. Hoeffding s inequality for supermartingales. Stochastic Processes

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Entropy and Ergodic Theory Lecture 15: A first look at concentration

Entropy and Ergodic Theory Lecture 15: A first look at concentration Entropy and Ergodic Theory Lecture 15: A first look at concentration 1 Introduction to concentration Let X 1, X 2,... be i.i.d. R-valued RVs with common distribution µ, and suppose for simplicity that

More information

Fréchet algebras of finite type

Fréchet algebras of finite type Fréchet algebras of finite type MK Kopp Abstract The main objects of study in this paper are Fréchet algebras having an Arens Michael representation in which every Banach algebra is finite dimensional.

More information

Long Run Average Cost Problem. E x β j c(x j ; u j ). j=0. 1 x c(x j ; u j ). n n E

Long Run Average Cost Problem. E x β j c(x j ; u j ). j=0. 1 x c(x j ; u j ). n n E Chapter 6. Long Run Average Cost Problem Let {X n } be an MCP with state space S and control set U(x) forx S. To fix notation, let J β (x; {u n }) be the cost associated with the infinite horizon β-discount

More information

Self-normalized laws of the iterated logarithm

Self-normalized laws of the iterated logarithm Journal of Statistical and Econometric Methods, vol.3, no.3, 2014, 145-151 ISSN: 1792-6602 print), 1792-6939 online) Scienpress Ltd, 2014 Self-normalized laws of the iterated logarithm Igor Zhdanov 1 Abstract

More information

Two viewpoints on measure valued processes

Two viewpoints on measure valued processes Two viewpoints on measure valued processes Olivier Hénard Université Paris-Est, Cermics Contents 1 The classical framework : from no particle to one particle 2 The lookdown framework : many particles.

More information

Section 27. The Central Limit Theorem. Po-Ning Chen, Professor. Institute of Communications Engineering. National Chiao Tung University

Section 27. The Central Limit Theorem. Po-Ning Chen, Professor. Institute of Communications Engineering. National Chiao Tung University Section 27 The Central Limit Theorem Po-Ning Chen, Professor Institute of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 3000, R.O.C. Identically distributed summands 27- Central

More information

The Equivalence of Ergodicity and Weak Mixing for Infinitely Divisible Processes1

The Equivalence of Ergodicity and Weak Mixing for Infinitely Divisible Processes1 Journal of Theoretical Probability. Vol. 10, No. 1, 1997 The Equivalence of Ergodicity and Weak Mixing for Infinitely Divisible Processes1 Jan Rosinski2 and Tomasz Zak Received June 20, 1995: revised September

More information

Lecture 6 Basic Probability

Lecture 6 Basic Probability Lecture 6: Basic Probability 1 of 17 Course: Theory of Probability I Term: Fall 2013 Instructor: Gordan Zitkovic Lecture 6 Basic Probability Probability spaces A mathematical setup behind a probabilistic

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca October 22nd, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

Consistency of the maximum likelihood estimator for general hidden Markov models

Consistency of the maximum likelihood estimator for general hidden Markov models Consistency of the maximum likelihood estimator for general hidden Markov models Jimmy Olsson Centre for Mathematical Sciences Lund University Nordstat 2012 Umeå, Sweden Collaborators Hidden Markov models

More information

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand Carl Mueller 1 and Zhixin Wu Abstract We give a new representation of fractional

More information