CONDITIONAL LIMIT THEOREMS FOR PRODUCTS OF RANDOM MATRICES


ION GRAMA, EMILE LE PAGE, AND MARC PEIGNÉ

Abstract. Consider the product G_n = g_n ... g_1 of the random matrices g_1, ..., g_n in GL(d, R) and the random process G_n v = g_n ... g_1 v in R^d starting at a point v ∈ R^d \ {0}. It is well known that, under appropriate assumptions, the sequence (log ‖G_n v‖)_{n ≥ 1} behaves like a sum of i.i.d. random variables and satisfies standard classical properties such as the law of large numbers, the law of the iterated logarithm and the central limit theorem. Denote by B the closed unit ball in R^d and by B^c its complement. For any v ∈ B^c define the exit time of the random process G_n v from B^c by τ_v = min{n ≥ 1 : G_n v ∈ B}. We establish the asymptotic, as n → ∞, of the probability of the event {τ_v > n} and find the limit law of the quantity (1/√n) log ‖G_n v‖ conditioned on τ_v > n.

1. Introduction

Let G = GL(d, R) be the general linear group of d × d invertible matrices w.r.t. ordinary matrix multiplication. The Euclidean norm in V = R^d is denoted by ‖v‖ = (Σ_{i=1}^d v_i²)^{1/2}, for v ∈ V. Denote by ‖g‖ = sup_{v ∈ V \ {0}} ‖gv‖/‖v‖ the operator norm of an element g of G, and endow the group G with the usual Borel σ-algebra w.r.t. ‖·‖. Suppose that on the probability space (Ω, F, Pr) we are given an i.i.d. sequence (g_n)_{n ≥ 1} of G-valued random elements with common law Pr(g_1 ∈ dg) = μ(dg), where μ is a probability measure on G. Consider the product G_n = g_n ... g_1 of the random matrices g_1, ..., g_n and the random process G_n v = g_n ... g_1 v in V starting at a point v ∈ V \ {0}. The object of interest is the size of the vector G_n v, which is controlled by the quantity log ‖G_n v‖. It follows from the results of Le Page [31] that, under appropriate assumptions, the sequence (log ‖G_n v‖)_{n ≥ 1} behaves like a sum of i.i.d. random variables and satisfies standard classical properties such as the law of large numbers, the law of the iterated logarithm and the central limit theorem.
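The law-of-large-numbers behaviour of log ‖G_n v‖ described above can be observed numerically. The sketch below is not part of the paper: the matrix law μ is an illustrative assumption (i.i.d. standard Gaussian entries, which give invertible matrices with probability 1), and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 2000
w = np.ones(d) / np.sqrt(d)   # current direction of G_k v
log_norm = 0.0                # accumulates log ||G_k v||

for _ in range(n):
    g = rng.normal(size=(d, d))   # invertible with probability 1
    w = g @ w
    norm = np.linalg.norm(w)
    log_norm += np.log(norm)      # add log of the one-step growth factor
    w /= norm                     # renormalize to avoid overflow

# (1/n) log ||G_n v|| approximates the upper Lyapunov exponent gamma_mu.
print(log_norm / n)
```

Accumulating the norm one factor at a time is exactly the cocycle decomposition of log ‖G_n v‖ used throughout the paper, and keeps the computation numerically stable even though ‖G_n‖ itself grows exponentially.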
Further results and a discussion of the assumptions under which these results hold true can be found in Furstenberg and Kesten [16], Bougerol and Lacroix [7], Guivarc'h and Raugi [26], Benoist and Quint [1], Hennion [27], Jan [3], Goldsheid and Margulis [2]. Denote by B the closed unit ball in V and by B^c its complement. For any v ∈ B^c define the exit time of the random process G_n v from B^c by

τ_v = min{n ≥ 1 : G_n v ∈ B}.

This is the short version of the paper, file GLDR4.tex. Date: January 7,
Mathematics Subject Classification. Primary 60F17, 60J05, 60J10. Secondary 37C30.
Key words and phrases. Products of random matrices, random walks, Markov chains, spectral gap, exit time, limit theorems.

The goal of this paper is to establish the asymptotic, as n → ∞, of the probability of the event {τ_v > n} = {G_1 v ∈ B^c, ..., G_n v ∈ B^c} and to find the limit law of the quantity (1/√n) log ‖G_n v‖ conditioned on τ_v > n. The study of related problems for random walks in R has attracted much attention. We refer the reader to Spitzer [38], Iglehart [28], Bolthausen [6], Bertoin and Doney [2], Doney [11], Borovkov [3], [4], Vatutin and Wachtel [40], Caravenna [8] and to the references therein. Random walks in R^d conditioned to stay in a cone have been considered in Shimura [37], Garbit [17], [18], Eichelsbacher and König [14] and Denisov and Wachtel [9], [10]. The case of Markov chains with bounded jumps was considered by Varopoulos [39], who obtained upper and lower bounds for the probability of the exit time. However, to the best of our knowledge, the exact asymptotic of these probabilities for Markov chains has not yet been studied in the literature.

To formulate our main results precisely we introduce the following conditions. Let N(g) = max{‖g‖, ‖g^{-1}‖} and let P(V) be the projective space of V. Our first condition requires exponential moments of log N(g).

P1. There exists δ > 0 such that ∫_G N(g)^δ μ(dg) < ∞.

Denote by Γ_μ the smallest closed semigroup which contains the support of μ. The second condition requires, roughly speaking, that the dimension of Γ_μ cannot be reduced.

P2 (Strong irreducibility). The support of μ acts strongly irreducibly on V, i.e. no finite union of proper vector subspaces of V is invariant with respect to all elements of Γ_μ.

Any g_n ∈ G admits a polar decomposition g_n = h_n^1 exp(a_n) h_n^2, where h_n^1, h_n^2 are orthogonal matrices and a_n is a diagonal matrix with entries a_n(1) ≥ ... ≥ a_n(d) on the diagonal. We say that the sequence (g_n)_{n ≥ 1} is contracting for the projective space P(V) if lim_{n→∞} (a_n(2) − a_n(1)) = −∞.

P3 (Proximality). The semigroup Γ_μ contains a contracting sequence for the projective space P(V).
Condition P3 is satisfied if Γ_μ contains a matrix with a unique eigenvalue (counting multiplicities) of maximal modulus. For details on the conditions P2 and P3 we refer the reader to [7]. It follows from [2] that P2 and P3 are satisfied if and only if the action of the Zariski closure of Γ_μ in G is strongly irreducible on V and proximal on P(V). In the sequel, for any v ∈ V \ {0} we denote by v̄ = Rv ∈ P(V) its direction, and for any direction v̄ ∈ P(V) we denote by v a vector in V \ {0} of direction v̄. For any g ∈ G and v̄ ∈ P(V) denote by g·v̄ the element of the projective space P(V) associated to the vector gv. On the product space G × P(V) define the function ρ, called the norm cocycle, by setting

(1.1) ρ(g, v̄) := log (‖gv‖ / ‖v‖), for (g, v̄) ∈ G × P(V).

It is well known (see Le Page [31] and Bougerol and Lacroix [7]) that under conditions P1-P3 there exists a unique μ-invariant measure ν on P(V) such that, for any continuous function

φ on P(V),

(1.2) μ ∗ ν(φ) := ∫_G ∫_{P(V)} φ(g·v̄) ν(dv̄) μ(dg) = ∫_{P(V)} φ(v̄) ν(dv̄) = ν(φ).

Moreover the upper Lyapunov exponent

(1.3) γ_μ = ∫_G ∫_{P(V)} ρ(g, v̄) μ(dg) ν(dv̄)

is finite, and there exists a constant σ > 0 such that for any v ∈ V \ {0} and any t ∈ R,

lim_{n→∞} Pr( (log ‖G_n v‖ − nγ_μ) / (σ√n) ≤ t ) = Φ(t),

where Φ is the standard normal distribution. Throughout the paper we assume:

P4. The upper Lyapunov exponent γ_μ is equal to 0.

Hypothesis P4 does not imply that the events {τ_v > n} = {G_k v ∈ B^c : k = 1, ..., n}, n ≥ 1, occur with positive probability for any v ∈ B^c; to ensure this we need the following additional condition:

P5. There exists δ > 0 such that inf_{s ∈ S^{d-1}} μ(g : log ‖gs‖ > δ) > 0.

It is shown in the Appendix that a measure μ on G satisfying P1-P5 and such that γ_μ = 0 exists. Assume P1-P5. From Theorems 2.2 and 2.3 we deduce that, for any v ∈ B^c,

(1.4) Pr(τ_v > n) = (2V(v) / (σ√(2πn))) (1 + o(1)) as n → ∞,

where V is a positive function on B^c with the following properties: for any s ∈ S^{d-1} the function t ↦ V(ts) is increasing on (1, ∞), satisfies log t − a ≤ V(ts) ≤ c(1 + log t) for t > 1 and some constant a > 0, and lim_{t→∞} V(ts)/log t = 1. Moreover, in Theorem 2.4 we prove that the limit law of the quantity (1/(σ√n)) log ‖G_n v‖, given the event {τ_v > n}, coincides with the Rayleigh distribution Φ^+(t) = 1 − exp(−t²/2): for any v ∈ B^c and any t ≥ 0,

(1.5) lim_{n→∞} Pr( log ‖G_n v‖ / (σ√n) ≤ t | τ_v > n ) = Φ^+(t).

The usual way of obtaining results of this type for a classical random walk on the real line is the Wiener-Hopf factorization (see Feller [15]). Unfortunately, the Wiener-Hopf factorization is not suited for studying the exit time probabilities of random walks in R^d or of walks based on dependent variables. Alternative approaches have been developed recently: we refer to [39], [14], [9] and [10]. For the results (1.4) and (1.5) we rely, on the one hand, upon these developments and, on the other hand, upon a key strong approximation result for dependent random variables established separately in [21].
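Both asymptotics (1.4) and (1.5) can be illustrated by a Monte Carlo sketch in the simplest comparable setting: a standard Gaussian random walk standing in for log ‖G_n v‖ (so σ = 1). The walk, the sample sizes and the starting point y below are illustrative assumptions, not the matrix model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n, y = 20000, 400, 2.0
paths = y + np.cumsum(rng.normal(size=(trials, n)), axis=1)
alive = (paths > 0).all(axis=1)          # the event {tau > n}

# Analogue of (1.4): survival probability ~ 2V / sqrt(2*pi*n),
# with V close to y when y is large.
print(alive.mean(), 2 * y / np.sqrt(2 * np.pi * n))

# Analogue of (1.5): conditioned on survival, the endpoint rescaled by
# sqrt(n) is approximately Rayleigh distributed.
t = 1.0
print((paths[alive, -1] / np.sqrt(n) <= t).mean(), 1 - np.exp(-t * t / 2))
```

The empirical survival frequency is slightly above 2y/√(2πn) because the harmonic function V(y) exceeds y by a bounded amount, exactly as in the bounds of Theorem 2.2.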
As shown in Le Page [31], the study of a product of random matrices reduces to the study of a specially designed Markov chain. One of the difficulties is the construction of the harmonic function for the obtained chain. To cope with this, we show that, under assumptions P1-P4, its perturbed transition operator satisfies a spectral gap property on an appropriately chosen Banach space. This property allows us to build a martingale approximation, from which we derive the existence of the harmonic function. In the second part of the paper we transfer the properties of the exit time from the Gaussian sequence to the associated Markov walk using the strong approximation [21].

We end this section by recalling some standard notations. Throughout the paper c, c', c'', ... with or without indices denote absolute constants. By c_ε, c_{ε,δ}, ... we denote constants depending only on their indices. All these constants are not necessarily the same when used in different formulas. Other constants will be specifically indicated. By φ_σ(t) = (1/(σ√(2π))) exp(−t²/(2σ²)) and Φ_σ(t) = ∫_{−∞}^t φ_σ(u) du we denote, respectively, the normal density and distribution function of mean 0 and variance σ² on the real line R. The identity matrix in G is denoted by I, and g* is the transpose of g ∈ G. For two sequences (a_n)_{n ≥ 1} and (b_n)_{n ≥ 1} in R_+, the equivalence a_n ∼ b_n as n → ∞ means lim_{n→∞} a_n/b_n = 1.

2. Main results

Consider the homogeneous Markov chain (X_n)_{n ≥ 0} with values in the product space X = G × P(V) and initial value X_0 = (g, v̄) ∈ X, by setting X_1 = (g_1, g·v̄) and

(2.1) X_{n+1} = (g_{n+1}, g_n ... g_1 g·v̄), n ≥ 1.

The transition probability of (X_n)_{n ≥ 0} is given by

(2.2) Pf(g, v̄) = ∫_G f(g_1, g·v̄) μ(dg_1),

for any (g, v̄) ∈ X and any bounded measurable function f on X. On the space X define the probability measure

(2.3) λ(dg, dv̄) = μ(dg) ν(dv̄),

where ν is the μ-invariant measure defined by (1.2). It can be shown by standard methods that under conditions P1-P4 the measure λ is stationary for the Markov chain (X_k)_{k ≥ 0}, i.e. λ(Pf) = λ(f) for any bounded measurable function f on X.
Denote by P_x the probability measure generated by the finite dimensional distributions of (X_k)_{k ≥ 0} starting at X_0 = x ∈ X, and by E_x the corresponding expectation; for any probability measure ν on X, we set P_ν = ∫_X P_x ν(dx). Then for any bounded measurable function f on X, E_x f(X_1) = Pf(x) and E_x f(X_n) = P^n f(x), n ≥ 2. Let v ∈ V \ {0} be a starting vector and v̄ its direction. From (1.1) and (2.1), iterating the cocycle property ρ(g'g, v̄) = ρ(g', g·v̄) + ρ(g, v̄), one gets the basic representation

(2.4) log ‖G_n g v‖ = y + Σ_{k=1}^n ρ(X_k), n ≥ 1,

where y = log ‖gv‖ determines the size of the vector gv. In the sequel we will deal with the random walk (y + S_n)_{n ≥ 0} associated to the Markov chain (X_n)_{n ≥ 0}, where X_0 = x = (g, v̄) is

an arbitrary element of X, y is any real number and

(2.5) S_0 = 0, S_n = Σ_{k=1}^n ρ(X_k), n ≥ 1.

In the proof of our main results we will make use of the following CLT, which can be deduced from Theorem 2, p. 273 in [31], and where we assume that γ_μ = 0.

Theorem 2.1. Assume P1-P4. Then there exists a constant σ > 0 such that, uniformly in x ∈ X and t ∈ R,

lim_{n→∞} P_x( S_n / (σ√n) ≤ t ) = Φ(t).

Using (2.4) and the results from [31] it is easy to obtain the well known expression

(2.6) σ² = Var_{P_λ}(ρ(X_1)) + 2 Σ_{k=2}^∞ Cov_{P_λ}(ρ(X_1), ρ(X_k)) < ∞.

For any y > 0 denote by τ_y the first time when the Markov walk (y + S_n)_{n ≥ 0} becomes non-positive:

τ_y = min{n ≥ 1 : y + S_n ≤ 0}.

From Lemma 5.1 of Section 5 it follows that, for any y > 0 and x ∈ X, the stopping time τ_y is P_x-a.s. finite. To state our first order asymptotic of the probability P_x(τ_y > n) we need a harmonic function, which we proceed to introduce. For any (x, y) ∈ X × R denote by Q(x, y; dx' × dy') = P_x(X_1 ∈ dx', y + S_1 ∈ dy') the transition probability of the two dimensional Markov chain (X_n, y + S_n)_{n ≥ 0} under the measure P_x. Consider the transition kernel Q_+(x, y; ·), (x, y) ∈ X × R_+, on X × R_+ defined by Q_+(x, y; ·) = 1_{X × R_+} Q(x, y; ·). Note that Q_+ is not a Markov kernel. A Q_+-positive harmonic function V is any function V : X × R_+ → R_+ satisfying

(2.7) Q_+ V = V.

Given such a function V, let Q̂_+ be the Markov kernel defined by

(2.8) Q̂_+ φ = (1/V) Q_+(V φ)

for any bounded measurable function φ on X × R_+. The kernels Q_+, Q̂_+ and the harmonic function V introduced above are related to the exit time τ_y by the following identities: for any x ∈ X, y > 0, n ≥ 1 and bounded measurable function φ on X × R_+,

V Q̂_+^n φ(x, y) = Q_+^n (V φ)(x, y) = E_x[ V(X_n, y + S_n) φ(X_n, y + S_n); τ_y > n ].

With φ = 1 we see that V is Q_+-harmonic iff V(x, y) = E_x[ V(X_1, y + S_1); τ_y > 1 ]. The following theorem proves the existence of a Q_+-harmonic function. We also establish some of its important properties, such as linear behavior as y → ∞.
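The cocycle property behind representation (2.4), ρ(g'g, v̄) = ρ(g', g·v̄) + ρ(g, v̄), can be checked directly: log(‖g'gv‖/‖v‖) splits as log(‖g'(gv)‖/‖gv‖) + log(‖gv‖/‖v‖). A small numerical sketch (the matrices and the vector are arbitrary illustrative samples):

```python
import numpy as np

def rho(g, v):
    # Norm cocycle rho(g, v-bar) = log(||g v|| / ||v||); it depends on v
    # only through its direction.
    return np.log(np.linalg.norm(g @ v) / np.linalg.norm(v))

rng = np.random.default_rng(2)
g1, g2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
v = rng.normal(size=3)

lhs = rho(g2 @ g1, v)                 # rho(g'g, v-bar)
rhs = rho(g2, g1 @ v) + rho(g1, v)    # rho(g', g.v-bar) + rho(g, v-bar)
print(lhs - rhs)                      # zero up to floating point error
```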

Theorem 2.2. Assume hypotheses P1-P5.
1. For any x ∈ X and y > 0 the limit V(x, y) = lim_{n→∞} E_x[y + S_n; τ_y > n] exists and satisfies V(x, y) > 0. Moreover, for any x ∈ X the function V(x, ·) is increasing on R_+, satisfies y − a ≤ V(x, y) ≤ c(1 + y) for any y > 0 and some a > 0, and lim_{y→∞} V(x, y)/y = 1.
2. The function V is Q_+-harmonic, i.e. for any x ∈ X and y > 0, E_x[V(X_1, y + S_1); τ_y > 1] = V(x, y).

The proof of this theorem is given in Section 5 (see Propositions 5.11 and 5.12) with a = 2‖Pθ‖_∞, where the function θ is the solution of the Poisson equation θ − Pθ = ρ (see Section 4 for the existence of the function θ and its properties). Now we state our main result concerning the limit behavior of the exit time τ_y.

Theorem 2.3. Assume hypotheses P1-P5. Then, for any x ∈ X and y > 0,

P_x(τ_y > n) ∼ 2V(x, y) / (σ√(2πn)) as n → ∞.

The following theorem establishes a central limit theorem for the sum y + S_n conditioned to stay positive.

Theorem 2.4. Assume hypotheses P1-P5. For any x ∈ X, y > 0 and t > 0,

lim_{n→∞} | P_x( (y + S_n) / (σ√n) ≤ t | τ_y > n ) − Φ^+(t) | = 0,

where Φ^+(t) = 1 − exp(−t²/2).

The results for log ‖G_n v‖ stated in the previous section are obtained by taking X_0 = x = (I, v̄) as the initial state of the Markov chain (X_n)_{n ≥ 0} and setting V(v) = V(x, log ‖v‖).

3. Banach space and spectral gap conditions

In this section we verify the spectral gap properties M1-M3 of the perturbed transition operator of the Markov chain (X_n)_{n ≥ 0} acting on a Banach space to be introduced below; for more details we refer to [31]. Under these properties and some additional moment conditions M4-M5 stated below, in the paper [21] we have established a Komlós-Major-Tusnády type strong approximation result for Markov chains (see Proposition 6.3), which is one of the crucial points in the proof of the main results of the paper. The conditions M1-M5 are also used to show the existence of the solution θ of the Poisson equation ρ = θ − Pθ, which is used in the next section to construct a martingale approximation of the Markov walk (S_n)_{n ≥ 0}.
On the projective space P(V) define the distance d(ū, v̄) = ‖u ∧ v‖ / (‖u‖ ‖v‖). Let C_b be the vector space of continuous bounded functions f : X → R endowed with the supremum norm ‖f‖_∞ =

sup_{(g,v̄) ∈ X} |f(g, v̄)|. Let δ > 0 and 0 < ε < δ. For any f ∈ C_b set

(3.1) k_ε(f) = sup_{ū ≠ v̄, g ∈ G} |f(g, ū) − f(g, v̄)| / (d(ū, v̄)^ε N(g)^{4ε}) + sup_{g ≠ h, ū ∈ P(V)} |f(g, ū) − f(h, ū)| / (‖g − h‖^ε N(g)^{3ε} N(h)^{3ε}).

Define the vector space B = B_ε := {f ∈ C_b : k_ε(f) < ∞}. Endowed with the norm

(3.2) ‖f‖_B = ‖f‖_∞ + k_ε(f)

the space B becomes a Banach space. Note also that f_1, f_2 ∈ B implies f_1 f_2 ∈ B with ‖f_1 f_2‖_B ≤ ‖f_1‖_B ‖f_2‖_B, so that B is also a Banach algebra. Denote by B' = L(B, C) the topological dual of B equipped with the norm ‖·‖_{B'}: ‖ψ‖_{B'} = sup_{‖f‖_B ≤ 1} |ψ(f)|, for any linear functional ψ ∈ B'. For any linear operator A from B to B, its operator norm on B is ‖A‖_{B→B} := sup_{‖f‖_B ≤ 1} ‖Af‖_B. The Dirac measure δ_x at x ∈ X is defined by δ_x(f) = f(x) for any f ∈ B. The unit function e on X is defined by e(x) = 1 for x ∈ X. Using the techniques of the paper [31], it can be checked that under P1-P4 the condition M1 below is satisfied:

M1 (Banach space): (a) The unit function e belongs to B. (b) For every x ∈ X the Dirac measure δ_x belongs to B', with sup_{x ∈ X} ‖δ_x‖_{B'} ≤ 1. (c) B ⊆ L^1(P_x), for every x ∈ X. (d) There exists a constant η_0 ∈ (0, 1) such that for any t ∈ [−η_0, η_0] and f ∈ B the function e^{itρ} f belongs to B.

Condition M1 (c) implies that the operator P defined for f ∈ B by Pf(x) = ∫ f(g', v̄') P(x; dg', dv̄') is well defined. Moreover, it follows from M1 (d) that the perturbed operator P_t f = P(e^{itρ} f) is well defined for any t ∈ [−η_0, η_0] and f ∈ B (notice that P_0 = P). Again using the techniques from [31] one can show that the conditions of the theorem of Ionescu Tulcea and Marinescu [29] are satisfied (see also Norman [35], Section 3.2); consequently we deduce the conditions M2-M3 below:

M2 (Spectral gap): (a) The map f ↦ Pf is a bounded operator on B. (b) There exist constants C_Q > 0 and r ∈ (0, 1) such that P = Π + R, where Π is a one dimensional projector with ΠB = {f ∈ B : Pf = f} and R is an operator on B satisfying ΠR = RΠ = 0 and ‖R^n‖_{B→B} ≤ C_Q r^n, n ≥ 1.

M3 (Perturbed transition operator): There exists a constant C_P > 0 such that, for all n ≥ 1,

(3.3) sup_{|t| ≤ η_0} ‖P_t^n‖_{B→B} ≤ C_P.

Since e is the eigenfunction corresponding to the eigenvalue 1 of P, the space ΠB is generated by e. Moreover, for any f ∈ B,

(3.4) Πf = λ(f) e,

where λ is the stationary measure defined by (2.3). Indeed, for any f ∈ B there exists c_f ∈ R such that Πf = c_f e. By condition M2 it follows that λ(f) = λ(P^n f) = λ(Πf) + λ(R^n f), for any n ≥ 1. Taking the limit as n → ∞ and using again M2 we obtain λ(f) = λ(Πf) = c_f. Using condition P1, we readily deduce the conditions M4-M5 below:

M4 (Moment condition): For any p > 2, it holds sup_{n ≥ 0} sup_{x ∈ X} E_x^{1/p} |ρ(X_n)|^p < ∞.

M5: The stationary probability measure λ satisfies sup_{n ≥ 0} ∫_X P^n ρ²(x) λ(dx) < ∞.

4. Martingale approximation

The goal of this section is to construct a martingale approximation for the Markov walk (S_n)_{n ≥ 0}. All over the section it is assumed that hypotheses M1-M3 hold true. Following Gordin [19], we define the function θ as the solution of the Poisson equation ρ = θ − Pθ and the approximating martingale by M_n = Σ_{k=1}^n {θ(X_k) − Pθ(X_{k−1})}, n ≥ 1. Note that the norm cocycle (1.1) does not belong to the Banach space B, so that the existence of the solution of the Poisson equation does not follow directly from condition M3.

Lemma 4.1. The sum θ = ρ + Σ_{n=1}^∞ P^n ρ exists and satisfies the Poisson equation ρ = θ − Pθ. Moreover,

(4.1) sup_{x ∈ X} |θ(x) − ρ(x)| < ∞.

Proof. Let us emphasize that the function ρ may be unbounded and so may not belong to B. The key point in what follows is that ρ̄ = Pρ ∈ B, which can be established by standard methods (see Le Page [31]); by condition M2, this readily implies P^n ρ = P^{n−1} ρ̄ = λ(ρ̄) e + R^{n−1} ρ̄ = R^{n−1} ρ̄, where, by the stationarity of λ, we have λ(ρ̄) = λ(Pρ) = λ(ρ) = γ_μ = 0. Since the spectral radius of R is less than 1, it holds |P^n ρ| ≤ ‖P^n ρ‖_B = ‖R^{n−1} ρ̄‖_B. Consequently, by M2 (b), there exists a real number c_1 > 0 such that, for any x ∈ X and n ≥ 1,

(4.2) |P^n ρ(x)| ≤ c_1 r^n, with r ∈ (0, 1).

It readily follows that the sum θ = ρ + Σ_{n=1}^∞ P^n ρ exists. The Poisson equation ρ = θ − Pθ is obvious.
Finally, for any x ∈ X, one gets |θ(x) − ρ(x)| ≤ Σ_{n=1}^∞ |P^n ρ(x)| ≤ c_1 Σ_{n=1}^∞ r^n < ∞, which proves (4.1).

Corollary 4.2. For any p > 2, it holds sup_{n ≥ 1} sup_{x ∈ X} E_x^{1/p} |θ(X_n)|^p < ∞. In particular, ‖Pθ‖_∞ = sup_{x ∈ X} |Pθ(x)| < ∞.

Proof. The first assertion follows from the moment condition M4 and the bound (4.1). Since Pθ(x) = E_x θ(X_1), the second assertion follows from the first one.

Let F_0 be the trivial σ-algebra and F_n = σ{X_k : k ≤ n} for n ≥ 1. Let (M_n)_{n ≥ 0} be defined by

(4.3) M_0 = 0 and M_n = Σ_{k=1}^n {θ(X_k) − Pθ(X_{k−1})}, n ≥ 1.

By the Markov property we have E_x[θ(X_k) | F_{k−1}] = Pθ(X_{k−1}), which implies that the sequence (M_n, F_n)_{n ≥ 0} is a zero mean P_x-martingale. The following lemma shows that the difference (S_n − M_n)_{n ≥ 0} is bounded.

Lemma 4.3. Let a = 2‖Pθ‖_∞. It holds sup_{n ≥ 0} |S_n − M_n| ≤ a, P_x-a.s., for any x ∈ X.

Proof. Using the Poisson equation ρ = θ − Pθ we obtain, for k ≥ 1,

ρ(X_k) = θ(X_k) − Pθ(X_k) = {θ(X_k) − Pθ(X_{k−1})} + {Pθ(X_{k−1}) − Pθ(X_k)}.

Summing in k from 1 to n we get

S_n = Σ_{k=1}^n ρ(X_k) = Σ_{k=1}^n {θ(X_k) − Pθ(X_{k−1})} + Pθ(X_0) − Pθ(X_n),

which implies S_n − M_n = Pθ(X_0) − Pθ(X_n), n ≥ 0. The assertion of the lemma now follows from Corollary 4.2.

The following simple consequence of Burkholder's inequality will be used repeatedly in the paper.

Lemma 4.4. For any p > 2, it holds sup_{n ≥ 1} (1/n^{p/2}) sup_{x ∈ X} E_x |M_n|^p < ∞.

Proof. Denoting ξ_k = θ(X_k) − Pθ(X_{k−1}) and applying Burkholder's inequality one gets

(4.4) E_x^{1/p} |M_n|^p ≤ c_p E_x^{1/p} ( Σ_{k=1}^n ξ_k² )^{p/2}.

By Hölder's inequality, one gets ( Σ_{k=1}^n ξ_k² )^{p/2} ≤ n^{p/2−1} Σ_{k=1}^n |ξ_k|^p, with E_x |ξ_k|^p ≤ 2^p sup_{k ≥ 1} E_x |θ(X_k)|^p, since, by Jensen's inequality, E_x |Pθ(X_{k−1})|^p ≤ E_x |θ(X_k)|^p. The desired inequality follows from Corollary 4.2.
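The construction of this section can be sketched on a toy finite-state chain: solve the Poisson equation ρ = θ − Pθ by the series θ = Σ_{n ≥ 0} P^n ρ, which converges geometrically under a spectral gap. The transition matrix P and the observable ρ below are illustrative assumptions; ρ is centered so that the analogue of γ_μ = λ(ρ) vanishes.

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                      # stationary law (the analogue of lambda)
rho = np.array([1.0, -0.2, -1.0])
rho = rho - pi @ rho                # enforce lambda(rho) = 0

theta, term = rho.copy(), rho.copy()
for _ in range(200):                # partial sums of sum_{n>=1} P^n rho
    term = P @ term
    theta += term

# Residual of the Poisson equation rho - (theta - P theta):
print(np.abs(rho - (theta - P @ theta)).max())
```

With θ in hand, the martingale M_n of (4.3) differs from S_n only by Pθ(X_0) − Pθ(X_n), which is bounded by 2 max|Pθ|, mirroring Lemma 4.3.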

5. Existence of the harmonic function

We start with a series of auxiliary assertions. Assume hypotheses M1-M4. Recall that by Lemma 4.3 the differences S_n − M_n are bounded P_x-a.s. for any x ∈ X and n ≥ 1. For any y > 0 denote by T_y the first time when the martingale (y + M_n)_{n ≥ 1} exits R_+^* = (0, ∞):

T_y = min{n ≥ 1 : y + M_n ≤ 0}.

The following assertion shows that the stopping times τ_y and T_y are P_x-a.s. finite.

Lemma 5.1. For any x ∈ X and y > 0 it holds P_x(τ_y < ∞) = P_x(T_y < ∞) = 1.

Proof. The first claim is a consequence of the law of the iterated logarithm for (S_n)_{n ≥ 0} established in Theorem 5 of [31]; the second claim follows from Lemma 4.3.

Lemma 5.2. There exist c > 0 and ε_0 > 0 such that for any ε ∈ (0, ε_0), n ≥ 1, x ∈ X and y ≥ n^{1/2−ε},

(5.1) E_x[ |y + M_{T_y}|; T_y ≤ n ] ≤ c y n^{−ε}.

Proof. Consider the event A_n = { max_{1 ≤ k ≤ n} |ξ_k| ≤ n^{1/2−2ε} }, where ξ_k = θ(X_k) − Pθ(X_{k−1}); one gets

(5.2) E_x[ |y + M_{T_y}|; T_y ≤ n ] = E_x[ |y + M_{T_y}|; T_y ≤ n, A_n ] + E_x[ |y + M_{T_y}|; T_y ≤ n, A_n^c ] = J_1(x, y) + J_2(x, y).

We bound first J_1(x, y). Since T_y is the first time when y + M_n becomes non-positive and the size |ξ_{T_y}| of the jump of y + M_n at time T_y does not exceed n^{1/2−2ε} on the event A_n, it follows that J_1(x, y) ≤ n^{1/2−2ε} P_x(T_y ≤ n; A_n) ≤ n^{1/2−2ε}. Therefore, for any y ≥ n^{1/2−ε} and x ∈ X,

(5.3) J_1(x, y) ≤ y n^{−ε}.

Now we bound J_2(x, y). Set M_n^* = max_{1 ≤ k ≤ n} |M_k|; since |y + M_{T_y}| ≤ y + M_n^* on the event {T_y ≤ n}, it is clear that, for any x ∈ X,

(5.4) J_2(x, y) ≤ y P_x(A_n^c) + E_x[ M_n^*; A_n^c ].

The probability P_x(A_n^c) in (5.4) can be bounded as follows:

(5.5) P_x(A_n^c) = P_x( max_{1 ≤ k ≤ n} |ξ_k| > n^{1/2−2ε} ) ≤ Σ_{1 ≤ k ≤ n} P_x( |ξ_k| > n^{1/2−2ε} ) ≤ (1 / n^{(1/2−2ε)p}) Σ_{1 ≤ k ≤ n} E_x |ξ_k|^p ≤ (2^p / n^{(1/2−2ε)p}) Σ_{1 ≤ k ≤ n} E_x |θ(X_k)|^p ≤ c_p n^{−(p/2−1−2εp)},

where the last inequality follows from Corollary 4.2. Similarly, we have

(5.6) E_x[ M_n^*; A_n^c ] ≤ E_x[ M_n^*; M_n^* > n^{1/2+2ε}, A_n^c ] + n^{1/2+2ε} P_x(A_n^c) ≤ ∫_{n^{1/2+2ε}}^∞ P_x( M_n^* > t ) dt + 2 n^{1/2+2ε} P_x(A_n^c),

with

(5.7) 2 n^{1/2+2ε} P_x(A_n^c) ≤ 2 y n^{3ε} P_x(A_n^c) ≤ 4 c_p y n^{−(p/2−1−2εp−3ε)}.

Now we analyze the integral in (5.6). Using Doob's maximal inequality for martingales and Lemma 4.4 we get P_x( M_n^* > t ) ≤ E_x |M_n|^p / t^p ≤ c_p n^{p/2} / t^p, therefore

(5.8) ∫_{n^{1/2+2ε}}^∞ P_x( M_n^* > t ) dt ≤ c_p n^{p/2} ∫_{n^{1/2+2ε}}^∞ t^{−p} dt ≤ c_p n^{p/2} n^{(1/2+2ε)(1−p)} ≤ c_p y n^{−(2εp−3ε)}.

Taking (5.6), (5.8) and (5.7) together, one gets

(5.9) E_x[ M_n^*; A_n^c ] ≤ c_p ( y n^{−(2εp−3ε)} + y n^{−(p/2−1−2εp−3ε)} ).

Implementing the bounds (5.5) and (5.9) in (5.4), we obtain

(5.10) J_2(x, y) ≤ c_p ( y n^{−(p/2−1−2εp)} + y n^{−(2εp−3ε)} + y n^{−(p/2−1−2εp−3ε)} ).

Finally, from (5.2), (5.3), (5.10), we get, for any y ≥ n^{1/2−ε},

E_x[ |y + M_{T_y}|; T_y ≤ n ] ≤ y n^{−ε} + c_p ( y n^{−(p/2−1−2εp)} + y n^{−(2εp−3ε)} + y n^{−(p/2−1−2εp−3ε)} ).

Choose p > 2. Then there exist c > 0 and ε_0 > 0 such that for any ε ∈ (0, ε_0) and y ≥ n^{1/2−ε},

E_x[ |y + M_{T_y}|; T_y ≤ n ] ≤ c y n^{−ε},

which proves the lemma.

Let ε > 0, y > 0. Consider the first time ν_n when |y + M_k| exceeds 2n^{1/2−ε}:

(5.11) ν_n = ν_{n,y,ε} = min{ k ≥ 1 : |y + M_k| ≥ 2n^{1/2−ε} }.

Lemma 5.3. There exists ε_1 > 0 such that for any ε ∈ (0, ε_1), x ∈ X and y > 0 it holds P_x( lim_{n→∞} ν_{n,y,ε} = ∞ ) = 1.

Proof. Let N ≥ 1 and M_N^* = max_{1 ≤ k ≤ N} |M_k|. Using Doob's maximal inequality for martingales and Lemma 4.4, we get, for n sufficiently large (namely 2n^{1/2−ε} ≥ 2y),

P_x( ν_{n,y,ε} ≤ N ) ≤ P_x( M_N^* ≥ 2n^{1/2−ε} − y ) ≤ P_x( M_N^* ≥ n^{1/2−ε} ) ≤ E_x |M_N|^p / n^{(1/2−ε)p} ≤ c_p N^{p/2} n^{−(1/2−ε)p}.

Since p > 2, choosing 0 < ε < ε_1 = (p−2)/(2p), we have (1/2 − ε) p > 1, which implies that the series Σ_{n=1}^∞ P_x( ν_{n,y,ε} ≤ N ) is convergent. The assertion of the lemma follows by the Borel-Cantelli lemma.

Lemma 5.4. For any ε ∈ (0, 1/2) there exists c_ε > 0 such that for any n ≥ 1, x ∈ X and y > 0,

P_x( ν_{n,y,ε} > n^{1−ε} ) ≤ exp(−c_ε n^ε).

Proof. Let m = [b² n^{1−2ε}] and K = [n^ε / b²], where b will be chosen later on. By Lemma 4.3, for any x ∈ X we have |S_n − M_n| ≤ a ≤ n^{1/2−ε} P_x-a.s. for n sufficiently large, with a = 2‖Pθ‖_∞. So, for n sufficiently large and for any y > 0,

(5.12) P_x( ν_{n,y,ε} > n^{1−ε} ) = P_x( max_{1 ≤ k ≤ n^{1−ε}} |y + M_k| < 2n^{1/2−ε} ) ≤ P_x( max_{1 ≤ k ≤ K} |y + M_{km}| < 2n^{1/2−ε} ) ≤ P_x( max_{1 ≤ k ≤ K} |y + S_{km}| < 3n^{1/2−ε} ).

Using the Markov property, it follows that, for any x ∈ X,

P_x( max_{1 ≤ k ≤ K} |y + S_{km}| < 3n^{1/2−ε} ) ≤ P_x( max_{1 ≤ k ≤ K−1} |y + S_{km}| < 3n^{1/2−ε} ) sup_{z ∈ R, x' ∈ X} P_{x'}( |z + S_m| < 3n^{1/2−ε} ),

from which, iterating, we get

(5.13) P_x( max_{1 ≤ k ≤ K} |y + S_{km}| < 3n^{1/2−ε} ) ≤ ( sup_{y ∈ R, x ∈ X} P_x( |y + S_m| < 3n^{1/2−ε} ) )^K.

Denote B_y(r) = {z : |y + z| ≤ r}. Then, for any x ∈ X and y ∈ R,

P_x( |y + S_m| ≤ 3n^{1/2−ε} ) = P_x( S_m / √m ∈ B_{y/√m}(r_n) ),

where r_n = 3n^{1/2−ε} / √m. Using the central limit theorem for (S_n)_{n ≥ 0} (Theorem 2.1) we have, as n → ∞,

(5.14) sup_{y ∈ R, x ∈ X} | P_x( S_m / √m ∈ B_{y/√m}(r_n) ) − ∫_{B_{y/√m}(r_n)} φ_σ(u) du | → 0.

From (5.14) we deduce that, as n → ∞,

sup_{y ∈ R, x ∈ X} P_x( |y + S_m| ≤ 3n^{1/2−ε} ) ≤ sup_{y ∈ R} ∫_{B_{y/√m}(r_n)} φ_σ(u) du + o(1).

Since r_n ≤ c_1 b^{−1}, we get

sup_{y ∈ R} ∫_{B_{y/√m}(r_n)} φ_σ(u) du ≤ ∫_{−r_n}^{r_n} φ_σ(u) du ≤ c_2 r_n ≤ c_3 b^{−1}.

Choosing b large enough, for some q_ε < 1 and n large enough, we obtain

sup_{y ∈ R, x ∈ X} P_x( |y + S_m| ≤ 3n^{1/2−ε} ) ≤ q_ε.

Implementing this bound in (5.13) and using (5.12), it follows that

sup_{y > 0, x ∈ X} P_x( ν_{n,y,ε} > n^{1−ε} ) ≤ q_ε^K ≤ exp(−c_ε n^ε),

which proves the lemma.

Lemma 5.5. There exists c > 0 such that for any ε ∈ (0, 1/2), n ≥ 1, x ∈ X and y > 0,

sup_{1 ≤ k ≤ n} E_x[ |y + M_k|; ν_{n,y,ε} > n^{1−ε} ] ≤ c (1 + y) exp(−c_ε n^ε),

for some c_ε > 0.

Proof. By the Cauchy-Schwarz inequality, for any n ≥ 1, 1 ≤ k ≤ n, x ∈ X and y > 0,

E_x[ |y + M_k|; ν_{n,y,ε} > n^{1−ε} ] ≤ E_x^{1/2} |y + M_k|² P_x^{1/2}( ν_{n,y,ε} > n^{1−ε} ).

By Minkowski's inequality and Lemma 4.4,

E_x^{1/2} |y + M_k|² ≤ E_x^{1/3} |y + M_k|³ ≤ y + E_x^{1/3} |M_k|³ ≤ y + c n^{1/2}.

The claim follows by Lemma 5.4.

Lemma 5.6. There exists c > 0 such that for any n ≥ 1, x ∈ X and y > 0,

(5.15) E_x[ y + M_n; T_y > n ] ≤ c (1 + y).

Proof. First we prove that there exist c > 0 and ε_0 > 0 such that for any x ∈ X, ε ∈ (0, ε_0) and y ≥ n^{1/2−ε},

(5.16) E_x[ y + M_n; T_y > n ] ≤ (1 + c n^{−ε}) y.

Since (M_n, F_n)_{n ≥ 1} is a zero mean P_x-martingale, we have E_x M_n = 0 and E_x[ y + M_n; T_y ≤ n ] = E_x[ y + M_{T_y}; T_y ≤ n ]; so

(5.17) E_x[ y + M_n; T_y > n ] = E_x[ y + M_n ] − E_x[ y + M_n; T_y ≤ n ] = y − E_x[ y + M_{T_y}; T_y ≤ n ] = y + E_x[ |y + M_{T_y}|; T_y ≤ n ].

By Lemma 5.2, there exist c > 0 and ε_0 > 0 such that E_x[ |y + M_{T_y}|; T_y ≤ n ] ≤ c n^{−ε} y, for any y ≥ n^{1/2−ε} and ε ∈ (0, ε_0). Implementing this inequality in (5.17), we obtain (5.16).

Now we show (5.15) for any x ∈ X and y > 0. We use a recursive argument based on the Markov property coupled with the bound (5.16). First we note that, for any ε > 0,

(5.18) E_x[ y + M_n; T_y > n ] = E_x[ y + M_n; T_y > n, ν_{n,y,ε} ≤ n^{1−ε} ] + E_x[ y + M_n; T_y > n, ν_{n,y,ε} > n^{1−ε} ] = J_1(x, y) + J_2(x, y).

By Lemma 5.5, for some ε ∈ (0, ε_0),

J_2(x, y) ≤ c (1 + y) exp(−c_ε n^ε).

To control J_1(x, y), write it in the form

(5.19) J_1(x, y) = Σ_{k=1}^{[n^{1−ε}]} E_x[ y + M_n; T_y > n, ν_{n,y,ε} = k ].

By the Markov property of the chain (X_n)_{n ≥ 1},

(5.20) E_x[ y + M_n; T_y > n, ν_{n,y,ε} = k ] = E_x[ U_{n−k}(X_k, y + M_k); T_y > k, ν_{n,y,ε} = k ],

where U_m(x', y') = E_{x'}[ y' + M_m; T_{y'} > m ] for any m ≥ 1. Since {T_y > k, ν_{n,y,ε} = k} ⊆ {y + M_k ≥ n^{1/2−ε}}, using (5.16), on the event {T_y > k, ν_{n,y,ε} = k} we have

(5.21) U_{n−k}(X_k, y + M_k) ≤ (1 + c (n−k)^{−ε}) (y + M_k).

Inserting (5.21) into (5.20), we obtain

(5.22) E_x[ y + M_n; T_y > n, ν_{n,y,ε} = k ] ≤ (1 + c (n−k)^{−ε}) E_x[ y + M_k; T_y > k, ν_{n,y,ε} = k ].

Combining (5.22) and (5.19) it follows that, for n sufficiently large,

J_1(x, y) ≤ (1 + c_ε n^{−ε}) Σ_{k=1}^{[n^{1−ε}]} E_x[ y + M_k; T_y > k, ν_{n,y,ε} = k ].

Since (y + M_n) 1_{\{T_y > n\}} is a submartingale, for any x ∈ X and 1 ≤ k ≤ [n^{1−ε}],

E_x[ y + M_k; T_y > k, ν_{n,y,ε} = k ] ≤ E_x[ y + M_{[n^{1−ε}]}; T_y > [n^{1−ε}], ν_{n,y,ε} = k ].

This implies

J_1(x, y) ≤ (1 + c_ε n^{−ε}) Σ_{k=1}^{[n^{1−ε}]} E_x[ y + M_{[n^{1−ε}]}; T_y > [n^{1−ε}], ν_{n,y,ε} = k ] ≤ (1 + c_ε n^{−ε}) E_x[ y + M_{[n^{1−ε}]}; T_y > [n^{1−ε}] ].

Implementing the bounds for J_1(x, y) and J_2(x, y) in (5.18) we obtain

(5.23) E_x[ y + M_n; T_y > n ] ≤ (1 + c_ε n^{−ε}) E_x[ y + M_{[n^{1−ε}]}; T_y > [n^{1−ε}] ] + c (1 + y) exp(−c_ε n^ε).

Let k_j = [n^{(1−ε)^j}] for j ≥ 0, so that k_0 = n and [k_j^{1−ε}] ≤ k_{j+1}. Then, using the bound (5.23) and the fact that (y + M_n) 1_{\{T_y > n\}} is a submartingale, we get

E_x[ y + M_{k_1}; T_y > k_1 ] ≤ (1 + c_ε k_1^{−ε}) E_x[ y + M_{k_2}; T_y > k_2 ] + c (1 + y) exp(−c_ε k_1^ε).

Substituting this bound in (5.23) and continuing in the same way, after m iterations we obtain

(5.24) E_x[ y + M_n; T_y > n ] ≤ A_m E_x[ y + M_{k_m}; T_y > k_m ] + c (1 + y) B_m,

where

(5.25) A_m = Π_{j=1}^m (1 + c_ε k_{j−1}^{−ε})

and

(5.26) B_m = Σ_{j=1}^m exp(−c_ε k_{j−1}^ε).

Letting α_j = n^{−ε(1−ε)^{j−1}}, we obtain

(5.27) A_m ≤ exp( 2 c_ε Σ_{j=1}^m α_j ).

We choose m = m_n to be the largest integer such that k_m = [n^{(1−ε)^m}] ≥ n_0, where n_0 ≥ 1 is a constant. Let q = n_0^{−ε²}. Note that, for any j ≤ m, it holds

α_j / α_{j+1} = n^{−ε²(1−ε)^{j−1}} ≤ n_0^{−ε²} = q < 1.

Therefore α_j ≤ q^{m−j} α_m ≤ q^{m−j} n_0^{−ε}. This implies

(5.28) Σ_{j=1}^m α_j ≤ n_0^{−ε} Σ_{j=1}^m q^{m−j} ≤ n_0^{−ε} / (1 − n_0^{−ε²}).

In the same way we can check that

(5.29) B_m ≤ c_1 Σ_{j=1}^m n^{−ε(1−ε)^{j−1}} ≤ c_2 n_0^{−ε} / (1 − n_0^{−ε²}).

The assertion of the lemma follows taking into account that E_x[ y + M_{k_m}; T_y > k_m ] ≤ y + E_x |M_{k_m}| ≤ y + c, since k_m is bounded by a constant; implementing this in (5.24) yields E_x[ y + M_n; T_y > n ] ≤ c(1 + y), which proves (5.15).

Corollary 5.7. There exists c > 0 such that for any n ≥ 1, x ∈ X and y > 0,

E_x[ y + S_n; τ_y > n ] ≤ c (1 + y) and E_x[ y + M_n; τ_y > n ] ≤ c (1 + y).

Proof. Let a = 2‖Pθ‖_∞ and x ∈ X. By Lemma 4.3, sup_{n ≥ 0} |S_n − M_n| ≤ a, P_x-a.s., which implies P_x( τ_y ≤ T_{y+a} ) = 1. Since y + a + M_n ≥ y + S_n > 0 on {τ_y > n}, P_x-a.s., for any x ∈ X, using Lemma 5.6 we get

(5.30) E_x[ y + a + M_n; τ_y > n ] ≤ E_x[ y + a + M_n; T_{y+a} > n ] ≤ c (y + a) ≤ c' (1 + y).

Using the bound (5.30) it follows that

E_x[ y + S_n; τ_y > n ] ≤ E_x[ y + a + M_n; τ_y > n ] ≤ c' (1 + y)

and

E_x[ y + M_n; τ_y > n ] ≤ E_x[ y + a + M_n; τ_y > n ] ≤ c' (1 + y).

Lemma 5.8. There exists c > 0 such that for any x ∈ X and y > 0, E_x |y + M_{T_y}| ≤ c (1 + y) < ∞.

Proof. Let x ∈ X. As (M_n, F_n)_{n ≥ 1} is a zero mean P_x-martingale, we have E_x M_n = 0 and E_x[ y + M_n; T_y ≤ n ] = E_x[ y + M_{T_y}; T_y ≤ n ]; so, since y + M_{T_y} ≤ 0 on {T_y ≤ n},

E_x |y + M_{T_y ∧ n}| = E_x[ y + M_n; T_y > n ] − E_x[ y + M_{T_y}; T_y ≤ n ] = E_x[ y + M_n; T_y > n ] − E_x[ y + M_n; T_y ≤ n ] = 2 E_x[ y + M_n; T_y > n ] − y ≤ 2 E_x[ y + M_n; T_y > n ].

Taking into account that, by Lemma 5.6, E_x[ y + M_n; T_y > n ] ≤ c (1 + y), we get E_x |y + M_{T_y ∧ n}| ≤ 2c (1 + y). Since E_x[ |y + M_{T_y}|; T_y ≤ n ] ≤ E_x |y + M_{T_y ∧ n}| ≤ 2c (1 + y), by Lebesgue's monotone convergence theorem it follows that

E_x |y + M_{T_y}| = lim_{n→∞} E_x[ |y + M_{T_y}|; T_y ≤ n ] ≤ 2c (1 + y) < ∞.

Corollary 5.9. There exists c > 0 such that for any x ∈ X and y > 0,

E_x |y + S_{τ_y}| ≤ c (1 + y) < ∞ and E_x |y + M_{τ_y}| ≤ c (1 + y) < ∞.

Proof. For any x ∈ X and y > 0,

(5.31) E_x |y + S_{τ_y ∧ n}| = E_x[ y + S_n; τ_y > n ] + E_x[ |y + S_{τ_y}|; τ_y ≤ n ].

By Lemma 4.3 we have sup_{n ≥ 0} |S_n − M_n| ≤ 2‖Pθ‖_∞ = a < ∞. Note also that E_x M_n = 0 and E_x[ y + M_n; τ_y ≤ n ] = E_x[ y + M_{τ_y}; τ_y ≤ n ]; therefore the second term is bounded as follows:

E_x[ |y + S_{τ_y}|; τ_y ≤ n ] ≤ −E_x[ y + M_{τ_y}; τ_y ≤ n ] + a = −E_x[ y + M_n; τ_y ≤ n ] + a = E_x[ y + M_n; τ_y > n ] − E_x[ y + M_n ] + a ≤ E_x[ y + M_n; τ_y > n ] + a ≤ E_x[ y + S_n; τ_y > n ] + 2a.

Substituting this bound in (5.31) we get E_x |y + S_{τ_y ∧ n}| ≤ 2 E_x[ y + S_n; τ_y > n ] + 2a, from which, by Corollary 5.7, we obtain E_x |y + S_{τ_y ∧ n}| ≤ c (1 + y) + 2a.

Since E_x[ |y + S_{τ_y}|; τ_y ≤ n ] ≤ E_x |y + S_{τ_y ∧ n}| ≤ c (1 + y) + 2a, by Lebesgue's monotone convergence theorem it follows that

E_x |y + S_{τ_y}| = lim_{n→∞} E_x[ |y + S_{τ_y}|; τ_y ≤ n ] ≤ c (1 + y) + 2a < ∞.

The second assertion follows from the first one, since P_x( sup_{n ≥ 0} |S_n − M_n| ≤ a ) = 1.

Lemma 5.10. Uniformly in x ∈ X,

lim_{y→∞} (1/y) lim_{n→∞} E_x[ y + M_n; T_y > n ] = 1.

Proof. Let x ∈ X and y > 0. Let n_0 ≥ 1 be a constant and let m = m_n be such that k_m = [n^{(1−ε)^m}] is of order n_0; consider the sequence k_j = [n^{(1−ε)^j}], j = 0, ..., m. From (5.24) it follows

E_x[ y + M_n; T_y > n ] ≤ A_m E_x[ y + M_{k_m}; T_y > k_m ] + c (1 + y) B_m,

where A_m and B_m are defined by (5.25) and (5.26). Let δ > 0. From (5.27), (5.28) and (5.29), choosing n_0 sufficiently large, one gets A_m ≤ 1 + δ and B_m ≤ δ uniformly in m, and thus in n sufficiently large. This gives

(5.32) E_x[ y + M_n; T_y > n ] ≤ (1 + δ) E_x[ y + M_{k_m}; T_y > k_m ] + c (1 + y) δ.

Since (y + M_n) 1_{\{T_y > n\}} is a submartingale, the sequence (E_x[ y + M_n; T_y > n ])_{n ≥ 1} is increasing and bounded by Lemma 5.6: it thus converges as n → ∞, and, since E_x[ y + M_{k_m}; T_y > k_m ] ≤ E_x[ y + M_{k_m} ] = y, one gets

lim_{n→∞} E_x[ y + M_n; T_y > n ] ≤ (1 + δ) y + c (1 + y) δ.

From (5.17) we have the lower bound E_x[ y + M_n; T_y > n ] ≥ y; we obtain

y ≤ lim_{n→∞} E_x[ y + M_n; T_y > n ] ≤ (1 + δ) y + c (1 + y) δ,

and the claim follows since δ > 0 is arbitrary.

For any x ∈ X denote

V(x, y) = −E_x M_{τ_y} if y > 0, and V(x, y) = 0 if y ≤ 0.

The following proposition presents some properties of the function V.

Proposition 5.11. The function V satisfies:
1. For any y > 0 and x ∈ X, V(x, y) = lim_{n→∞} E_x[ y + M_n; τ_y > n ] = lim_{n→∞} E_x[ y + S_n; τ_y > n ].
2. For any y > 0 and x ∈ X, y − a ≤ V(x, y) ≤ c (1 + y), where a = 2‖Pθ‖_∞.
3. For any x ∈ X, lim_{y→∞} V(x, y)/y = 1.
4. For any x ∈ X, the function V(x, ·) is increasing.

Proof. Let x ∈ X and y > 0. Since (M_n, F_n)_{n ≥ 1} is a zero mean P_x-martingale,

(5.33) E_x[ y + M_n; τ_y > n ] = E_x[ y + M_n ] − E_x[ y + M_n; τ_y ≤ n ] = y − E_x[ y + M_{τ_y}; τ_y ≤ n ].

Proof of claim 1. According to Corollary 5.9 one gets E_x |y + M_{τ_y}| ≤ c (1 + y) < ∞; thus, by Lebesgue's dominated convergence theorem, for any x ∈ X,

lim_{n→∞} E_x[ y + M_{τ_y}; τ_y ≤ n ] = E_x[ y + M_{τ_y} ] = y − V(x, y).

Therefore, from (5.33), it follows lim_{n→∞} E_x[ y + M_n; τ_y > n ] = y − E_x[ y + M_{τ_y} ] = V(x, y). Since |S_n − M_n| ≤ 2‖Pθ‖_∞ and lim_{n→∞} P_x( τ_y > n ) = 0, one obtains lim_{n→∞} E_x[ y + S_n; τ_y > n ] = V(x, y). Taking into account that (y + S_n) 1_{\{τ_y > n\}} ≥ 0, we have V(x, y) ≥ 0, which proves the first claim.

Proof of claim 2. Corollary 5.7 implies that for any x ∈ X, y > 0 and n ≥ 1, E_x[ y + S_n; τ_y > n ] ≤ c (1 + y). Taking the limit as n → ∞, we obtain V(x, y) ≤ c (1 + y), which proves the upper bound. From (5.33), taking into account the bound |S_n − M_n| ≤ 2‖Pθ‖_∞ = a and the fact that y + S_{τ_y} ≤ 0, we get

E_x[ y + M_n; τ_y > n ] ≥ y − E_x[ y + S_{τ_y}; τ_y ≤ n ] − a ≥ y − a.

The expected lower bound follows letting n → ∞.

Proof of claim 3. From claim 2 it follows that lim inf_{y→∞} V(x, y)/y ≥ 1. Let a = 2‖Pθ‖_∞ and y > 0. Since τ_y ≤ T_{y+a}, we have

E_x[ y + S_n; τ_y > n ] ≤ E_x[ y + a + M_n; τ_y > n ] ≤ E_x[ y + a + M_n; T_{y+a} > n ].

Taking into account claim 1 and Lemma 5.10, we obtain lim sup_{y→∞} V(x, y)/y ≤ 1.

Proof of claim 4. It is clear that y ≤ y' implies τ_y ≤ τ_{y'}. Therefore

E_x[ y + S_n; τ_y > n ] ≤ E_x[ y + S_n; τ_{y'} > n ] ≤ E_x[ y' + S_n; τ_{y'} > n ].

Taking the limit as n → ∞ one gets claim 4.

In the following proposition we prove that V is Q_+-harmonic.

Proposition 5.12. For any x ∈ X and y > 0 it holds

Q_+ V(x, y) = E_x[ V(X_1, y + S_1); τ_y > 1 ] = V(x, y) > 0.

Proof. Let x ∈ X and y > 0, and set V_n(x, y) = E_x[ y + S_n; τ_y > n ] for any n ≥ 1. By the Markov property we have

V_{n+1}(x, y) = E_x[ y + S_{n+1}; τ_y > n + 1 ] = E_x[ V_n(X_1, y + S_1); τ_y > 1 ].

By Corollary 5.7, we have $\sup_{x\in X} V_n(x,y) \le \sup_{x\in X}\sup_{n\ge1} E_x[y+S_n;\ \tau_y>n] \le c(1+y)$. This implies that $V_n(X_1,\,y+S_1)\mathbf 1_{\{\tau_y>1\}}$ is dominated by $c(1+y+S_1)\mathbf 1_{\{\tau_y>1\}}$, which is integrable. Taking the limit as $n\to\infty$, by Lebesgue's dominated convergence theorem, we get
$$(5.34)\qquad V(x,y) = \lim_{n\to\infty} V_{n+1}(x,y) = \lim_{n\to\infty} E_x[V_n(X_1,\,y+S_1);\ \tau_y>1] = E_x[V(X_1,\,y+S_1);\ \tau_y>1] = Q_+V(x,y).$$
To prove that $V$ is strictly positive on $X\times\mathbb R_+^*$ we first iterate (5.34): for any $n\ge1$,
$$(5.35)\qquad V(x,y) = Q_+^n V(x,y) = E_x[V(X_n,\,y+S_n);\ \tau_y>n].$$
Let $\varepsilon>0$. From (5.35), using the claim 2 of Proposition 5.11, we conclude that, for any $x\in X$ and $n\ge1$,
$$E_x[V(X_n,\,y+S_n);\ \tau_y>n] \ge E_x[y+S_n-a;\ \tau_y>n] \ge \varepsilon\,P_x\big(y+S_1>0,\dots,y+S_{n-1}>0,\ y+S_n>a+\varepsilon\big).$$
According to condition P5 there exists $\delta>0$ such that $q_\delta := \inf_{x\in X} P_x(\rho(X_1)\ge\delta) > 0$. Choose $n$ sufficiently large such that $n\delta > a+\varepsilon$. Then
$$P_x\big(y+S_1>0,\dots,y+S_{n-1}>0,\ y+S_n>a+\varepsilon\big) \ge P_x\big(y+S_1>\delta,\ y+S_2>2\delta,\dots,\ y+S_n>n\delta\big).$$
By the Markov property,
$$(5.36)\qquad P_x\big(y+S_1>\delta,\dots,y+S_n>n\delta\big) = \int_{X\times(\delta,\infty)}\cdots\int_{X\times(n\delta,\infty)} Q(x_{n-1},y_{n-1};\,dx_n,dy_n)\,Q(x_{n-2},y_{n-2};\,dx_{n-1},dy_{n-1})\cdots Q(x,y;\,dx_1,dy_1),$$
where $Q(x,y;\,dx',dy') = P_x\big(X_1\in dx',\ y+\rho(X_1)\in dy'\big)$. For any $1\le m\le n$, $y_{m-1}\ge(m-1)\delta$ and $x\in X$ we have
$$\int_{X\times(m\delta,\infty)} Q(x_{m-1},y_{m-1};\,dx_m,dy_m) = P_{x_{m-1}}\big(\rho(X_1) > m\delta - y_{m-1}\big) \ge \inf_{x\in X} P_x\big(\rho(X_1)\ge\delta\big) = q_\delta > 0.$$
Inserting consecutively these bounds in (5.36), it readily follows that
$$P_x\big(y+S_1>\delta,\dots,y+S_n>n\delta\big) \ge q_\delta^n > 0,$$
which proves that $V$ is strictly positive on $X\times\mathbb R_+^*$.
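The chaining bound above, $P_x(y+S_1>\delta,\dots,y+S_n>n\delta)\ge q_\delta^n$, can be checked exactly on a toy case. The sketch below (an illustration, not from the paper; all names are hypothetical) uses an i.i.d. $\pm1$ walk, i.e. a degenerate Markov chain, with $\delta = 1/2$ and $y=0$, so that $q_\delta = p = P(\text{step}=+1)$, and enumerates all $2^n$ paths:

```python
from itertools import product

# Exact illustration of the chaining lower bound used to show V > 0:
# if each step exceeds delta with probability at least q_delta, then
# P(y + S_1 > delta, ..., y + S_n > n*delta) >= q_delta**n.
# Toy case: i.i.d. steps +-1, P(step = +1) = p, delta = 1/2, y = 0.

def stay_above_prob(n, p, delta=0.5):
    """Exact P(S_k > k*delta for all k <= n) by enumerating all 2^n paths."""
    total = 0.0
    for signs in product((1, -1), repeat=n):
        s, ok = 0, True
        for k, step in enumerate(signs, start=1):
            s += step
            if s <= k * delta:
                ok = False
                break
        if ok:
            total += p ** signs.count(1) * (1 - p) ** signs.count(-1)
    return total

n, p = 10, 0.3
prob = stay_above_prob(n, p)
# The event {every step = +1} has probability p**n and forces S_k = k > k/2,
# so the exact probability dominates the chaining bound q_delta**n = p**n.
print(prob, p ** n)
```

The enumeration confirms that the exact staying probability is sandwiched strictly between the chaining bound $p^n$ and $1$, since paths with a late $-1$ step also survive.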

6. Coupling argument and proof of Theorem 2.3

Let $(B_t)_{t\ge0}$ be a standard Brownian motion on the probability space $(\Omega,\mathcal F,\Pr)$. For any $y>0$ define the exit time
$$\tau_y^{bm} = \inf\{t\ge0:\ y+\sigma B_t \le 0\},$$
where $\sigma>0$ is given by (2.6). The following well known formulas are due to Lévy ([34], Theorem 42.I).

Lemma 6.1. Let $y>0$. The stopping time $\tau_y^{bm}$ has the following properties:
1. For any $n\ge1$,
$$(6.1)\qquad \Pr\big(\tau_y^{bm}>n\big) = \Pr\Big(\sigma\inf_{u\le n} B_u > -y\Big) = \frac{2}{\sqrt{2\pi n}\,\sigma}\int_0^y e^{-\frac{s^2}{2n\sigma^2}}\,ds.$$
2. For any $a,b$ satisfying $a<b$ and $n\ge1$,
$$(6.2)\qquad \Pr\big(\tau_y^{bm}>n,\ y+\sigma B_n\in[a,b]\big) = \frac{1}{\sqrt{2\pi n}\,\sigma}\int_a^b \Big(e^{-\frac{(s-y)^2}{2n\sigma^2}} - e^{-\frac{(s+y)^2}{2n\sigma^2}}\Big)\,\mathbf 1_{\{s\ge0\}}\,ds.$$

From Lemma 6.1 we easily deduce:

Lemma 6.2. The stopping time $\tau_y^{bm}$ has the following properties: for any $y>0$ and $n\ge1$,
$$(6.3)\qquad \Pr\big(\tau_y^{bm}>n\big) \le \frac{c\,y}{\sqrt n\,\sigma}$$
and, for any sequence of real numbers $(\alpha_n)_{n\ge0}$ such that $\alpha_n\to0$ as $n\to\infty$,
$$(6.4)\qquad \sup_{y\in[0,\,\alpha_n\sqrt n]}\ \bigg|\ \Pr\big(\tau_y^{bm}>n\big)\Big/\frac{2y}{\sqrt{2\pi n}\,\sigma}\ -\ 1\ \bigg| = O(\alpha_n).$$

We transfer the properties of the exit time $\tau_y^{bm}$ to the exit time $\tau_y$ for large $y$ using the following coupling result proved in [21], Theorem 2.1. Let $\widetilde\Omega = (\mathbb R\times\mathbb R)^{\mathbb N}$ and for any $\widetilde\omega = (\omega_1,\omega_2)\in\widetilde\Omega$ define the coordinate processes $\widetilde Y_i = \omega_{1,i}$ and $\widetilde W_i = \omega_{2,i}$ for $i\ge1$.

Proposition 6.3. Assume that the Markov chain $(X_i)_{i\ge0}$ and the function $\rho$ satisfy hypotheses M1-M4. Let $p>2$ and $0<\alpha<p-2$. Then, there exists a Markov transition kernel $x\mapsto\widetilde P_x$ from $(X,\mathcal B_X)$ to $(\widetilde\Omega,\widetilde{\mathcal B})$ such that:
1. For any $x\in X$ the distribution of $(\widetilde Y_i)_{i\ge1}$ under $\widetilde P_x$ coincides with the distribution of $(\rho(X_i))_{i\ge1}$ under $P_x$;
2. For any $x\in X$ the $\widetilde W_i$, $i\ge1$, are i.i.d. standard normal random variables under $\widetilde P_x$;
3. For any $\varepsilon\in\big(0,\frac{\alpha}{2(1+2\alpha)}\big)$ there exist a constant $C$ depending on $\varepsilon$, $\alpha$ and $p$ and an absolute constant $c$ such that for any $x\in X$ and $n\ge1$,
$$\widetilde P_x\Big(n^{-1/2}\sup_{1\le k\le n}\Big|\sum_{i=1}^k \widetilde Y_i - \sigma\widetilde W_i\Big| > c\,n^{-\varepsilon}\Big) \le C\,n^{-\frac{\alpha}{1+2\alpha}+\varepsilon(2+2\alpha)}.$$
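Lévy's formula (6.1) can be checked numerically: the right-hand side equals $\operatorname{erf}\big(y/(\sigma\sqrt{2n})\big)$, and for $y$ much smaller than $\sigma\sqrt n$ it linearizes to $2y/(\sqrt{2\pi n}\,\sigma)$, which is the comparison behind (6.4). The sketch below is an illustration only; the function name and parameter values are hypothetical.

```python
import math

# Numerical sanity check of Levy's formula in Lemma 6.1:
# Pr(tau_y^bm > n) = 2/(sqrt(2*pi*n)*sigma) * int_0^y exp(-s^2/(2*n*sigma^2)) ds,
# which equals erf(y / (sigma*sqrt(2*n))).

def levy_survival(y, n, sigma, steps=200_000):
    """Evaluate the integral expression in (6.1) by the trapezoidal rule."""
    h = y / steps
    f = [math.exp(-(i * h) ** 2 / (2 * n * sigma * sigma)) for i in range(steps + 1)]
    integral = h * (sum(f) - 0.5 * (f[0] + f[-1]))
    return 2.0 / (math.sqrt(2 * math.pi * n) * sigma) * integral

y, n, sigma = 1.5, 4, 0.8
p = levy_survival(y, n, sigma)
closed_form = math.erf(y / (sigma * math.sqrt(2 * n)))
# For y << sigma*sqrt(n) the integrand is ~1, so the probability is
# ~ 2y/(sqrt(2*pi*n)*sigma); the exact value always lies strictly below it.
linear = 2 * y / (math.sqrt(2 * math.pi * n) * sigma)
print(p, closed_form, linear)
```

Since the integrand in (6.1) is strictly below $1$ for $s>0$, the exact probability is always below its linearization, consistent with the one-sided error absorbed in the $O(\alpha_n)$ term of (6.4).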

Remark 6.4. In Theorem 2.1 of [21], the constant $C$ depends on $\delta_x(B)$ and on $\mu_p(x) = \sup_{k\ge1} E_x^{1/p}|\rho(X_k)|^p$. By conditions M1 and M4 we have $\sup_{x\in X}\delta_x(B)\le1$ and $\sup_{x\in X}\mu_p(x)<\infty$, which implies that we can choose the constant $C$ to be independent of $x$. Note that the constant $C$ depends also on other constants introduced so far, in particular on the variance $\sigma^2$ and the constants in conditions M1-M4.

Without loss of generality we shall consider in the sequel that $\rho(X_i) = \widetilde Y_i$, $B_i = \sum_{j=1}^i \widetilde W_j$ and $P_x = \widetilde P_x$. Choosing $\alpha<p-2$ and $p$ sufficiently large in Proposition 6.3, it follows that there exists $\varepsilon_0>0$ such that, for any $\varepsilon\in(0,\varepsilon_0)$, $x\in X$ and $n\ge1$,
$$(6.5)\qquad P_x\Big(\sup_{0\le t\le1}\big|S_{[nt]}-\sigma B_{nt}\big| > n^{1/2-2\varepsilon}\Big) \le c_\varepsilon\,n^{-2\varepsilon},$$
where $c_\varepsilon$ depends on $\varepsilon$. To pass from the Brownian motion in discrete time to that in continuous time one can use standard bounds for the oscillation of $(B_t)_{t\ge0}$ from Revuz and Yor [36].

In the proof of Theorem 2.3 we use the following auxiliary result:

Lemma 6.5. Let $\varepsilon\in(0,\varepsilon_0)$ and let $(\theta_n)_{n\ge1}$ be a sequence of positive numbers such that $\theta_n\to0$ and $\theta_n n^{\varepsilon/4}\to\infty$ as $n\to\infty$. Then:
1. There exists a constant $c>0$ such that, for $n$ sufficiently large,
$$\sup_{x\in X,\ y\in[n^{1/2-\varepsilon},\,\theta_n n^{1/2}]}\ \bigg|\ P_x(\tau_y>n)\Big/\frac{2y}{\sqrt{2\pi n}\,\sigma}\ -\ 1\ \bigg| \le c\,\theta_n.$$
2. There exists a constant $c_\varepsilon>0$ such that for any $n\ge1$ and $y\le n^{1/2-\varepsilon}$,
$$\sup_{x\in X} P_x(\tau_y>n) \le c_\varepsilon\,\frac{y}{\sqrt n}.$$

Proof. We start with the claim 1. Let $y\ge n^{1/2-\varepsilon}$. Denote $y^+ = y+n^{1/2-2\varepsilon}$ and $y^- = y-n^{1/2-2\varepsilon}$. Let
$$A_n = \Big\{\sup_{0\le t\le1}\big|S_{[nt]}-\sigma B_{nt}\big| \le n^{1/2-2\varepsilon}\Big\}.$$
Using (6.5), we have $P_x(A_n^c)\le c_\varepsilon n^{-2\varepsilon}$, for any $x\in X$. Since
$$\{\tau_y>n\}\cap A_n \subseteq \{\tau_{y^+}^{bm}>n\}\cap A_n,$$
we obtain, for any $x\in X$ and $y\ge n^{1/2-\varepsilon}$,
$$(6.6)\qquad P_x(\tau_y>n) \le P_x(\tau_y>n,\ A_n) + P_x(A_n^c) \le \Pr\big(\tau_{y^+}^{bm}>n\big) + c_\varepsilon\,n^{-2\varepsilon}.$$
In the same way we get, for any $x\in X$ and $y\ge n^{1/2-2\varepsilon}$,
$$(6.7)\qquad P_x(\tau_y>n) \ge \Pr\big(\tau_{y^-}^{bm}>n\big) - c_\varepsilon\,n^{-2\varepsilon}.$$
Combining (6.6) and (6.7), for any $x\in X$ and $y\ge n^{1/2-2\varepsilon}$,
$$(6.8)\qquad \Pr\big(\tau_{y^-}^{bm}>n\big) - c_\varepsilon\,n^{-2\varepsilon} \le P_x(\tau_y>n) \le \Pr\big(\tau_{y^+}^{bm}>n\big) + c_\varepsilon\,n^{-2\varepsilon}.$$

For any $y\in[n^{1/2-\varepsilon},\,\theta_n n^{1/2}]$ and $n$ large enough,
$$(6.9)\qquad y^{\pm} = y\pm n^{1/2-2\varepsilon} = y\,(1+\beta_n n^{-\varepsilon}) \le 2\theta_n n^{1/2},$$
with some $\beta_n$ satisfying $|\beta_n|\le1$. Using (6.4) and (6.9), for any $x\in X$ and $y\in[n^{1/2-\varepsilon},\,\theta_n n^{1/2}]$, we obtain
$$(6.10)\qquad \Pr\big(\tau_{y^\pm}^{bm}>n\big) = \frac{2y^\pm}{\sqrt{2\pi n}\,\sigma}\,\big(1+O(\theta_n)\big),$$
where the constant in $O(\cdot)$ is absolute. Since $\theta_n n^{\varepsilon/4}\to\infty$, we have
$$(6.11)\qquad \frac{n^{1/2-2\varepsilon}}{y} \le \frac{n^{1/2-2\varepsilon}}{n^{1/2-\varepsilon}} = n^{-\varepsilon} \le \theta_n,$$
for $n$ sufficiently large. From (6.8) and (6.10), taking into account (6.11), it follows that, for any $x\in X$ and $y\in[n^{1/2-\varepsilon},\,\theta_n n^{1/2}]$,
$$P_x(\tau_y>n) = \frac{2y}{\sqrt{2\pi n}\,\sigma}\,\big(1+O(\theta_n)\big),\quad\text{as } n\to\infty,$$
where the constant in $O(\cdot)$ does not depend on $x$ and $y$. This proves the claim 1.

Proof of the claim 2. Note first that for any $y\ge n^{1/2-2\varepsilon}$ and $n$ large enough,
$$(6.12)\qquad y \le y^+ = y + n^{1/2-2\varepsilon} \le 2y;$$
consequently, from (6.6) and (6.3), it follows that
$$P_x(\tau_y>n) \le \Pr\big(\tau_{y^+}^{bm}>n\big) + c_\varepsilon\,n^{-2\varepsilon} \le \frac{c\,y^+}{\sqrt n\,\sigma} + c_\varepsilon\,n^{-2\varepsilon} \le \frac{2c\,y}{\sqrt n\,\sigma} + c_\varepsilon\,n^{-2\varepsilon},$$
with $n^{-2\varepsilon}\le \frac{y}{\sqrt n}$ for $n$ large enough and $y\ge n^{1/2-2\varepsilon}$. This yields $P_x(\tau_y>n)\le c_\varepsilon\,\frac{y}{\sqrt n}$ for any $x\in X$ and $y\le n^{1/2-\varepsilon}$.

Now we proceed to prove Theorem 2.3. Let $\varepsilon\in(0,\varepsilon_0)$ and let $(\theta_n)_{n\ge1}$ be a sequence of positive numbers such that $\theta_n\to0$ and $\theta_n n^{\varepsilon/4}\to\infty$ as $n\to\infty$. Let $x\in X$ and $y>0$. Recall that $\nu_n = \min\{k\ge1:\ y+M_k\ge 2n^{1/2-\varepsilon}\}$. A simple decomposition gives
$$(6.13)\qquad P_n(x,y) := P_x(\tau_y>n) = P_x\big(\tau_y>n,\ \nu_n>n^{1-\varepsilon}\big) + P_x\big(\tau_y>n,\ \nu_n\le n^{1-\varepsilon}\big).$$
The first probability in the right hand side of (6.13) is estimated using Lemma 5.4:
$$(6.14)\qquad \sup_{x\in X,\,y>0} P_x\big(\tau_y>n,\ \nu_n>n^{1-\varepsilon}\big) \le \sup_{x\in X,\,y>0} P_x\big(\nu_n>n^{1-\varepsilon}\big) = O\big(e^{-c\,n^{\varepsilon}}\big).$$
For the second probability in (6.13) we have, by the Markov property,
$$(6.15)\qquad P_x\big(\tau_y>n,\ \nu_n\le n^{1-\varepsilon}\big) = E_x\big[P_{n-\nu_n}(X_{\nu_n},\,y+S_{\nu_n});\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big]$$
$$= E_x\big[P_{n-\nu_n}(X_{\nu_n},\,y+S_{\nu_n});\ y+S_{\nu_n}\le\theta_n n^{1/2},\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] + E_x\big[P_{n-\nu_n}(X_{\nu_n},\,y+S_{\nu_n});\ y+S_{\nu_n}>\theta_n n^{1/2},\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big]$$
$$=: J_1(x,y) + J_2(x,y).$$

By Lemma 4.3 one gets $y+S_{\nu_n} \ge y+M_{\nu_n}-a$, $P_x$-a.e., where $a = 2\|P\theta\|_\infty$. By the definition of $\nu_n$, we have, for $n$ sufficiently large, $P_x$-a.e.,
$$(6.16)\qquad y+S_{\nu_n} \ge y+M_{\nu_n}-a \ge 2n^{1/2-\varepsilon}-a \ge n^{1/2-\varepsilon}.$$
On the other hand, obviously, for $1\le k\le n^{1-\varepsilon}$,
$$(6.17)\qquad P_x(\tau_y>n) \le P_{n-k}(x,y) \le P_x\big(\tau_y>n-n^{1-\varepsilon}\big).$$
Using (6.16), the two sided bounds of (6.17) and the claim 1 of Lemma 6.5 with $\theta_n$ replaced by $\theta_n\big(n/(n-n^{1-\varepsilon})\big)^{1/2}$, we deduce that on the event $F = \{y+S_{\nu_n}\le\theta_n n^{1/2},\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\}$, for $n$ sufficiently large, $P_x$-a.e.,
$$(6.18)\qquad P_{n-\nu_n}(X_{\nu_n},\,y+S_{\nu_n}) = \frac{2\,(y+S_{\nu_n})}{\sqrt{2\pi n}\,\sigma}\,\big(1+o(1)\big);$$
in particular, (6.18) implies that, on this event $F$, we have, $P_x$-a.e.,
$$(6.19)\qquad P_{n-\nu_n}(X_{\nu_n},\,y+S_{\nu_n}) \le c\,\frac{y+S_{\nu_n}}{\sqrt n}.$$
Implementing (6.18) in the expression for $J_1$ we obtain
$$(6.20)\qquad J_1(x,y) = \frac{2(1+o(1))}{\sqrt{2\pi n}\,\sigma}\,E_x\big[y+S_{\nu_n};\ y+S_{\nu_n}\le\theta_n n^{1/2},\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big]$$
$$= \frac{2(1+o(1))}{\sqrt{2\pi n}\,\sigma}\,\Big(E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] - J_3(x,y)\Big),$$
where
$$J_3(x,y) = E_x\big[y+S_{\nu_n};\ y+S_{\nu_n}>\theta_n n^{1/2},\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big].$$
Similarly, implementing (6.19) in the expression for $J_2$, we have
$$(6.21)\qquad J_2(x,y) \le \frac{c}{\sqrt n}\,J_3(x,y).$$
From (6.13), (6.14), (6.15), (6.20) and (6.21) we get
$$P_x(\tau_y>n) = \frac{2(1+o(1))}{\sqrt{2\pi n}\,\sigma}\,E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] + O\big(n^{-1/2}\,J_3(x,y)\big) + O\big(e^{-c\,n^{\varepsilon}}\big).$$
The assertion of Theorem 2.3 follows if we show that $E_x[y+S_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}]$ converges to $V(x,y)$ and that $J_3(x,y) = o(1)$ as $n\to\infty$. This is proved in Lemmas 6.6 and 6.7 below. Note that the convergence established in these lemmas is not uniform in $x\in X$, which explains why the convergence in Theorem 2.3 is not uniform.

Lemma 6.6. Let $\varepsilon\in(0,\varepsilon_0)$ and let $(\theta_n)_{n\ge1}$ be a sequence of positive numbers such that $\theta_n\to0$ and $\theta_n n^{\varepsilon/4}\to\infty$ as $n\to\infty$. For any $x\in X$ and $y>0$,
$$(6.22)\qquad \lim_{n\to\infty} E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] = V(x,y).$$

Proof. First we prove (6.22) for the martingale $M_n$. For any $x\in X$ and $y>0$,
$$E_x\big[y+M_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] = E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y>\nu_n\wedge n^{1-\varepsilon},\ \nu_n\le n^{1-\varepsilon}\big]$$
$$(6.23)\qquad = E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y>\nu_n\wedge n^{1-\varepsilon}\big]$$
$$(6.24)\qquad\quad -\ E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y>\nu_n\wedge n^{1-\varepsilon},\ \nu_n>n^{1-\varepsilon}\big].$$
Using Lemma 5.5, the expectation in (6.24) is bounded as follows:
$$(6.25)\qquad E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y>\nu_n\wedge n^{1-\varepsilon},\ \nu_n>n^{1-\varepsilon}\big] = E_x\big[y+M_{[n^{1-\varepsilon}]};\ \tau_y>n^{1-\varepsilon},\ \nu_n>n^{1-\varepsilon}\big] \le c(1+y)\exp\big(-c_\varepsilon n^{\varepsilon}\big).$$
The expectation in (6.23) is decomposed into two terms:
$$(6.26)\qquad E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y>\nu_n\wedge n^{1-\varepsilon}\big] = E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]}\big] - E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y\le\nu_n\wedge n^{1-\varepsilon}\big].$$
Since $(M_n)_{n\ge1}$ is a martingale, $E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]}\big] = y$ and
$$(6.27)\qquad E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y\le\nu_n\wedge n^{1-\varepsilon}\big] = E_x\big[y+M_{\tau_y};\ \tau_y\le\nu_n\wedge n^{1-\varepsilon}\big].$$
By Corollary 5.9, $M_{\tau_y}$ is integrable; consequently
$$\lim_{n\to\infty} E_x\big[y+M_{\tau_y};\ \tau_y\le\nu_n\wedge n^{1-\varepsilon}\big] = E_x\big[y+M_{\tau_y}\big],$$
which, together with (6.27) and (6.26), implies
$$(6.28)\qquad \lim_{n\to\infty} E_x\big[y+M_{\nu_n\wedge[n^{1-\varepsilon}]};\ \tau_y>\nu_n\wedge n^{1-\varepsilon}\big] = y - E_x\big[y+M_{\tau_y}\big] = V(x,y).$$
From (6.25) and (6.28) it follows that
$$(6.29)\qquad \lim_{n\to\infty} E_x\big[y+M_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] = V(x,y).$$
Now we extend (6.29) to $S_n$ using the fact that the difference $R_n = S_n - M_n$ is $P_x$-a.s. bounded. For this we write
$$(6.30)\qquad E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] = E_x\big[y+M_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] + E_x\big[R_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big].$$
Note that $P_x$-a.s. we have $\tau_y<\infty$, $\sup_n|S_n-M_n|\le a = 2\|P\theta\|_\infty$ and $\nu_n\to\infty$, which implies that, as $n\to\infty$,
$$(6.31)\qquad \big|E_x\big[R_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big]\big| \le a\,P_x(\tau_y>\nu_n) \to 0.$$
The assertion of the lemma follows from (6.29), (6.30) and (6.31).

Lemma 6.7. Let $\varepsilon\in(0,\varepsilon_0)$ and let $(\theta_n)_{n\ge1}$ be a sequence of positive numbers such that $\theta_n\to0$ and $\theta_n n^{\varepsilon/4}\to\infty$ as $n\to\infty$. For any $x\in X$ and $y>0$,
$$\lim_{n\to\infty} n^{2\varepsilon}\,E_x\big[y+S_{\nu_n};\ y+S_{\nu_n}>\theta_n n^{1/2},\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] = 0.$$

Proof. Let $y' = y+a$, where $a = 2\|P\theta\|_\infty$. Using the fact that $\tau_y \le T_{y'}$ and that the sequence $\big((y'+M_n)\mathbf 1_{\{T_{y'}>n\}}\big)_{n\ge1}$ is a submartingale, we have
$$E_x\big[y+S_{\nu_n};\ y+S_{\nu_n}>\theta_n n^{1/2},\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big]$$
$$\le E_x\big[y'+M_{\nu_n};\ y'+M_{\nu_n}>\theta_n n^{1/2},\ T_{y'}>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big]$$
(using the submartingale property)
$$\le E_x\big[y'+M_{[n^{1-\varepsilon}]};\ y'+M_{\nu_n}>\theta_n n^{1/2},\ T_{y'}>n^{1-\varepsilon},\ \nu_n\le n^{1-\varepsilon}\big]$$
$$\le E_x\Big[y'+M_{[n^{1-\varepsilon}]};\ \sup_{1\le k\le[n^{1-\varepsilon}]}\big(y'+M_k\big)>\theta_n n^{1/2},\ T_{y'}>n^{1-\varepsilon}\Big].$$
From now on, we use the notation $M_n^* := \sup_{1\le k\le n}|M_k|$; since $\theta_n n^{\varepsilon/4}\to\infty$ as $n\to\infty$ (let us remark that the weaker condition $\theta_n n^{\varepsilon}\to\infty$ would not be sufficient at this step), to finish the proof it is enough to show that, for any $\delta>0$ and $x\in X$,
$$(6.32)\qquad \lim_{n\to\infty} n^{2\varepsilon}\,E_x\big[y+M_n;\ M_n^*>n^{1/2+\delta},\ T_y>n\big] = 0.$$
We have
$$(6.33)\qquad E_x\big[y+M_n;\ M_n^*>n^{1/2+\delta},\ T_y>n\big] \le E_x\big[y+M_n^*;\ M_n^*>n^{1/2+\delta}\big] \le y\,P_x\big(M_n^*>n^{1/2+\delta}\big) + E_x\big[M_n^*;\ M_n^*>n^{1/2+\delta}\big],$$
with
$$(6.34)\qquad E_x\big[M_n^*;\ M_n^*>n^{1/2+\delta}\big] = n^{1/2+\delta}\,P_x\big(M_n^*>n^{1/2+\delta}\big) + \int_{n^{1/2+\delta}}^{\infty} P_x\big(M_n^*>t\big)\,dt.$$
By Doob's maximal inequality for martingales,
$$(6.35)\qquad P_x\big(M_n^*>t\big) \le \frac{1}{t^p}\,E_x\big(M_n^*\big)^p \le c\,\frac{n^{p/2}}{t^p}.$$
Implementing (6.35) in (6.33) and (6.34),
$$E_x\big[y+M_n;\ M_n^*>n^{1/2+\delta},\ T_y>n\big] \le c\,\big(y+n^{1/2+\delta}\big)\,\frac{n^{p/2}}{n^{(1/2+\delta)p}} + c\int_{n^{1/2+\delta}}^{\infty}\frac{n^{p/2}}{t^p}\,dt \le c\,\big(y+n^{1/2+\delta}\big)\,n^{-p\delta} + c\,n^{-p\delta+1/2+\delta}.$$
Since $p$ can be taken arbitrarily large we get (6.32).

The small rate of convergence of order $n^{-2\varepsilon}$ obtained in the previous lemma will be used in the proof of Theorem 2.4 (see the next section).
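The $n^{-1/2}$ decay rate established by Theorem 2.3 can be observed exactly on the simplest degenerate case: a symmetric $\pm1$ walk started at $y$ and killed at $0$, for which the harmonic function is $V(y)=y$, so one expects $P(\tau_y>n)\approx 2y/\sqrt{2\pi n}$ and, in particular, quadrupling $n$ should roughly halve the survival probability. The sketch below is an illustration only (the function name and parameters are hypothetical); it computes the survival probability by exact dynamic programming:

```python
import math

# Illustration of the n^{-1/2} decay in Theorem 2.3 on a symmetric +-1 walk
# started at y and killed at 0 (a degenerate instance of the setting).

def srw_survival(y, n):
    """Exact P(y + S_k > 0 for all k <= n) for a symmetric +-1 walk, by DP."""
    top = y + n
    prob = [0.0] * (top + 2)
    prob[y] = 1.0
    for _ in range(n):
        new = [0.0] * (top + 2)
        for pos in range(1, top + 1):
            p = prob[pos]
            if p:
                new[pos + 1] += 0.5 * p   # step up
                if pos - 1 >= 1:          # mass stepping to 0 is killed
                    new[pos - 1] += 0.5 * p
        prob = new
    return sum(prob)

y = 5
p1, p2 = srw_survival(y, 400), srw_survival(y, 1600)
predicted = 2 * y / math.sqrt(2 * math.pi * 400)  # 2V(y)/sqrt(2*pi*n), V(y)=y
print(p1, predicted, p1 / p2)  # the ratio should be close to 2
```

The dynamic program agrees with the predicted $2y/\sqrt{2\pi n}$ to within a few percent already at $n=400$, and the ratio $P(\tau_y>400)/P(\tau_y>1600)$ is close to $2$, as the $n^{-1/2}$ asymptotic dictates.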

7. Proof of Theorem 2.4

We first state the following lemma.

Lemma 7.1. Let $\varepsilon\in(0,\varepsilon_0)$, $t>0$ and let $(\theta_n)_{n\ge1}$ be a sequence such that $\theta_n\to0$ and $\theta_n n^{\varepsilon/4}\to\infty$ as $n\to\infty$. Then
$$(7.1)\qquad \lim_{n\to\infty}\ \sup\ \bigg|\ P_x\big(\tau_y>n-k,\ y+S_{n-k}\le t\sqrt n\big)\Big/\Big(\frac{2y}{\sqrt{2\pi n}\,\sigma^3}\int_0^t u\,e^{-\frac{u^2}{2\sigma^2}}\,du\Big)\ -\ 1\ \bigg| = 0,$$
where the sup is taken over $x\in X$, $1\le k\le n^{1-\varepsilon}$ and $n^{1/2-\varepsilon}\le y\le\theta_n n^{1/2}$.

Proof. Denote $y^+ = y+n^{1/2-2\varepsilon}$, $y^- = y-n^{1/2-2\varepsilon}$, $t^+ = t+n^{-2\varepsilon}$ and $t^- = t-n^{-2\varepsilon}$. Let $m_n = n-k$, where $1\le k\le n^{1-\varepsilon}$. As in the previous section, set
$$A_n = \Big\{\sup_{0\le t\le1}\big|S_{[nt]}-\sigma B_{nt}\big| \le n^{1/2-2\varepsilon}\Big\}.$$
Since on this set it holds $\{\tau_y>m_n\}\subseteq\{\tau_{y^+}^{bm}>m_n\}$ and $\{y+S_{m_n}\le t\sqrt n\}\subseteq\{y^++\sigma B_{m_n}\le t^+\sqrt n\}$, we obtain
$$(7.2)\qquad P_x\big(\tau_y>m_n,\ y+S_{m_n}\le t\sqrt n\big) \le P_x\big(\tau_y>m_n,\ y+S_{m_n}\le t\sqrt n,\ A_n\big) + P_x(A_n^c) \le \Pr\big(\tau_{y^+}^{bm}>m_n,\ y^++\sigma B_{m_n}\le t^+\sqrt n\big) + P_x(A_n^c).$$
In the same way we get
$$(7.3)\qquad P_x\big(\tau_y>m_n,\ y+S_{m_n}\le t\sqrt n\big) \ge \Pr\big(\tau_{y^-}^{bm}>m_n,\ y^-+\sigma B_{m_n}\le t^-\sqrt n\big) - P_x(A_n^c).$$

Now we deal with the first probability in (7.2). By Lemma 6.1,
$$\Pr\big(\tau_{y^+}^{bm}>m_n,\ y^++\sigma B_{m_n}\le t^+\sqrt n\big) = \frac{1}{\sqrt{2\pi m_n}\,\sigma}\int_0^{t^+\sqrt n}\Big(e^{-\frac{(s-y^+)^2}{2m_n\sigma^2}} - e^{-\frac{(s+y^+)^2}{2m_n\sigma^2}}\Big)\,ds$$
(substituting $s = u\sqrt n$)
$$(7.4)\qquad = \frac{e^{-\frac{(y^+/\sqrt n)^2}{2\sigma^2 m_n/n}}}{\sqrt{2\pi m_n/n}\,\sigma}\int_0^{t^+} e^{-\frac{u^2}{2\sigma^2 m_n/n}}\Big(e^{\frac{u\,y^+/\sqrt n}{\sigma^2 m_n/n}} - e^{-\frac{u\,y^+/\sqrt n}{\sigma^2 m_n/n}}\Big)\,du.$$
Note that, uniformly in $k\le n^{1-\varepsilon}$ and $n^{1/2-\varepsilon}\le y\le\theta_n n^{1/2}$, we have, as $n\to\infty$,
$$(7.5)\qquad m_n/n = (n-k)/n = 1-O(n^{-\varepsilon})$$
and
$$(7.6)\qquad \frac{y^+}{y} = \frac{y+n^{1/2-2\varepsilon}}{y} = 1+O(n^{-\varepsilon}).$$
Therefore, uniformly in $k\le n^{1-\varepsilon}$, $n^{1/2-\varepsilon}\le y\le\theta_n n^{1/2}$ and $u\le t^+$,
$$v_n := \frac{u\,y^+/\sqrt n}{\sigma^2 m_n/n} = O\big(t^+\theta_n\big) = o(1)\quad\text{as } n\to\infty.$$
Since, by Taylor's expansion, $e^{v_n}-e^{-v_n} = 2v_n\,(1+o(1))$ as $v_n\to0$, using again (7.5) and (7.6) we get, uniformly in $k\le n^{1-\varepsilon}$, $n^{1/2-\varepsilon}\le y\le\theta_n n^{1/2}$ and $u\le t^+$,
$$e^{\frac{u\,y^+/\sqrt n}{\sigma^2 m_n/n}} - e^{-\frac{u\,y^+/\sqrt n}{\sigma^2 m_n/n}} = \frac{2u\,y^+/\sqrt n}{\sigma^2 m_n/n}\,\big(1+o(1)\big) = \frac{2u\,y}{\sigma^2\sqrt n}\,\big(1+o(1)\big),$$
as $n\to\infty$. Similarly, uniformly in $k\le n^{1-\varepsilon}$, $n^{1/2-\varepsilon}\le y\le\theta_n n^{1/2}$ and $u\le t^+$,
$$(7.7)\qquad e^{-\frac{(y^+/\sqrt n)^2}{2\sigma^2 m_n/n}} = 1+o(1)\quad\text{and}\quad e^{-\frac{u^2}{2\sigma^2 m_n/n}} = e^{-\frac{u^2}{2\sigma^2}}\,\big(1+o(1)\big).$$
From (7.4)-(7.7), we obtain
$$\Pr\big(\tau_{y^+}^{bm}>m_n,\ y^++\sigma B_{m_n}\le t^+\sqrt n\big) = \frac{2y}{\sqrt{2\pi n}\,\sigma^3}\int_0^{t^+} u\,e^{-\frac{u^2}{2\sigma^2}}\,du\,\big(1+o(1)\big),$$
as $n\to\infty$.

Since $\int_0^{t^+} u\,e^{-\frac{u^2}{2\sigma^2}}\,du = \int_0^{t} u\,e^{-\frac{u^2}{2\sigma^2}}\,du + O(n^{-2\varepsilon})$ and $y\ge n^{1/2-\varepsilon}$, we get
$$(7.8)\qquad \Pr\big(\tau_{y^+}^{bm}>m_n,\ y^++\sigma B_{m_n}\le t^+\sqrt n\big) = \frac{2y}{\sqrt{2\pi n}\,\sigma^3}\Big(\int_0^{t} u\,e^{-\frac{u^2}{2\sigma^2}}\,du + O(n^{-2\varepsilon})\Big)\big(1+o(1)\big) = \frac{2y}{\sqrt{2\pi n}\,\sigma^3}\int_0^{t} u\,e^{-\frac{u^2}{2\sigma^2}}\,du\,\big(1+o(1)\big).$$
Similarly, we obtain
$$(7.9)\qquad \Pr\big(\tau_{y^-}^{bm}>m_n,\ y^-+\sigma B_{m_n}\le t^-\sqrt n\big) = \frac{2y}{\sqrt{2\pi n}\,\sigma^3}\int_0^{t} u\,e^{-\frac{u^2}{2\sigma^2}}\,du\,\big(1+o(1)\big).$$
Taking into account the bound (6.5), we have
$$(7.10)\qquad P_x(A_n^c) = O(n^{-2\varepsilon}).$$
From (7.2), (7.3), (7.8), (7.9) and (7.10) it follows that
$$P_x\big(\tau_y>m_n,\ y+S_{m_n}\le t\sqrt n\big) = \frac{2y}{\sqrt{2\pi n}\,\sigma^3}\int_0^{t} u\,e^{-\frac{u^2}{2\sigma^2}}\,du\,\big(1+o(1)\big) + O(n^{-2\varepsilon}) = \frac{2y}{\sqrt{2\pi n}\,\sigma^3}\int_0^{t} u\,e^{-\frac{u^2}{2\sigma^2}}\,du\,\big(1+o(1)\big),$$
where for the last line we used the fact that, by (6.11), $n^{-2\varepsilon} = o\big(y/n^{1/2}\big)$. This proves (7.1).

We now proceed to prove Theorem 2.4. Let $\varepsilon\in(0,\varepsilon_0)$. Note that
$$(7.11)\qquad \frac{P_x\big(y+S_n\le t\sqrt n,\ \tau_y>n\big)}{P_x(\tau_y>n)} = \frac{P_x\big(y+S_n\le t\sqrt n,\ \tau_y>n,\ y+S_{\nu_n}\le\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big)}{P_x(\tau_y>n)}$$
$$+\ \frac{P_x\big(y+S_n\le t\sqrt n,\ \tau_y>n,\ y+S_{\nu_n}>\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big)}{P_x(\tau_y>n)} + \frac{P_x\big(y+S_n\le t\sqrt n,\ \tau_y>n,\ \nu_n>n^{1-\varepsilon}\big)}{P_x(\tau_y>n)}$$
$$=: T_{n,1}+T_{n,2}+T_{n,3}.$$

The terms $T_{n,2}$ and $T_{n,3}$ in this decomposition are negligible. Let us control first the term $T_{n,2}$; we have $\theta_n\sqrt n \ge n^{1/2-\varepsilon/4}$, so
$$T_{n,2} \le \frac{P_x\big(\tau_y>\nu_n,\ y+S_{\nu_n}>\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big)}{P_x(\tau_y>n)} \le \frac{n^{-1/2+\varepsilon/4}\,E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ y+S_{\nu_n}>\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big]}{P_x(\tau_y>n)}.$$
Taking into account the convergence rate of order $n^{-2\varepsilon}$ in Lemma 6.7 and Theorem 2.3, we get
$$(7.12)\qquad \lim_{n\to\infty} T_{n,2} = 0.$$
By Lemma 5.4 and Theorem 2.3, as $n\to\infty$,
$$(7.13)\qquad T_{n,3} \le \frac{P_x\big(\nu_n>n^{1-\varepsilon}\big)}{P_x(\tau_y>n)} = \frac{O\big(\exp(-c\,n^{\varepsilon})\big)}{P_x(\tau_y>n)} \to 0.$$
Let us now control the term $T_{n,1}$, which gives the main contribution. For the sake of brevity set $H_m(x,y) = P_x\big(\tau_y>m,\ y+S_m\le t\sqrt n\big)$; by the Markov property, one gets
$$R_{n,1} := P_x\big(y+S_n\le t\sqrt n,\ \tau_y>n,\ y+S_{\nu_n}\le\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big) = E_x\big[H_{n-\nu_n}(X_{\nu_n},\,y+S_{\nu_n});\ \tau_y>\nu_n,\ y+S_{\nu_n}\le\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big]$$
$$= \sum_{k=1}^{[n^{1-\varepsilon}]} E_x\big[H_{n-k}(X_k,\,y+S_k);\ \tau_y>k,\ y+S_k\le\theta_n\sqrt n,\ \nu_n=k\big].$$
In order to obtain a first order asymptotic we are going to expand $H_{n-k}(X_k,\,y+S_k)$ using Lemma 7.1 with $n$, $x$, $y$ replaced by $m_n = n-k$, $X_k$ and $y+S_k$ respectively. We verify that on the event $A_k = \{\tau_y>k,\ y+S_k\le\theta_n\sqrt n,\ \nu_n=k\}$ the conditions of Lemma 7.1 are satisfied. For this, note that on the event $A_k$ one has the lower bound $y+S_k \ge y+M_k-a \ge n^{1/2-\varepsilon} \ge m_n^{1/2-\varepsilon}$ and the upper bound $y+S_k \le \theta'_{m_n} m_n^{1/2}$, where $\theta'_{m_n} = \theta_n\,(n/m_n)^{1/2}$ is such that
$$\theta'_{m_n}\,m_n^{\varepsilon/4} = \theta_n\,n^{\varepsilon/4}\,(n/m_n)^{1/2-\varepsilon/4} \to \infty,$$
uniformly in $1\le k\le n^{1-\varepsilon}$. By Lemma 7.1, on the event $A_k$ one gets $P_x$-a.s., as $n\to\infty$,
$$H_{n-k}(X_k,\,y+S_k) = \frac{2\,(y+S_k)}{\sqrt{2\pi n}\,\sigma^3}\int_0^t u\,e^{-u^2/(2\sigma^2)}\,du\,\big(1+o(1)\big),$$
which yields
$$R_{n,1} = \sum_{k=1}^{[n^{1-\varepsilon}]} E_x\Big[\frac{2\,(y+S_k)}{\sqrt{2\pi n}\,\sigma^3}\int_0^t u\,e^{-u^2/(2\sigma^2)}\,du;\ \tau_y>k,\ y+S_k\le\theta_n\sqrt n,\ \nu_n=k\Big]\,\big(1+o(1)\big) = \frac{2\,R_{n,2}}{\sqrt{2\pi n}\,\sigma^3}\int_0^t u\,e^{-u^2/(2\sigma^2)}\,du\,\big(1+o(1)\big),$$
where
$$R_{n,2} = E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ y+S_{\nu_n}\le\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big].$$

By Lemmas 6.6 and 6.7, it follows, as $n\to\infty$,
$$R_{n,2} = E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ \nu_n\le n^{1-\varepsilon}\big] - E_x\big[y+S_{\nu_n};\ \tau_y>\nu_n,\ y+S_{\nu_n}>\theta_n\sqrt n,\ \nu_n\le n^{1-\varepsilon}\big] = V(x,y)\,\big(1+o(1)\big).$$
Implementing the obtained expansion in the expression for $R_{n,1}$, it follows, as $n\to\infty$,
$$R_{n,1} = \frac{2V(x,y)}{\sqrt{2\pi n}\,\sigma^3}\int_0^t u\,e^{-u^2/(2\sigma^2)}\,du\,\big(1+o(1)\big).$$
Using Theorem 2.3, this yields
$$(7.14)\qquad T_{n,1} = \frac{R_{n,1}}{P_x(\tau_y>n)} = \frac{\dfrac{2V(x,y)}{\sqrt{2\pi n}\,\sigma^3}\displaystyle\int_0^t u\,e^{-\frac{u^2}{2\sigma^2}}\,du}{P_x(\tau_y>n)}\,\big(1+o(1)\big) = \frac{1}{\sigma^2}\int_0^t u\,e^{-\frac{u^2}{2\sigma^2}}\,du\,\big(1+o(1)\big),$$
as $n\to\infty$. Combining (7.11), (7.12), (7.13) and (7.14), we get
$$\lim_{n\to\infty} \frac{P_x\big(y+S_n\le t\sqrt n,\ \tau_y>n\big)}{P_x(\tau_y>n)} = \frac{1}{\sigma^2}\int_0^t u\,e^{-\frac{u^2}{2\sigma^2}}\,du = 1 - \exp\Big(-\frac{t^2}{2\sigma^2}\Big),$$
which ends the proof of Theorem 2.4.

8. Appendix

We prove the existence of a measure $\mu$ satisfying conditions P1-P5. Let $\mu$ be a probability measure on $G$ satisfying conditions P1-P4 which admits $\nu$ as invariant measure and whose upper Lyapunov exponent is $\gamma_\mu$. Let $\lambda>1$. Define the measure
$$(8.1)\qquad \mu_\lambda(dg) = \alpha\,\delta_{\lambda I}(dg) + (1-\alpha)\,\mu(dg),$$
where $\alpha\in(0,1)$ is chosen so that $\alpha\log\lambda + (1-\alpha)\gamma_\mu = 0$. Then $\mu_\lambda$ satisfies conditions P1-P3 and $\mu_\lambda * \nu = \nu$, i.e. $\nu$ is a $\mu_\lambda$-invariant measure. Moreover, the upper Lyapunov exponent of $\mu_\lambda$ is $0$, i.e.
$$\int_G\int_{P(V)} \rho(g,v)\,\mu_\lambda(dg)\,\nu(dv) = 0,$$
and
$$\inf_{v\in S^{d-1}} \mu_\lambda\big(g:\ \log|gv| \ge \log\lambda\big) \ge \alpha > 0,$$
which means that conditions P4 and P5 are satisfied.
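The limit law obtained in Theorem 2.4 is the Rayleigh distribution, and the last step of the proof rests on the identity $\frac{1}{\sigma^2}\int_0^t u\,e^{-u^2/(2\sigma^2)}\,du = 1-\exp(-t^2/(2\sigma^2))$. A small quadrature check of this identity (an illustration only; the function name and parameter values are hypothetical):

```python
import math

# Check of the Rayleigh identity closing the proof of Theorem 2.4:
# (1/sigma^2) * int_0^t u*exp(-u^2/(2*sigma^2)) du = 1 - exp(-t^2/(2*sigma^2)).

def rayleigh_cdf_integral(t, sigma, steps=100_000):
    """Evaluate (1/sigma^2) * int_0^t u*exp(-u^2/(2 sigma^2)) du by trapezoids."""
    h = t / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * u * math.exp(-u * u / (2 * sigma * sigma))
    return h * total / (sigma * sigma)

sigma = 1.3
for t in (0.5, 1.0, 2.0, 4.0):
    lhs = rayleigh_cdf_integral(t, sigma)
    rhs = 1.0 - math.exp(-t * t / (2 * sigma * sigma))
    print(t, lhs, rhs)
```

The numerical left-hand side matches the closed form to quadrature accuracy for every $t$, confirming that the limit in Theorem 2.4 is indeed the Rayleigh distribution function with scale $\sigma$.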


More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Combinatorics in Banach space theory Lecture 12

Combinatorics in Banach space theory Lecture 12 Combinatorics in Banach space theory Lecture The next lemma considerably strengthens the assertion of Lemma.6(b). Lemma.9. For every Banach space X and any n N, either all the numbers n b n (X), c n (X)

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Topological properties

Topological properties CHAPTER 4 Topological properties 1. Connectedness Definitions and examples Basic properties Connected components Connected versus path connected, again 2. Compactness Definition and first examples Topological

More information

Practical conditions on Markov chains for weak convergence of tail empirical processes

Practical conditions on Markov chains for weak convergence of tail empirical processes Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

Weak quenched limiting distributions of a one-dimensional random walk in a random environment

Weak quenched limiting distributions of a one-dimensional random walk in a random environment Weak quenched limiting distributions of a one-dimensional random walk in a random environment Jonathon Peterson Cornell University Department of Mathematics Joint work with Gennady Samorodnitsky September

More information

Course Description - Master in of Mathematics Comprehensive exam& Thesis Tracks

Course Description - Master in of Mathematics Comprehensive exam& Thesis Tracks Course Description - Master in of Mathematics Comprehensive exam& Thesis Tracks 1309701 Theory of ordinary differential equations Review of ODEs, existence and uniqueness of solutions for ODEs, existence

More information

An introduction to some aspects of functional analysis

An introduction to some aspects of functional analysis An introduction to some aspects of functional analysis Stephen Semmes Rice University Abstract These informal notes deal with some very basic objects in functional analysis, including norms and seminorms

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Math212a1413 The Lebesgue integral.

Math212a1413 The Lebesgue integral. Math212a1413 The Lebesgue integral. October 28, 2014 Simple functions. In what follows, (X, F, m) is a space with a σ-field of sets, and m a measure on F. The purpose of today s lecture is to develop the

More information

Useful Probability Theorems

Useful Probability Theorems Useful Probability Theorems Shiu-Tang Li Finished: March 23, 2013 Last updated: November 2, 2013 1 Convergence in distribution Theorem 1.1. TFAE: (i) µ n µ, µ n, µ are probability measures. (ii) F n (x)

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

An introduction to Birkhoff normal form

An introduction to Birkhoff normal form An introduction to Birkhoff normal form Dario Bambusi Dipartimento di Matematica, Universitá di Milano via Saldini 50, 0133 Milano (Italy) 19.11.14 1 Introduction The aim of this note is to present an

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

An almost sure invariance principle for additive functionals of Markov chains

An almost sure invariance principle for additive functionals of Markov chains Statistics and Probability Letters 78 2008 854 860 www.elsevier.com/locate/stapro An almost sure invariance principle for additive functionals of Markov chains F. Rassoul-Agha a, T. Seppäläinen b, a Department

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

An invariance result for Hammersley s process with sources and sinks

An invariance result for Hammersley s process with sources and sinks An invariance result for Hammersley s process with sources and sinks Piet Groeneboom Delft University of Technology, Vrije Universiteit, Amsterdam, and University of Washington, Seattle March 31, 26 Abstract

More information

Banach Spaces II: Elementary Banach Space Theory

Banach Spaces II: Elementary Banach Space Theory BS II c Gabriel Nagy Banach Spaces II: Elementary Banach Space Theory Notes from the Functional Analysis Course (Fall 07 - Spring 08) In this section we introduce Banach spaces and examine some of their

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

arxiv: v1 [math.pr] 22 May 2008

arxiv: v1 [math.pr] 22 May 2008 THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered

More information

Exercises Measure Theoretic Probability

Exercises Measure Theoretic Probability Exercises Measure Theoretic Probability 2002-2003 Week 1 1. Prove the folloing statements. (a) The intersection of an arbitrary family of d-systems is again a d- system. (b) The intersection of an arbitrary

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by

L p Functions. Given a measure space (X, µ) and a real number p [1, ), recall that the L p -norm of a measurable function f : X R is defined by L p Functions Given a measure space (, µ) and a real number p [, ), recall that the L p -norm of a measurable function f : R is defined by f p = ( ) /p f p dµ Note that the L p -norm of a function f may

More information

Analysis Comprehensive Exam Questions Fall 2008

Analysis Comprehensive Exam Questions Fall 2008 Analysis Comprehensive xam Questions Fall 28. (a) Let R be measurable with finite Lebesgue measure. Suppose that {f n } n N is a bounded sequence in L 2 () and there exists a function f such that f n (x)

More information

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France

The Hopf argument. Yves Coudene. IRMAR, Université Rennes 1, campus beaulieu, bat Rennes cedex, France The Hopf argument Yves Coudene IRMAR, Université Rennes, campus beaulieu, bat.23 35042 Rennes cedex, France yves.coudene@univ-rennes.fr slightly updated from the published version in Journal of Modern

More information

CHARACTERISTIC POLYNOMIAL PATTERNS IN DIFFERENCE SETS OF MATRICES

CHARACTERISTIC POLYNOMIAL PATTERNS IN DIFFERENCE SETS OF MATRICES CHARACTERISTIC POLYNOMIAL PATTERNS IN DIFFERENCE SETS OF MATRICES MICHAEL BJÖRKLUND AND ALEXANDER FISH Abstract. We show that for every subset E of positive density in the set of integer squarematrices

More information

Advanced Probability Theory (Math541)

Advanced Probability Theory (Math541) Advanced Probability Theory (Math541) Instructor: Kani Chen (Classic)/Modern Probability Theory (1900-1960) Instructor: Kani Chen (HKUST) Advanced Probability Theory (Math541) 1 / 17 Primitive/Classic

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 59 Classical case: n d. Asymptotic assumption: d is fixed and n. Basic tools: LLN and CLT. High-dimensional setting: n d, e.g. n/d

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

A generalization of Strassen s functional LIL

A generalization of Strassen s functional LIL A generalization of Strassen s functional LIL Uwe Einmahl Departement Wiskunde Vrije Universiteit Brussel Pleinlaan 2 B-1050 Brussel, Belgium E-mail: ueinmahl@vub.ac.be Abstract Let X 1, X 2,... be a sequence

More information

Series of Error Terms for Rational Approximations of Irrational Numbers

Series of Error Terms for Rational Approximations of Irrational Numbers 2 3 47 6 23 Journal of Integer Sequences, Vol. 4 20, Article..4 Series of Error Terms for Rational Approximations of Irrational Numbers Carsten Elsner Fachhochschule für die Wirtschaft Hannover Freundallee

More information

CHILE LECTURES: MULTIPLICATIVE ERGODIC THEOREM, FRIENDS AND APPLICATIONS

CHILE LECTURES: MULTIPLICATIVE ERGODIC THEOREM, FRIENDS AND APPLICATIONS CHILE LECTURES: MULTIPLICATIVE ERGODIC THEOREM, FRIENDS AND APPLICATIONS. Motivation Context: (Ω, P) a probability space; σ : Ω Ω is a measure-preserving transformation P(σ A) = P(A) for all measurable

More information

THEOREMS, ETC., FOR MATH 515

THEOREMS, ETC., FOR MATH 515 THEOREMS, ETC., FOR MATH 515 Proposition 1 (=comment on page 17). If A is an algebra, then any finite union or finite intersection of sets in A is also in A. Proposition 2 (=Proposition 1.1). For every

More information

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes

Stochastic Processes II/ Wahrscheinlichkeitstheorie III. Lecture Notes BMS Basic Course Stochastic Processes II/ Wahrscheinlichkeitstheorie III Michael Scheutzow Lecture Notes Technische Universität Berlin Sommersemester 218 preliminary version October 12th 218 Contents

More information

Spectral Gap and Concentration for Some Spherically Symmetric Probability Measures

Spectral Gap and Concentration for Some Spherically Symmetric Probability Measures Spectral Gap and Concentration for Some Spherically Symmetric Probability Measures S.G. Bobkov School of Mathematics, University of Minnesota, 127 Vincent Hall, 26 Church St. S.E., Minneapolis, MN 55455,

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

u xx + u yy = 0. (5.1)

u xx + u yy = 0. (5.1) Chapter 5 Laplace Equation The following equation is called Laplace equation in two independent variables x, y: The non-homogeneous problem u xx + u yy =. (5.1) u xx + u yy = F, (5.) where F is a function

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis. Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar

More information

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM

GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM GAUSSIAN PROCESSES; KOLMOGOROV-CHENTSOV THEOREM STEVEN P. LALLEY 1. GAUSSIAN PROCESSES: DEFINITIONS AND EXAMPLES Definition 1.1. A standard (one-dimensional) Wiener process (also called Brownian motion)

More information

PCMI LECTURE NOTES ON PROPERTY (T ), EXPANDER GRAPHS AND APPROXIMATE GROUPS (PRELIMINARY VERSION)

PCMI LECTURE NOTES ON PROPERTY (T ), EXPANDER GRAPHS AND APPROXIMATE GROUPS (PRELIMINARY VERSION) PCMI LECTURE NOTES ON PROPERTY (T ), EXPANDER GRAPHS AND APPROXIMATE GROUPS (PRELIMINARY VERSION) EMMANUEL BREUILLARD 1. Lecture 1, Spectral gaps for infinite groups and non-amenability The final aim of

More information

Part II Probability and Measure

Part II Probability and Measure Part II Probability and Measure Theorems Based on lectures by J. Miller Notes taken by Dexter Chua Michaelmas 2016 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

The Dirichlet problem for non-divergence parabolic equations with discontinuous in time coefficients in a wedge

The Dirichlet problem for non-divergence parabolic equations with discontinuous in time coefficients in a wedge The Dirichlet problem for non-divergence parabolic equations with discontinuous in time coefficients in a wedge Vladimir Kozlov (Linköping University, Sweden) 2010 joint work with A.Nazarov Lu t u a ij

More information

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy

Notions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy Banach Spaces These notes provide an introduction to Banach spaces, which are complete normed vector spaces. For the purposes of these notes, all vector spaces are assumed to be over the real numbers.

More information