Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and Examples by R. Durrett.


1 Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92

2 Outline 1 Definitions and examples Preliminaries on Markov chains Examples of stationary sequences Notion of ergodicity 2 Ergodic theorem 3 Recurrence Samy T. Ergodic theorems Probability Theory 2 / 92

3 Overview Stationary sequence: such that {X_{n+k} ; n ≥ 0} (d)= {X_n ; n ≥ 0}. Main result: a law of large numbers of the type lim_{n→∞} (1/n) ∑_{k=1}^n f(X_k) exists a.s. Samy T. Ergodic theorems Probability Theory 3 / 92

4 Outline 1 Definitions and examples Preliminaries on Markov chains Examples of stationary sequences Notion of ergodicity 2 Ergodic theorem 3 Recurrence Samy T. Ergodic theorems Probability Theory 4 / 92

5 Outline 1 Definitions and examples Preliminaries on Markov chains Examples of stationary sequences Notion of ergodicity 2 Ergodic theorem 3 Recurrence Samy T. Ergodic theorems Probability Theory 5 / 92

6 Markov chain Definition 1. Let X = {X_n ; n ≥ 0} be a process. 1 X is a Markov chain if P(X_{n+1} = j | X_0 = i_0, ..., X_n = i_n) = P(X_{n+1} = j | X_n = i_n) for all n ≥ 0, i_0, ..., i_n, j ∈ E. 2 The Markov chain is homogeneous whenever P(X_{n+1} = j | X_n = i) = P(X_1 = j | X_0 = i) Samy T. Ergodic theorems Probability Theory 6 / 92

7 Law of the Markov chain Proposition 2. Let X be a homogeneous Markov chain with initial law ν and transition p. 1 For n ∈ N and i_0, ..., i_n ∈ E, we have P(X_0 = i_0, ..., X_n = i_n) = ν(i_0) p(i_0, i_1) ⋯ p(i_{n-1}, i_n) 2 The law of X is characterized by ν and p. Samy T. Ergodic theorems Probability Theory 7 / 92

8 Law of the Markov chain: general state space Proposition 3. Let X be a homogeneous Markov chain on (S, S) with initial law ν and transition p. 1 For n ∈ N and ϕ ∈ C_b(S^{n+1}), we have E[ϕ(X_0, ..., X_n)] = ∫_{S^{n+1}} ϕ(x_0, ..., x_n) ν(dx_0) p(x_0, dx_1) ⋯ p(x_{n-1}, dx_n) (1) 2 The law of X is characterized by ν and p. Samy T. Ergodic theorems Probability Theory 8 / 92

9 A criterion for Markovianity Proposition 4. Let Z : Ω → E a random variable. F a countable set. {Y_n ; n ≥ 1} an i.i.d sequence, independent of Z and with Y_n ∈ F. f : E × F → E. We set X_0 = Z, and X_{n+1} = f(X_n, Y_{n+1}). Then X is a homogeneous Markov chain such that ν_0 = L(Z), and p(i, j) = P(f(i, Y_1) = j). Remark: A converse result exists, but it won't be used. Samy T. Ergodic theorems Probability Theory 9 / 92
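
As a quick illustration of Proposition 4 (this sketch is an addition to the slides, not part of them), the Python snippet below simulates a chain through the recursion X_0 = Z, X_{n+1} = f(X_n, Y_{n+1}); the reflected-walk example at the bottom, with its function f and samplers, is purely hypothetical.

    import random

    def simulate_chain(f, z_sampler, y_sampler, n_steps, seed=None):
        # Simulate X_0 = Z and X_{n+1} = f(X_n, Y_{n+1}) with (Y_n) i.i.d., independent of Z.
        rng = random.Random(seed)
        x = z_sampler(rng)                      # X_0 = Z
        path = [x]
        for _ in range(n_steps):
            y = y_sampler(rng)                  # innovation Y_{n+1}
            x = f(x, y)                         # X_{n+1} = f(X_n, Y_{n+1})
            path.append(x)
        return path

    # Hypothetical example: a random walk on E = {0, ..., 10} reflected at the boundary.
    walk = simulate_chain(lambda x, y: min(max(x + y, 0), 10),
                          z_sampler=lambda rng: rng.randint(0, 10),
                          y_sampler=lambda rng: rng.choice([-1, 1]),
                          n_steps=20, seed=0)
    print(walk)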

10 Graph of a Markov chain Definition 5. Let X a homogeneous Markov chain (initial law ν_0, transition p). We define a graph G(X) by G(X) is an oriented graph Vertices of G(X) are points of E. Edges of G(X) are the pairs {(i, j); i ≠ j, p(i, j) > 0}. Samy T. Ergodic theorems Probability Theory 10 / 92

11 Example Example 6. We consider E = {1, 2, 3, 4, 5} and
p = [ 1/3  0    2/3  0    0
      1/4  1/2  1/4  0    0
      1/2  0    1/2  0    0
      0    0    0    ·    ·
      0    0    0    2/3  1/3 ]   (· : entry not readable in this transcription)
Related graph: to be done in class Samy T. Ergodic theorems Probability Theory 11 / 92

12 Graph and accessibility Proposition 7. Let X a homogeneous Markov chain (initial law ν_0, transition p). Then i → j iff i = j or there exists an oriented path from i to j in G(X). Samy T. Ergodic theorems Probability Theory 12 / 92

13 Minimal class Definition 8. An equivalence class C is minimal if: For all i ∈ C and j ∉ C, we have i ↛ j. Example: For Example 6, C_1, C_3 are minimal, and C_2 is not minimal. Minimality criteria: (i) If there exists a unique class C, it is minimal. (ii) There exists a unique minimal class ⟺ there exists a class C such that for all i ∈ E, we have i → C. Application: Random walk Samy T. Ergodic theorems Probability Theory 13 / 92

14 Recurrence criteria Theorem 9. Let X a homogeneous Markov chain (initial law ν_0, transition p), and C a class for the communication relation. Then 1 If C is not minimal then it is transient 2 If C is minimal and |E| < ∞, then C is recurrent. Moreover C is positive recurrent, i.e.: for all i ∈ C, E_i[R_i] < ∞, where R_i is the return time to i. Samy T. Ergodic theorems Probability Theory 14 / 92

15 Example Recall: In Example 6 we had E = {1, 2, 3, 4, 5} (hence |E| < ∞) and
p = [ 1/3  0    2/3  0    0
      1/4  1/2  1/4  0    0
      1/2  0    1/2  0    0
      0    0    0    ·    ·
      0    0    0    2/3  1/3 ]
Related classes: C_1 = {1, 3}, C_2 = {2} and C_3 = {4, 5}. C_1, C_3 minimal and C_2 not minimal. Conclusion: C_1, C_3 positive recurrent, C_2 transient. Samy T. Ergodic theorems Probability Theory 15 / 92
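
The class structure of Example 6 can be checked mechanically. The Python sketch below (an addition, not from the slides) builds the reachability relation of G(X) and tests which communicating classes are minimal; since not every entry of the slide's matrix is recoverable, the matrix used here is a hypothetical completion with the same class structure C_1 = {1, 3}, C_2 = {2}, C_3 = {4, 5}.

    import numpy as np
    from itertools import product

    # Hypothetical completion of the matrix of Example 6 (same class structure).
    p = np.array([[1/3, 0,   2/3, 0,   0  ],
                  [1/4, 1/2, 1/4, 0,   0  ],
                  [1/2, 0,   1/2, 0,   0  ],
                  [0,   0,   0,   1/2, 1/2],
                  [0,   0,   0,   2/3, 1/3]])

    n = len(p)
    reach = (p > 0) | np.eye(n, dtype=bool)      # i -> j iff i = j or oriented path in G(X)
    for k, i, j in product(range(n), repeat=3):  # transitive closure (Floyd-Warshall style)
        reach[i, j] |= reach[i, k] and reach[k, j]

    classes = {tuple(sorted(int(j) + 1 for j in range(n) if reach[i, j] and reach[j, i]))
               for i in range(n)}
    for c in sorted(classes):
        # A class is minimal iff no state outside the class is reachable from it.
        minimal = all(set(int(j) + 1 for j in range(n) if reach[i - 1, j]) <= set(c) for i in c)
        print(c, "minimal" if minimal else "not minimal")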

16 Outline 1 Definitions and examples Preliminaries on Markov chains Examples of stationary sequences Notion of ergodicity 2 Ergodic theorem 3 Recurrence Samy T. Ergodic theorems Probability Theory 16 / 92

17 Stationary sequence Definition 10. Let {X_n ; n ≥ 0} a sequence of random variables. We say that X is stationary if for all k ≥ 1 we have {X_{n+k} ; n ≥ 0} (d)= {X_n ; n ≥ 0}. Samy T. Ergodic theorems Probability Theory 17 / 92

18 iid random variables Example 11. Let X = {X n ; n 0} sequence of i.i.d random variables Then X is a stationary sequence. Samy T. Ergodic theorems Probability Theory 18 / 92

19 Markov chains Example 12. Let X = {X n ; n 0} Markov chain on state space S Transition probability: p(x, A) Hypothesis 1: unique stationary distribution π Hypothesis 2: L(X 0 ) = π Then X is a stationary sequence. Proof: Easy consequence of relation (1). Samy T. Ergodic theorems Probability Theory 19 / 92

20 Trivial example of Markov chain Example 13. On S = {0, 1} we take p(x, 1−x) = 1, π(0) = 1/2, π(1) = 1/2. Then P(X = (0, 1, 0, 1,...)) = P(X = (1, 0, 1, 0,...)) = 1/2. Samy T. Ergodic theorems Probability Theory 20 / 92
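
A tiny simulation (added as a sketch) of Example 13: started from π = (1/2, 1/2), the chain alternates deterministically, so only the two alternating trajectories occur, each with probability about 1/2, while every marginal X_n remains Bernoulli(1/2).

    import random

    random.seed(1)
    n_paths, length = 10000, 6
    path_counts, ones = {}, [0] * length
    for _ in range(n_paths):
        x = random.randint(0, 1)                 # X_0 ~ pi = (1/2, 1/2)
        path = []
        for n in range(length):
            path.append(x)
            ones[n] += x
            x = 1 - x                            # p(x, 1 - x) = 1
        path = tuple(path)
        path_counts[path] = path_counts.get(path, 0) + 1

    print(path_counts)                           # only (0,1,0,1,...) and (1,0,1,0,...), each ~ 1/2
    print([round(c / n_paths, 3) for c in ones]) # each marginal P(X_n = 1) stays ~ 1/2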

21 Rotation of the circle Example 14. We consider λ the Lebesgue measure and: (Ω, F, P) = ([0, 1], Borel sets, λ) θ ∈ (0, 1) X = {X_n ; n ≥ 0} with X_n(ω) = (ω + nθ) mod 1 = ω + nθ − [ω + nθ]. Then X is a stationary sequence. Samy T. Ergodic theorems Probability Theory 21 / 92

22 Proof Law of X_1: For ϕ ∈ C_b(R) we have
E[ϕ(X_1)] = ∫_0^1 ϕ(x + θ − [x + θ]) dx
= ∫_0^{1−θ} ϕ(x + θ) dx + ∫_{1−θ}^1 ϕ(x + θ − 1) dx
= ∫_0^1 ϕ(v) dv = E[ϕ(X_0)]
Thus L(X_1) = L(X_0). Samy T. Ergodic theorems Probability Theory 22 / 92

23 Proof (2) Alternative expression for X_n: Due to the relation (a + b) mod 1 = [a mod 1 + b mod 1] mod 1, we have X_n = (ω + nθ) mod 1 = (ω + (n−1)θ + θ) mod 1 = [(ω + (n−1)θ) mod 1 + θ mod 1] mod 1 = [X_{n−1} + θ] mod 1 Samy T. Ergodic theorems Probability Theory 23 / 92

24 Proof (3) Recall: We have seen that X_n = [X_{n−1} + θ] mod 1. (2) Conclusion: It is readily checked that 1 From (2) and Proposition 4, X is a Markov chain 2 The relation L(X_1) = L(X_0) means that λ is an invariant distribution Therefore X is stationary thanks to Example 12. Samy T. Ergodic theorems Probability Theory 24 / 92
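
A short numerical sanity check (an addition to the slides): when ω is sampled from λ, the empirical law of X_n = (ω + nθ) mod 1 matches the uniform law of X_0 for any fixed n; the value of θ below is an arbitrary choice.

    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.sqrt(2) - 1                  # an arbitrary theta in (0, 1)
    omega = rng.random(100000)              # omega sampled from lambda on [0, 1]

    for n in (1, 5, 20):
        x_n = (omega + n * theta) % 1.0     # X_n(omega) = (omega + n*theta) mod 1
        # Quartiles of X_n should match those of X_0 ~ U[0, 1], i.e. 0.25, 0.5, 0.75.
        print(n, np.round(np.quantile(x_n, [0.25, 0.5, 0.75]), 3))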

25 Transformation of a stationary sequence Theorem 15. Let X = {X_n ; n ≥ 0} a stationary sequence g : R^N → R measurable We consider a sequence Y defined by Y_k = g({X_{k+n} ; n ≥ 0}). Then Y is a stationary sequence. Samy T. Ergodic theorems Probability Theory 25 / 92

26 σ-algebra on R^N (1) Set R^N: We define R^N = {ω = (ω_j)_{j∈N} ; ω_j ∈ R}. Finite dimensional set: Of the form A = {ω ∈ R^N ; ω_j ∈ B_j, for 1 ≤ j ≤ n}, with B_j ∈ R, where R is the Borel σ-algebra on R. Samy T. Ergodic theorems Probability Theory 26 / 92

27 σ-algebra on R N (2) Definition 16. On R N we set: R N = σ (finite dimensional sets). Then R N is called Borel σ-algebra on R N. Remark: Kolmogorov s extension theorem is valid on (R N, R N ). Samy T. Ergodic theorems Probability Theory 27 / 92

28 Proof of Theorem 15 Notation: For x ∈ R^N and k ≥ 0 we set g_k(x) = g({x_{k+n} ; n ≥ 0}). Inverse image: Let B ∈ R^N. Then {ω ∈ R^N ; Y ∈ B} = {ω ∈ R^N ; X ∈ A}, where A = {x ∈ R^N ; (g_k(x))_{k≥0} ∈ B}. Samy T. Ergodic theorems Probability Theory 28 / 92

29 Proof of Theorem 15 (2) Proof of stationarity: For a generic B ∈ R^N we have P(ω ∈ R^N ; Y ∈ B) = P(ω ∈ R^N ; X ∈ A) = P(ω ∈ R^N ; {X_n ; n ≥ 0} ∈ A) = P(ω ∈ R^N ; {X_{k+n} ; n ≥ 0} ∈ A) = P(ω ∈ R^N ; {Y_{k+n} ; n ≥ 0} ∈ B). This yields stationarity for Y. Samy T. Ergodic theorems Probability Theory 29 / 92

30 Bernoulli shifts Example 17. We consider λ the Lebesgue measure and: (Ω, F, P) = ([0, 1], Borel sets, λ) Y_0 = ω Y = {Y_n ; n ≥ 0} where for n ≥ 1 we have Y_n(ω) = 2Y_{n−1} mod 1. Then Y is a stationary sequence. Samy T. Ergodic theorems Probability Theory 30 / 92

31 Proof by Markov chains Law of Y_1: For ϕ ∈ C_b(R) we have
E[ϕ(Y_1)] = ∫_0^1 ϕ(2y mod 1) dy
= ∫_0^{1/2} ϕ(2y) dy + ∫_{1/2}^1 ϕ(2y − 1) dy
= ∫_0^1 ϕ(v) dv = E[ϕ(Y_0)]
Thus L(Y_1) = L(Y_0). Samy T. Ergodic theorems Probability Theory 31 / 92

32 Proof by Markov chains (2) Proof of stationarity: We have 1 Y_n(ω) = 2Y_{n−1} mod 1 2 Thanks to Proposition 4, Y is thus a Markov chain 3 The relation L(Y_1) = L(Y_0) means that λ is an invariant distribution for Y Therefore Y is a stationary sequence thanks to Example 12. Samy T. Ergodic theorems Probability Theory 32 / 92

33 Proof by Theorem 15 Another representation for Y: Let {X_i ; i ≥ 1} i.i.d with common law B(1/2) g(x) = ∑_{i≥1} x_i 2^{−i}, defined for x ∈ {0, 1}^N g_k(x) = g({x_{k+i} ; i ≥ 1}), defined for x ∈ {0, 1}^N Y_k = g_k(X) Then Y_0 ∼ λ, and Y_n(ω) = 2Y_{n−1} mod 1. Stationarity of Y: We have X stationary Y_k = g_k(X) Therefore Y is a stationary sequence. Samy T. Ergodic theorems Probability Theory 33 / 92
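
The following sketch (added here) illustrates this representation numerically: with i.i.d. B(1/2) bits, the truncated sums Y_k = ∑_{i≥1} X_{k+i} 2^{−i} satisfy the doubling recursion Y_n = 2Y_{n−1} mod 1 up to the truncation error of the finite sum; the truncation depth is an arbitrary choice.

    import numpy as np

    rng = np.random.default_rng(0)
    depth, n_steps = 60, 10                               # truncation depth (arbitrary)
    bits = rng.integers(0, 2, size=n_steps + depth + 1)   # bits[i] plays the role of X_i, B(1/2)

    def y(k):
        # Truncated version of Y_k = sum_{i >= 1} X_{k+i} 2^{-i}
        return sum(bits[k + i] * 2.0 ** (-i) for i in range(1, depth + 1))

    for n in range(1, n_steps):
        # Y_n should equal 2*Y_{n-1} mod 1, up to a truncation error of order 2^{-depth}
        print(n, abs(y(n) - (2 * y(n - 1)) % 1.0) < 1e-10)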

34 Measure preserving map Definition 18. Let (Ω, F, P) a probability space A measurable map ϕ : Ω → Ω We say that ϕ is measure preserving if P(ϕ^{−1}(A)) = P(A), for all A ∈ F. Otherwise stated, ϕ is measure preserving if E[g(X(ϕ(ω)))] = E[g(X(ω))], for all g ∈ C_b. Samy T. Ergodic theorems Probability Theory 34 / 92

35 Measure preserving map and stationarity Example 19. Let (Ω, F, P) a probability space A measure preserving map ϕ : Ω → Ω X an F-measurable random variable For n ≥ 0 we set: X_n = X(ϕ^n(ω)) Then X is a stationary sequence. Samy T. Ergodic theorems Probability Theory 35 / 92

36 Proof Characterization with expected values: For g ∈ C(R^{n+1}) and k ≥ 1 we have: E[g(X_k, ..., X_{k+n})] = E[g(X_0(ϕ^k(ω)), ..., X_n(ϕ^k(ω)))] = E[g(X_0(ω), ..., X_n(ω))] = E[g(X_0, ..., X_n)]. Thus X is a stationary sequence. Samy T. Ergodic theorems Probability Theory 36 / 92

37 Stationarity and measure preserving maps Example 20. Let Y a stationary sequence with values in S = R^n P the probability measure on (S^N, R(S^N)) defined by: if X_n(ω) = ω_n, then L(X) = L(Y) We define a shift operator ϕ by: ϕ({ω_j ; j ≥ 0}) = {ω_{j+1} ; j ≥ 0}. Then ϕ is measure preserving. Interpretation: Previous examples can be reduced to a measure preserving map Samy T. Ergodic theorems Probability Theory 37 / 92

38 Two sided stationary sequence Theorem 21. Let X a stationary sequence Then X can be embedded into a two sided sequence {Y_n ; n ∈ Z}. Samy T. Ergodic theorems Probability Theory 38 / 92

39 Proof Application of Kolmogorov's extension: It is enough to define a family of probability measures 1 On R^A for any finite subset A of Z 2 With the consistency property Current situation: we consider Y defined by E[g(Y_{−m}, ..., Y_n)] = E[g(X_0, ..., X_{m+n})], for all g ∈ C_b(R^{m+n+1}). Kolmogorov's extension applies Samy T. Ergodic theorems Probability Theory 39 / 92

40 Outline 1 Definitions and examples Preliminaries on Markov chains Examples of stationary sequences Notion of ergodicity 2 Ergodic theorem 3 Recurrence Samy T. Ergodic theorems Probability Theory 40 / 92

41 Setup General setting: We are reduced to (Ω, F, P) a probability space A map ϕ preserving P A random variable X A sequence X_n(ω) = X(ϕ^n(ω)) Invariant set: Let A ∈ F. The set A is invariant if P(A Δ ϕ^{−1}(A)) = 0 Notation abuse: For an invariant set A, we often write A = ϕ^{−1}(A) Samy T. Ergodic theorems Probability Theory 41 / 92

42 Ergodicity Definition 22. Let (Ω, F, P) a probability space A map ϕ preserving P I the σ-algebra of invariant events We say that ϕ is ergodic if I is trivial, i.e: A ∈ I ⟹ P(A) ∈ {0, 1}. Samy T. Ergodic theorems Probability Theory 42 / 92

43 Kolmogorov's 0-1 law Theorem 23. Let X = {X_n ; n ≥ 0} a sequence of i.i.d random variables For n ≥ 0, we set F_n = σ({X_k ; k ≥ n}) Tail σ-field: T = ∩_{n≥0} F_n Then T is trivial, i.e: A ∈ T ⟹ P(A) ∈ {0, 1}. Samy T. Ergodic theorems Probability Theory 43 / 92

44 Ergodicity for i.i.d sequences Example 24. Let X = {X n ; n 0} sequence of i.i.d random variables ϕ shift operator on Ω = R N Then ϕ is ergodic. Samy T. Ergodic theorems Probability Theory 44 / 92

45 Proof Measurability of an invariant set: Let A ∈ I. Then A = {ω ∈ Ω; ϕ(ω) ∈ A} ⟹ A ∈ F_1. Tail σ-field: Iterating the previous relation we get A ∈ I ⟹ A ∈ T Hence by Kolmogorov's 0-1 law we get that I is trivial Samy T. Ergodic theorems Probability Theory 45 / 92

46 Ergodicity for Markov chains Example 25. Let X = {X_n ; n ≥ 0} a Markov chain on a countable state space S Transition probability: p(x, A) θ the shift operator on Ω = S^N Hypothesis 1: unique stationary distribution π Hypothesis 2: π(x) > 0 for all x ∈ S Hypothesis 3: L(X_0) = π Then 1 If X is not irreducible, θ is not ergodic 2 If X is irreducible, θ is ergodic Samy T. Ergodic theorems Probability Theory 46 / 92
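
The dichotomy of Example 25 can be seen in simulation. The sketch below (an addition; both transition matrices are hypothetical) compares a reducible chain with two closed classes to an irreducible one, both started from a stationary law with positive weights: for the reducible chain the time averages of f depend on the class of X_0, which is exactly the failure of ergodicity.

    import numpy as np

    rng = np.random.default_rng(0)

    def time_average(p, pi, f, n_steps):
        # Sample X_0 ~ pi, run the chain with kernel p, return (1/n) sum_j f(X_j).
        x = rng.choice(len(pi), p=pi)
        total = 0.0
        for _ in range(n_steps):
            x = rng.choice(len(pi), p=p[x])
            total += f(x)
        return total / n_steps

    # Hypothetical reducible chain: two closed classes {0, 1} and {2, 3}; pi > 0 everywhere.
    p_red = np.array([[0.5, 0.5, 0.0, 0.0],
                      [0.5, 0.5, 0.0, 0.0],
                      [0.0, 0.0, 0.5, 0.5],
                      [0.0, 0.0, 0.5, 0.5]])
    # Hypothetical irreducible chain on the same state space.
    p_irr = np.full((4, 4), 0.25)
    pi = np.array([0.25, 0.25, 0.25, 0.25])      # stationary for both kernels

    f = lambda x: float(x)
    print(sorted(round(time_average(p_red, pi, f, 20000), 2) for _ in range(6)))
    print(sorted(round(time_average(p_irr, pi, f, 20000), 2) for _ in range(6)))
    # Reducible case: averages cluster near 0.5 or 2.5 depending on the class of X_0
    # (non-ergodic shift); irreducible case: every run sits near 1.5 = E_pi[f(X_0)].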

47 Proof Basic Markov chain facts: Since π(x) > 0 for all x ∈ S, all states are recurrent State space decomposition: S = ∪_{j∈J} R_j, where the R_j are disjoint irreducible sets Non irreducible case: If |J| ≥ 2 we have X_0 ∈ R_j ⟹ X_n ∈ R_j for all n ≥ 0. Therefore for all j ∈ J we get: 1_{(X_0 ∈ R_j)} ∘ θ = 1_{(X_1 ∈ R_j)} = 1_{(X_0 ∈ R_j)} Iterating we get {X_0 ∈ R_j} ∈ I, and π(X_0 ∈ R_j) ∈ (0, 1) if |J| ≥ 2, so θ is not ergodic Samy T. Ergodic theorems Probability Theory 47 / 92

48 Proof (2) General relation for 1_A: For A ∈ I, we have 1_A = 1_A ∘ θ^n We set F_n = σ(X_0, ..., X_n) We define h(x) = E_x[1_A] Then E_π[1_A | F_n] = E_π[1_A ∘ θ^n | F_n] = h(X_n), by invariance and the Markov property Levy's 0-1 law: Let F_n ↑ F_∞ and A ∈ F_∞ Then a.s and in L^1(Ω) we have lim_n E[1_A | F_n] = 1_A Samy T. Ergodic theorems Probability Theory 48 / 92

49 Proof (3) Limit of h(x n ): Recall E π [1 A F n ] = h(x n ), and lim n E [1 A F n ] = 1 A Thus lim h(x n) = 1 A. (3) n Irreducible case: If X irreducible and π(y) > 0 for all y S, then h(x n ) = h(y) infinitely often for all y S According to (2) we thus have h = Cst We have thus found that whenever A I, 1 A (ω) = Cst {0, 1} Samy T. Ergodic theorems Probability Theory 49 / 92

50 Ergodicity for rotations of the circle Example 26. We consider λ the Lebesgue measure and: (Ω, F, P) = ([0, 1], Borel sets, λ) θ ∈ (0, 1) X = {X_n ; n ≥ 0} with X_n(ω) = (ω + nθ) mod 1 = ω + nθ − [ω + nθ]. ϕ the shift transformation Then the following holds true: 1 If θ is rational, then ϕ is not ergodic 2 If θ is irrational, then ϕ is ergodic Samy T. Ergodic theorems Probability Theory 50 / 92

51 Transformation of an ergodic sequence Theorem 27. Let X = {X_n ; n ≥ 0} an ergodic sequence g : R^N → R measurable We consider a sequence Y defined by Y_k = g({X_{k+n} ; n ≥ 0}). Then Y is an ergodic sequence. Samy T. Ergodic theorems Probability Theory 51 / 92

52 Proof Inverse image: Let B R N. Recall that { ω R N ; Y B } = { ω R N ; X A }, where A = { ω R N ; g(x) B }. Consequence for ergodicity: If B satisfies { ω R N ; (Y n ) n 0 B } = { ω R N ; (Y 1+n ) n 0 B } Then A satisfies { ω R N ; (X n ) n 0 A } = { ω R N ; (X 1+n ) n 0 A } Samy T. Ergodic theorems Probability Theory 52 / 92

53 Proof (2) Conclusion: Since A satisfies { ω R N ; (X n ) n 0 A } = { ω R N ; (X 1+n ) n 0 A }, we have Therefore: P (X A) {0, 1}. P (Y B) {0, 1}. Samy T. Ergodic theorems Probability Theory 53 / 92

54 Ergodicity for Bernoulli shifts Example 28. We consider λ the Lebesgue measure and: (Ω, F, P) = ([0, 1], Borel sets, λ) Y_0 = ω Y = {Y_n ; n ≥ 0} where for n ≥ 1 we have Y_n(ω) = 2Y_{n−1} mod 1. ϕ the shift transformation Then ϕ is ergodic. Samy T. Ergodic theorems Probability Theory 54 / 92

55 Proof Representation for Y: Recall that we have defined {X_i ; i ≥ 1} i.i.d with common law B(1/2) g(x) = ∑_{i≥1} x_i 2^{−i}, defined for x ∈ {0, 1}^N g_k(x) = g({x_{k+i} ; i ≥ 1}), defined for x ∈ {0, 1}^N Y_k = g_k(X) Then Y_0 ∼ λ, and Y_n(ω) = 2Y_{n−1} mod 1. Ergodicity of Y: We have X ergodic Y_k = g_k(X) Hence owing to Theorem 27, Y is an ergodic sequence. Samy T. Ergodic theorems Probability Theory 55 / 92

56 Outline 1 Definitions and examples Preliminaries on Markov chains Examples of stationary sequences Notion of ergodicity 2 Ergodic theorem 3 Recurrence Samy T. Ergodic theorems Probability Theory 56 / 92

57 Birkhoff's ergodic theorem Theorem 29. Let X ∈ L^1(Ω) ϕ a measure preserving map on Ω I the σ-field of invariant sets Then lim_n (1/n) ∑_{j=0}^{n−1} X(ϕ^j(ω)) = E[X | I], a.s and in L^1(Ω) Samy T. Ergodic theorems Probability Theory 57 / 92

58 Ergodic theorem for ergodic maps Proposition 30. Let X ∈ L^1(Ω) ϕ a measure preserving map on Ω Hypothesis: ϕ is ergodic Then lim_n (1/n) ∑_{j=0}^{n−1} X(ϕ^j(ω)) = E[X], a.s and in L^1(Ω) Samy T. Ergodic theorems Probability Theory 58 / 92
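
A minimal numerical illustration of Proposition 30 (added, not from the slides), using the irrational rotation of the circle as the ergodic map and an arbitrary bounded observable: the Birkhoff average along one orbit approaches the space average E[X].

    import numpy as np

    theta = np.sqrt(2) - 1                       # irrational angle: the rotation is ergodic
    phi = lambda w: (w + theta) % 1.0            # measure preserving map of ([0,1], lambda)
    X = lambda w: np.cos(2 * np.pi * w) ** 2     # a bounded observable with E[X] = 1/2

    w, n, total = 0.123, 10 ** 5, 0.0            # starting point omega (arbitrary)
    for _ in range(n):
        total += X(w)
        w = phi(w)
    print(total / n)                             # Birkhoff average, should be close to 1/2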

59 Maximal ergodic lemma Lemma 31. Let X ∈ L^1(Ω) ϕ a measure preserving map on Ω X_j = X(ϕ^j(ω)) S_n = ∑_{j=0}^{n−1} X_j M_k = max{0, S_1, ..., S_k} Then E[X 1_{(M_k > 0)}] ≥ 0. Samy T. Ergodic theorems Probability Theory 59 / 92

60 Proof Lower bound for X: We will prove that for n = 1, ..., k X(ω) ≥ S_n(ω) − M_k(ϕ(ω)) (4) Relation (4) for n = 1: We have S_1(ω) = X(ω) and M_k(ϕ(ω)) ≥ 0. Thus X(ω) ≥ S_1(ω) − M_k(ϕ(ω)) Relation (4) for n = 2, ..., k: We have M_k(ϕ(ω)) ≥ S_j(ϕ(ω)) for j = 1, ..., k. Thus X(ω) + M_k(ϕ(ω)) ≥ X(ω) + S_j(ϕ(ω)) = S_{j+1}(ω), and X(ω) ≥ S_{j+1}(ω) − M_k(ϕ(ω)) Samy T. Ergodic theorems Probability Theory 60 / 92

61 Proof (2) Consequence of relation (4): X(ω) ≥ max{S_1(ω), ..., S_k(ω)} − M_k(ϕ(ω)) Integration of the previous relation:
E[X 1_{(M_k > 0)}] = ∫_{{M_k > 0}} X dP
≥ ∫_{{M_k > 0}} [max{S_1(ω), ..., S_k(ω)} − M_k(ϕ(ω))] dP
= ∫_{{M_k > 0}} [M_k(ω) − M_k(ϕ(ω))] dP
Samy T. Ergodic theorems Probability Theory 61 / 92

62 Proof (3) Conclusion: We have seen E[X 1_{(M_k > 0)}] ≥ ∫_{{M_k > 0}} [M_k(ω) − M_k(ϕ(ω))] dP In addition on {M_k > 0}^c we have 1 M_k(ω) = 0 2 M_k(ϕ(ω)) ≥ 0 3 Therefore ∫_{{M_k > 0}^c} [M_k(ω) − M_k(ϕ(ω))] dP ≤ 0 We thus get E[X 1_{(M_k > 0)}] ≥ ∫ [M_k(ω) − M_k(ϕ(ω))] dP = 0, since ϕ is measure preserving Samy T. Ergodic theorems Probability Theory 62 / 92

63 Proof of Theorem 29 Reduction to E[X | I] = 0: We set X̂ = X − E[X | I] Recall that E[X | I] is invariant Therefore we have
(1/n) ∑_{j=0}^{n−1} X(ϕ^j(ω)) − E[X | I] = (1/n) ∑_{j=0}^{n−1} (X(ϕ^j(ω)) − E[X | I]) = (1/n) ∑_{j=0}^{n−1} [X − E(X | I)](ϕ^j(ω)) = (1/n) ∑_{j=0}^{n−1} X̂(ϕ^j(ω))
We can thus prove Theorem 29 for X such that E[X | I] = 0 Samy T. Ergodic theorems Probability Theory 63 / 92

64 Proof of Theorem 29 (2) Sufficient condition: We define X̄ = lim sup_n S_n/n D = {ω; X̄(ω) > ε} for ε > 0 We wish to prove that P(D) = 0 Invariance of D: Since X̄ is invariant, we have D ∈ I Samy T. Ergodic theorems Probability Theory 64 / 92

65 Proof of Theorem 29 (3) Notation: We set X*(ω) = (X(ω) − ε) 1_D(ω) X*_j = X*(ϕ^j(ω)) S*_n = ∑_{j=0}^{n−1} X*_j M*_k = max{0, S*_1, ..., S*_k} F_n = {M*_n > 0} (increasing sequence) F = ∪_{n≥0} F_n Relation between F and D: We have F = {there exists n s.t. S_n(ω)/n > ε} ∩ D = D We thus wish to prove that P(D) = P(F) = 0 Samy T. Ergodic theorems Probability Theory 65 / 92

66 Proof of Theorem 29 (4) Proof of E[X* 1_D] ≥ 0: Owing to Lemma 31 we have E[X* 1_{F_n}] ≥ 0 By dominated convergence we get: E[X* 1_D] = E[X* 1_F] ≥ 0 Proof of P(D) = 0: Since D ∈ I and E[X | I] = 0 we get E[X* 1_D] = E[(X − ε) 1_D] = E{E[X | I] 1_D} − ε P(D) = −ε P(D) We have seen that E[X* 1_D] ≥ 0, which yields P(D) = 0 Samy T. Ergodic theorems Probability Theory 66 / 92

67 Proof of Theorem 29 (5) Almost sure limit of S_n/n: For k ≥ 1 we have seen that P(D_k) = P({ω; lim sup_n S_n/n > 1/k}) = 0 Taking limits as k → ∞ we get: P({ω; lim sup_n S_n/n > 0}) = 0 Since the same result is true for the r.v −X we end up with: P({ω; lim_n S_n/n = 0}) = 1 Samy T. Ergodic theorems Probability Theory 67 / 92

68 Proof of Theorem 29 (6) Truncation procedure: For the convergence in L^1(Ω) we set X^1_M = X 1_{(|X| ≤ M)}, and X^2_M = X 1_{(|X| > M)} and A_n = |(1/n) ∑_{m=0}^{n−1} X(ϕ^m(ω)) − E[X | I]| Then for n ≥ 1 we have E[A_n] ≤ E[A^1_n] + E[A^2_n], where for j = 1, 2 we have defined: A^j_n = |(1/n) ∑_{m=0}^{n−1} X^j_M(ϕ^m(ω)) − E[X^j_M | I]| Samy T. Ergodic theorems Probability Theory 68 / 92

69 Proof of Theorem 29 (7) Limit for A^1_n: We have 1 a.s lim_n A^1_n = 0 2 A^1_n ≤ 2M Therefore by dominated convergence we have: lim_n E[A^1_n] = 0 Samy T. Ergodic theorems Probability Theory 69 / 92

70 Proof of Theorem 29 (8) Limit for A^2_n: We have E[A^2_n] ≤ (1/n) ∑_{m=0}^{n−1} E[|X^2_M(ϕ^m(ω))|] + E[E[|X^2_M| | I]] ≤ 2 E[|X^2_M|] In addition, by dominated convergence we have: lim_{M→∞} E[|X^2_M|] = 0 Therefore lim_n E[A^2_n] = 0 Samy T. Ergodic theorems Probability Theory 70 / 92

71 Proof of Theorem 29 (9) Conclusion for the L^1(Ω) convergence: We have seen lim_n E[A^1_n] = 0, and lim_n E[A^2_n] = 0. We thus get L^1(Ω)-lim_n (1/n) ∑_{m=0}^{n−1} X(ϕ^m(ω)) = E[X | I] Samy T. Ergodic theorems Probability Theory 71 / 92

72 LLN for iid random variables Example 32. Let X = {X_n ; n ≥ 0} a sequence of i.i.d random variables Hypothesis: X_0 ∈ L^1(Ω) X̄_n = (1/n) ∑_{j=1}^n X_j Then lim_n X̄_n = E[X_0], a.s and in L^1(Ω) Remark: W.r.t the usual LLN, we have obtained the L^1(Ω) convergence Samy T. Ergodic theorems Probability Theory 72 / 92

73 Proof Previous results: We have seen that 1 X is an ergodic sequence 2 I ⊂ T and T is trivial Conclusion: Theorem 29 applies and can be read as lim_n X̄_n = E[X_0 | I] = E[X_0], a.s and in L^1(Ω) Remark: The L^1(Ω) convergence can also be obtained as follows Result: If P-lim_n Y_n = Y, then L^1(Ω)-lim_n Y_n = Y iff lim_n E[|Y_n|] = E[|Y|] Apply this result successively to Y_n = X̄_n^+ and Y_n = X̄_n^− Samy T. Ergodic theorems Probability Theory 73 / 92

74 LLN for Markov chains Example 33. Let X = {X_n ; n ≥ 0} a Markov chain on a countable state space S Hypothesis 1: unique stat. dist. π and L(X_0) = π Hypothesis 2: π(x) > 0 for all x ∈ S Hypothesis 3: X irreducible Hypothesis 4: f : S → R satisfies f ∈ L^1(π) Then lim_n (1/n) ∑_{j=1}^n f(X_j) = ∑_{x∈S} f(x) π(x), a.s and in L^1(Ω) Samy T. Ergodic theorems Probability Theory 74 / 92
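
A simulation sketch of Example 33 (an addition; the 3-state transition matrix and the function f are hypothetical): the time average (1/n) ∑ f(X_j) is compared with ∑_x f(x) π(x), where π is obtained numerically as the normalized left eigenvector of p for the eigenvalue 1.

    import numpy as np

    rng = np.random.default_rng(0)
    # A hypothetical irreducible chain on S = {0, 1, 2}.
    p = np.array([[0.1, 0.6, 0.3],
                  [0.4, 0.2, 0.4],
                  [0.5, 0.3, 0.2]])

    # Stationary distribution: normalized left eigenvector of p for the eigenvalue 1.
    vals, vecs = np.linalg.eig(p.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()

    f = lambda x: (x + 1) ** 2                   # an arbitrary f, automatically in L^1(pi)
    x, n, total = rng.choice(3, p=pi), 200000, 0.0
    for _ in range(n):
        x = rng.choice(3, p=p[x])
        total += f(x)
    print(total / n, sum(f(s) * pi[s] for s in range(3)))   # the two values should be close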

75 Proof Previous results: We have seen that 1 X is an ergodic sequence 2 Therefore f(X) is an ergodic sequence 3 I is trivial whenever X is irreducible Conclusion: Theorem 29 applies and can be read as lim_n (1/n) ∑_{j=1}^n f(X_j) = E[f(X_0) | I] = E[f(X_0)], a.s and in L^1(Ω) Remark: W.r.t the usual LLN for Markov chains, we have obtained the L^1(Ω) convergence Samy T. Ergodic theorems Probability Theory 75 / 92

76 LLN for rotation of the circle Example 34. We consider λ the Lebesgue measure and: (Ω, F, P) = ([0, 1], Borel sets, λ) θ ∈ (0, 1) ∩ Q^c A a Borel subset of [0, 1] X = {X_n ; n ≥ 0} with X_n(ω) = (ω + nθ) mod 1 = ω + nθ − [ω + nθ]. Then lim_n (1/n) ∑_{j=1}^n 1_A(X_j) = λ(A), a.s and in L^1(Ω) Samy T. Ergodic theorems Probability Theory 76 / 92

77 Deterministic LLN for rotation of the circle Theorem 35. In the context of Example 34, for 0 ≤ a < b < 1 we have for all x ∈ [0, 1) lim_n (1/n) ∑_{j=1}^n 1_{[a,b)}(X_j(x)) = b − a Proof: Start from Example 34 Additional ingredient based on density arguments Samy T. Ergodic theorems Probability Theory 77 / 92

78 Benford's law Proposition 36. For k ∈ {1, ..., 9} we have lim_n (1/n) ∑_{m=1}^n 1_{(1st digit of 2^m = k)} = log_10((k+1)/k) Samy T. Ergodic theorems Probability Theory 78 / 92

79 Proof Notation: We set θ = log_10(2) A_k = [log_10(k), log_10(k + 1)) Expression 1 for log_10(2^m): we have log_10(2^m) = m θ Expression 2 for log_10(2^m): First digit of 2^m is k_0 iff 2^m = k_0 10^α + k_1, with α ≥ 0, k_1 < 10^α and log_10(2^m) = α + log_10(k_0 + k_1/10^α) Samy T. Ergodic theorems Probability Theory 79 / 92

80 Proof (2) Ergodic sequence: We have seen that the first digit of 2^m is k_0 iff m θ = α + log_10(k_0 + k_1/10^α), which is equivalent to m θ mod 1 ∈ A_{k_0} Thus lim_n (1/n) ∑_{m=1}^n 1_{(1st digit of 2^m = k)} = lim_n (1/n) ∑_{m=1}^n 1_{(m θ mod 1 ∈ A_k)} Samy T. Ergodic theorems Probability Theory 80 / 92

81 Proof (3) Conclusion: We have θ = log_10(2) ∈ Q^c We can thus apply Theorem 35 with x = 0 and A = A_k. We get lim_n (1/n) ∑_{m=1}^n 1_{(m θ mod 1 ∈ A_k)} = |A_k| = log_10((k+1)/k) Samy T. Ergodic theorems Probability Theory 81 / 92
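
A quick numerical check of Proposition 36 (added as a sketch), using the reduction of the proof: the first digit of 2^m is read off from mθ mod 1 with θ = log_10(2), and the empirical frequencies are compared with log_10((k+1)/k).

    import math

    theta = math.log10(2)
    N = 100000
    counts = [0] * 10
    for m in range(1, N + 1):
        frac = (m * theta) % 1.0                 # m*theta mod 1, as in the proof
        counts[int(10 ** frac)] += 1             # first digit k0 of 2^m: frac lies in A_{k0}

    for k in range(1, 10):
        print(k, counts[k] / N, round(math.log10((k + 1) / k), 5))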

82 Outline 1 Definitions and examples Preliminaries on Markov chains Examples of stationary sequences Notion of ergodicity 2 Ergodic theorem 3 Recurrence Samy T. Ergodic theorems Probability Theory 82 / 92

83 Number of points visited Theorem 37. Let X a stationary sequence with values in R^d S_n = ∑_{j=0}^{n−1} X_j A = {S_k ≠ 0 for all k ≥ 1} R_n = Card({S_1, ..., S_n}), the number of distinct points among S_1, ..., S_n Then lim_n R_n/n = E[1_A | I], almost surely Samy T. Ergodic theorems Probability Theory 83 / 92
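
An illustration of Theorem 37 (added sketch) for an i.i.d., hence ergodic, sequence of ±1 steps with P(step = +1) = p > 1/2: here I is trivial, so R_n/n should approach P(A) = P(S_k ≠ 0 for all k ≥ 1), which for this biased walk is the classical no-return probability 2p − 1; the value of p is an arbitrary choice.

    import random

    random.seed(0)
    p, n = 0.7, 200000                  # steps +1 with probability p, -1 otherwise
    s, visited = 0, set()
    for _ in range(n):
        s += 1 if random.random() < p else -1
        visited.add(s)
    # R_n / n versus P(A) = P(S_k != 0 for all k >= 1) = 2p - 1 for this biased walk.
    print(len(visited) / n, 2 * p - 1)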

84 Proof Lower bound: We have S_j(ϕ^m(ω)) = S_{j+m}(ω) − S_m(ω) Thus S_j(ϕ^m(ω)) ≠ 0 ⟺ S_{j+m}(ω) ≠ S_m(ω) and 1_A(ϕ^m(ω)) = 1_{{S_{j+m}(ω) ≠ S_m(ω) for all j ≥ 1}} We have thus obtained R_n ≥ ∑_{m=1}^n 1_A(ϕ^m(ω)), and owing to the ergodic theorem we get lim inf_n R_n/n ≥ lim inf_n (1/n) ∑_{m=1}^n 1_A(ϕ^m(ω)) = E[1_A | I] Samy T. Ergodic theorems Probability Theory 84 / 92

85 Proof (2) Upper bound: For k ≥ 1 set A_k = {S_j ≠ 0 for all 1 ≤ j ≤ k}. Then 1_{A_k}(ϕ^m(ω)) = 1_{{S_j(ω) ≠ S_m(ω) for all m + 1 ≤ j ≤ m + k}} and R_n ≤ k + ∑_{m=1}^{n−k} 1_{A_k}(ϕ^m(ω)). Applying the ergodic theorem again we get lim sup_n R_n/n ≤ lim sup_n (1/n) ∑_{m=1}^{n−k} 1_{A_k}(ϕ^m(ω)) = E[1_{A_k} | I] (5) Samy T. Ergodic theorems Probability Theory 85 / 92

86 Proof (3) Limit in k: We have A_k decreasing and ∩_{k=1}^∞ A_k = A. Therefore by monotone convergence we get lim_k E[1_{A_k} | I] = E[1_A | I]. Thanks to (5) we have thus proved that lim sup_n R_n/n ≤ E[1_A | I] Samy T. Ergodic theorems Probability Theory 86 / 92

87 Recurrence criterion in Z Theorem 38. Let X a stationary sequence with values in Z Hypothesis: X_1 ∈ L^1(Ω) S_n = ∑_{j=0}^{n−1} X_j A = {S_k ≠ 0 for all k ≥ 1} The following holds true: 1 If E[X_1 | I] = 0 then P(A) = 0 2 If P(A) = 0 then P(S_n = 0 i.o) = 1 Samy T. Ergodic theorems Probability Theory 87 / 92

88 Proof Standing assumption: E[X_1 | I] = 0, which yields lim_n S_n/n = 0. (6) Relation for max S: For all K ≥ 1 we have lim sup_n max_{1≤k≤n} |S_k|/n = lim sup_n max_{K≤k≤n} |S_k|/n ≤ sup_{k≥K} |S_k|/k Taking K → ∞ and thanks to (6) we get lim_n max_{1≤k≤n} |S_k|/n = 0 (7) Samy T. Ergodic theorems Probability Theory 88 / 92

89 Proof (2) Consequence on R_n: In d = 1 we have R_n = Card({S_1, ..., S_n}) ≤ 2 max_{1≤k≤n} |S_k| + 1. Hence according to (7) we get lim_n R_n/n = 0, and combining Theorem 37 with relation (7): E[1_A | I] = lim_n R_n/n = 0 Conclusion for item 1: We take expectations in the previous relation, which yields P(A) = 0 Samy T. Ergodic theorems Probability Theory 89 / 92

90 Proof (3) Notation: We set F_j = {S_i ≠ 0 for i < j and S_j = 0} G_{j,k} = {S_{j+i} − S_j ≠ 0 for i < k and S_{j+k} − S_j = 0} Simple relations on F_j, G_{j,k}: 1 Since P(A) = 0 we have ∑_{k≥1} P(F_k) = 1 2 By stationarity, P(G_{j,k}) = P(F_k) 3 Hence we also have ∑_{k≥1} P(G_{j,k}) = 1 4 For fixed j, the sets G_{j,k} are disjoint Samy T. Ergodic theorems Probability Theory 90 / 92

91 Proof (4) More relations on F_j, G_{j,k}: We have obtained P(F_j ∩ ∪_{k≥1} G_{j,k}) = P(F_j), and P(∪_{j,k≥1} (F_j ∩ G_{j,k})) = 1 Conclusion on recurrence: We have ∪_{j,k≥1} (F_j ∩ G_{j,k}) = {S_n = 0 at least two times} Therefore P(S_n = 0 at least two times) = 1 Generalization: With sets G_{j_1,...,j_m} we get P(S_n = 0 at least m times) = 1. Taking m → ∞ yields P(S_n = 0 i.o) = 1 Samy T. Ergodic theorems Probability Theory 91 / 92

92 Extension by Kac Theorem 39. Let X a stationary sequence with values in (S, S) A ∈ S and S_n = ∑_{j=0}^{n−1} X_j T_0 = 0 and T_n = inf{m > T_{n−1} ; X_m ∈ A} t_n = T_n − T_{n−1} Hypothesis: P(X_n ∈ A at least once) = 1 Then the following holds true: 1 Under P(· | X_0 ∈ A) the sequence (t_n) is stationary 2 We have E[T_1 | X_0 ∈ A] = 1/P(X_0 ∈ A) Samy T. Ergodic theorems Probability Theory 92 / 92
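
A simulation sketch of Kac's formula (an addition; the doubly stochastic chain and the set A = {0} are hypothetical choices): conditioning on X_0 ∈ A, the empirical mean of T_1 is compared with 1/P(X_0 ∈ A).

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical doubly stochastic (hence pi = uniform) irreducible chain on {0, 1, 2}.
    p = np.array([[0.2, 0.5, 0.3],
                  [0.5, 0.2, 0.3],
                  [0.3, 0.3, 0.4]])
    pi = np.array([1/3, 1/3, 1/3])

    A = {0}
    return_times = []
    for _ in range(20000):
        x = int(rng.choice(3, p=pi))             # L(X_0) = pi
        if x not in A:
            continue                             # keep only the runs with X_0 in A
        t = 0
        while True:                              # T_1 = inf{m > 0 ; X_m in A}
            x = int(rng.choice(3, p=p[x]))
            t += 1
            if x in A:
                break
        return_times.append(t)
    print(np.mean(return_times), 1 / pi[0])      # Kac: E[T_1 | X_0 in A] = 1 / P(X_0 in A) = 3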
