Ergodicity of Stochastic Processes and the Markov Chain Central Limit Theorem


University of Bristol
School of Mathematics

Ergodicity of Stochastic Processes and the Markov Chain Central Limit Theorem

A Project of 30 Credit Points at Level 7
For the Degree of MSci Mathematics

Author: Benjamin A. Robinson
Supervisor: Dr Márton Balázs

May 1, 2015

Acknowledgement of Sources

For all ideas taken from other sources (books, articles, internet), the source of the ideas is mentioned in the main text and fully referenced at the end of the report. All material which is quoted essentially word-for-word from other sources is given in quotation marks and referenced. Pictures and diagrams copied from the internet or other sources are labelled with a reference to the web page, book, article etc.

Signed:
Dated:

Abstract

In this report we investigate how the well-known central limit theorem for i.i.d. random variables can be extended to Markov chains. We present some of the theory on ergodic measures and ergodic stochastic processes, including the ergodic theorems, before applying this theory to prove a central limit theorem for square-integrable ergodic martingale differences and for certain ergodic Markov chains. We also give an alternative proof of a central limit theorem for stationary, irreducible, aperiodic Markov chains on a finite state space. Finally, we outline some of the diverse applications of the Markov chain central limit theorem and discuss extensions of this work.

Contents

Introduction
1 Preliminary Measure Theory and Probability
  1.1 Convergence of Random Variables
    1.1.1 Almost sure convergence
    1.1.2 L^p convergence
    1.1.3 Weak convergence
    1.1.4 Convergence in measure
    1.1.5 The bounded convergence theorem
    1.1.6 Useful inequalities
  1.2 Markov Processes
  1.3 Martingales
2 Ergodicity of Stochastic Processes
  2.1 Measure-Preserving Transformations
  2.2 Ergodic Theorems
  2.3 Ergodicity of Measures
  2.4 Structure of Invariant Measures
  2.5 Stationary Markov Processes
3 Central Limit Theorems
  3.1 The CLT for Martingale Differences
  3.2 The CLT for Markov Chains I
  3.3 The CLT for Markov Chains II
4 Applications of the Markov Chain CLT
  4.1 Simple Random Walks on a Torus
    4.1.1 The simple symmetric random walk
    4.1.2 Asymmetric random walks
  4.2 A Queuing Problem
Conclusion

Introduction

A central limit theorem gives a scaling limit for the sum of a sequence of random variables. This controls the fluctuations of the sequence in the long run. It is well known that there is a central limit theorem for sequences of i.i.d. random variables; the theorem is given, for example, in Chapter III, Section 3 of [11]. This is a very useful result to have, and it is a natural question to ask whether this can be generalised to sequences of random variables which are not i.i.d. In this report we show that we have a central limit theorem for functions of discrete-time Markov chains under certain conditions. This central limit theorem has many applications, some of which we discuss in later chapters.

We now outline the structure of this report. In Chapter 1, we collect preliminary material on measure theory and probability from various sources, which we will refer to later in the report. This includes definitions of modes of convergence of random variables, which we will need to be familiar with in order to understand in what sense the limit theorems in later chapters hold. We then recall elementary properties of Markov chains, which will be useful for us to have in mind when we consider limit theorems for Markov chains. We give rigorous definitions of conditional expectation and martingales, as well as some results on these, which we will use in the proofs of several theorems in later chapters. Here, we also define martingale differences, which are processes related to martingales, as we will study the limiting behaviour of these processes in Chapter 3, in order to prove a central limit theorem for Markov chains.

In Chapter 2, we prove the ergodic theorems from [15], define what it means for a measure or a stochastic process to be ergodic, and prove several results on ergodicity. We will see in Chapter 3 that the condition that a Markov chain is ergodic allows us, under a few additional conditions, to prove a central limit theorem for functions of that Markov chain. The main results of this report are contained in Chapter 3.
Here, we prove a central limit theorem for certain ergodic Markov chains in two ways. First, we prove a central limit theorem for square-integrable ergodic martingale differences and then, following [15], we deduce from this that we have a central limit theorem for functions of ergodic Markov chains, under some conditions. We then restrict ourselves to Markov chains which take values in a finite state space. In this setting, we use a different method, as in [13], to prove a central limit theorem for functions of ergodic Markov chains, where we have to impose fewer conditions than in the case of a general state space. In both cases we derive formulae for the variance of the limiting distribution.

In Chapter 4, we discuss some simple applications of the Markov chain central limit theorem which is proved in Chapter 3. We consider a simple random walk on a torus, started in its stationary distribution, and we show that there is a central limit theorem for the amount of time spent in the initial state. We treat the symmetric and asymmetric cases separately. We also look at an example of a random walk on the non-negative integers from [15], which could model the length of a queue. Here we show, by a similar method, that there is a

central limit theorem for the amount of time during which the queue is empty. We conclude the report by discussing extensions of the theory which we have presented and further applications of this.

Chapter 1
Preliminary Measure Theory and Probability

Here we present some theory on convergence of random variables, on Markov processes and on martingales that we will need in order to approach the topics which we discuss in the rest of this report. We assume knowledge of some basic definitions and properties from measure theory, including product sigma-fields and product measures, as could be found for example in Bartle's text [1]. We also assume knowledge of introductory probability theory, such as the material from Ross's book [10]. In this chapter we state results without proof that we will later apply in proving our main results.

1.1 Convergence of Random Variables

The main results of this report concern the convergence of sequences of random variables. Therefore, we need to recall some definitions and results on convergence which not all readers will be familiar with. There are four types of convergence which we will work with in this report: namely, they are almost sure convergence, weak convergence (convergence in distribution), convergence in L^p and convergence in measure. We will only consider real-valued random variables in this report. We start by recalling the definitions of these modes of convergence, as well as some useful facts, taken from the books of Bartle [1], Grimmett and Stirzaker [6], Ross [10], Shiryaev [11], and Varadhan [15]. Throughout this section, we consider a probability space (Ω, F, P).

1.1.1 Almost sure convergence

Let {X_n} be a sequence of random variables on (Ω, F). We define what it means for X_n to converge to X almost surely, where X is another random variable on (Ω, F), as in Section 7.2 of [6].

Definition (Almost sure convergence). We say that X_n converges to X almost surely if

P( lim_{n→∞} X_n(ω) = X(ω) ) = 1.

We write this as X_n → X a.s.

1.1.2 L^p convergence

For our next mode of convergence, we consider function spaces called L^p spaces. We will give a definition of these spaces as normed spaces and define what it means for random variables to converge in L^p. L^p spaces are discussed in Chapter 6 of [1] in the context of general measure spaces. We reformulate the material here in terms of a probability space (Ω, F, P). Recall that a random variable X is defined to be an F-measurable function and that the expectation of X is defined as the integral of X with respect to P, when this integral exists; i.e. E(X) = ∫_Ω X dP. We start by defining the spaces L^p, for p ∈ [1, ∞) and then for p = ∞.

Definition (L^p spaces). Let p ∈ [1, ∞). Then L^p(Ω, F, P) is the space of equivalence classes [X] := {Y : X = Y P-almost everywhere} of F-measurable functions X such that E(|X|^p) < ∞.

Definition (The space L^∞). We define L^∞(Ω, F, P) to be the space of equivalence classes [X] := {Y : X = Y P-almost everywhere} of F-measurable functions X which are bounded P-almost everywhere; i.e. [X] ∈ L^∞ if and only if there exists M ≥ 0 such that |X| ≤ M P-almost everywhere.

Remark. When it is clear which space, σ-field or probability measure we are working with, we will drop the arguments from L^p, so that we may write L^p ≡ L^p(Ω, F, P) ≡ L^p(Ω) ≡ L^p(P). Although technically the space L^p is a space of equivalence classes of functions, we will in practice say that a function X is an element of L^p if X is F-measurable and E(|X|^p) < ∞. This is a common convention, as remarked after Theorem 6.7 of [1].

For 1 ≤ p ≤ ∞, we can define a norm on the space L^p as follows.

Definition (L^p norm). Suppose that p ∈ [1, ∞). Then we define the L^p norm by

‖X‖_p := [E(|X|^p)]^{1/p},   (1.1.1)

for any X ∈ L^p. We define the L^∞ norm by

‖X‖_∞ := inf{M ≥ 0 : |X| ≤ M P-almost everywhere},   (1.1.2)

for any X ∈ L^∞. These quantities are well-defined as norms on the L^p spaces, as we assert in the following theorem.

Theorem. For p ∈ [1, ∞), the space L^p equipped with the norm ‖·‖_p, as defined in (1.1.1), is a normed space. Also, the space L^∞ equipped with the norm ‖·‖_∞, as defined in (1.1.2), is a normed space.

The proof that L^p is a normed space for p ∈ [1, ∞) relies on Minkowski's inequality, which we will now state.

Lemma (Minkowski's inequality). Suppose that p ∈ [1, ∞) and X, Y ∈ L^p. Then X + Y ∈ L^p and we have the following inequality:

‖X + Y‖_p ≤ ‖X‖_p + ‖Y‖_p.   (1.1.3)

We are now ready to define L^p convergence, as in Chapter 7 of [1].

Definition (L^p convergence). Let p ∈ [1, ∞] and let {X_n} be a sequence of random variables such that X_i ∈ L^p(Ω, F, P) for all i ∈ N. Also let X ∈ L^p(Ω, F, P). Then we say that X_n converges to X in L^p if ‖X_n − X‖_p → 0. We write X_n →^{L^p} X.

1.1.3 Weak convergence

Let us now define weak convergence of the sequence {X_n} as in Section 7.2 of [6]. Let X, X_1, X_2, ... be random variables on (Ω, F).

Definition (Weak convergence). Let F_n be the distribution function of X_n, for each n ∈ N, and F be the distribution function of X. We say that X_n converges weakly to X, or X_n converges to X in distribution, if, for every x at which F(x) is continuous,

F_n(x) → F(x).

We will denote this as X_n →^D X.

We will now see two theorems which give us ways to prove weak convergence. We first state a necessary and sufficient condition for weak convergence, by defining the characteristic function of a random variable, as in Section 5.7 of [6].

Definition. Let X be a random variable. Then the characteristic function of X is the function φ : R → C, defined by

φ(t) := E(e^{itX}).

This function is well-defined for any random variable X. The statement of the continuity lemma is taken from Chapter III, Section 3 of [11].

Theorem (Continuity lemma). Let {X_n} be a sequence of random variables with characteristic functions φ_n. Then we have the following implications.

1. If there exists a random variable X with characteristic function φ such that X_n →^D X, then φ_n(t) → φ(t) for all t ∈ R.

2. If φ(t) := lim_{n→∞} φ_n(t) exists for all t ∈ R and φ(t) is continuous at t = 0, then there exists a random variable X with characteristic function φ such that X_n →^D X.

The continuity lemma is very important in probability theory. For example, it is used by Shiryaev in [11] to prove the central limit theorem for sequences of i.i.d. random variables. We will use it to prove a central limit theorem for martingale differences in Section 3.1.

We now state a theorem which gives another criterion for weak convergence for certain sequences of random variables. We will first recall the definition of the moment generating function, from Chapter 7, Section 7 of [10].

Definition. For a random variable X, the moment generating function of X is the function M : R → R ∪ {+∞}, defined by

M(t) = E(e^{tX}).

Note that this function may be infinite. We now state the theorem, which is found in Chapter 8, Section 3 of [10].

Theorem. Let {X_n} be a sequence of random variables with moment generating functions M_n and let X be a random variable with moment generating function M. Suppose that M_n(t) → M(t) for all t ∈ R. Then X_n →^D X.

This theorem is less commonly used than the continuity lemma for characteristic functions, but we will use this form in our second proof of a Markov chain central limit theorem in Section 3.3, following [13].
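As a small numerical illustration of the moment-generating-function criterion (a sketch added here, not part of the report): for i.i.d. steps X_i = ±1 with probability 1/2 each, the MGF of S_n/√n is M_n(t) = cosh(t/√n)^n, which can be computed exactly and compared with the standard normal MGF, M(t) = e^{t²/2}.

```python
import math

def mgf_standardized_sum(t: float, n: int) -> float:
    # X_i = +/-1 with probability 1/2 each, so E(exp(t X_1 / sqrt(n))) = cosh(t / sqrt(n)).
    # Independence gives M_n(t) = cosh(t / sqrt(n))**n for S_n / sqrt(n).
    return math.cosh(t / math.sqrt(n)) ** n

# The limiting MGF is that of a standard normal, M(t) = exp(t^2 / 2),
# consistent with the classical CLT for this sequence.
for t in (0.5, 1.0, 2.0):
    print(t, mgf_standardized_sum(t, 10**6), math.exp(t * t / 2))
```

The pointwise convergence visible here is exactly the hypothesis of the theorem above; the weak limit it yields is the standard normal distribution.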

1.1.4 Convergence in measure

The final mode of convergence which we define is convergence in measure, as in Section 1.3 of [15].

Definition. Let {X_n} be a sequence of random variables on (Ω, F) and X another random variable on (Ω, F). We say that X_n converges to X in measure, or in probability, if, for any ε > 0,

P(ω : |X_n(ω) − X(ω)| ≥ ε) → 0.

We will see how this mode of convergence is of use to us in Section 2.2. It will also be useful to note the following implication.

Theorem. Let p ∈ [1, ∞] and let {X_n} be a sequence of random variables with X_n ∈ L^p(Ω, F, P) for each n ∈ N. Suppose that X ∈ L^p(Ω, F, P) is such that X_n →^{L^p} X or X_n → X a.s. Then X_n converges to X in measure.

The above theorem comes from Chapter 7 of [1], where Bartle discusses all of the relations between the modes of convergence which we have defined here. We should note here that the above theorem is false if we are working in a general measure space. For almost sure convergence to imply convergence in measure, we need the condition that the total measure of the space is finite. Of course, this is not a problem for us here, as a probability space has total measure 1.

1.1.5 The bounded convergence theorem

Let X, X_1, X_2, ... be random variables on (Ω, F) and suppose that X_n → X in some sense. We are interested in whether we can interchange the order in which we take the limit and the expectation of these random variables. There are a few results from measure theory which give sufficient conditions for this to be allowed (see, for example, [1] and [11]), but we are only going to use two of these in our work. These theorems both concern random variables which are bounded. The first theorem which we state is a special case of Lebesgue's dominated convergence theorem, which can be found in Chapter II, Section 6 of [11].

Theorem (Bounded convergence theorem). Suppose that there exists a constant C such that |X_n| ≤ C, for all n ∈ N, and suppose that X_n → X almost surely. Then X, X_1, X_2, ... ∈ L^1, E(X_n) → E(X) and E(|X_n − X|) → 0.

We can relax the condition that {X_n} converges almost surely to the weaker condition of convergence in measure and find that Lebesgue's dominated convergence theorem still holds, as shown in Section 1.3 of [15]. Hence we have a bounded convergence theorem for a sequence which converges in measure, as follows.

Theorem. Suppose that there exists a constant C such that |X_n| ≤ C for all n ∈ N and suppose that X_n → X in measure. Then X, X_1, X_2, ... ∈ L^1, E(X_n) → E(X) and E(|X_n − X|) → 0.

1.1.6 Useful inequalities

Proving convergence will involve approximating certain quantities. The inequalities which we list here are very commonly used in probability and will all be useful for us at some stage in this report. The first inequality which we state is due to Chebyshev and is found in many textbooks, including in Chapter II, Section 6 of [11].

Theorem (Chebyshev's inequality). Let X be a random variable and let k ∈ (0, ∞). Define µ = E(X) and σ² = Var(X). Then

P(|X − µ| ≥ k) ≤ σ²/k².   (1.1.4)

We now state an equally common inequality, known as the Cauchy-Schwarz inequality, from [11].

Theorem (Cauchy-Schwarz inequality). Let X, Y ∈ L². Then XY ∈ L^1 and

[E(|XY|)]² ≤ E(X²)E(Y²).   (1.1.5)

Finally, we state and prove an immediate corollary of the Cauchy-Schwarz inequality.

Corollary. Let X, Y ∈ L². Then

[Cov(X, Y)]² ≤ Var(X) Var(Y).   (1.1.6)

Proof. Set µ := E(X) and ν := E(Y). Then

[Cov(X, Y)]² = [E[(X − µ)(Y − ν)]]² ≤ E([X − µ]²) E([Y − ν]²) = Var(X) Var(Y).
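Chebyshev's inequality (1.1.4) can be checked exactly on a small discrete distribution; the sketch below (with made-up values and probabilities, not from the report) computes both the tail probability and the bound σ²/k².

```python
# A small discrete distribution given as {value: probability}; the numbers are
# arbitrary, chosen only to illustrate the inequality.
dist = {0: 0.2, 1: 0.3, 2: 0.1, 5: 0.4}

mu = sum(x * p for x, p in dist.items())                 # mean E(X)
var = sum((x - mu) ** 2 * p for x, p in dist.items())    # variance Var(X)

def tail(k: float) -> float:
    # P(|X - mu| >= k), computed exactly from the distribution
    return sum(p for x, p in dist.items() if abs(x - mu) >= k)

for k in (0.5, 1.0, 2.0, 3.0):
    # Chebyshev: P(|X - mu| >= k) <= var / k^2
    print(k, tail(k), var / k**2)
```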

1.2 Markov Processes

Since a large proportion of this report will be spent on proving central limit theorems for Markov chains, we now take some time to review some basic Markov chain theory. In this section we recall the definition of a Markov chain, as well as some related definitions and properties, adapted from the material in Chapter 6 of [6]. Let (Ω, F, P) be a probability space. We recall that a stochastic process is defined to be a sequence of random variables on (Ω, F). Let (S, B) be a measurable space in which the random variables can take values and suppose that S is countable. We call S the state space of the process.

Definition. A stochastic process {X_n} is called a Markov process or a Markov chain if it satisfies the Markov condition:

P(X_n = s | X_0 = x_0, X_1 = x_1, ..., X_{n−1} = x_{n−1}) = P(X_n = s | X_{n−1} = x_{n−1}),

for all n ∈ N and s, x_0, x_1, ..., x_{n−1} ∈ S. We are only interested in Markov chains such that

P(X_{n+1} = j | X_n = i) = P(X_1 = j | X_0 = i) =: π_{i,j},   (1.2.1)

for all n = 0, 1, 2, ... and i, j ∈ S.

Definition (Transition probabilities). The transition matrix of a Markov chain which satisfies (1.2.1) is defined to be the matrix Π with entries π_{i,j}. For any m, n ∈ N, we define the n-step transition probabilities to be

π_{i,j}(m, m + n) = P(X_{m+n} = j | X_m = i).

These probabilities are independent of m, so we write π_{i,j}(m, m + n) =: π_{i,j}(n). We define Π(n) to be the matrix with entries π_{i,j}(n).

Lemma. For any n ∈ N, we have Π(n) = Π^n.

We next define a stationary distribution of a Markov chain. This will be a very important definition for us later in the report, particularly when we study ergodicity of Markov chains in Section 2.5 and when we apply our theory to examples in Chapter 4.

Definition. A probability measure µ on (S, B) is said to be a stationary distribution for a Markov chain with transition matrix Π if µ = µΠ.

The following two properties of a Markov chain are common, and the study of Markov chains which have these properties is simpler than that of general Markov chains.
Definition. We say that a Markov chain with transition matrix Π is irreducible if, for each i, j ∈ S, there exists m ∈ N such that π_{i,j}(m) > 0.
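The definitions above can be made concrete on a small example. The following sketch (with a hypothetical two-state transition matrix, not taken from the report) iterates µ ↦ µΠ; for this irreducible aperiodic chain the iterates approach the unique stationary distribution, which here solves µ = µΠ exactly.

```python
# Two-state chain with made-up transition probabilities; rows sum to 1.
Pi = [[0.9, 0.1],
      [0.4, 0.6]]

def step(mu, Pi):
    # One application of mu -> mu * Pi (row vector times matrix)
    n = len(Pi)
    return [sum(mu[i] * Pi[i][j] for i in range(n)) for j in range(n)]

# Iterate mu_{k+1} = mu_k * Pi from a point mass; for this chain the
# iterates converge to the stationary distribution (0.8, 0.2), which
# satisfies mu = mu * Pi (balance: 0.8 * 0.1 = 0.2 * 0.4).
mu = [1.0, 0.0]
for _ in range(200):
    mu = step(mu, Pi)
print(mu)
```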

The period d(i) of a state i ∈ S is defined to be the highest common factor of the set {n : π_{i,i}(n) > 0}. In an irreducible chain, d(i) = d(j) for all i, j ∈ S.

Definition. We say that an irreducible chain is aperiodic if d(i) = 1 for all i ∈ S.

Finally, we state a result about Markov chains on finite state spaces, which we will use in Section 2.5 to say something about ergodicity of these chains.

Theorem. Let S be a finite set and suppose that {X_n} is an irreducible aperiodic Markov chain with state space S. Then the Markov chain {X_n} has a unique stationary distribution.

1.3 Martingales

Martingales are another essential tool in our work. For example, we prove a central limit theorem for Markov chains in Section 3.2 via the central limit theorem for martingale differences (Theorem 3.1.1), which are processes related to martingales and are defined below. Also, we will repeatedly use properties of martingales in our second proof of a central limit theorem for Markov chains in Section 3.3. We are now going to give a formal definition of a martingale and some basic properties which we will need, without proof, from Williams' book [16]. To define martingales, we need a rigorous definition of conditional expectation, as given in Chapter 9 of [16]. Let (Ω, F, P) be a probability space. Let X ∈ L^1 and let G ⊆ F be a sub-σ-field.

Definition (Conditional expectation). A version of the conditional expectation of X given G is defined to be a random variable Y such that

1. Y is G-measurable,
2. Y ∈ L^1,
3. for every G ∈ G, E(1_G X) = E(1_G Y).

It was proved by Kolmogorov that such a random variable exists and is almost surely unique.

Theorem. There exists a random variable Y which satisfies the three conditions in the above definition. Moreover, if Y_1, Y_2 both satisfy these conditions, then Y_1 = Y_2 almost surely.

Because of the above theorem, we refer to a random variable Y as in the above definition as the conditional expectation of X given G. We write Y = E(X | G). The conditional expectation has several nice properties which will be useful for

us. We take these from Chapter 9 of [16]. Let X ∈ L^1 and let G and H be sub-σ-algebras of F. Then we have the following properties.

Theorem (Properties of conditional expectation).

1. E(E[X | G]) = E(X).
2. If X is G-measurable, then E(X | G) = X a.s.
3. (Tower property) If H ⊆ G, then

   E(E[X | G] | H) = E(X | H) a.s.   (1.3.1)

4. If Z is a bounded G-measurable random variable, then E(ZX | G) = Z E(X | G) a.s.
5. If H is independent of σ(σ(X), G), then E(X | σ(G, H)) = E(X | G) a.s.

We will now give the definition of what it means for a stochastic process to be a martingale with respect to some filtration, as in Chapter 10 of [16].

Definition (Filtration). A filtration is a family of σ-algebras {F_n : n = 0, 1, ...} such that F_0 ⊆ F_1 ⊆ ··· ⊆ F.

Definition. We say that a stochastic process {X_n} is adapted to the filtration (F_n) if X_n is F_n-measurable for all n = 0, 1, 2, ....

Definition (Martingale). A stochastic process {X_n} is a martingale with respect to (F_n) if

1. {X_n} is adapted to (F_n),
2. X_n ∈ L^1 for each n = 0, 1, 2, ..., and
3. E(X_n | F_{n−1}) = X_{n−1} a.s., for all n ≥ 1.

When it is clear which filtration we are working with, we will just say that {X_n} is a martingale. An easy consequence of the definition of a martingale is the following lemma.

Lemma. Let {X_n} be a martingale with respect to (F_n). Then

E(X_n) = E(X_0)

for all n ∈ N.

We now see that the above result can, under some conditions, be extended to the case where n is a random time.

Definition (Stopping time). We say that a non-negative integer-valued random variable T is a stopping time with respect to a filtration (F_n) if {T ≤ n} ∈ F_n for every n = 0, 1, 2, ....

The following theorem is a special case of the optional stopping theorem which is given in Chapter 10 of [16].

Theorem (Optional stopping theorem). Let (F_n) be a filtration, T a stopping time and {X_n} a martingale. Then we have that X_T ∈ L^1 and

E(X_T) = E(X_0),

if one of the following conditions holds.

1. T is bounded;
2. {X_n} is bounded and T is almost surely finite;
3. E(T) < ∞ and there exists K ≥ 0 such that

   |X_n(ω) − X_{n−1}(ω)| ≤ K

   for every n ∈ N and ω ∈ Ω.

We will apply the optional stopping theorem several times in our second proof of a central limit theorem for Markov chains in Section 3.3. Before closing this preliminary chapter, we give one more definition which will be of use to us later in this report. This comes from Section 5.1 of [15].

Definition (Martingale difference). Let {X_n} be a martingale with respect to a filtration (F_n). Define Y_{n+1} := X_{n+1} − X_n for all n ∈ N. Then we say that the process {Y_n} is a martingale difference.

Lemma. Let (F_n) be a filtration. A stochastic process {Y_n} is a martingale difference with respect to (F_n) if and only if {Y_n} satisfies the first two conditions in the definition of a martingale and, for all n ∈ N,

E(Y_{n+1} | F_n) = 0 a.s.   (1.3.2)

In Section 3.1, we prove a central limit theorem for martingale differences and then go on to deduce a central limit theorem for Markov chains in the following section.
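A simple Monte Carlo sketch of the optional stopping theorem (an illustration added here, not from the report): a simple symmetric random walk started at 0 is a martingale, and if T is the first hitting time of {−b, a}, then T is almost surely finite and the stopped walk is bounded, so condition 2 applies and E(X_T) = E(X_0) = 0. The barriers a, b and trial count below are arbitrary choices.

```python
import random

random.seed(0)  # fixed seed so the experiment is reproducible

def stopped_value(a: int = 3, b: int = 3) -> int:
    # Simple symmetric random walk X_n from 0, stopped on hitting +a or -b.
    # Before T the walk is bounded in (-b, a), and T is a.s. finite,
    # so condition 2 of the optional stopping theorem applies.
    x = 0
    while -b < x < a:
        x += random.choice((-1, 1))
    return x

n_trials = 20000
sample_mean = sum(stopped_value() for _ in range(n_trials)) / n_trials
# Optional stopping gives E(X_T) = E(X_0) = 0; the sample mean should be near 0.
print(sample_mean)
```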

Chapter 2
Ergodicity of Stochastic Processes

In this chapter we prove the ergodic theorems, which concern convergence of random variables. These theorems will play a key role in proving the central limit theorems in the next chapter. We will define what it means for a probability measure or a stochastic process to be ergodic, and we will see that ergodicity is a sufficient condition for the limit in the ergodic theorems to be a constant.

2.1 Measure-Preserving Transformations

In this section we relate stationary stochastic processes to measure-preserving transformations, following Varadhan in [15]. A stationary stochastic process is a sequence of random variables {X_n}_{n∈N} such that the joint distribution of (X_{n_1}, X_{n_2}, ..., X_{n_k}) is the same as the joint distribution of (X_{n_1+m}, X_{n_2+m}, ..., X_{n_k+m}) for any n_1, n_2, ..., n_k, m ∈ Z. Let Ω be the space of sequences which take values in some measurable space (X, B) and let F be the product σ-field. Under certain consistency conditions, we can construct a measure P on (Ω, F) which describes the evolution of the process {X_n}_{n∈N} over time. We can define the shift T on Ω by (Tω)(n) = ξ_{n+1}, where ω(n) = ξ_n. The stationarity of the process is equivalent to invariance of P with respect to T; i.e. P T^{−1} = P. In this case, we call T a measure-preserving transformation for P. We also say that P is an invariant measure for T. We will study general measure-preserving transformations and will later apply our results to stationary stochastic processes. Let (Ω, F, P) be a probability space and T a measure-preserving transformation for P. We will prove some general facts about these transformations, which are stated in Section 6.1 of [15], by relating T to a linear transformation on the space of functions on Ω, as described by the following lemma.

Lemma 2.1.1. The measure-preserving transformation T : Ω → Ω induces a linear transformation U on the space of functions on Ω by

(Uf)(ω) = f(Tω).

Proof. Clearly

U(αf + βg)(ω) = (αf + βg)(Tω) = αf(Tω) + βg(Tω) = α(Uf)(ω) + β(Ug)(ω).

We will now study the transformation U. First we show that U is an isometry on the L^p spaces for 1 ≤ p ≤ ∞. By definition (see [9]), U is an isometry on a normed space if it preserves the norm.

Lemma 2.1.2. U is an isometry on L^p for 1 ≤ p < ∞.

Proof. We just use the definition of U and invariance of T with respect to P. Let p ∈ [1, ∞) and f ∈ L^p. Then

∫_Ω |f(ω)|^p dP(ω) = ∫_Ω |f(Tω)|^p dP(Tω) = ∫_Ω |f(Tω)|^p dP(ω) = ∫_Ω |(Uf)(ω)|^p dP(ω).

Remark 2.1.1. U is also an isometry on L^∞.

Proof. For any ω_0 ∈ Ω,

|Uf(ω_0)| = |f(Tω_0)| ≤ sup_{ω∈Ω} |f(ω)| = ‖f‖_∞.

So Uf ∈ L^∞ and ‖Uf‖_∞ ≤ ‖f‖_∞. But there exists ω_1 ∈ Ω such that Tω_1 = ω_1, and so Uf(ω_1) = f(ω_1). Thus ‖Uf‖_∞ = ‖f‖_∞.

Next we show that U is invertible and we find its inverse.

Lemma 2.1.3. U is invertible and the inverse of U is the transformation induced by T^{−1}, the inverse of T.

Proof. Define U^{−1} by

(U^{−1}f)(ω) = f(T^{−1}ω) for any function f on Ω.

Then

(UU^{−1}f)(ω) = (U^{−1}f)(Tω) = f(T^{−1}Tω) = f(ω).

So U^{−1} is the inverse of U.

If we consider U as a transformation on L² and define the usual inner product on L² by ⟨f, g⟩ = ∫_Ω f(ω)g(ω) dP(ω), for any f, g ∈ L², then we can show that U is unitary. To define what it means for a transformation to be unitary, we introduce the Hilbert adjoint operator for a transformation A, which we denote A*. These definitions are taken from Sections 3.9 and 3.10 of [9]. A* is defined to be the transformation on L² such that, for any f, g ∈ L²,

⟨Af, g⟩ = ⟨f, A*g⟩.

Then A is unitary if A* = A^{−1}.

Remark 2.1.2. An equivalent condition for A to be unitary is that, for any f, g ∈ L²,

⟨Af, Ag⟩ = ⟨f, g⟩.

For a simple proof of this, see Section 3.10 of [9]. We now use the above remark to prove that U is unitary. This property will be useful for us in the proof of the L² ergodic theorem.

Lemma 2.1.4. U is unitary in L², with inner product defined by ⟨f, g⟩ = ∫_Ω f(ω)g(ω) dP(ω); i.e. U preserves this inner product.

Proof. Using the definition of U and invariance of T with respect to P, we get

⟨Uf, Ug⟩ = ∫ f(Tω)g(Tω) dP(ω) = ∫ f(Tω)g(Tω) dP(Tω) = ∫ f(ω)g(ω) dP(ω) = ⟨f, g⟩.

Before moving on to prove the ergodic theorems, we note two more properties of U which will be of use to us.

Remark 2.1.3. U1 = 1, where 1(ω) ≡ 1 for all ω ∈ Ω, since (U1)(ω) = 1(Tω) = 1.

Remark 2.1.4. U(fg) = U(f)U(g), for any functions f, g on Ω, since

U(fg)(ω) = (fg)(Tω) = f(Tω)g(Tω) = (Uf)(ω)(Ug)(ω).
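A concrete measure-preserving transformation may help before the ergodic theorems of the next section. The sketch below (an illustration added here, not from the report) uses the rotation T(x) = x + α mod 1 with irrational α, which preserves Lebesgue measure on [0, 1); the time averages (f(x) + f(Tx) + ··· + f(T^{n−1}x))/n are exactly the averages studied in the next section, and for this transformation they approach the space average ∫₀¹ f dx.

```python
import math

alpha = math.sqrt(2) - 1  # irrational, so the rotation below is ergodic

def T(x: float) -> float:
    # Rotation of the unit interval (circle); preserves Lebesgue measure
    return (x + alpha) % 1.0

def f(x: float) -> float:
    return math.cos(2 * math.pi * x)  # space average over [0,1) is 0

# Birkhoff average (1/n) * sum_{k<n} f(T^k x) from an arbitrary start x
x, total, n = 0.3, 0.0, 100000
for _ in range(n):
    total += f(x)
    x = T(x)
avg = total / n
print(avg)  # close to the space average 0
```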

2.2 Ergodic Theorems

Let (Ω, F, P) be a probability space and T a measure-preserving transformation for P. Define the invariant σ-field by I = {A ∈ F : T^{−1}A = A}. The following theorems from [15] are key results in the proof of the central limit theorems of the next chapter. The first of these is alternately called the Individual Ergodic Theorem or Birkhoff's Theorem.

Theorem 2.2.1 (The Individual Ergodic Theorem). For any f ∈ L^1(P), the limit

lim_{n→∞} (f(ω) + f(Tω) + ··· + f(T^{n−1}ω))/n = g(ω)

exists for P-almost all ω. Moreover, the limit g(ω) is given by the conditional expectation g(ω) = E^P(f | I).

We will first prove the Mean Ergodic Theorems, which concern L^p convergence. We will then develop the tools we need to complete the proof of the Individual Ergodic Theorem, following Varadhan in [15].

Theorem 2.2.2 (Mean Ergodic Theorems). Let p ∈ [1, ∞). Then for any f ∈ L^p(P), the limit

lim_{n→∞} (f(ω) + f(Tω) + ··· + f(T^{n−1}ω))/n = g(ω)

exists in L^p(P). Moreover, the limit g(ω) is given by the conditional expectation

g(ω) = E^P(f | I).

Proof. We first consider p = 2. Define H = L² and

H_0 = {f : f ∈ H, Uf = f} = {f : f ∈ H, f(Tω) = f(ω)},

where U is the operator induced by T. We claim that H_0 is a closed non-trivial subspace of H. Since c ∈ H_0 for any constant c, H_0 is non-trivial. Suppose that f, g ∈ H_0, α ∈ R. Then f + g, αf ∈ H, by an elementary property of L^p spaces, and we have U(f + g) = Uf + Ug = f + g and U(αf) = αUf = αf, by linearity of U. So f + g, αf ∈ H_0. We have that H_0 is a subspace of H. Now suppose that (f_j) is a sequence in

H_0 with L² limit f. Then

Uf(ω) = f(Tω) = lim_{j→∞} f_j(Tω) = lim_{j→∞} Uf_j(ω) = lim_{j→∞} f_j(ω) = f(ω).

Thus H_0 is closed. This proves our claim. Now for each n ∈ N, define A_n : H → H by

A_n f = (f + Uf + ··· + U^{n−1}f)/n.

Then

‖A_n f‖ = (1/n) ‖f + Uf + ··· + U^{n−1}f‖
≤ (1/n) (‖f‖ + ‖Uf‖ + ··· + ‖U^{n−1}f‖)   by Minkowski's inequality (1.1.3)
= ‖f‖,   since U is an L² isometry by Lemma 2.1.2.   (2.2.1)

Hence ‖A_n‖ ≤ 1. Suppose that f ∈ H_0, so that Uf = f, and let n ∈ N. Then

A_n f = (f + Uf + ··· + U^{n−1}f)/n = (f + f + ··· + f)/n = f.

So clearly we have that, for f ∈ H_0,

A_n f → f in H and almost surely.

Now suppose that f ∈ H_0^⊥. We claim that H_0^⊥ is the closure of Range(I − U). Since U is unitary by Lemma 2.1.4, we have the equivalence

Uf = f ⟺ f = U^{−1}f = U*f.

Thus

H_0 = {f : f ∈ H, (I − U*)f = 0}.

We need the following remark to prove our claim.

Remark 2.2.1. The statement of Exercise 6.1 of [15] tells us that for any bounded linear transformation A on H, we have

(closure of Range A)^⊥ = {f : f ∈ H, A*f = 0}.

We do not prove this here, since this would require more discussion of Hilbert spaces and Hilbert adjoint operators, as could be found for example in Chapter 3 of [9].

We can see that I − U is linear and bounded. Thus, by the above remark,

H_0^⊥ = {f : f ∈ H, (I − U*)f = 0}^⊥ = closure of Range(I − U),

as we claimed. Therefore, we can approximate any f ∈ H_0^⊥ by a sequence (f_j) in Range(I − U) such that ‖f − f_j‖ → 0 as j → ∞. For each j ∈ N, there exists g_j ∈ H such that f_j = (I − U)g_j, and thus

A_n f_j = (1/n) [g_j − Ug_j + U(g_j − Ug_j) + ··· + U^{n−1}(g_j − Ug_j)]
= (1/n) [g_j − Ug_j + Ug_j − U²g_j + ··· + U^{n−1}g_j − U^n g_j]
= (1/n) (g_j − U^n g_j).

By the triangle inequality,

‖A_n f_j‖ = (1/n) ‖g_j − U^n g_j‖ ≤ (1/n) (‖g_j‖ + ‖U^n g_j‖) = (2/n) ‖g_j‖.

Also,

‖A_n f‖ = ‖A_n (f − f_j) + A_n f_j‖ ≤ ‖A_n (f − f_j)‖ + ‖A_n f_j‖.   (2.2.2)

As we noted in (2.2.1), by Minkowski's inequality, ‖A_n (f − f_j)‖ ≤ ‖f − f_j‖. Therefore

‖A_n f‖ ≤ ‖f − f_j‖ + (2/n) ‖g_j‖.

Letting n → ∞ and then j → ∞, we get that

‖A_n f‖ → 0.

Hence

A_n f →^{L²} 0 for f ∈ H_0^⊥.

If we denote by Π the orthogonal projection from H onto H_0, as defined in Section 3.3 of [9] (i.e. Π : H → H_0 satisfies Πf = g, where g ∈ H_0 is such that f = g + h for some h ∈ H_0^⊥), then

Πf = f if f ∈ H_0, and Πf = 0 if f ∈ H_0^⊥,

and so A_n f → Πf in L² for any f ∈ H. We have a limit in L², and next we need to show that this limit is in fact the conditional expectation claimed. Let f ∈ H. It can be shown that Πf = E^P(f | I), using a theorem from Section 9.5 of [9], as follows. E^P(· | I) is an orthogonal projection if and only if, for f, g ∈ H,

E^P(E^P(f | I) | I) = E^P(f | I)  and  ⟨E^P(f | I), g⟩ = ⟨f, E^P(g | I)⟩.

Let f, g ∈ H. Then the first equality which we require is clear from the tower property of conditional expectation (1.3.1), and we arrive at the second equality as follows:

⟨E^P(f | I), g⟩ = E^P[E^P(f | I) g]
= E^P(E^P[E^P(f | I) g | I])
= E^P(E^P(f | I) E^P(g | I))   as E^P(f | I) is I-measurable
= E^P[E^P(g | I) f]   by symmetry
= ⟨f, E^P(g | I)⟩.

So E^P(· | I) is an orthogonal projection. Since E^P(f | I) is I-measurable, E^P(f | I) ∈ H_0 for all f ∈ H. In particular, for f ∈ H_0, E^P(f | I) = f. So E^P(· | I) : H → H_0 is surjective. Therefore E^P(· | I) is the orthogonal projection from H onto H_0. Thus, for f ∈ H, E^P(f | I) = Πf.

We have the required result for L², and we now prove the mean ergodic theorems for general L^p. Note that the conditional expectation operator is well-defined on L^p, so Π is an operator of norm one on L^p, for 1 ≤ p ≤ ∞. First suppose that f ∈ L^∞. Then ‖A_n f‖_∞ ≤ ‖f‖_∞, by the triangle inequality for the L^∞ norm. Since we have shown that A_n f → Πf in L², we also have that A_n f → Πf in measure, by the theorem of Section 1.1.4. Thus we can use the bounded convergence theorem for convergence in measure (Theorem 1.1.6) to get that

‖A_n f − Πf‖_p → 0 for any p ∈ [1, ∞).   (2.2.3)

Now let p ∈ [1, ∞) and f ∈ L^p. We know that L^∞ is dense in L^p. By Minkowski's inequality (1.1.3), we see that ‖A_n g‖_p ≤ ‖g‖_p for any g ∈ L^p. Let (f_j) be a sequence of functions in L^∞ which approximate f in L^p; i.e. ‖f − f_j‖_p → 0 as j → ∞. Fix j ∈ N. Then

0 ≤ ‖A_n f − Πf‖_p ≤ ‖A_n f − A_n f_j‖_p + ‖A_n f_j − Πf_j‖_p + ‖Πf_j − Πf‖_p
≤ ‖f − f_j‖_p + ‖A_n f_j − Πf_j‖_p + ‖f_j − f‖_p
→ 2‖f_j − f‖_p as n → ∞, by (2.2.3).   (2.2.4)

By definition, ‖f − f_j‖_p → 0 as j → ∞. Therefore, by letting j → ∞ in (2.2.4), we have

‖A_n f − Πf‖_p → 0,

as required.

The proof of the almost sure convergence in the individual ergodic theorem is based on an inequality which we will be able to prove by means of the following theorem from [15].

Theorem 2.2.3 (Maximal Ergodic Theorem). Let f ∈ L^1(P) and, for n ≥ 1, define

E_n^0 := {ω : sup_{1≤j≤n} (f(ω) + f(Tω) + ··· + f(T^{j−1}ω)) ≥ 0}.

Then

∫_{E_n^0} f(ω) dP ≥ 0.

Proof. Let h_n be the function defined by

h_n(ω) := sup_{1≤j≤n} (f(ω) + f(Tω) + ··· + f(T^{j−1}ω)) = f(ω) + max{0, h_{n−1}(Tω)} = f(ω) + h_{n−1}^+(Tω),

where we define h_n^+(ω) := max{0, h_n(ω)}. Note that for any ω ∈ Ω, h_n^+(ω) ≥ 0, and h_{n−1}^+(ω) ≤ h_n^+(ω). On E_n^0, h_n(ω) = h_n^+(ω) ≥ 0, so

f(ω) = h_n(ω) − h_{n−1}^+(Tω) = h_n^+(ω) − h_{n−1}^+(Tω), for ω ∈ E_n^0.

Hence

∫_{E_n^0} f(ω) dP = ∫_{E_n^0} h_n^+(ω) dP − ∫_{E_n^0} h_{n−1}^+(Tω) dP
≥ ∫_{E_n^0} h_n^+(ω) dP − ∫_{E_n^0} h_n^+(Tω) dP
= ∫_Ω h_n^+(ω) dP − ∫_{E_n^0} h_n^+(Tω) dP   since h_n^+ vanishes off E_n^0; for any integrable function h, ∫_E h dP is largest when E = {ω : h(ω) ≥ 0}
≥ ∫_Ω h_n^+(ω) dP − ∫_Ω h_n^+(Tω) dP   since h_n^+ ≥ 0
= 0,   by invariance of T.

We can now apply this theorem to prove the following inequalities, which are adapted from Lemma 6.3 of [15].

Lemma 2.2.1. Let f ∈ L¹(P), l > 0 and Ẽ_n = { ω : sup_{1≤j≤n} (A_j f)(ω) ≥ l }. Then
P(Ẽ_n) ≤ (1/l) ∫_{Ẽ_n} f(ω) dP.

Proof. We have that
Ẽ_n = { ω : sup_{1≤j≤n} ( f(ω) + f(Tω) + ⋯ + f(T^{j−1}ω) )/j ≥ l }
= { ω : sup_{1≤j≤n} ( (f(ω) − l) + (f(Tω) − l) + ⋯ + (f(T^{j−1}ω) − l) )/j ≥ 0 }.
Thus, by the maximal ergodic theorem (Theorem 2.2.3) applied to f − l,
∫_{Ẽ_n} (f(ω) − l) dP ≥ 0,
which is equivalent to
∫_{Ẽ_n} f(ω) dP − l P(Ẽ_n) ≥ 0.
Rearranging gives us the required inequality.

Corollary 2.2.1. For any f ∈ L¹(P) and l > 0,
P( ω : sup_{j≥1} |(A_j f)(ω)| ≥ l ) ≤ (1/l) ∫ |f(ω)| dP.

Proof. Let n ∈ N and define
E_n := { ω : sup_{1≤j≤n} |(A_j f)(ω)| ≥ l }.
If we also define
Ê_n := { ω : sup_{1≤j≤n} (A_j |f|)(ω) ≥ l },
then we see that |(A_j f)(ω)| ≤ (A_j |f|)(ω) for any ω ∈ Ω, by the triangle inequality, and so E_n ⊆ Ê_n. We can now apply the previous lemma to |f| to get
P(E_n) ≤ P(Ê_n) ≤ (1/l) ∫_{Ê_n} |f| dP ≤ (1/l) ∫_Ω |f| dP. (2.2.6)

Now note that for any n ∈ N,
sup_{1≤j≤n} |(A_j f)(ω)| ≤ sup_{1≤j≤n+1} |(A_j f)(ω)|,
which implies that E_n ⊆ E_{n+1}. By monotonicity of the sequence of events (E_n), we have that
P( sup_{j≥1} |(A_j f)(ω)| ≥ l ) = P( lim_n { sup_{1≤j≤n} |(A_j f)(ω)| ≥ l } ) = P( ∪_{n=1}^∞ E_n ) = lim_n P(E_n) ≤ (1/l) ∫ |f(ω)| dP, by (2.2.6).

Now we are ready to prove the almost sure convergence in the individual ergodic theorem, as in [15].

Proof of the Individual Ergodic Theorem. Fix f ∈ L¹(P). Note that the set
D := { f₁ + f₂ : f₁ ∈ H₀, f₂ = (I − U)g, g ∈ L^∞ }
is dense in L¹. We do not prove this fact here. For each j ∈ N, let f_j ∈ D be such that E(|f − f_j|) → 0 as j → ∞; i.e. (f_j) approximates f in L¹. Fix j ∈ N. We claim that A_n f_j converges almost surely. We observed in the proof of the mean ergodic theorems that we have almost sure convergence for any f₁ ∈ H₀. Now suppose g ∈ L^∞ and let f₂ = (I − U)g. Then, using (2.2.2),
|A_n f₂| = |g − U^n g|/n ≤ ( ‖g‖_∞ + ‖U^n g‖_∞ )/n = 2‖g‖_∞/n → 0.
So we also have almost sure convergence for f₂. Since f_j ∈ D, for any fixed j, we have almost sure convergence for A_n f_j, as we claimed. Thus we have convergence for f as follows. For fixed j, we see that
lim sup_n A_n f − lim inf_n A_n f = lim sup_n ( A_n f_j − [A_n f_j − A_n f] ) − lim inf_n ( A_n f_j − [A_n f_j − A_n f] )
≤ lim sup_n A_n f_j − lim inf_n A_n f_j + lim sup_n [A_n f_j − A_n f] − lim inf_n [A_n f_j − A_n f].

But, since A_n f_j converges almost surely,
lim sup_n A_n f_j = lim inf_n A_n f_j a.s. (2.2.7)
Also note that
lim sup_n [A_n f_j − A_n f] − lim inf_n [A_n f_j − A_n f] ≤ 2 sup_{n∈N} A_n( |f_j − f| ). (2.2.8)
So, putting everything together and applying the previous corollary, for any ε > 0,
0 ≤ P( lim sup_n A_n f − lim inf_n A_n f ≥ ε )
= P( lim sup_n [A_n f_j − A_n f] − lim inf_n [A_n f_j − A_n f] ≥ ε ), by (2.2.7),
≤ P( sup_{n∈N} A_n( |f_j − f| ) ≥ ε/2 ), by (2.2.8),
≤ (2/ε) E( A_n( |f_j − f| ) ), by Corollary 2.2.1,
≤ (2/ε) E( |f_j − f| ) → 0 as j → ∞,
where the final inequality follows from the fact that ‖A_n g‖₁ ≤ ‖g‖₁, for any g ∈ L¹. This is due to Minkowski's inequality (1.1.3), as we noted in the proof of the mean ergodic theorems. If we now let ε → 0, we see that we have an almost sure limit for A_n f as n → ∞. By the result of the mean ergodic theorems, it must be the case that
lim_n A_n f = E^P(f|I) a.s.

We have shown almost sure and L^p convergence to a random variable, but the theorems will be of use to us when the limit is a constant. We will see in the following section that this is the case under certain conditions on the measure we are working with.

2.3 Ergodicity of Measures

We now define what it means for an invariant measure to be ergodic and see that this provides a sufficient condition for the limit in the ergodic theorems to be a constant.

Definition 2.3.1. Let P be an invariant measure for a transformation T on Ω. P is ergodic for T if P(A) ∈ {0, 1} for any invariant set A ∈ I.

Remark 2.3.1. Let T be a transformation on Ω and P an ergodic measure for T. Then any function f which is invariant under T is almost surely constant with respect to P. Moreover, E^P(f|I) = E^P(f).

Proof. Let f be invariant under T. Then for any c ∈ R, {f(x) ≤ c} ∈ I. So by ergodicity, each of these sets is trivial. Thus there exists c₀ ∈ R such that f(x) = c₀ for P-almost every x. That is, f = E^P(f) almost surely. Thus
E^P(f|I) = E^P( E^P(f) | I ) = E^P(f).

This remark tells us that, when P is ergodic for T, the limit in the ergodic theorems is a constant. We will use this result to prove the central limit theorems in the next chapter. Next, we look at a simple example of when we have an ergodic measure.

Theorem 2.3.1. Any product measure is ergodic for the shift.

To prove this, we need to appeal to a result known as Kolmogorov's zero-one law. Let (Ω, F, P) be a product space with
Ω = ⋯ × Ω_{−2} × Ω_{−1} × Ω_0 × Ω_1 × Ω_2 × ⋯
and, for all i ∈ N, define F_i = σ{Ω_j : |j| = i}, the σ-field generated by the coordinates with index of modulus i. Define the tail σ-algebra to be T := ∩_{n∈N} T_n, where T_n := σ(F_n, F_{n+1}, …).

Lemma 2.3.1 (Kolmogorov's zero-one law). For any A ∈ T, P(A) = 0 or 1.

Proof. Let A ∈ T and let ε > 0. Then there exist a, b ∈ N with a < b and A' ∈ σ(F_a, F_{a+1}, …, F_b) such that P(A △ A') ≤ ε, where we use the notation A △ A' = (A \ A') ∪ (A' \ A). Note that
P(A) ≤ P(A ∩ A') + P(A \ A') ≤ P(A ∩ A') + P(A △ A').
So
P(A) − P(A ∩ A') ≤ ε. (2.3.1)
Since A ∈ T and A' ∈ σ(F_a, F_{a+1}, …, F_b), A and A' are independent. Therefore we have
P(A) − P(A)P(A') ≤ ε.
Also, we have that |P(A) − P(A')| ≤ ε, since
P(A) − P(A') ≤ P(A) − P(A ∩ A') ≤ ε, by (2.3.1),
and similarly P(A') − P(A) ≤ ε. Therefore
|P(A) − P(A)P(A)| ≤ |P(A) − P(A)P(A')| + |P(A)P(A') − P(A)P(A)|
= |P(A) − P(A)P(A')| + P(A)|P(A') − P(A)| ≤ ε + P(A)ε ≤ 2ε.
So we have
P(A)(1 − P(A)) ≤ 2ε.
Since ε > 0 is arbitrary, it follows that P(A)(1 − P(A)) = 0. So either P(A) = 0 or P(A) = 1.

We can now prove the theorem.

Proof of theorem. Let T be the shift, P a product measure and A ∈ I an invariant set. We need to show that P(A) ∈ {0, 1}. By the definition of F, we can approximate A by sets A_n in the σ-field corresponding to the coordinates from −n to n, in the sense that P(A △ A_n) → 0 as n → ∞. Equivalently, we can approximate T^{±2n}A by T^{±2n}A_n, so, by invariance of A, we can approximate A by T^{±2n}A_n. But T^{2n}A_n is in the σ-field corresponding to coordinates from n to 3n, and T^{−2n}A_n is in the σ-field corresponding to coordinates from −3n to −n. So T^{±2n}A_n is independent of the σ-field corresponding to coordinates from −n to n. Since T^{±2n}A_n approximates A, we see that A belongs to the tail σ-field. So, by Kolmogorov's zero-one law (Lemma 2.3.1), P(A) ∈ {0, 1}.

2.4 Structure of Invariant Measures

We are now going to apply the individual ergodic theorem (Theorem 2.2.1) to prove a criterion for a probability measure to be ergodic. We will then see that any invariant measure can be obtained by taking a weighted average of ergodic measures. Let T be a transformation on Ω and define
M := { P : P a T-invariant probability measure on (Ω, F) }.
Note that M is a convex set, which may be empty. For any convex set C, we say that x ∈ C is an extreme point of C (or x is extremal) if it cannot be written as a non-trivial convex combination of two other points in C.

Theorem 2.4.1. P ∈ M is ergodic for T if and only if it is an extreme point of M.

Proof. Let P ∈ M. We first show that if P is not extremal, then P is not ergodic. Suppose that P ∈ M is not extremal. Then there exist P₁, P₂ ∈ M and a ∈ (0, 1) such that P₁ ≠ P₂ and P = aP₁ + (1 − a)P₂. Suppose for a contradiction that P is ergodic. Then we have that, for any A ∈ I, P(A) ∈ {0, 1}. But
P(A) = 0 ⟹ P₁(A) = P₂(A) = 0,
and similarly
P(A) = 1 ⟹ P₁(A) = P₂(A) = 1.
So P₁ = P₂ on I. Next we show that this implies that P₁ = P₂ on F.

Let f be a bounded F-measurable function. We will show that E^{P₁}(f(ω)) = E^{P₂}(f(ω)). Define E to be the set where the following limit exists:
h(ω) := lim_n (1/n)( f(ω) + f(Tω) + ⋯ + f(T^{n−1}ω) ).
By the individual ergodic theorem (Theorem 2.2.1), P₁(E) = P₂(E) = 1 and h is I-measurable. Since P₁, P₂ are invariant, we have, for i = 1, 2 and n ∈ N,
∫_E (1/n)( f(ω) + f(Tω) + ⋯ + f(T^{n−1}ω) ) dP_i = ∫_E (1/n)( f(ω) + f(ω) + ⋯ + f(ω) ) dP_i = ∫_E f(ω) dP_i = E^{P_i}(f(ω)),
as P_i(E) = 1. By the bounded convergence theorem (Theorem 1.1.5),
∫_E h(ω) dP_i = lim_n ∫_E (1/n)( f(ω) + f(Tω) + ⋯ + f(T^{n−1}ω) ) dP_i = E^{P_i}(f(ω)), for i = 1, 2.
We now use that h is I-measurable, P₁ = P₂ on I, and P₁(E) = P₂(E) to see that
E^{P₁}(f(ω)) = ∫_E h(ω) dP₁ = ∫_E h(ω) dP₂ = E^{P₂}(f(ω)).
So we have that P₁ = P₂ on F. This is a contradiction.
Next we show the converse part of the theorem. Suppose that P is not ergodic. Then there exists A ∈ I with 0 < P(A) < 1. We can therefore define probability measures P₁ and P₂ by
P₁(E) = P(A ∩ E)/P(A) and P₂(E) = P(A^c ∩ E)/P(A^c).
Then we have
P₁(T^{−1}E) = P(A ∩ T^{−1}E)/P(A)
= P(T^{−1}A ∩ T^{−1}E)/P(T^{−1}A), as A ∈ I,
= P(A ∩ E)/P(A), by invariance of P,
= P₁(E).

So P₁ ∈ M. Similarly, we can show that P₂ ∈ M. Furthermore,
P = P(A)P₁ + P(A^c)P₂ = P(A)P₁ + [1 − P(A)]P₂.
Therefore P is not extremal. This completes the proof.

The next theorem shows that any probability measure in the convex set M (that is, any invariant measure) can be obtained by taking a weighted average of the extremal points of M (i.e. the ergodic measures). We use the notation M_e := {P : P ∈ M, P extremal}.

Theorem 2.4.2. For any invariant measure P, there exists a probability measure µ_P on the set M_e of ergodic measures such that
P = ∫_{M_e} Q dµ_P(Q).

To prove this we need the following lemma.

Lemma 2.4.1. Let P be a probability measure on (Ω, F) and, for each ω ∈ Ω, define P_ω by
P_ω(E) := P(E|I)(ω) = E(1_E|I)(ω), for any E ∈ F.
Suppose that P is invariant. Then for P-almost every ω, P_ω is invariant and ergodic.

Proof. We want to show that P_ω(T^{−1}A) = P_ω(A) for all A ∈ F, for almost all ω. Then we will have invariance of P_ω for almost all ω. Let E ∈ I. It is enough to show that
∫_E P_ω(A) dP(ω) = ∫_E P_ω(T^{−1}A) dP(ω),
because P_ω(A) is I-measurable. We have that
∫_E P_ω(A) dP(ω) = ∫_Ω 1_E P_ω(A) dP(ω) = E^P( 1_E E(1_A|I) ) = E^P( E(1_E 1_A|I) ), as E ∈ I,
= E^P(1_E 1_A) = P(E ∩ A).
On the other hand, by similar reasoning,
∫_E P_ω(T^{−1}A) dP(ω) = P(E ∩ T^{−1}A)
= P(T^{−1}E ∩ T^{−1}A), by invariance of E ∈ I,
= P(E ∩ A), by invariance of P. (2.4.3)

So we have invariance, and now need to establish ergodicity. Again let E ∈ I. Then, repeating the argument in (2.4.3), we get
∫_E P_ω(E) dP(ω) = P(E ∩ E) = P(E). (2.4.4)
But, since P_ω(E) ≤ 1,
∫_E P_ω(E) dP(ω) ≤ ∫_E dP(ω) = P(E),
with equality if and only if P_ω(E) = 1 for P-almost every ω ∈ E. Therefore, for the equality (2.4.4) to hold, we must have P_ω(E) = 1 for P-almost every ω ∈ E. Repeating the same argument from (2.4.3) once more, we get
∫_{E^c} P_ω(E) dP(ω) = P(E ∩ E^c) = 0.
So for P-almost every ω ∈ E^c, P_ω(E) = 0. Thus for P-almost every ω ∈ Ω,
P_ω(E) = 1_{ω∈E} ∈ {0, 1}, E ∈ I;
that is, P_ω is ergodic for P-almost all ω.

Proof of Theorem. As a consequence of the lemma, we can view ω ↦ P_ω as a map Ω → M_e. Take µ_P to be the image of P under this map. From the definition of P_ω, we have that
P = ∫_Ω P_ω dP(ω).
By a change of variables we get
P = ∫_{M_e} Q dµ_P(Q),
as required.

2.5 Stationary Markov Processes

In this section we are going to show how an ergodic Markov process can be defined and how the theory from the previous sections can be applied to Markov processes, following Varadhan in Section 6.3 of [15]. Let (X, B) be a measurable space and let (Ω, F) be the space of sequences which take values in X, with the product σ-field. Let {X_n} be a stochastic process which takes values in the state space X. For any m, n ∈ Z with m ≤ n, define F_m^n = σ{X_j : m ≤ j ≤ n}. Also define

F_n = σ{X_j : j ≤ n} and F^m = σ{X_j : j ≥ m}. As we noted in the opening discussion of Section 2.1, we can, under certain consistency conditions, construct a probability measure P on (Ω, F) which describes the evolution of {X_n} over time. We assume from now on that these conditions are satisfied. Further discussion of these conditions can be found in [15]. We can then transfer all of the definitions and results from the previous sections on measure-preserving transformations by taking the measure to be P and the transformation T to be the shift. Suppose that {X_n} is stationary; i.e. P is an invariant measure for T.

Definition 2.5.1. We say that {X_n} is an ergodic stochastic process if P is an ergodic measure, as defined in Definition 2.3.1, for the shift T.

In particular, we want to consider Markov processes. The measure P on (Ω, F) defines a Markov process with transition probabilities given by Π if, for any n ∈ N ∪ {0} and A ∈ B,
P{X_{n+1} ∈ A | F_n} = Π(X_n, A) P-almost surely,
whenever this measure exists and is unique. The following theorem tells us that, whenever the transition probabilities are independent of time, we have the required existence and uniqueness.

Theorem 2.5.1. Let P be a stationary Markov process with given transition probability Π. Then the one-dimensional marginal distribution µ, given by µ(A) = P(X_n ∈ A) (independent of n by stationarity), is Π-invariant; i.e.
µ(A) = ∫ Π(x, A) dµ(x) for every A ∈ B.
Conversely, given any such µ, there exists a unique stationary Markov process P with marginals µ and transition probability Π.

Proof. Suppose that P is a stationary Markov process with transition probabilities given by Π. Then
µ(A) = P(X_n ∈ A) = ∫ Π(x, A) P(X_{n−1} ∈ dx) = ∫ Π(x, A) dµ(x), by stationarity.
Now take such a measure µ on (X, B). Then a unique stationary Markov process with marginals µ exists. We will not prove this fact, but refer the reader to Section 4.4 of [15] for further discussion of this.

Let Π be a transition probability and define the set of invariant probability measures for Π by
M := { µ : µ(A) = ∫_X Π(x, A) dµ(x) for all A ∈ B }.

M is clearly a convex set of probability measures. Denote the set of extremals of M by M_e, as we defined at the beginning of Section 2.4, and note that this set may be empty. For each µ ∈ M, denote the Markov process with marginals µ by P_µ. Since the map µ ↦ P_µ is linear, we have that P_µ ∈ M_e ⟹ µ ∈ M_e. In fact we also have the opposite implication. To see this, we need the following theorem.

Theorem 2.5.2. Let µ be an invariant measure for Π and P = P_µ the corresponding stationary Markov process. Let I be the invariant σ-field on Ω; I = {A : T^{−1}A = A}, where T is the shift. Then I ⊆ σ(X_0), to within sets of P-measure 0.

Proof. Let E ∈ I ⊆ F. Then, as in the proof of Theorem 2.3.1, we see that there are sets E_n ∈ F_{−n}^n which approximate E. So for any k ∈ Z, T^k E_n approximates T^k E and, by invariance, approximates E. But T^k E_n ∈ F_{−n+k}^{n+k}. Letting k → ±∞, we see that
E ∈ T^+ := ∩_{m∈N} F^m and E ∈ T^− := ∩_{n∈N} F_{−n}.
Thus, writing one copy of E as an element of T^− and the other as an element of T^+,
P(E|σ(X_0)) = P(E ∩ E|σ(X_0)) = P([E ∈ T^−] ∩ [E ∈ T^+] | σ(X_0)) = P(E|σ(X_0)) · P(E|σ(X_0)),
by conditional independence of past and future given X_0. Therefore P(E|σ(X_0)) = 0 or 1 almost surely. It follows that E ∈ σ(X_0), to within sets of P-measure 0.

We can now show that ergodicity of a Markov process is equivalent to ergodicity of its marginals.

Theorem 2.5.3. Let µ be a measure on (X, B) and P_µ the corresponding Markov process. Then the following equivalence holds:
µ ∈ M_e ⟺ P_µ ∈ M_e.

Proof. For the first implication, suppose that µ is not extremal. Then there exist µ₁, µ₂ ∈ M and α ∈ (0, 1) such that µ = αµ₁ + (1 − α)µ₂. Therefore
P_µ = αP_{µ₁} + (1 − α)P_{µ₂},
by linearity. So P_µ is not extremal.
Now suppose that P is not extremal. Then P is not ergodic, by Theorem 2.4.1. Therefore there exists E ∈ I such that 0 < P(E) < 1. By the previous theorem, we can choose E such that E ∈ σ(X_0). This means that there exists A ⊆ X such that 0 < µ(A) < 1 and E = {X_0 ∈ A}. By invariance, for any n ∈ N, E = {X_n ∈ A}. Thus
E = { ω : X_n(ω) ∈ A, ∀n }.

Suppose that there exists a subset of A with positive µ-measure on which Π(x, A) < 1. Then P(E) = 0, which is a contradiction. Thus for µ-almost every x ∈ A,
Π(x, A) = 1 and Π(x, A^c) = 0.
Now we can write
µ(A) = ∫ Π(x, A) dµ(x) = ∫_A Π(x, A) dµ(x) + ∫_{A^c} Π(x, A) dµ(x) = µ(A) + ∫_{A^c} Π(x, A) dµ(x).
So ∫_{A^c} Π(x, A) dµ(x) = 0 and thus, for µ-almost every x ∈ A^c,
Π(x, A) = 0 and Π(x, A^c) = 1.
For any measurable set B we have that
µ(B) = µ(B|A)µ(A) + µ(B|A^c)µ(A^c), by the law of total probability,
= µ(B|A)µ(A) + µ(B|A^c)[1 − µ(A)].
We claim that both ν₁ := µ(·|A) and ν₂ := µ(·|A^c) are stationary distributions. Then it will follow that µ is not extremal. We want to show that ν_i(B) = ∫ Π(x, B) dν_i(x) for all measurable sets B, for i = 1, 2. We have that
∫ Π(x, B) dν₁(x) = ∫_A Π(x, B) dµ(x)/µ(A), by a change of variables,
= (1/µ(A)) ∫_A Π(x, A ∩ B) dµ(x), since Π(x, A) = 1 for µ-a.e. x ∈ A,
= (1/µ(A)) ∫ Π(x, A ∩ B) dµ(x), since Π(x, A) = 0 for µ-a.e. x ∈ A^c,
= (1/µ(A)) µ(A ∩ B), by stationarity of µ,
= µ(B|A) = ν₁(B).
By a very similar argument,
∫ Π(x, B) dν₂(x) = ν₂(B).
This completes the proof.

Remark 2.5.1. The measure µ is always an invariant measure for the transition matrix Π of P_µ, by Theorem 2.5.1; i.e. µ is a stationary distribution of the Markov chain P_µ. Suppose that µ is the unique invariant measure for Π. Then M = {µ}, and so clearly µ is an extremal point of the set M. Therefore, by Theorem 2.5.3, P_µ is ergodic.

We can now prove a simple criterion for ergodicity of a stationary Markov chain, in the case where the state space of the chain is finite.

Theorem 2.5.4. Let {X_n} be a stationary Markov chain which takes values on a finite state space. Suppose that the chain is irreducible and aperiodic. Then {X_n} is ergodic.

Proof. By Theorem 1.2.1, {X_n} has a unique stationary distribution. Therefore, by Remark 2.5.1, {X_n} is ergodic.
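On a finite state space the invariance condition µ(A) = Σ_x Π(x, A)µ(x) is just the linear system µΠ = µ, so the unique stationary distribution of an irreducible, aperiodic chain can be computed directly. The following is a minimal sketch; the two-state transition matrix is an illustrative choice, not an example from the text.

```python
import numpy as np

def stationary_distribution(P):
    """Solve mu P = mu together with sum(mu) = 1 by least squares."""
    n = P.shape[0]
    # Stack the invariance equations (P^T - I) mu = 0 with the
    # normalisation constraint 1 . mu = 1.
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    mu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu

# An irreducible, aperiodic two-state chain (illustrative numbers).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
mu = stationary_distribution(P)
print(mu)        # [5/6, 1/6]
print(mu @ P)    # equals mu: this is the Pi-invariance of mu
```

Since this chain is irreducible and aperiodic, Theorem 2.5.4 says the stationary chain started from µ is ergodic.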

Chapter 3. Central Limit Theorems

In this chapter, we consider central limit theorems for ergodic stochastic processes. We will see that a central limit theorem holds for ergodic Markov chains, under some conditions. We present one proof, following [15], which will be a consequence of a central limit theorem for square-integrable ergodic martingale differences. We then present an alternative proof, following [13], for ergodic Markov chains on a finite state space. These theorems concern weak convergence, which we defined in Section 1.1, and we denote weak convergence by X_n →^D X, for random variables X, X₁, X₂, ….

3.1 The CLT for Martingale Differences

Let (X, B) be a measurable space and let (Ω, F) be the space of sequences which take values in X, with the product σ-field. Let {ξ_j} be a sequence of square-integrable martingale differences with respect to a filtration (F_n), as we defined in Definition 1.3.6, which take values in X. Let P be the probability measure on (Ω, F) which describes the evolution of the stochastic process {ξ_j}. Suppose that {ξ_j} is stationary and ergodic; i.e. P is stationary and ergodic on (Ω, F).

Remark 3.1.1. As stated in Section 6.5 of [15], it follows immediately from the individual ergodic theorem (Theorem 2.2.1) and Remark 2.3.1 that we have a strong law of large numbers,
(ξ₁ + ξ₂ + ⋯ + ξ_n)/n → 0 a.s.

We will now prove that we have the following central limit theorem, as shown in [15].

Theorem 3.1.1.
Z_n := (ξ₁ + ξ₂ + ⋯ + ξ_n)/√n →^D Z,

where Z ~ N(0, σ²), for some σ² > 0.

Proof. This proof follows Varadhan in [15], using the continuity lemma for characteristic functions (Lemma 1.1.2). The characteristic functions of Z_n and Z are
φ_n(t) = E[ exp{ it(ξ₁ + ξ₂ + ⋯ + ξ_n)/√n } ] and φ(t) = exp{ −σ²t²/2 },
respectively. Let us define
ψ(n, j, t) := exp{ σ²t²j/(2n) } E[ exp{ it(ξ₁ + ξ₂ + ⋯ + ξ_j)/√n } ].
We want to show that
ψ(n, n, t) − 1 → 0.
Then the result will follow by Lemma 1.1.2, the continuity lemma, since
ψ(n, n, t) − 1 → 0 ⟹ φ_n(t) → φ(t).
First note that we can write
ψ(n, n, t) − 1 = ψ(n, n, t) − ψ(n, 0, t) = Σ_{j=1}^n [ ψ(n, j, t) − ψ(n, j−1, t) ],
so that we want to estimate the quantity
Δ(n, t) := Σ_{j=1}^n [ ψ(n, j, t) − ψ(n, j−1, t) ].
We will estimate this quantity in three steps. For any j ∈ {1, 2, …, n}, let us set S_j = ξ₁ + ξ₂ + ⋯ + ξ_j. Then the j-th term in the sum is
ψ(n, j, t) − ψ(n, j−1, t)
= exp{ σ²t²j/(2n) } E[ exp{ itS_j/√n } ] − exp{ σ²t²(j−1)/(2n) } E[ exp{ itS_{j−1}/√n } ]
= exp{ σ²t²j/(2n) } ( E[ exp{ itS_{j−1}/√n } exp{ itξ_j/√n } ] − exp{ −σ²t²/(2n) } E[ exp{ itS_{j−1}/√n } ] )
= exp{ σ²t²j/(2n) } E[ exp{ itS_{j−1}/√n } ( exp{ itξ_j/√n } − exp{ −σ²t²/(2n) } ) ].

For our first estimate, we show, by means of a Taylor expansion, that we can replace this term in the sum by
θ(n, j, t) := exp{ σ²t²j/(2n) } E[ exp{ itS_{j−1}/√n } ( (σ² − ξ_j²) t²/(2n) ) ].
Take t to be in an arbitrary finite interval, say |t| ≤ T. In this interval we have
| ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) |
= exp{ σ²t²j/(2n) } | E[ exp{ itS_{j−1}/√n } ( exp{ itξ_j/√n } − exp{ −σ²t²/(2n) } − (σ² − ξ_j²)t²/(2n) ) ] |
= exp{ σ²t²j/(2n) } | E[ exp{ itS_{j−1}/√n } ( exp{ itξ_j/√n } − exp{ −σ²t²/(2n) } − (σ² − ξ_j²)t²/(2n) − itξ_j/√n ) ] |,
where the second equality is due to the martingale difference property (1.3.2), as follows:
E( exp{ itS_{j−1}/√n } itξ_j/√n ) = (it/√n) E( E[ exp{ itS_{j−1}/√n } ξ_j | F_{j−1} ] ) = (it/√n) E( exp{ itS_{j−1}/√n } E(ξ_j|F_{j−1}) ) = 0,
since E(ξ_j|F_{j−1}) = 0 by (1.3.2). We now apply the triangle inequality for expectation and the fact that |exp{is}| = 1 for s ∈ R to get
| ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) |
≤ exp{ σ²t²j/(2n) } E[ | exp{ itξ_j/√n } − 1 − itξ_j/√n + ξ_j²t²/(2n) | ] + exp{ σ²t²j/(2n) } | exp{ −σ²t²/(2n) } − 1 + σ²t²/(2n) |.

Applying the triangle inequality again, and the fact that the exponential function is monotone increasing, we see that
| ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) |
≤ exp{ σ²T²/2 } ( E[ | exp{ itξ_j/√n } − 1 − itξ_j/√n + ξ_j²t²/(2n) | ] + | exp{ −σ²t²/(2n) } − 1 + σ²t²/(2n) | )
= C_T E[ | exp{ itξ_j/√n } − 1 − itξ_j/√n + ξ_j²t²/(2n) | ] + C_T | exp{ −σ²t²/(2n) } − 1 + σ²t²/(2n) |,
with C_T = exp{ σ²T²/2 }. This estimate is independent of j by stationarity. We now take the following Taylor expansions up to the term linear in 1/n:
exp{ itξ_j/√n } = 1 + itξ_j/√n − ξ_j²t²/(2n) + o(1/n),
exp{ −σ²t²/(2n) } = 1 − σ²t²/(2n) + o(1/n).
Substituting these into our equation above gives that for any j ∈ {1, 2, …, n} and |t| ≤ T,
| ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) | = o(1/n),
so that
sup_{|t|≤T} sup_{1≤j≤n} | ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) | = o(1/n)
and
sup_{|t|≤T} Σ_{j=1}^n | ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) | = n · o(1/n).
As this is true for arbitrary T, we have that for any t ∈ R,
Σ_{j=1}^n | ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) | = n · o(1/n) → 0. (3.1.3)

It will now be enough to estimate Σ_{j=1}^n θ(n, j, t), since
| Δ(n, t) | = | Σ_{j=1}^n [ ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) ] + Σ_{j=1}^n θ(n, j, t) |
≤ Σ_{j=1}^n | ψ(n, j, t) − ψ(n, j−1, t) − θ(n, j, t) | + | Σ_{j=1}^n θ(n, j, t) |. (3.1.4)
In order to do this, we fix some large k ∈ Z and divide the set {1, 2, …, n} into blocks of size k, with possibly an incomplete block at the end. Let j ∈ Z and define r(j) ≥ 0 to be the integer such that kr(j) + 1 ≤ j ≤ k(r(j) + 1). We see that r indexes the block of size k in which we find any given j. Define
θ_k(n, j, t) := exp{ σ²t²kr(j)/(2n) } E[ exp{ itS_{kr(j)}/√n } ( (σ² − ξ_j²) t²/(2n) ) ].
Our second estimate will be on the θ_k(n, j, t). We will then show that the θ_k(n, j, t) approximate θ(n, j, t) sufficiently well. Fix r₁ ≤ n/k. Then, for any j such that kr₁ + 1 ≤ j ≤ k(r₁ + 1), we have r(j) = r₁. Therefore
| Σ_{j=kr₁+1}^{k(r₁+1)} θ_k(n, j, t) |
= exp{ σ²t²kr₁/(2n) } | E[ exp{ itS_{kr₁}/√n } Σ_{j=kr₁+1}^{k(r₁+1)} (σ² − ξ_j²) t²/(2n) ] |
≤ C(t) (1/n) E| exp{ itS_{kr₁}/√n } Σ_{j=kr₁+1}^{k(r₁+1)} (σ² − ξ_j²) |
= C(t) (1/n) E| Σ_{j=kr₁+1}^{k(r₁+1)} (σ² − ξ_j²) |,
where C(t) = (t²/2) exp{ σ²t²/2 }. Set
δ(k) := (1/k) E| Σ_{j=kr₁+1}^{k(r₁+1)} (σ² − ξ_j²) |,
which is independent of r₁ by

stationarity, so that
| Σ_{j=kr₁+1}^{k(r₁+1)} θ_k(n, j, t) | ≤ C(t) (k/n) δ(k).
We can use the L¹ ergodic theorem from Theorem 2.2.2 to show that δ(k) → 0 as k → ∞, as follows. Since {ξ_j} is an ergodic process and E(ξ_j²) = σ², we have, setting r₁ = 0,
δ(k) = (1/k) E| Σ_{j=1}^k (σ² − ξ_j²) | → E| E(σ² − ξ_j² | I) | = | E(σ² − ξ_j²) |, by Remark 2.3.1, = 0.
Therefore
| Σ_{j=1}^n θ_k(n, j, t) | ≤ Σ_r | Σ_{j=kr+1}^{k(r+1)} θ_k(n, j, t) | ≤ (n/k) C(t) (k/n) δ(k) = C(t) δ(k) → 0 as k → ∞. (3.1.5)
Next, for our final estimate, we consider
Σ_{j=1}^n | θ_k(n, j, t) − θ(n, j, t) | ≤ n sup_{1≤j≤n} | θ_k(n, j, t) − θ(n, j, t) |
= (t²/2) sup_{1≤j≤n} | exp{ σ²t²kr(j)/(2n) } E[ exp{ itS_{kr(j)}/√n } (σ² − ξ_j²) ] − exp{ σ²t²j/(2n) } E[ exp{ itS_{j−1}/√n } (σ² − ξ_j²) ] |
= (t²/2) sup_{1≤j≤n} exp{ σ²t²kr(j)/(2n) } | E[ (σ² − ξ_j²) exp{ itS_{kr(j)}/√n } ( 1 − exp{ σ²t²(j − kr(j))/(2n) } exp{ it(S_{j−1} − S_{kr(j)})/√n } ) ] |.

Then, by the triangle inequality, we can estimate
Σ_{j=1}^n | θ_k(n, j, t) − θ(n, j, t) |
≤ (t²/2) sup_{1≤j≤n} { exp{ σ²t²kr(j)/(2n) } E[ |σ² − ξ_j²| | 1 − exp{ σ²t²(j − kr(j))/(2n) } exp{ it(S_{j−1} − S_{kr(j)})/√n } | ] }
≤ (t²/2) exp{ σ²t²/2 } sup_{1≤j≤n} E[ |σ² − ξ_j²| | 1 − exp{ σ²t²(j − kr(j))/(2n) } exp{ it(S_{j−1} − S_{kr(j)})/√n } | ].
We now claim that the expectation
E[ |σ² − ξ_j²| | 1 − exp{ σ²t²(j − kr(j))/(2n) } exp{ it(S_{j−1} − S_{kr(j)})/√n } | ] (3.1.6)
is periodic in j with period k. Let m ∈ Z. First note that r(j + mk) = r(j) + m, by our definition of r, so that
S_{j+mk−1} − S_{kr(j+mk)} = S_{j+mk−1} − S_{kr(j)+mk} = ξ_{kr(j)+mk+1} + ξ_{kr(j)+mk+2} + ⋯ + ξ_{j+mk−1} (3.1.7)
and
j + mk − kr(j + mk) = j + mk − kr(j) − mk = j − kr(j). (3.1.8)
Using (3.1.7), we write
E[ |σ² − ξ_{j+km}²| | 1 − exp{ σ²t²([j + km] − kr(j + km))/(2n) } exp{ it(S_{j+km−1} − S_{kr(j+km)})/√n } | ]
= E[ |σ² − ξ_{j+km}²| | 1 − exp{ σ²t²([j + km] − kr(j + km))/(2n) } exp{ it(ξ_{kr(j)+mk+1} + ⋯ + ξ_{j+mk−1})/√n } | ].

Then, by (3.1.8), we have
E[ |σ² − ξ_{j+km}²| | 1 − exp{ σ²t²([j + km] − kr(j + km))/(2n) } exp{ it(S_{j+km−1} − S_{kr(j+km)})/√n } | ]
= E[ |σ² − ξ_{j+km}²| | 1 − exp{ σ²t²(j − kr(j))/(2n) } exp{ it(ξ_{kr(j)+mk+1} + ⋯ + ξ_{j+mk−1})/√n } | ]
= E[ |σ² − ξ_j²| | 1 − exp{ σ²t²(j − kr(j))/(2n) } exp{ it(ξ_{kr(j)+1} + ξ_{kr(j)+2} + ⋯ + ξ_{j−1})/√n } | ],
where in the last line we use stationarity to shift all indices of ξ by mk. Thus we have periodicity, and so we need only consider j = 1, 2, …, k. But, for 1 ≤ j ≤ k, we have that r(j) = 0, by definition of r. Therefore
sup_{1≤j≤n} E[ |σ² − ξ_j²| | 1 − exp{ σ²t²(j − kr(j))/(2n) } exp{ it(S_{j−1} − S_{kr(j)})/√n } | ]
= sup_{1≤l≤k} E[ |σ² − ξ_l²| | 1 − exp{ σ²t²(l − kr(l))/(2n) } exp{ it(S_{l−1} − S_{kr(l)})/√n } | ]
= sup_{1≤l≤k} E[ |σ² − ξ_l²| | 1 − exp{ σ²t²l/(2n) } exp{ itS_{l−1}/√n } | ].
Substituting this back into our bound on (3.1.6), we obtain
Σ_{j=1}^n | θ_k(n, j, t) − θ(n, j, t) | ≤ n sup_{1≤j≤n} | θ_k(n, j, t) − θ(n, j, t) |
≤ (t²/2) exp{ σ²t²/2 } sup_{1≤l≤k} E[ |σ² − ξ_l²| | 1 − exp{ σ²t²l/(2n) } exp{ itS_{l−1}/√n } | ]
= C(t) sup_{1≤l≤k} E( |σ² − ξ_l²| | 1 − exp{ σ²t²l/(2n) } exp{ itS_{l−1}/√n } | ) → 0 as n → ∞,
by bounded convergence (Theorem 1.1.5).

We now have
| Σ_{j=1}^n θ(n, j, t) | = | Σ_{j=1}^n [ θ(n, j, t) − θ_k(n, j, t) ] + Σ_{j=1}^n θ_k(n, j, t) |
≤ Σ_{j=1}^n | θ(n, j, t) − θ_k(n, j, t) | + | Σ_{j=1}^n θ_k(n, j, t) |.
Fix k and let n → ∞ to get
0 ≤ lim sup_n | Σ_{j=1}^n θ(n, j, t) | ≤ lim sup_n | Σ_{j=1}^n θ_k(n, j, t) |.
Now, letting k → ∞ and recalling (3.1.5), we see that the right-hand side of the above inequality tends to 0, and so
lim_n Σ_{j=1}^n θ(n, j, t) = 0. (3.1.9)
Hence, putting together (3.1.3), (3.1.4) and (3.1.9), we have that
Δ(n, t) → 0,
as required.

We have proved a central limit theorem for square-integrable ergodic martingale differences. Next we will show that this result can be extended to a larger class of processes, and in particular to certain stationary ergodic Markov chains.

3.2 The CLT for Markov Chains I

We continue to work in the same setting as in the previous section. Thus we let (X, B) be a measurable space and let (Ω, F) be the space of sequences which take values in X, with the product σ-field. In this section our aim is to prove a central limit theorem for functions of ergodic Markov chains, under some conditions, following the method used by Varadhan in [15]. We start by considering the following processes. Let {X_n} be a stationary zero-mean process, adapted to a filtration (F_n), whose path is described by the probability measure P on (Ω, F). If we can write X_n = ξ_{n+1} + η_{n+1}, where {ξ_n} is a square-integrable ergodic martingale difference and {η_n} is negligible in some sense, then we can show that we have a central limit theorem for {X_n}. We formalise this in the following theorem, which is a key step in proving a central limit theorem for Markov chains. This result is stated but not proved in [15]. Here we provide a detailed proof.
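The martingale-difference CLT just proved (Theorem 3.1.1) can be illustrated by simulation. The sketch below uses an ARCH(1)-type sequence, which is an illustrative choice not taken from the text: ξ_j = ε_j √(a₀ + a₁ξ_{j−1}²) with ε_j i.i.d. standard normal satisfies E(ξ_j|F_{j−1}) = 0, and for a₁ < 1 the stationary chain is ergodic with E(ξ_j²) = a₀/(1 − a₁). (The paths below start from zero rather than from stationarity, which is negligible over long paths.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 2000            # path length, number of independent paths
a0, a1 = 0.2, 0.5            # illustrative ARCH(1) parameters, a1 < 1

# Stationary second moment E(xi_j^2) = a0 / (1 - a1) = 0.4, which is the
# variance sigma^2 appearing in the limit N(0, sigma^2).
sigma2 = a0 / (1 - a1)

eps = rng.standard_normal((m, n))
xi = np.empty((m, n))
prev = np.zeros(m)
for j in range(n):
    # Martingale difference: conditional mean zero given the past.
    prev = eps[:, j] * np.sqrt(a0 + a1 * prev**2)
    xi[:, j] = prev

Z = xi.sum(axis=1) / np.sqrt(n)   # Z_n = (xi_1 + ... + xi_n) / sqrt(n)
print(Z.mean(), Z.var(), sigma2)  # sample mean near 0, sample variance near 0.4
```

Note that {ξ_j} here is neither independent nor identically scaled given the past, so the classical i.i.d. CLT does not apply directly; Theorem 3.1.1 is what guarantees the Gaussian limit.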

Theorem 3.2.1. Let {X_n} be a stationary process such that, for any n, E(X_n) = 0 and X_n = ξ_{n+1} + η_{n+1}, where {ξ_n} is an ergodic stationary sequence of square-integrable martingale differences and {η_n} satisfies
E[ ( Σ_{j=1}^n η_j )² ] = o(n). (3.2.1)
Then
(X₁ + X₂ + ⋯ + X_n)/√n →^D Z,
where Z ~ N(0, σ²), for some σ² > 0.

Proof. Fix a > 0. We need to show that
P( Σ_{i=1}^n X_i/√n < a ) → Φ(a),
where Φ is the distribution function of an N(0, σ²) random variable, for some σ² > 0. Now fix ε > 0 and n ∈ N. We claim that
P( Σ_{i=1}^n X_i/√n < a ) ≤ P( Σ_{i=1}^n ξ_{i+1}/√n < a + ε ) − P( Σ_{i=1}^n ξ_{i+1}/√n < a + ε; |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) + P( |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) (3.2.2)
and
P( Σ_{i=1}^n X_i/√n < a ) ≥ P( Σ_{i=1}^n ξ_{i+1}/√n < a − ε ) − P( |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) + P( Σ_{i=1}^n ξ_{i+1}/√n + Σ_{i=1}^n η_{i+1}/√n < a; |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ). (3.2.3)
We prove the upper bound (3.2.2) using a lemma which we now formulate.

Lemma 3.2.1. For any a, ε > 0 and any random variables A, B, we have
P(A + B < a) ≤ P(A < a + ε) − P(A < a + ε; |B| ≥ ε) + P(|B| ≥ ε).

Proof of lemma. By the law of total probability,
P(A + B < a) = P(A + B < a; |B| < ε) + P(A + B < a; |B| ≥ ε). (3.2.4)

Note that |B| < ε ⟹ B > −ε. So for the first term in the above equation we have
P(A + B < a; |B| < ε) = P(A < a − B; |B| < ε) ≤ P(A < a + ε; |B| < ε) = P(A < a + ε) − P(A < a + ε; |B| ≥ ε).
We can bound the second term in equation (3.2.4) by
P(A + B < a; |B| ≥ ε) ≤ P(|B| ≥ ε),
and so we have proved the lemma.

Note that
P( Σ_{i=1}^n X_i/√n < a ) = P( Σ_{i=1}^n ξ_{i+1}/√n + Σ_{i=1}^n η_{i+1}/√n < a ).
The upper bound claimed in (3.2.2) now follows from the above lemma by substituting
A = Σ_{i=1}^n ξ_{i+1}/√n (3.2.5a)
and
B = Σ_{i=1}^n η_{i+1}/√n. (3.2.5b)
Now let us define an error term
α(n) := −P( Σ_{i=1}^n ξ_{i+1}/√n < a + ε; |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) + P( |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ),
so that the upper bound (3.2.2) implies that
P( Σ_{i=1}^n X_i/√n < a ) ≤ P( Σ_{i=1}^n ξ_{i+1}/√n < a + ε ) + α(n).
By Chebyshev's inequality (1.1.4) and our assumption (3.2.1), we can see that
0 ≤ P( |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) ≤ E[ (Σ_{i=1}^n η_{i+1})² ]/(nε²) → 0,
and similarly,
0 ≤ P( Σ_{i=1}^n ξ_{i+1}/√n < a + ε; |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) ≤ P( |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) → 0.
Hence α(n) → 0. We now turn our attention to the second claim (3.2.3) and formulate another lemma.

Lemma 3.2.2. For any a, ε > 0 and any random variables A, B, we have
P(A + B < a) ≥ P(A < a − ε) − P(|B| ≥ ε) + P(A + B < a; |B| ≥ ε).

Proof of lemma. We note that if |B| < ε and A < a − ε, then A + B < a. Therefore
P(A + B < a; |B| < ε) ≥ P(A < a − ε; |B| < ε) = P(A < a − ε) − P(A < a − ε; |B| ≥ ε) ≥ P(A < a − ε) − P(|B| ≥ ε).
So, by the law of total probability,
P(A + B < a) = P(A + B < a; |B| < ε) + P(A + B < a; |B| ≥ ε) ≥ P(A < a − ε) − P(|B| ≥ ε) + P(A + B < a; |B| ≥ ε),
and the lemma is proved.

We now make the same substitution as in (3.2.5), and the claimed lower bound (3.2.3) follows from the lemma that we have just proved. Next we define another error term
β(n) := −P( |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ) + P( Σ_{i=1}^n ξ_{i+1}/√n + Σ_{i=1}^n η_{i+1}/√n < a; |Σ_{i=1}^n η_{i+1}|/√n ≥ ε ),
so that the lower bound (3.2.3) implies
P( Σ_{i=1}^n X_i/√n < a ) ≥ P( Σ_{i=1}^n ξ_{i+1}/√n < a − ε ) + β(n).
In the same way as for α(n), we can see that β(n) → 0. At this point we have shown that
P( Σ_{i=1}^n ξ_{i+1}/√n < a − ε ) + β(n) ≤ P( Σ_{i=1}^n X_i/√n < a ) ≤ P( Σ_{i=1}^n ξ_{i+1}/√n < a + ε ) + α(n),
with α(n) → 0, β(n) → 0. Since {ξ_n} is an ergodic stationary sequence of square-integrable martingale differences, we have a central limit theorem for {ξ_n} by Theorem 3.1.1. Therefore there exists σ² > 0 such that
P( Σ_{i=1}^n ξ_{i+1}/√n < a − ε ) → Φ(a − ε) and P( Σ_{i=1}^n ξ_{i+1}/√n < a + ε ) → Φ(a + ε),

where Φ is the distribution function of a random variable with distribution N(0, σ²). Thus, by the sandwich rule,
Φ(a − ε) ≤ lim inf_n P( Σ_{i=1}^n X_i/√n < a ) ≤ lim sup_n P( Σ_{i=1}^n X_i/√n < a ) ≤ Φ(a + ε).
Finally, we let ε → 0 to see that the required limit exists and
lim_n P( Σ_{i=1}^n X_i/√n < a ) = Φ(a).

The following remark from [15] helps us to find processes which satisfy the conditions of the above theorem.

Remark 3.2.1. Suppose that {Z_n} is a stationary square-integrable sequence. Then if we define η_n := Z_n − Z_{n+1} for each n, we have that E[(Σ_{j=1}^n η_j)²] = o(n). We provide a short proof of this fact.

Proof. Since {Z_n} is square-integrable, the expectation we are interested in is well defined. To prove the remark, we use Minkowski's inequality (1.1.3) and stationarity:
E[ ( Σ_{j=1}^n η_j )² ] = E[ (Z₁ − Z_{n+1})² ] = ‖Z₁ − Z_{n+1}‖₂²
≤ ( ‖Z₁‖₂ + ‖Z_{n+1}‖₂ )², by Minkowski's inequality (1.1.3),
= ( 2‖Z₁‖₂ )², by stationarity,
= o(n).

Given a zero-mean stationary ergodic process {X_n}, we can, under some conditions, construct {Z_n} as in the remark above so that X_n + (Z_{n+1} − Z_n) is a square-integrable ergodic martingale difference. We would then have X_n = ξ_{n+1} + η_{n+1} satisfying the conditions in Theorem 3.2.1 for a central limit theorem to hold. Define
Z_n := Σ_{j=0}^∞ E(X_{n+j}|F_n), whenever this sum converges.
We will now show that if Z_n exists and is square-integrable for each n, then X_n + (Z_{n+1} − Z_n) is a square-integrable ergodic martingale difference and, by Theorem 3.2.1 and the previous remark, we have a central limit theorem for {X_n}, as is claimed in [15].
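For a finite-state Markov chain with X_n = f(X_n-type observables), the series defining Z_n above becomes Σ_j (Π^j f)(X_n) and can be computed explicitly. The following sketch (the two-state chain and f are illustrative choices, not from the text) sums the series by truncation and checks it against the closed form (I − Π + 1µ)^{−1} f, the so-called fundamental matrix of the chain, which for a mean-zero f yields the same function U with (I − Π)U = f.

```python
import numpy as np

# Illustrative finite-state chain (arbitrary irreducible, aperiodic numbers).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stationary distribution: mu P = mu, sum(mu) = 1.
A = np.vstack([P.T - np.eye(2), np.ones((1, 2))])
mu, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)

# Centre f so that its mean under mu is zero.
f = np.array([1.0, -1.0])
f = f - mu @ f

# Z_0 = sum_j E(f(X_j) | X_0) = sum_j (P^j f)(X_0).  For an ergodic finite
# chain and mean-zero f, P^j f -> 0 geometrically, so sum the series directly.
U_series = np.zeros(2)
term = f.copy()
for _ in range(10_000):
    U_series += term
    term = P @ term

# Closed form via the fundamental matrix (I - P + 1 mu)^{-1} f, which is
# invertible for an ergodic chain and solves (I - P) U = f when mu f = 0.
U = np.linalg.solve(np.eye(2) - P + np.outer(np.ones(2), mu), f)

print(U, U_series)   # the two computations agree: U = [5/9, -25/9]
```

The function U computed here is the solution of the Poisson equation [I − Π]U = f, which is exactly the hypothesis under which the Markov chain CLT is proved below.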

Lemma 3.2.3. Suppose that Z_n := Σ_{j=0}^∞ E(X_{n+j}|F_n) exists and is square-integrable for each n. Then we have a central limit theorem for {X_n}.

Proof. We have that
E[Z_{n+1}|F_n] = Σ_{j=0}^∞ E( E[X_{n+1+j}|F_{n+1}] | F_n )
= Σ_{j=0}^∞ E[X_{n+1+j}|F_n], by the tower rule (1.3.1),
= Σ_{j=0}^∞ E[X_{n+j}|F_n] − E[X_n|F_n]
= Z_n − X_n.
Therefore
X_n = Z_n − E[Z_{n+1}|F_n] = (Z_n − Z_{n+1}) + (Z_{n+1} − E[Z_{n+1}|F_n]).
If we define
η_{n+1} := Z_n − Z_{n+1} and ξ_{n+1} := Z_{n+1} − E[Z_{n+1}|F_n],
then we have
X_n = η_{n+1} + ξ_{n+1}.
By Remark 3.2.1, E[(Σ_{j=1}^n η_j)²] = o(n). It is easy to see that {ξ_n} is ergodic and square-integrable, so we only show that {ξ_n} is a martingale difference. By Lemma 1.3.2, we just need to show that E[ξ_{n+1}|F_n] = 0. This follows immediately from our definition of ξ:
E[ξ_{n+1}|F_n] = E[ Z_{n+1} − E[Z_{n+1}|F_n] | F_n ] = E[Z_{n+1}|F_n] − E[Z_{n+1}|F_n] = 0.
Therefore, by Theorem 3.2.1, we have a central limit theorem for {X_n}.

We now see under what conditions we can apply this lemma to the specific case of a function of an ergodic Markov chain. Let {X_n} be a stationary ergodic Markov chain, adapted to a filtration (F_n), with state space X, transition probability Π and invariant measure µ. Let P be

the stationary ergodic probability measure on (Ω, F) which describes the path of this Markov process. Let f be a square-integrable function with mean zero with respect to the invariant measure. We will show that, under further conditions, we have a central limit theorem for f(X_n). We use the method outlined above, following [15].

Theorem 3.2.2. Let f ∈ L²(X, B, µ) be such that ∫_X f(x) dµ(x) = 0. Suppose that there exists U ∈ L² such that [I − Π]U = f. Then
Σ_{j=1}^n f(X_j)/√n →^D Z,
where Z ~ N(0, σ²), with variance
σ² = E^{P_µ}[ (U(X₁) − U(X₀) + f(X₀))² ].

Proof. We want to show that Z_n := Σ_{j=0}^∞ E(f(X_{n+j})|F_n) is well-defined and square-integrable for all n ∈ N ∪ {0}. If Z₀ can be defined, then we can define Z_n using the shift operator T, via Z_n(ω) = Z₀(T^n ω). Thus we only need to show that Z₀ is well-defined and square-integrable. For each n = 0, 1, 2, …, define Q_n f by
(Q_n f)(x) := E(f(X_n) | X₀ = x) for all x ∈ X.
Then
(Q_n f)(X₀) = E(f(X_n)|F₀).
We claim that Q_n(Q_m f) = Q_{m+n} f, for any n, m ∈ N ∪ {0}. To prove this claim, let m, n ∈ N and let X̂_k be a random variable with the same distribution as X_k for each k ∈ N ∪ {0}. Then
(Q_n(Q_m f))(X₀) = E( Q_m f(X_n) | F₀ ) = E( E[ f(X̂_m) | X̂₀ = X_n ] | F₀ ).
Note that starting a Markov chain from state X_n and letting it run for time m is equivalent to considering a Markov chain which is in state X_n at time n and letting it run up to time m + n. Thus
E[ f(X̂_m) | X̂₀ = X_n ] = E[ f(X_{m+n}) | F_n ],
and so
(Q_n(Q_m f))(X₀) = E( E[ f(X_{m+n}) | F_n ] | F₀ )
= E[ f(X_{m+n}) | F₀ ], by the tower property (1.3.1),
= (Q_{m+n} f)(X₀),

as claimed. Therefore $Q_n f = Q_1(Q_{n-1} f) = Q_1^2(Q_{n-2} f) = \cdots = Q_1^n f$. Fix $x \in \mathcal{X}$. Then
$$(Q_1 f)(x) = \mathbb{E}(f(X_1) \mid X_0 = x) = \sum_{y \in \mathcal{X}} f(y)\,\Pi(x, y) = (\Pi f)(x).$$
So we have that $Q_1 = \Pi$ and thus, whenever the sum converges,
$$\sum_{j=0}^{\infty} \mathbb{E}(f(X_j) \mid \mathcal{F}_0) = \sum_{j=0}^{\infty} (Q_j f)(X_0) = \sum_{j=0}^{\infty} (Q_1^j f)(X_0) = \sum_{j=0}^{\infty} (\Pi^j f)(X_0).$$
Moreover, if the sum does converge, then $\sum_{j=0}^{\infty} (\Pi^j f)(X_0) = \big([I - \Pi]^{-1} f\big)(X_0)$, since
$$[I - \Pi] \sum_{j=0}^{\infty} \Pi^j f = \sum_{j=0}^{\infty} \Pi^j f - \sum_{j=0}^{\infty} \Pi^{j+1} f = \Pi^0 f = f.$$
As in the hypothesis of the theorem, suppose that $\exists\, U \in L^2$ such that $[I - \Pi]U = f$. Then
$$Z_0 = \sum_{j=0}^{\infty} \mathbb{E}(f(X_j) \mid \mathcal{F}_0) = \big([I - \Pi]^{-1} f\big)(X_0) = U(X_0) \qquad (3.2.8)$$
converges and we see that $Z_n \in L^2$ with
$$Z_n = \sum_{j=0}^{\infty} \mathbb{E}(f(X_{n+j}) \mid \mathcal{F}_n) = U(X_n), \qquad n = 0, 1, 2, \dots.$$
We now appeal to the previous lemma, which gives us that, under the assumptions of the theorem, we have a central limit theorem: $\frac{1}{\sqrt{n}} \sum_{j=1}^{n} f(X_j) \xrightarrow{D} Z$, where $Z \sim N(0, \sigma^2)$, for some $\sigma^2 > 0$.

To complete the proof of the theorem, we now need to calculate the variance $\sigma^2$. Using our notation from earlier, we have that $f(X_n) = \xi_{n+1} + \eta_{n+1}$, where $\xi_{n+1} = U(X_{n+1}) - U(X_n) + f(X_n)$ is a martingale difference, as shown in Lemma 3.2.3, and
$$\mathbb{E}\Big[\Big(\sum_{j=1}^{n} \eta_j\Big)^2\Big] = o(n). \qquad (3.2.9)$$

Then we have
$$\operatorname{Var}\Big(\frac{1}{\sqrt{n}} \sum_{j=1}^{n} f(X_j)\Big) = \frac{1}{n}\operatorname{Var}\Big[\sum_{j=1}^{n} \xi_{j+1} + \sum_{j=1}^{n} \eta_{j+1}\Big]$$
$$= \frac{1}{n}\Big[\operatorname{Var}\Big(\sum_{j=1}^{n} \xi_{j+1}\Big) + \operatorname{Var}\Big(\sum_{j=1}^{n} \eta_{j+1}\Big) + 2\operatorname{Cov}\Big(\sum_{i=1}^{n} \xi_{i+1}, \sum_{j=1}^{n} \eta_{j+1}\Big)\Big].$$
Note that all of the random variables $\xi_j, \eta_j$, $j = 1, 2, \dots$, have mean zero. Therefore
$$\operatorname{Var}\Big(\sum_{j=1}^{n} \eta_{j+1}\Big) = \mathbb{E}\Big[\Big(\sum_{j=1}^{n} \eta_{j+1}\Big)^2\Big] = o(n) \quad \text{by (3.2.9)},$$
and
$$\operatorname{Var}\Big(\sum_{j=1}^{n} \xi_{j+1}\Big) = \mathbb{E}\Big[\Big(\sum_{j=1}^{n} \xi_{j+1}\Big)^2\Big] = \sum_{j=1}^{n} \mathbb{E}\big[\xi_{j+1}^2\big],$$
since for any $i < j$,
$$\operatorname{Cov}[\xi_i, \xi_j] = \mathbb{E}[\xi_i \xi_j] = \mathbb{E}\big(\mathbb{E}[\xi_i \xi_j \mid \mathcal{F}_i]\big) = \mathbb{E}\big(\xi_i\, \mathbb{E}[\xi_j \mid \mathcal{F}_i]\big) = 0,$$
as $\{\xi_k\}$ is a martingale difference, and similarly for $i > j$. Then, by stationarity, we see that
$$\sum_{j=1}^{n} \mathbb{E}\big[\xi_{j+1}^2\big] = n\, \mathbb{E}\big[\xi_0^2\big].$$
Putting these together, we get
$$\operatorname{Var}\Big(\frac{1}{\sqrt{n}} \sum_{j=1}^{n} f(X_j)\Big) = \mathbb{E}\big[\xi_0^2\big] + \frac{o(n)}{n} + \frac{2}{n}\operatorname{Cov}\Big[\sum_{i=1}^{n} \xi_{i+1}, \sum_{j=1}^{n} \eta_{j+1}\Big].$$
By the Cauchy–Schwarz inequality for covariance (1.1.6), we have that
$$\Big|\operatorname{Cov}\Big[\sum_{i=1}^{n} \xi_{i+1}, \sum_{j=1}^{n} \eta_{j+1}\Big]\Big| \le \sqrt{\operatorname{Var}\Big(\sum_{i=1}^{n} \xi_{i+1}\Big)}\,\sqrt{\operatorname{Var}\Big(\sum_{j=1}^{n} \eta_{j+1}\Big)} = \sqrt{n\, \mathbb{E}[\xi_0^2]}\,\sqrt{o(n)} = o(n).$$

Hence
$$\operatorname{Var}\Big(\frac{1}{\sqrt{n}} \sum_{j=1}^{n} f(X_j)\Big) \to \mathbb{E}\big[\xi_0^2\big] = \mathbb{E}_{P_\mu}\big[(U(X_1) - U(X_0) + f(X_0))^2\big].$$
That is,
$$\sigma^2 = \mathbb{E}_{P_\mu}\big[(U(X_1) - U(X_0) + f(X_0))^2\big].$$

3.3 The CLT for Markov Chains II

We now restrict ourselves to Markov chains on a finite state space and present an alternative approach to proving a central limit theorem for stationary ergodic Markov chains. This method allows us to arrive at the result more directly, but we get a less general result here. What follows is adapted from handwritten notes by Bálint Tóth [13].

Let $\mathcal{X}$ be a finite set and let $(\Omega, \mathcal{F})$ be the space of sequences which take values in $\mathcal{X}$, with the product $\sigma$-field. Let $\{X_n\}$ be a stationary, irreducible, aperiodic Markov chain on $(\Omega, \mathcal{F})$, adapted to a filtration $(\mathcal{F}_n)$, with transition probabilities given by $\Pi$ and state space $\mathcal{X}$. Because we are working on a finite state space, we can apply Theorem 2.5.4. This tells us that $\{X_n\}$ is ergodic, since it is a stationary, irreducible and aperiodic Markov chain on a finite state space. Define $P$ to be the stationary ergodic probability measure on $(\Omega, \mathcal{F})$ which describes the time-evolution of the process $\{X_n\}$.

Let $f \in L^2(\mathcal{X}, \mathcal{B}, \mu)$ and suppose that $f$ has mean zero under the unique stationary distribution $\mu$; i.e. $\sum_{x \in \mathcal{X}} \mu(x) f(x) = 0$. Define
$$S_n := \sum_{k=0}^{n-1} f(X_k).$$
The main result of this section is the following central limit theorem:

Theorem 3.3.1. $\frac{S_n}{\sqrt{n}} \xrightarrow{D} Z$, where $Z \sim N(0, \sigma_1^2)$ for some $\sigma_1^2 > 0$.

To prove this, we are going to prove a central limit theorem for another quantity which approximates $S_n$, as in [13], and we will see that our desired result then follows. Set $X_0 = x_0$ deterministically. We are going to use the fact that the times between consecutive returns to $x_0$, and the behaviour of the chain in these time intervals, are i.i.d.. We define

several random variables which we are going to work with. Define the return times to $x_0$ inductively by
$$R_0 := 0, \qquad R_{n+1} := \min\{k > R_n : X_k = x_0\}.$$
Then $R_n$ is the $n$th return time to $x_0$ after time 0 for each $n \in \mathbb{N}$. Next we define the length of the time interval between returns as
$$T_n := R_n - R_{n-1}, \qquad n \in \mathbb{N},$$
and the sum of the function of the Markov chain in each of these time intervals by
$$Y_n := \sum_{k=R_{n-1}}^{R_n - 1} f(X_k).$$
By the Markov property (1.2.1), we see that $(Y_n, T_n)_{n \in \mathbb{N}}$ are i.i.d.. We also have that all exponential moments of $Y_n$ and $T_n$ are finite, as we are in a finite state space. Clearly $\mathbb{E}(Y_n) = 0$. Let us set $\sigma_0^2 = \mathbb{E}(Y_n^2)$ and $b = \mathbb{E}(T_n)$.

Let the total number of visits to $x_0$ (including the one at time 0) before time $n$ be given by
$$\nu_n := \min\{k : R_k \ge n\} \in \{0, 1, 2, \dots\}.$$
Equivalently, $\nu_n = \min\big\{k : \sum_{l=1}^{k} T_l \ge n\big\}$. Define $U_n := R_{\nu_n} = \min\{m \ge n : X_m = x_0\}$, so that $U_n$ is the time of the first return to $x_0$ after time $n$. Then
$$S_n = \sum_{k=0}^{U_n - 1} f(X_k) - \sum_{k=n}^{U_n - 1} f(X_k).$$

Remark 3.3.1. As stated in [13], we can show that $\sum_{k=n}^{U_n - 1} f(X_k)$ is stochastically bounded, by a random variable independent of $n$. The idea of the proof is that this sum is bounded by the sum of $|f|$ taken between two consecutive return times. We do not provide details of this here.

We also have that
$$S_{U_n} = \sum_{k=0}^{U_n - 1} f(X_k) = \sum_{k=1}^{\nu_n} Y_k = \bar{S}_{\nu_n},$$
where we define $\bar{S}_m := \sum_{k=1}^{m} Y_k$.
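The excursion structure above is easy to illustrate numerically. The following sketch is purely illustrative (the 3-state chain, the function $f$ and all tolerances are our own choices, not objects from the text): it simulates excursions of a doubly stochastic chain, for which the stationary law $\mu$ is uniform, and estimates $b = \mathbb{E}(T)$ and $\mathbb{E}(Y)$. By Kac's formula for return times, which reappears later in this section as $\frac{1}{\mu(x_0)} = b$, the estimate of $b$ should be close to $1/\mu(x_0) = 3$.

```python
import numpy as np

# A hypothetical doubly stochastic 3-state chain (illustrative only), so the
# stationary law mu is uniform and Kac's formula gives b = E(T) = 1/mu(x0) = 3.
Pi = np.array([[0.2, 0.5, 0.3],
               [0.5, 0.2, 0.3],
               [0.3, 0.3, 0.4]])
f = np.array([1.0, -2.0, 1.0])        # mean zero under the uniform law

rng = np.random.default_rng(0)
x0 = 0
x = x0
T_samples, Y_samples = [], []
t, y = 0, 0.0
while len(T_samples) < 20_000:
    y += f[x]                          # accumulate f over the current excursion
    x = rng.choice(3, p=Pi[x])
    t += 1
    if x == x0:                        # excursion ends on the return to x0
        T_samples.append(t)
        Y_samples.append(y)
        t, y = 0, 0.0

b_hat = np.mean(T_samples)             # estimate of b = E(T_1), close to 3
y_bar = np.mean(Y_samples)             # estimate of E(Y_1) = 0
```

With 20,000 simulated excursions the Monte Carlo error is well below the tolerances used here, so the estimates land close to their theoretical values.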

The proof of the main result will follow immediately from the central limit theorem below. We next turn our attention to proving this theorem, which is the main focus of [13].

Theorem 3.3.2. For $\bar{S}_m := \sum_{k=1}^{m} Y_k$,
$$\frac{\bar{S}_{\nu_n}}{\sqrt{n}} \xrightarrow{D} Z,$$
where $Z \sim N\big(0, \tfrac{\sigma_0^2}{b}\big)$.

Proof. Define a function $\psi : \mathbb{R} \to \mathbb{R}$ by
$$\exp\{\psi(\lambda)\} := \mathbb{E}(\exp\{\lambda Y_1\}).$$
We claim that
$$\psi(\lambda) = \tfrac{1}{2}\sigma_0^2\lambda^2 + O(\lambda^3) \quad \text{as } \lambda \to 0.$$
Note first that, by the series representation of the exponential function,
$$\exp\Big\{\tfrac{1}{2}\sigma_0^2\lambda^2 + O(\lambda^3)\Big\} = 1 + \Big[\tfrac{1}{2}\sigma_0^2\lambda^2 + O(\lambda^3)\Big] + \frac{1}{2!}\Big[\tfrac{1}{2}\sigma_0^2\lambda^2 + O(\lambda^3)\Big]^2 + \cdots = 1 + \tfrac{1}{2}\sigma_0^2\lambda^2 + O(\lambda^3),$$
as $\lambda \to 0$. Now
$$\exp\{\psi(\lambda)\} = \mathbb{E}(\exp\{\lambda Y_1\}) = \mathbb{E}\Big[1 + \lambda Y_1 + \tfrac{1}{2}\lambda^2 Y_1^2 + O(\lambda^3)\Big] = 1 + \tfrac{1}{2}\lambda^2\,\mathbb{E}(Y_1^2) + O(\lambda^3) = 1 + \tfrac{1}{2}\lambda^2\sigma_0^2 + O(\lambda^3) = \exp\Big\{\tfrac{1}{2}\sigma_0^2\lambda^2 + O(\lambda^3)\Big\},$$
and taking logarithms proves our claim.

Next we claim that $\exp\{\lambda\bar{S}_k - \psi(\lambda)k\}$ is a martingale. Since we have finite exponential moments, the following calculation is all that is needed to prove this:
$$\mathbb{E}\big[\exp\{\lambda\bar{S}_{k+1} - \psi(\lambda)(k+1)\} \mid \mathcal{F}_k\big] = \mathbb{E}\big[\exp\{\lambda\bar{S}_k - \psi(\lambda)k\}\exp\{\lambda Y_{k+1} - \psi(\lambda)\} \mid \mathcal{F}_k\big]$$
$$= \exp\{\lambda\bar{S}_k - \psi(\lambda)k\}\,\mathbb{E}\big[\exp\{\lambda Y_{k+1}\}\big]\exp\{-\psi(\lambda)\} = \exp\{\lambda\bar{S}_k - \psi(\lambda)k\}\exp\{\psi(\lambda)\}\exp\{-\psi(\lambda)\} = \exp\{\lambda\bar{S}_k - \psi(\lambda)k\}.$$
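The identity $\mathbb{E}[\exp\{\lambda\bar{S}_k - \psi(\lambda)k\}] = 1$ behind the exponential-martingale claim can be checked by Monte Carlo. In this sketch we use i.i.d. increments $Y_i = \pm 1$ with equal probability as a stand-in for the excursion sums (our own illustrative choice); then $\exp\{\psi(\lambda)\} = \cosh(\lambda)$:

```python
import numpy as np

# Stand-in increments Y_i = +/-1 with equal probability, so that
# exp{psi(lambda)} = E[e^{lambda Y}] = cosh(lambda).
rng = np.random.default_rng(1)
lam, k = 0.3, 10
psi = np.log(np.cosh(lam))

Y = rng.choice([-1.0, 1.0], size=(200_000, k))
S_k = Y.sum(axis=1)                        # 200000 samples of S_bar_k
m = np.exp(lam * S_k - k * psi).mean()     # Monte Carlo estimate of E[...] = 1
```

The estimate should sit within a few standard errors of 1 for any fixed $k$, reflecting the fact that the normalised exponential is a martingale started at 1.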

Note that $\mathbb{E}\big[\exp\big\{\lambda\frac{\bar{S}_{\nu_n}}{\sqrt{n}}\big\}\big]$ is the moment generating function of $\frac{\bar{S}_{\nu_n}}{\sqrt{n}}$, and $\exp\big\{\frac{\lambda^2\sigma_0^2}{2b}\big\}$ is the moment generating function of a normal random variable with mean zero and variance $\frac{\sigma_0^2}{b}$. We want to show that $\mathbb{E}\big[\exp\big\{\lambda\frac{\bar{S}_{\nu_n}}{\sqrt{n}}\big\}\big] \to \exp\big\{\frac{\lambda^2\sigma_0^2}{2b}\big\}$. Then Theorem 1.1.3 will give us the result of this theorem.

Let us fix $n$. Then $\nu_n$ is a stopping time and by the optional stopping theorem (Theorem 1.3.3), for any $\theta \in \mathbb{R}$,
$$\mathbb{E}\big[\exp\{\theta\bar{S}_{\nu_n} - \psi(\theta)\nu_n\}\big] = \mathbb{E}\big[\exp\{\theta\bar{S}_0 - 0\}\big] = e^0 = 1. \qquad (3.3.2)$$
Therefore
$$\mathbb{E}\Big[\exp\Big\{\frac{\lambda\bar{S}_{\nu_n}}{\sqrt{n}}\Big\}\Big] = \mathbb{E}\Big[\exp\Big\{\frac{\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\exp\Big\{\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\Big]$$
$$= \mathbb{E}\Big[\exp\Big\{\frac{\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\Big]\exp\Big\{\frac{\sigma_0^2\lambda^2}{2b}\Big\} + \mathbb{E}\Big[\exp\Big\{\frac{\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\Big(\exp\Big\{\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\} - \exp\Big\{\frac{\sigma_0^2\lambda^2}{2b}\Big\}\Big)\Big]$$
$$= \exp\Big\{\frac{\sigma_0^2\lambda^2}{2b}\Big\} + \mathbb{E}\Big[\exp\Big\{\frac{\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\Big(\exp\Big\{\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\} - \exp\Big\{\frac{\sigma_0^2\lambda^2}{2b}\Big\}\Big)\Big],$$
where the last line follows from (3.3.2) with $\theta = \frac{\lambda}{\sqrt{n}}$. Call
$$E_n := \mathbb{E}\Big[\exp\Big\{\frac{\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\Big(\exp\Big\{\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n\Big\} - \exp\Big\{\frac{\sigma_0^2\lambda^2}{2b}\Big\}\Big)\Big],$$
so that
$$\mathbb{E}\Big[\exp\Big\{\frac{\lambda\bar{S}_{\nu_n}}{\sqrt{n}}\Big\}\Big] = \exp\Big\{\frac{\sigma_0^2\lambda^2}{2b}\Big\} + E_n.$$
We will show that $E_n$ is an error term which tends to 0 as $n \to \infty$.

We can write
$$E_n = e^{\sigma_0^2\lambda^2/(2b)}\,\mathbb{E}\Big[\exp\Big\{\frac{\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \frac{1}{2}\psi\Big(\frac{2\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\exp\Big\{\Big[\frac{1}{2}\psi\Big(\frac{2\lambda}{\sqrt{n}}\Big) - \psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\Big]\nu_n\Big\}\Big(\exp\Big\{\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n - \frac{\sigma_0^2\lambda^2}{2b}\Big\} - 1\Big)\Big].$$
We are going to use the Cauchy–Schwarz inequality (1.1.5) to bound this error. Note that for any random variables $X, Y, Z \in L^2$ with $Y^2, Z^2 \in L^2$,
$$(\mathbb{E}[XYZ])^4 \le \big(\mathbb{E}[X^2]\big)^2\big(\mathbb{E}[Y^2Z^2]\big)^2 \quad \text{by Cauchy–Schwarz}$$
$$\le \big(\mathbb{E}[X^2]\big)^2\,\mathbb{E}\big[(Y^2)^2\big]\,\mathbb{E}\big[(Z^2)^2\big] \quad \text{applying Cauchy–Schwarz again}$$
$$= \big(\mathbb{E}[X^2]\big)^2\,\mathbb{E}[Y^4]\,\mathbb{E}[Z^4]. \qquad (3.3.3)$$
Define the following random variables:
$$E_{1,n} := \exp\Big\{\frac{\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \frac{1}{2}\psi\Big(\frac{2\lambda}{\sqrt{n}}\Big)\nu_n\Big\},$$
$$E_{2,n} := \exp\Big\{\Big[\frac{1}{2}\psi\Big(\frac{2\lambda}{\sqrt{n}}\Big) - \psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\Big]\nu_n\Big\},$$
$$E_{3,n} := \exp\Big\{\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n - \frac{\sigma_0^2\lambda^2}{2b}\Big\} - 1.$$
Then $E_n = e^{\sigma_0^2\lambda^2/(2b)}\,\mathbb{E}[E_{1,n}E_{2,n}E_{3,n}]$ and it is easy to check that we can apply the inequality (3.3.3) to get
$$E_n^4 \le e^{2\sigma_0^2\lambda^2/b}\,\big(\mathbb{E}[E_{1,n}^2]\big)^2\,\mathbb{E}[E_{2,n}^4]\,\mathbb{E}[E_{3,n}^4].$$
We can now look at each term in the product individually. The constant $e^{2\sigma_0^2\lambda^2/b}$ plays no role here.

Using (3.3.2), we see that
$$\mathbb{E}\big[E_{1,n}^2\big] = \mathbb{E}\Big[\exp\Big\{\frac{2\lambda}{\sqrt{n}}\bar{S}_{\nu_n} - \psi\Big(\frac{2\lambda}{\sqrt{n}}\Big)\nu_n\Big\}\Big] = 1.$$
In order to estimate the expectation of the other random variables in the error term, we claim that we have a strong law of large numbers for $\nu_n$:
$$\frac{\nu_n}{n} \to \frac{1}{b} \quad \text{a.s.} \qquad (3.3.5)$$
We prove this claim as follows. Since the $\{T_j\}$ are i.i.d.,
$$\frac{1}{J}\sum_{j=1}^{J} T_j \xrightarrow{J\to\infty} \mathbb{E}(T_1) = b \quad \text{a.s.},$$
by the strong law of large numbers for i.i.d. random variables. Note that we have the following equivalence, for any $J \in \mathbb{N}$:
$$\frac{1}{J}\sum_{j=1}^{J} T_j \ge \frac{n}{J} \iff \sum_{j=1}^{J} T_j \ge n \iff J \ge \nu_n.$$
Fix $\varepsilon > 0$ and define $J_n := \big\lceil\frac{n}{b+\varepsilon}\big\rceil$. Then $J_n \to \infty$, so
$$\frac{1}{J_n}\sum_{j=1}^{J_n} T_j \to b \quad \text{a.s.}$$
But $\frac{n}{J_n} \to b + \varepsilon > b$, so the event $\big\{\frac{1}{J_n}\sum_{j=1}^{J_n} T_j \ge \frac{n}{J_n}\big\} = \{J_n \ge \nu_n\}$ occurs for only finitely many $n$ almost surely. Therefore $\exists\, N \in \mathbb{N}$ such that $\forall n > N$, $\frac{\nu_n}{n} > \frac{1}{b+\varepsilon}$ a.s.. Hence
$$\liminf_{n\to\infty}\frac{\nu_n}{n} \ge \frac{1}{b+\varepsilon} \quad \text{a.s.}$$
Now define $\tilde{J}_n := \big\lfloor\frac{n}{b-\varepsilon}\big\rfloor$. Again we have
$$\frac{1}{\tilde{J}_n}\sum_{j=1}^{\tilde{J}_n} T_j \to b \quad \text{a.s.}$$

But $\frac{n}{\tilde{J}_n} \to b - \varepsilon < b$, so the event $\big\{\frac{1}{\tilde{J}_n}\sum_{j=1}^{\tilde{J}_n} T_j < \frac{n}{\tilde{J}_n}\big\} = \{\tilde{J}_n < \nu_n\}$ occurs for only finitely many $n$ almost surely. Therefore $\exists\, M \in \mathbb{N}$ such that $\forall n > M$, $\frac{\nu_n}{n} < \frac{1}{b-\varepsilon}$ a.s.. Hence
$$\limsup_{n\to\infty}\frac{\nu_n}{n} \le \frac{1}{b-\varepsilon} \quad \text{a.s.}$$
We now have
$$\frac{1}{b+\varepsilon} \le \liminf_{n\to\infty}\frac{\nu_n}{n} \le \limsup_{n\to\infty}\frac{\nu_n}{n} \le \frac{1}{b-\varepsilon} \quad \text{a.s.},$$
so by taking the limit $\varepsilon \to 0$, we have that the desired limit exists and we have proved our claim that
$$\lim_{n\to\infty}\frac{\nu_n}{n} = \frac{1}{b} \quad \text{a.s.}$$
We can now see that
$$\mathbb{E}\big[E_{2,n}^4\big] = \mathbb{E}\Big[\exp\Big\{\Big[2\psi\Big(\frac{2\lambda}{\sqrt{n}}\Big) - 4\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\Big]\nu_n\Big\}\Big] \le \mathbb{E}\Big[\exp\Big\{2\Big(\frac{1}{2}\sigma_0^2\frac{4\lambda^2}{n} + O\Big(\frac{8\lambda^3}{n^{3/2}}\Big)\Big)\nu_n\Big\}\Big] \to e^{4\lambda^2\sigma_0^2/b},$$
by the strong law of large numbers for $\nu_n$ (3.3.5) and bounded convergence (Theorem 1.1.5); here we used that $\psi \ge 0$ (by Jensen's inequality, since $\mathbb{E}(Y_1) = 0$) and that $\nu_n \le n$ keeps the exponent uniformly bounded. By the same reasoning,
$$\mathbb{E}\big[E_{3,n}^4\big] = \mathbb{E}\Big[\Big(\exp\Big\{\psi\Big(\frac{\lambda}{\sqrt{n}}\Big)\nu_n - \frac{\lambda^2\sigma_0^2}{2b}\Big\} - 1\Big)^4\Big] = \mathbb{E}\Big[\Big(\exp\Big\{\Big(\frac{1}{2}\sigma_0^2\frac{\lambda^2}{n} + O\Big(\frac{\lambda^3}{n^{3/2}}\Big)\Big)\nu_n - \frac{\lambda^2\sigma_0^2}{2b}\Big\} - 1\Big)^4\Big] \to 0.$$
So
$$0 \le \limsup_{n\to\infty} E_n^4 \le e^{2\sigma_0^2\lambda^2/b}\cdot 1\cdot e^{4\lambda^2\sigma_0^2/b}\cdot 0 = 0.$$
Thus $E_n \to 0$ as $n \to \infty$, and so we have
$$\mathbb{E}\Big[\exp\Big\{\frac{\lambda\bar{S}_{\nu_n}}{\sqrt{n}}\Big\}\Big] \to \exp\Big\{\frac{\lambda^2\sigma_0^2}{2b}\Big\}.$$
Appealing to Theorem 1.1.3, we have that $\frac{\bar{S}_{\nu_n}}{\sqrt{n}}$ converges in distribution to a mean zero normal random variable with variance $\frac{\sigma_0^2}{b}$, as required.
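The strong law $\nu_n/n \to 1/b$ used in this proof is easy to see numerically for a renewal sequence. In the following sketch the holding times are geometric with mean $b = 4$, an arbitrary illustrative stand-in for the return times $T_j$:

```python
import numpy as np

# i.i.d. holding times T_j >= 1, geometric with mean b = 4 (illustrative).
rng = np.random.default_rng(2)
b = 4.0
n = 200_000
T = rng.geometric(1.0 / b, size=n)
R = np.cumsum(T)                       # R[k-1] is the k-th renewal time R_k

# nu_n = min{k : R_k >= n}; searchsorted returns the 0-based index of the
# first entry of R that is >= n, so the 1-based k is that index plus one.
nu_n = int(np.searchsorted(R, n)) + 1
ratio = nu_n / n                       # should be close to 1/b = 0.25
```

The fluctuation of $\nu_n/n$ around $1/b$ is of order $n^{-1/2}$, so with $n = 200{,}000$ the ratio is within a fraction of a percent of $0.25$.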

The proof of the main theorem is now straightforward.

Proof of Theorem 3.3.1. Let $Z$ be a random variable such that $Z \sim N\big(0, \frac{\sigma_0^2}{b}\big)$. We have that
$$\frac{S_n}{\sqrt{n}} = \frac{\bar{S}_{\nu_n}}{\sqrt{n}} - \frac{1}{\sqrt{n}}\sum_{k=n}^{U_n-1} f(X_k),$$
by the definitions of $S_n$ and $\bar{S}_{\nu_n}$. But by Remark 3.3.1, the sum in the second term is stochastically bounded by a quantity which is independent of $n$, so
$$\frac{1}{\sqrt{n}}\sum_{k=n}^{U_n-1} f(X_k) \xrightarrow{D} 0.$$
Also, by the previous theorem, $\frac{\bar{S}_{\nu_n}}{\sqrt{n}} \xrightarrow{D} Z$. Hence we have $\frac{S_n}{\sqrt{n}} \xrightarrow{D} Z$, as required.

We have proved that we have a central limit theorem and we now wish to calculate the variance $\frac{\sigma_0^2}{b}$ of the limiting distribution, again following Tóth in [13]. The following remark is a key observation to facilitate the calculation in [13].

Remark 3.3.2. Since $f$ has mean zero, $\exists\, g$ such that $f = (I - \Pi)g$. In fact, $g = \sum_{n=0}^{\infty} \Pi^n f$.

Proof. Suppose that $g = \sum_{n=0}^{\infty} \Pi^n f$ converges. Then
$$(I - \Pi)g = (I - \Pi)\sum_{n=0}^{\infty}\Pi^n f = \sum_{n=0}^{\infty}\Pi^n f - \sum_{n=0}^{\infty}\Pi^{n+1} f = \Pi^0 f = f.$$
So we just need to show the convergence of the infinite sum. However, it is well-known that, when $\Pi$ is the transition matrix of an irreducible aperiodic Markov chain on a finite state space, $\Pi^n f$ converges exponentially fast (to the mean of $f$ under $\mu$, which is zero here), so the infinite sum converges.

Theorem 3.3.3. If we define an inner product on the space of functions on $\mathcal{X}$ by
$$\langle a_1, a_2\rangle := \sum_{x\in\mathcal{X}} a_1(x)a_2(x)\mu(x),$$

for any functions $a_1, a_2$, then
$$\frac{\sigma_0^2}{b} = 2\langle f, g\rangle - \langle f, f\rangle,$$
where $f = (I - \Pi)g$.

Proof. By the above remark,
$$S_n = \sum_{k=0}^{n-1} f(X_k) = \sum_{k=0}^{n-1}\big[g(X_k) - \Pi g(X_k)\big] = g(X_0) - g(X_n) + \sum_{k=1}^{n}\big[g(X_k) - \Pi g(X_{k-1})\big].$$
Define
$$M_n := \sum_{k=1}^{n}\big[g(X_k) - \Pi g(X_{k-1})\big].$$
We claim that $M_n$ is a martingale. To prove this we note that $\mathbb{E}(g(X_{n+1}) \mid \mathcal{F}_n) = \Pi g(X_n)$, by definition of the transition probability $\Pi$. Thus
$$\mathbb{E}(M_{n+1} \mid \mathcal{F}_n) = \mathbb{E}\Big(\sum_{k=1}^{n+1}\big[g(X_k) - \Pi g(X_{k-1})\big]\,\Big|\,\mathcal{F}_n\Big) = \mathbb{E}\Big(\sum_{k=1}^{n}\big[g(X_k) - \Pi g(X_{k-1})\big]\,\Big|\,\mathcal{F}_n\Big) + \mathbb{E}(g(X_{n+1}) \mid \mathcal{F}_n) - \mathbb{E}(\Pi g(X_n) \mid \mathcal{F}_n)$$
$$= M_n + \Pi g(X_n) - \Pi g(X_n) = M_n,$$
and so $M_n$ is a martingale.

Let $T = T_1$. Then
$$S_T = \sum_{k=0}^{T-1} f(X_k) = \sum_{k=0}^{R_1-1} f(X_k) = Y_1.$$
$T$ is a stopping time and $S_T = g(X_0) - g(X_T) + M_T = g(x_0) - g(x_0) + M_T = M_T$. So by optional stopping (Theorem 1.3.3),
$$\mathbb{E}(S_T) = \mathbb{E}(M_T) = \mathbb{E}(M_0) = 0.$$
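The construction $g = \sum_{n\ge0}\Pi^n f$ from the remark above can be carried out numerically. The chain below is a hypothetical 3-state example of our own (not from the text); the terms $\Pi^n f$ decay geometrically, and the resulting $g$ satisfies $(I - \Pi)g = f$:

```python
import numpy as np

# Hypothetical irreducible aperiodic 3-state chain (illustrative only).
Pi = np.array([[0.1, 0.6, 0.3],
               [0.4, 0.2, 0.4],
               [0.3, 0.5, 0.2]])

# Stationary distribution: normalised left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(Pi.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu = mu / mu.sum()

f = np.array([2.0, -1.0, 0.5])
f = f - (mu @ f)                      # centre f so it has mean zero under mu

# g = sum_n Pi^n f; the terms Pi^n f tend to 0 geometrically because f has
# mean zero under mu and the chain mixes, so a truncated sum suffices.
g = np.zeros(3)
term = f.copy()
for _ in range(500):
    g += term
    term = Pi @ term
```

After 500 terms the remainder is far below machine precision, so the truncated sum is an exact solution of the Poisson equation for numerical purposes.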

Now define another martingale
$$N_n := M_n^2 - \sum_{k=0}^{n-1}\big[\Pi g^2 - (\Pi g)^2\big](X_k).$$
We confirm that this is indeed a martingale by the following calculations. By definition of $M_n$ and $N_n$,
$$\mathbb{E}(N_{n+1} \mid \mathcal{F}_n) - N_n = \mathbb{E}\big(M_{n+1}^2 \mid \mathcal{F}_n\big) - M_n^2 - \Pi g^2(X_n) + \big[\Pi g(X_n)\big]^2.$$
Now note that
$$\mathbb{E}\big(M_{n+1}^2 \mid \mathcal{F}_n\big) - M_n^2 = \mathbb{E}\big(M_{n+1}^2 - M_n^2 \mid \mathcal{F}_n\big) = \mathbb{E}\Big(\big[(M_{n+1} - M_n) + M_n\big]^2 - M_n^2\,\Big|\,\mathcal{F}_n\Big)$$
$$= \mathbb{E}\big((M_{n+1} - M_n)^2 \mid \mathcal{F}_n\big) + 2\,\mathbb{E}\big((M_{n+1} - M_n)M_n \mid \mathcal{F}_n\big) = \mathbb{E}\big((M_{n+1} - M_n)^2 \mid \mathcal{F}_n\big) + 2M_n\big(\mathbb{E}(M_{n+1} \mid \mathcal{F}_n) - M_n\big) = \mathbb{E}\big((M_{n+1} - M_n)^2 \mid \mathcal{F}_n\big),$$
since $M_n$ is a martingale. We then see that $\mathbb{E}(N_{n+1} \mid \mathcal{F}_n) - N_n = 0$, since
$$\mathbb{E}\big((M_{n+1} - M_n)^2 \mid \mathcal{F}_n\big) = \mathbb{E}\big(\big[g(X_{n+1}) - \Pi g(X_n)\big]^2 \mid \mathcal{F}_n\big) = \mathbb{E}\big(g^2(X_{n+1}) \mid \mathcal{F}_n\big) - 2\Pi g(X_n)\,\mathbb{E}\big(g(X_{n+1}) \mid \mathcal{F}_n\big) + \big[\Pi g(X_n)\big]^2$$
$$= \Pi g^2(X_n) - 2\Pi g(X_n)\Pi g(X_n) + \big[\Pi g(X_n)\big]^2 = \Pi g^2(X_n) - \big[\Pi g(X_n)\big]^2.$$
Thus $N_n$ is a martingale, as claimed. Using this we can calculate
$$\mathbb{E}(S_T^2) = \mathbb{E}(M_T^2) = \mathbb{E}\Big(N_T + \sum_{k=0}^{T-1}\big[\Pi g^2 - (\Pi g)^2\big](X_k)\Big) = \mathbb{E}(N_0) + \mathbb{E}\Big(\sum_{k=0}^{T-1}\big[\Pi g^2 - (\Pi g)^2\big](X_k)\Big) \quad \text{by optional stopping (Theorem 1.3.3)}$$
$$= \mathbb{E}\Big(\sum_{k=0}^{T-1}\big[\Pi g^2 - (\Pi g)^2\big](X_k)\Big).$$

We claim that
$$\mathbb{E}\Big(\sum_{k=0}^{T-1}\big[\Pi g^2 - (\Pi g)^2\big](X_k)\Big) = \sum_{x\in\mathcal{X}}\frac{\mu(x)}{\mu(x_0)}\big[\Pi g^2(x) - (\Pi g(x))^2\big].$$
Define
$$F(x) := \mathbb{E}\Big(\sum_{n=1}^{T}\mathbf{1}\{X_n = x\}\Big).$$
We will show that $F(x) = \sum_{y\in\mathcal{X}} F(y)\Pi(y, x)$. Then, by uniqueness of the stationary distribution (Theorem 1.2.1), we must have $F = C\mu$ for some constant $C$. We have that
$$F(x) = \mathbb{E}\Big(\sum_{n=1}^{\infty}\mathbf{1}\{X_n = x;\, T \ge n\}\Big) = \sum_{n=1}^{\infty}\mathbb{P}(X_n = x;\, T \ge n) = \sum_{n=1}^{\infty}\sum_{y}\mathbb{P}(X_n = x;\, X_{n-1} = y;\, T \ge n)$$
$$= \Pi(x_0, x) + \sum_{n=2}^{\infty}\sum_{y\ne x_0}\mathbb{P}(X_n = x;\, X_{n-1} = y;\, T \ge n) = \Pi(x_0, x) + \sum_{n=2}^{\infty}\sum_{y\ne x_0}\mathbb{P}(X_n = x;\, X_{n-1} = y;\, T \ge n-1),$$
where the $n = 1$ term equals $\Pi(x_0, x)$ because $X_0 = x_0$, and for $n \ge 2$ the event $\{T \ge n\}$ forces $X_{n-1} = y \ne x_0$. The last equality can be explained by considering the following two cases. Suppose first that $x = x_0$. Then
$$\{X_n = x;\, X_{n-1} = y;\, T \ge n\} = \{X_n = x;\, X_{n-1} = y;\, T = n\} = \{X_n = x;\, X_{n-1} = y;\, T \ge n-1\} \quad \text{since } y \ne x_0.$$
Now suppose $x \ne x_0$. Then clearly
$$\{X_n = x;\, X_{n-1} = y;\, T \ge n\} \subseteq \{X_n = x;\, X_{n-1} = y;\, T \ge n-1\}$$
and, since $y \ne x_0$,
$$\{X_n = x;\, X_{n-1} = y;\, T \ge n-1\} \subseteq \{X_n = x;\, X_{n-1} = y;\, T \ge n\},$$
giving the required equality. We next note that
$$\mathbb{P}(X_n = x \mid X_{n-1} = y;\, T \ge n-1) = \Pi(y, x).$$

Then we have
$$F(x) = \Pi(x_0, x) + \sum_{n=2}^{\infty}\sum_{y\ne x_0}\mathbb{P}(X_n = x;\, X_{n-1} = y;\, T \ge n-1)$$
$$= \Pi(x_0, x) + \sum_{n=2}^{\infty}\sum_{y\ne x_0}\mathbb{P}(X_n = x \mid X_{n-1} = y;\, T \ge n-1)\,\mathbb{P}(X_{n-1} = y;\, T \ge n-1)$$
$$= \Pi(x_0, x) + \sum_{y\ne x_0}\Pi(y, x)\sum_{n=2}^{\infty}\mathbb{P}(X_{n-1} = y;\, T \ge n-1) = \Pi(x_0, x) + \sum_{y\ne x_0}\Pi(y, x)\sum_{n=1}^{\infty}\mathbb{P}(X_n = y;\, T \ge n).$$
But, since
$$\sum_{n=1}^{\infty}\mathbb{P}(X_n = x_0;\, T \ge n) = \sum_{n=1}^{\infty}\mathbb{P}(T = n) = 1,$$
we get that
$$F(x) = \Pi(x_0, x)\sum_{n=1}^{\infty}\mathbb{P}(X_n = x_0;\, T \ge n) + \sum_{y\ne x_0}\Pi(y, x)\sum_{n=1}^{\infty}\mathbb{P}(X_n = y;\, T \ge n) = \sum_{y\in\mathcal{X}}\Pi(y, x)F(y).$$
Therefore $\exists\, C \in \mathbb{R}$ such that $F(x) = C\mu(x)$, $\forall x \in \mathcal{X}$, by uniqueness of the stationary distribution. We have that $F(x_0) = 1$, so $C = \frac{1}{\mu(x_0)}$ and
$$F(x) = \frac{\mu(x)}{\mu(x_0)} \quad \text{for all } x \in \mathcal{X}.$$

Our claim follows by noting that $\sum_{x\in\mathcal{X}}\mathbf{1}\{X_k = x\} = 1$ for any $k$. This gives
$$\mathbb{E}\Big(\sum_{k=0}^{T-1}\big[\Pi g^2 - (\Pi g)^2\big](X_k)\Big) = \mathbb{E}\Big(\sum_{k=0}^{T-1}\sum_{x\in\mathcal{X}}\mathbf{1}\{X_k = x\}\big[\Pi g^2 - (\Pi g)^2\big](X_k)\Big) = \sum_{x\in\mathcal{X}}\big[\Pi g^2(x) - (\Pi g(x))^2\big]\,\mathbb{E}\Big(\sum_{k=0}^{T-1}\mathbf{1}\{X_k = x\}\Big)$$
$$= \sum_{x\in\mathcal{X}}\big[\Pi g^2(x) - (\Pi g(x))^2\big]F(x) = \sum_{x\in\mathcal{X}}\frac{\mu(x)}{\mu(x_0)}\big[\Pi g^2(x) - (\Pi g(x))^2\big],$$
where we have used that $\sum_{k=0}^{T-1}\mathbf{1}\{X_k = x\} = \sum_{n=1}^{T}\mathbf{1}\{X_n = x\}$, because $X_0 = X_T = x_0$. Next, we calculate
$$\sum_{x\in\mathcal{X}}\mu(x)\,\Pi g^2(x) = \sum_{y\in\mathcal{X}}\sum_{x\in\mathcal{X}}\mu(x)\Pi(x, y)g^2(y) = \sum_{y\in\mathcal{X}}\mu(y)g^2(y) \quad \text{by stationarity}$$
$$= \langle g, g\rangle \quad \text{by definition of the inner product.}$$
If we take $\|\cdot\|$ to be the norm induced by the inner product $\langle\cdot,\cdot\rangle$, then we have
$$\sum_{x\in\mathcal{X}}\frac{\mu(x)}{\mu(x_0)}\big[\Pi g^2(x) - (\Pi g(x))^2\big] = \frac{1}{\mu(x_0)}\big(\|g\|^2 - \|\Pi g\|^2\big).$$
We now see that
$$\frac{1}{\mu(x_0)} = \sum_{x\in\mathcal{X}}\frac{\mu(x)}{\mu(x_0)} = \sum_{x\in\mathcal{X}}F(x) = \sum_{x\in\mathcal{X}}\mathbb{E}\Big(\sum_{n=1}^{T}\mathbf{1}\{X_n = x\}\Big) = \mathbb{E}\Big(\sum_{n=1}^{T}\sum_{x\in\mathcal{X}}\mathbf{1}\{X_n = x\}\Big) = \mathbb{E}(T) = b.$$
So we have shown that
$$\mathbb{E}(S_T^2) = \mathbb{E}\Big(\sum_{k=0}^{T-1}\big[\Pi g^2 - (\Pi g)^2\big](X_k)\Big) = \sum_{x\in\mathcal{X}}\frac{\mu(x)}{\mu(x_0)}\big[\Pi g^2(x) - (\Pi g(x))^2\big] = b\,\big(\|g\|^2 - \|\Pi g\|^2\big).$$

In fact,
$$\mathbb{E}(S_T^2) = b\,\sigma_1^2, \quad \text{where} \quad \sigma_1^2 := 2\langle f, g\rangle - \langle f, f\rangle.$$
We see this by the following calculation, noting that $f = g - \Pi g$, so that $\Pi g = g - f$:
$$\|g\|^2 - \|\Pi g\|^2 = \langle g, g\rangle - \langle\Pi g, \Pi g\rangle = \langle g - \Pi g, g\rangle + \langle\Pi g, g\rangle - \langle\Pi g, \Pi g\rangle = \langle f, g\rangle + \langle\Pi g, g - \Pi g\rangle$$
$$= \langle f, g\rangle + \langle g - f, f\rangle = \langle f, g\rangle + \langle g, f\rangle - \langle f, f\rangle = 2\langle f, g\rangle - \langle f, f\rangle.$$
But
$$\sigma_0^2 = \mathbb{E}(Y_1^2) = \mathbb{E}(S_T^2) = b\,\sigma_1^2.$$
So
$$\frac{\sigma_0^2}{b} = \sigma_1^2,$$
as required.

We have shown in Theorems 3.3.1 and 3.3.3 that $\frac{S_n}{\sqrt{n}} \xrightarrow{D} Z$, where $Z \sim N(0, \sigma_1^2)$, and $\sigma_1^2 = 2\langle f, g\rangle - \langle f, f\rangle$.

We have now proved a central limit theorem for Markov chains via two different methods, under different conditions, one following Varadhan in [15] and the other Tóth in [13]. In each case we derived a formula for the variance of the limiting distribution. We now provide a calculation to check that the two expressions for the variance agree, in the case where both theorems are valid.

Remark 3.3.3. Take $\{X_n\}$ to be a stationary, irreducible, aperiodic Markov chain with finite state space $\mathcal{X}$, and let $f \in L^2$ have mean zero under the unique stationary distribution $\mu$. Then we have that $\exists\, g$ such that $f = (I - \Pi)g$, and $\frac{1}{\sqrt{n}}\sum_{j=1}^{n} f(X_j) \xrightarrow{D} Z$, where Theorem 3.2.2 asserts that $Z \sim N(0, \sigma^2)$, with
$$\sigma^2 = \mathbb{E}_{P_\mu}\big[(g(X_1) - g(X_0) + f(X_0))^2\big],$$

and Theorems 3.3.1 and 3.3.3 assert that $Z \sim N(0, \sigma_1^2)$, with
$$\sigma_1^2 = 2\langle f, g\rangle - \langle f, f\rangle.$$
We claim that $\sigma^2 = \sigma_1^2$.

Proof. To prove this equality, we express both $\sigma^2$ and $\sigma_1^2$ in terms of $g$ only and we will see that we arrive at the same expression. Let us first look at $\sigma^2$. We see that
$$\sigma^2 = \sum_{x,y\in\mathcal{X}}\mu(x)\Pi(x, y)\big[g(y) - g(x) + f(x)\big]^2$$
$$= \sum_{y\in\mathcal{X}}g(y)^2\sum_{x\in\mathcal{X}}\mu(x)\Pi(x, y) + \sum_{x\in\mathcal{X}}\mu(x)\big[f(x)^2 + g(x)^2 - 2f(x)g(x)\big]\sum_{y\in\mathcal{X}}\Pi(x, y) + 2\sum_{x,y\in\mathcal{X}}\mu(x)\Pi(x, y)g(y)\big[f(x) - g(x)\big]$$
$$= \sum_{y\in\mathcal{X}}\mu(y)g(y)^2 + \sum_{x\in\mathcal{X}}\mu(x)\big[f(x)^2 + g(x)^2 - 2f(x)g(x)\big] + 2\sum_{x,y\in\mathcal{X}}\mu(x)\Pi(x, y)g(y)\big[f(x) - g(x)\big]. \qquad (3.3.6)$$
Now we use the relation $f = (I - \Pi)g$ to find that
$$\sum_{x\in\mathcal{X}}\mu(x)\big[f(x)^2 + g(x)^2 - 2f(x)g(x)\big] = \sum_{x\in\mathcal{X}}\mu(x)\Big[\Big(g(x) - \sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\Big)^2 + g(x)^2 - 2\Big(g(x) - \sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\Big)g(x)\Big]$$
$$= \sum_{x\in\mathcal{X}}\mu(x)\Big[g(x)^2 - 2g(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y) + \sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z) + g(x)^2 - 2g(x)^2 + 2g(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\Big]$$
$$= \sum_{x\in\mathcal{X}}\mu(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z)$$

and
$$2\sum_{x,y\in\mathcal{X}}\mu(x)\Pi(x, y)g(y)\big[f(x) - g(x)\big] = 2\sum_{x,y\in\mathcal{X}}\mu(x)\Pi(x, y)g(y)\Big[-\sum_{z\in\mathcal{X}}\Pi(x, z)g(z)\Big] = -2\sum_{x\in\mathcal{X}}\mu(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z).$$
Substituting these expressions into (3.3.6), we get
$$\sigma^2 = \sum_{x\in\mathcal{X}}\mu(x)g(x)^2 + \sum_{x\in\mathcal{X}}\mu(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z) - 2\sum_{x\in\mathcal{X}}\mu(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z)$$
$$= \sum_{x\in\mathcal{X}}\mu(x)g(x)^2 - \sum_{x\in\mathcal{X}}\mu(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z).$$
We now consider $\sigma_1^2$. Recalling the definition $\langle a_1, a_2\rangle = \sum_{x\in\mathcal{X}}a_1(x)a_2(x)\mu(x)$, we have
$$\sigma_1^2 = 2\sum_{x\in\mathcal{X}}f(x)g(x)\mu(x) - \sum_{x\in\mathcal{X}}f(x)^2\mu(x)$$
$$= 2\sum_{x\in\mathcal{X}}g(x)\Big(g(x) - \sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\Big)\mu(x) - \sum_{x\in\mathcal{X}}\Big(g(x) - \sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\Big)^2\mu(x)$$
$$= 2\sum_{x\in\mathcal{X}}\mu(x)g(x)^2 - 2\sum_{x,y\in\mathcal{X}}\mu(x)\Pi(x, y)g(x)g(y) - \sum_{x\in\mathcal{X}}\mu(x)g(x)^2 + 2\sum_{x,y\in\mathcal{X}}\mu(x)\Pi(x, y)g(x)g(y) - \sum_{x\in\mathcal{X}}\mu(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z)$$
$$= \sum_{x\in\mathcal{X}}\mu(x)g(x)^2 - \sum_{x\in\mathcal{X}}\mu(x)\sum_{y\in\mathcal{X}}\Pi(x, y)g(y)\sum_{z\in\mathcal{X}}\Pi(x, z)g(z).$$
Thus we have verified that $\sigma^2 = \sigma_1^2$.
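The equality $\sigma^2 = \sigma_1^2$ verified algebraically here can also be spot-checked numerically. In this sketch (the 3-state chain and the observable $f$ are our own illustrative choices) we solve $f = (I - \Pi)g$ with a least-squares solve — the matrix $I - \Pi$ is singular, but $f$ lies in its range because $f$ is orthogonal to $\mu$ — and then evaluate both variance formulas:

```python
import numpy as np

# Hypothetical 3-state chain and observable (illustrative only).
Pi = np.array([[0.2, 0.5, 0.3],
               [0.6, 0.1, 0.3],
               [0.3, 0.4, 0.3]])
w, v = np.linalg.eig(Pi.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1))])
mu = mu / mu.sum()

f = np.array([1.5, -0.5, 2.0])
f = f - (mu @ f)                      # mean zero under mu

# Solve the Poisson equation f = (I - Pi) g (exact here, since f is in the
# range of the singular matrix I - Pi).
g, *_ = np.linalg.lstsq(np.eye(3) - Pi, f, rcond=None)

# sigma^2 = E_mu[(g(X_1) - g(X_0) + f(X_0))^2]: average over one transition
# started from stationarity.
sigma2 = sum(mu[x] * Pi[x, y] * (g[y] - g[x] + f[x]) ** 2
             for x in range(3) for y in range(3))

# sigma_1^2 = 2<f,g> - <f,f> with <a1,a2> = sum_x a1(x) a2(x) mu(x).
sigma1_sq = 2 * float((f * g * mu).sum()) - float((f * f * mu).sum())
```

Both quantities are invariant under shifting $g$ by a constant, so any solution of the Poisson equation gives the same answer.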

Chapter 4

Applications of the Markov Chain CLT

In this chapter, we are going to give some examples of how one can apply the theory which we have studied in this report.

4.1 Simple Random Walks on a Torus

Our first two examples concern a simple random walk. This process represents a very simple physical situation: we have a particle which jumps either to the left or right in each time step with given probabilities. However, we would not be able to say anything about this using the well-known central limit theorem for i.i.d. random variables (Theorem 3 of Chapter III, Section 3 in [11]). Random walks are studied in many texts, and more discussion of this topic can be found in [3] and [6], for example. We adapt some of the definitions and results from Section 3.9 of [6], where simple random walks on the integers are treated.

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Let $K \in \mathbb{N}$ and consider a torus with $K$ sites; i.e. a line in one dimension with $K$ discrete points labelled $0, 1, \dots, K-1$ such that site $K-1$ neighbours site 0, as shown in Figure 4.1.

4.1.1 The simple symmetric random walk

We consider a simple symmetric random walk $\{X_n\}$ on the torus, which we define as follows, adapting Grimmett and Stirzaker's definition of a simple symmetric random walk on $\mathbb{Z}$ in [6]. Let $Z_1, Z_2, \dots$ be a sequence of i.i.d. random variables such that $\mathbb{P}(Z_i = \pm 1) = \frac{1}{2}$. Let $X_0 = 0$ and, for $n \in \mathbb{N}$, define
$$X_n = \sum_{i=1}^{n} Z_i \mod K.$$

It is easy to see that $\{X_n\}$ is a Markov chain, by adapting an argument from Section 3.1 of [6], and that the chain has transition matrix $\Pi$, whose entries are given by $\Pi_{i,j} = \Pi(i, j)$, with
$$\Pi(i, i+1) = \Pi(i, i-1) = \tfrac{1}{2} \quad \text{for } i = 1, 2, \dots, K-2,$$
$$\Pi(0, 1) = \Pi(0, K-1) = \tfrac{1}{2}, \qquad \Pi(K-1, K-2) = \Pi(K-1, 0) = \tfrac{1}{2},$$
and all other entries zero.

The physical interpretation of this random walk is that we consider a particle constrained to jump between $K$ sites on a torus. The particle starts in state 0 and in each discrete time step the particle jumps one site to the left or one site to the right with equal probability. We can see this in Figure 4.1, with $p = q = \frac{1}{2}$.

Figure 4.1: We have created a figure to demonstrate a simple random walk on a torus: the particle in red starts at site 0 and, in each time step, it jumps to the right with probability $p$, or to the left with probability $q$, where $p + q = 1$. The case $p = q = \frac{1}{2}$ corresponds to the simple symmetric random walk in Section 4.1.1 and the case $p \ne q$ corresponds to an asymmetric random walk, which is treated in Section 4.1.2.

Example 4.1.1. We are going to consider the local time at zero of the Markov chain up to time $n$, which we define as
$$T_K(n) := \sum_{k=0}^{n-1}\mathbf{1}_0(X_k).$$
The local time measures the amount of time the Markov chain spends in state 0 up to time $n$.
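The transition matrix just described can be assembled and sanity-checked in a few lines. The value $K = 7$ below is an arbitrary illustrative choice; note that an odd $K$ makes the walk aperiodic, whereas for even $K$ the walk has period 2:

```python
import numpy as np

K = 7  # illustrative number of sites; odd, so the walk is aperiodic

# Transition matrix of the simple symmetric random walk on the K-site torus.
Pi = np.zeros((K, K))
for i in range(K):
    Pi[i, (i + 1) % K] = 0.5          # jump one site to the right
    Pi[i, (i - 1) % K] = 0.5          # jump one site to the left

# Pi is doubly stochastic, so the uniform distribution is stationary. The
# centred local-time observable 1_0 - 1/K then has mean zero under it, which
# is the mean-zero condition needed to apply the CLT of Chapter 3.
uniform = np.full(K, 1.0 / K)
f = np.eye(K)[0] - 1.0 / K
```

Each column of $\Pi$ receives exactly two entries of $\tfrac12$, which is why the uniform law is preserved regardless of $K$.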


More information

Introduction to Extreme Value Theory Laurens de Haan, ISM Japan, Erasmus University Rotterdam, NL University of Lisbon, PT

Introduction to Extreme Value Theory Laurens de Haan, ISM Japan, Erasmus University Rotterdam, NL University of Lisbon, PT Itroductio to Extreme Value Theory Laures de Haa, ISM Japa, 202 Itroductio to Extreme Value Theory Laures de Haa Erasmus Uiversity Rotterdam, NL Uiversity of Lisbo, PT Itroductio to Extreme Value Theory

More information

Discrete Mathematics for CS Spring 2007 Luca Trevisan Lecture 22

Discrete Mathematics for CS Spring 2007 Luca Trevisan Lecture 22 CS 70 Discrete Mathematics for CS Sprig 2007 Luca Trevisa Lecture 22 Aother Importat Distributio The Geometric Distributio Questio: A biased coi with Heads probability p is tossed repeatedly util the first

More information

Fall 2013 MTH431/531 Real analysis Section Notes

Fall 2013 MTH431/531 Real analysis Section Notes Fall 013 MTH431/531 Real aalysis Sectio 8.1-8. Notes Yi Su 013.11.1 1. Defiitio of uiform covergece. We look at a sequece of fuctios f (x) ad study the coverget property. Notice we have two parameters

More information

Infinite Sequences and Series

Infinite Sequences and Series Chapter 6 Ifiite Sequeces ad Series 6.1 Ifiite Sequeces 6.1.1 Elemetary Cocepts Simply speakig, a sequece is a ordered list of umbers writte: {a 1, a 2, a 3,...a, a +1,...} where the elemets a i represet

More information

Seunghee Ye Ma 8: Week 5 Oct 28

Seunghee Ye Ma 8: Week 5 Oct 28 Week 5 Summary I Sectio, we go over the Mea Value Theorem ad its applicatios. I Sectio 2, we will recap what we have covered so far this term. Topics Page Mea Value Theorem. Applicatios of the Mea Value

More information

Lecture 4. We also define the set of possible values for the random walk as the set of all x R d such that P(S n = x) > 0 for some n.

Lecture 4. We also define the set of possible values for the random walk as the set of all x R d such that P(S n = x) > 0 for some n. Radom Walks ad Browia Motio Tel Aviv Uiversity Sprig 20 Lecture date: Mar 2, 20 Lecture 4 Istructor: Ro Peled Scribe: Lira Rotem This lecture deals primarily with recurrece for geeral radom walks. We preset

More information

Chapter 6 Infinite Series

Chapter 6 Infinite Series Chapter 6 Ifiite Series I the previous chapter we cosidered itegrals which were improper i the sese that the iterval of itegratio was ubouded. I this chapter we are goig to discuss a topic which is somewhat

More information

2 Banach spaces and Hilbert spaces

2 Banach spaces and Hilbert spaces 2 Baach spaces ad Hilbert spaces Tryig to do aalysis i the ratioal umbers is difficult for example cosider the set {x Q : x 2 2}. This set is o-empty ad bouded above but does ot have a least upper boud

More information

Probability for mathematicians INDEPENDENCE TAU

Probability for mathematicians INDEPENDENCE TAU Probability for mathematicias INDEPENDENCE TAU 2013 28 Cotets 3 Ifiite idepedet sequeces 28 3a Idepedet evets........................ 28 3b Idepedet radom variables.................. 33 3 Ifiite idepedet

More information

On the optimality of McLeish s conditions for the central limit theorem

On the optimality of McLeish s conditions for the central limit theorem O the optimality of McLeish s coditios for the cetral limit theorem Jérôme Dedecker a a Laboratoire MAP5, CNRS UMR 845, Uiversité Paris-Descartes, Sorboe Paris Cité, 45 rue des Saits Pères, 7570 Paris

More information

2.2. Central limit theorem.

2.2. Central limit theorem. 36.. Cetral limit theorem. The most ideal case of the CLT is that the radom variables are iid with fiite variace. Although it is a special case of the more geeral Lideberg-Feller CLT, it is most stadard

More information

LECTURE 8: ASYMPTOTICS I

LECTURE 8: ASYMPTOTICS I LECTURE 8: ASYMPTOTICS I We are iterested i the properties of estimators as. Cosider a sequece of radom variables {, X 1}. N. M. Kiefer, Corell Uiversity, Ecoomics 60 1 Defiitio: (Weak covergece) A sequece

More information

Beurling Integers: Part 2

Beurling Integers: Part 2 Beurlig Itegers: Part 2 Isomorphisms Devi Platt July 11, 2015 1 Prime Factorizatio Sequeces I the last article we itroduced the Beurlig geeralized itegers, which ca be represeted as a sequece of real umbers

More information

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)].

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)]. Probability 2 - Notes 0 Some Useful Iequalities. Lemma. If X is a radom variable ad g(x 0 for all x i the support of f X, the P(g(X E[g(X]. Proof. (cotiuous case P(g(X Corollaries x:g(x f X (xdx x:g(x

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 6 9/23/2013. Brownian motion. Introduction

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 6 9/23/2013. Brownian motion. Introduction MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/5.070J Fall 203 Lecture 6 9/23/203 Browia motio. Itroductio Cotet.. A heuristic costructio of a Browia motio from a radom walk. 2. Defiitio ad basic properties

More information

Lesson 10: Limits and Continuity

Lesson 10: Limits and Continuity www.scimsacademy.com Lesso 10: Limits ad Cotiuity SCIMS Academy 1 Limit of a fuctio The cocept of limit of a fuctio is cetral to all other cocepts i calculus (like cotiuity, derivative, defiite itegrals

More information

EE 4TM4: Digital Communications II Probability Theory

EE 4TM4: Digital Communications II Probability Theory 1 EE 4TM4: Digital Commuicatios II Probability Theory I. RANDOM VARIABLES A radom variable is a real-valued fuctio defied o the sample space. Example: Suppose that our experimet cosists of tossig two fair

More information

Lecture 2. The Lovász Local Lemma

Lecture 2. The Lovász Local Lemma Staford Uiversity Sprig 208 Math 233A: No-costructive methods i combiatorics Istructor: Ja Vodrák Lecture date: Jauary 0, 208 Origial scribe: Apoorva Khare Lecture 2. The Lovász Local Lemma 2. Itroductio

More information

Math 61CM - Solutions to homework 3

Math 61CM - Solutions to homework 3 Math 6CM - Solutios to homework 3 Cédric De Groote October 2 th, 208 Problem : Let F be a field, m 0 a fixed oegative iteger ad let V = {a 0 + a x + + a m x m a 0,, a m F} be the vector space cosistig

More information

7 Sequences of real numbers

7 Sequences of real numbers 40 7 Sequeces of real umbers 7. Defiitios ad examples Defiitio 7... A sequece of real umbers is a real fuctio whose domai is the set N of atural umbers. Let s : N R be a sequece. The the values of s are

More information

Chapter 6 Principles of Data Reduction

Chapter 6 Principles of Data Reduction Chapter 6 for BST 695: Special Topics i Statistical Theory. Kui Zhag, 0 Chapter 6 Priciples of Data Reductio Sectio 6. Itroductio Goal: To summarize or reduce the data X, X,, X to get iformatio about a

More information

The Pointwise Ergodic Theorem and its Applications

The Pointwise Ergodic Theorem and its Applications The Poitwise Ergodic Theorem ad its Applicatios Itroductio Peter Oberly 11/9/2018 Algebra has homomorphisms ad topology has cotiuous maps; i these otes we explore the structure preservig maps for measure

More information

TENSOR PRODUCTS AND PARTIAL TRACES

TENSOR PRODUCTS AND PARTIAL TRACES Lecture 2 TENSOR PRODUCTS AND PARTIAL TRACES Stéphae ATTAL Abstract This lecture cocers special aspects of Operator Theory which are of much use i Quatum Mechaics, i particular i the theory of Quatum Ope

More information

K. Grill Institut für Statistik und Wahrscheinlichkeitstheorie, TU Wien, Austria

K. Grill Institut für Statistik und Wahrscheinlichkeitstheorie, TU Wien, Austria MARKOV PROCESSES K. Grill Istitut für Statistik ud Wahrscheilichkeitstheorie, TU Wie, Austria Keywords: Markov process, Markov chai, Markov property, stoppig times, strog Markov property, trasitio matrix,

More information

Application to Random Graphs

Application to Random Graphs A Applicatio to Radom Graphs Brachig processes have a umber of iterestig ad importat applicatios. We shall cosider oe of the most famous of them, the Erdős-Réyi radom graph theory. 1 Defiitio A.1. Let

More information

Law of the sum of Bernoulli random variables

Law of the sum of Bernoulli random variables Law of the sum of Beroulli radom variables Nicolas Chevallier Uiversité de Haute Alsace, 4, rue des frères Lumière 68093 Mulhouse icolas.chevallier@uha.fr December 006 Abstract Let be the set of all possible

More information

Topic 9: Sampling Distributions of Estimators

Topic 9: Sampling Distributions of Estimators Topic 9: Samplig Distributios of Estimators Course 003, 2016 Page 0 Samplig distributios of estimators Sice our estimators are statistics (particular fuctios of radom variables), their distributio ca be

More information

Integrable Functions. { f n } is called a determining sequence for f. If f is integrable with respect to, then f d does exist as a finite real number

Integrable Functions. { f n } is called a determining sequence for f. If f is integrable with respect to, then f d does exist as a finite real number MATH 532 Itegrable Fuctios Dr. Neal, WKU We ow shall defie what it meas for a measurable fuctio to be itegrable, show that all itegral properties of simple fuctios still hold, ad the give some coditios

More information

sin(n) + 2 cos(2n) n 3/2 3 sin(n) 2cos(2n) n 3/2 a n =

sin(n) + 2 cos(2n) n 3/2 3 sin(n) 2cos(2n) n 3/2 a n = 60. Ratio ad root tests 60.1. Absolutely coverget series. Defiitio 13. (Absolute covergece) A series a is called absolutely coverget if the series of absolute values a is coverget. The absolute covergece

More information

Lecture 10: Bounded Linear Operators and Orthogonality in Hilbert Spaces

Lecture 10: Bounded Linear Operators and Orthogonality in Hilbert Spaces Lecture : Bouded Liear Operators ad Orthogoality i Hilbert Spaces 34 Bouded Liear Operator Let ( X, ), ( Y, ) i i be ored liear vector spaces ad { } X Y The, T is said to be bouded if a real uber c such

More information

MAT1026 Calculus II Basic Convergence Tests for Series

MAT1026 Calculus II Basic Convergence Tests for Series MAT026 Calculus II Basic Covergece Tests for Series Egi MERMUT 202.03.08 Dokuz Eylül Uiversity Faculty of Sciece Departmet of Mathematics İzmir/TURKEY Cotets Mootoe Covergece Theorem 2 2 Series of Real

More information

The Central Limit Theorem

The Central Limit Theorem Chapter The Cetral Limit Theorem Deote by Z the stadard ormal radom variable with desity 2π e x2 /2. Lemma.. Ee itz = e t2 /2 Proof. We use the same calculatio as for the momet geeratig fuctio: exp(itx

More information

Introductory Ergodic Theory and the Birkhoff Ergodic Theorem

Introductory Ergodic Theory and the Birkhoff Ergodic Theorem Itroductory Ergodic Theory ad the Birkhoff Ergodic Theorem James Pikerto Jauary 14, 2014 I this expositio we ll cover a itroductio to ergodic theory. Specifically, the Birkhoff Mea Theorem. Ergodic theory

More information

Discrete Mathematics for CS Spring 2008 David Wagner Note 22

Discrete Mathematics for CS Spring 2008 David Wagner Note 22 CS 70 Discrete Mathematics for CS Sprig 2008 David Wager Note 22 I.I.D. Radom Variables Estimatig the bias of a coi Questio: We wat to estimate the proportio p of Democrats i the US populatio, by takig

More information

lim za n n = z lim a n n.

lim za n n = z lim a n n. Lecture 6 Sequeces ad Series Defiitio 1 By a sequece i a set A, we mea a mappig f : N A. It is customary to deote a sequece f by {s } where, s := f(). A sequece {z } of (complex) umbers is said to be coverget

More information

A Hilbert Space Central Limit Theorem for Geometrically Ergodic Markov Chains

A Hilbert Space Central Limit Theorem for Geometrically Ergodic Markov Chains A Hilbert Space Cetral Limit Theorem for Geometrically Ergodic Marov Chais Joh Stachursi Research School of Ecoomics, Australia Natioal Uiversity Abstract This ote proves a simple but useful cetral limit

More information

On forward improvement iteration for stopping problems

On forward improvement iteration for stopping problems O forward improvemet iteratio for stoppig problems Mathematical Istitute, Uiversity of Kiel, Ludewig-Mey-Str. 4, D-24098 Kiel, Germay irle@math.ui-iel.de Albrecht Irle Abstract. We cosider the optimal

More information

Probability and Random Processes

Probability and Random Processes Probability ad Radom Processes Lecture 5 Probability ad radom variables The law of large umbers Mikael Skoglud, Probability ad radom processes 1/21 Why Measure Theoretic Probability? Stroger limit theorems

More information

Math 2784 (or 2794W) University of Connecticut

Math 2784 (or 2794W) University of Connecticut ORDERS OF GROWTH PAT SMITH Math 2784 (or 2794W) Uiversity of Coecticut Date: Mar. 2, 22. ORDERS OF GROWTH. Itroductio Gaiig a ituitive feel for the relative growth of fuctios is importat if you really

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall Midterm Solutions

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall Midterm Solutions MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.65/5.070J Fall 0 Midterm Solutios Problem Suppose a radom variable X is such that P(X > ) = 0 ad P(X > E) > 0 for every E > 0. Recall that the large deviatios rate

More information

Chapter 5. Inequalities. 5.1 The Markov and Chebyshev inequalities

Chapter 5. Inequalities. 5.1 The Markov and Chebyshev inequalities Chapter 5 Iequalities 5.1 The Markov ad Chebyshev iequalities As you have probably see o today s frot page: every perso i the upper teth percetile ears at least 1 times more tha the average salary. I other

More information

Review Problems 1. ICME and MS&E Refresher Course September 19, 2011 B = C = AB = A = A 2 = A 3... C 2 = C 3 = =

Review Problems 1. ICME and MS&E Refresher Course September 19, 2011 B = C = AB = A = A 2 = A 3... C 2 = C 3 = = Review Problems ICME ad MS&E Refresher Course September 9, 0 Warm-up problems. For the followig matrices A = 0 B = C = AB = 0 fid all powers A,A 3,(which is A times A),... ad B,B 3,... ad C,C 3,... Solutio:

More information

Metric Space Properties

Metric Space Properties Metric Space Properties Math 40 Fial Project Preseted by: Michael Brow, Alex Cordova, ad Alyssa Sachez We have already poited out ad will recogize throughout this book the importace of compact sets. All

More information

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS

January 25, 2017 INTRODUCTION TO MATHEMATICAL STATISTICS Jauary 25, 207 INTRODUCTION TO MATHEMATICAL STATISTICS Abstract. A basic itroductio to statistics assumig kowledge of probability theory.. Probability I a typical udergraduate problem i probability, we

More information

Diagonal approximations by martingales

Diagonal approximations by martingales Alea 7, 257 276 200 Diagoal approximatios by martigales Jaa Klicarová ad Dalibor Volý Faculty of Ecoomics, Uiversity of South Bohemia, Studetsa 3, 370 05, Cese Budejovice, Czech Republic E-mail address:

More information

Random Variables, Sampling and Estimation

Random Variables, Sampling and Estimation Chapter 1 Radom Variables, Samplig ad Estimatio 1.1 Itroductio This chapter will cover the most importat basic statistical theory you eed i order to uderstad the ecoometric material that will be comig

More information

Problem Set 2 Solutions

Problem Set 2 Solutions CS271 Radomess & Computatio, Sprig 2018 Problem Set 2 Solutios Poit totals are i the margi; the maximum total umber of poits was 52. 1. Probabilistic method for domiatig sets 6pts Pick a radom subset S

More information

MA131 - Analysis 1. Workbook 3 Sequences II

MA131 - Analysis 1. Workbook 3 Sequences II MA3 - Aalysis Workbook 3 Sequeces II Autum 2004 Cotets 2.8 Coverget Sequeces........................ 2.9 Algebra of Limits......................... 2 2.0 Further Useful Results........................

More information

Slide Set 13 Linear Model with Endogenous Regressors and the GMM estimator

Slide Set 13 Linear Model with Endogenous Regressors and the GMM estimator Slide Set 13 Liear Model with Edogeous Regressors ad the GMM estimator Pietro Coretto pcoretto@uisa.it Ecoometrics Master i Ecoomics ad Fiace (MEF) Uiversità degli Studi di Napoli Federico II Versio: Friday

More information

6a Time change b Quadratic variation c Planar Brownian motion d Conformal local martingales e Hints to exercises...

6a Time change b Quadratic variation c Planar Brownian motion d Conformal local martingales e Hints to exercises... Tel Aviv Uiversity, 28 Browia motio 59 6 Time chage 6a Time chage..................... 59 6b Quadratic variatio................. 61 6c Plaar Browia motio.............. 64 6d Coformal local martigales............

More information

Rademacher Complexity

Rademacher Complexity EECS 598: Statistical Learig Theory, Witer 204 Topic 0 Rademacher Complexity Lecturer: Clayto Scott Scribe: Ya Deg, Kevi Moo Disclaimer: These otes have ot bee subjected to the usual scrutiy reserved for

More information

Mathematical Foundations -1- Sets and Sequences. Sets and Sequences

Mathematical Foundations -1- Sets and Sequences. Sets and Sequences Mathematical Foudatios -1- Sets ad Sequeces Sets ad Sequeces Methods of proof 2 Sets ad vectors 13 Plaes ad hyperplaes 18 Liearly idepedet vectors, vector spaces 2 Covex combiatios of vectors 21 eighborhoods,

More information

Lecture 20: Multivariate convergence and the Central Limit Theorem

Lecture 20: Multivariate convergence and the Central Limit Theorem Lecture 20: Multivariate covergece ad the Cetral Limit Theorem Covergece i distributio for radom vectors Let Z,Z 1,Z 2,... be radom vectors o R k. If the cdf of Z is cotiuous, the we ca defie covergece

More information

Output Analysis and Run-Length Control

Output Analysis and Run-Length Control IEOR E4703: Mote Carlo Simulatio Columbia Uiversity c 2017 by Marti Haugh Output Aalysis ad Ru-Legth Cotrol I these otes we describe how the Cetral Limit Theorem ca be used to costruct approximate (1 α%

More information

Notes on Snell Envelops and Examples

Notes on Snell Envelops and Examples Notes o Sell Evelops ad Examples Example (Secretary Problem): Coside a pool of N cadidates whose qualificatios are represeted by ukow umbers {a > a 2 > > a N } from best to last. They are iterviewed sequetially

More information

Introduction to Functional Analysis

Introduction to Functional Analysis MIT OpeCourseWare http://ocw.mit.edu 18.10 Itroductio to Fuctioal Aalysis Sprig 009 For iformatio about citig these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. LECTURE OTES FOR 18.10,

More information

REAL ANALYSIS II: PROBLEM SET 1 - SOLUTIONS

REAL ANALYSIS II: PROBLEM SET 1 - SOLUTIONS REAL ANALYSIS II: PROBLEM SET 1 - SOLUTIONS 18th Feb, 016 Defiitio (Lipschitz fuctio). A fuctio f : R R is said to be Lipschitz if there exists a positive real umber c such that for ay x, y i the domai

More information

32 estimating the cumulative distribution function

32 estimating the cumulative distribution function 32 estimatig the cumulative distributio fuctio 4.6 types of cofidece itervals/bads Let F be a class of distributio fuctios F ad let θ be some quatity of iterest, such as the mea of F or the whole fuctio

More information