Non-homogeneous random walks on a semi-infinite strip

1 Non-homogeneous random walks on a semi-infinite strip. Chak Hei Lo, joint work with Andrew R. Wade. World Congress in Probability and Statistics, 11th July 2016.

2 Outline
- Motivation: Lamperti's problem
- Our model: non-homogeneous RW on a semi-infinite strip
- Classification of the random walk
- Assumptions
- Main results: constant drift, Lamperti drift, generalized Lamperti drift
- Examples: correlated random walks

3-5 Motivation: Lamperti's problem. Let X_n be a nearest-neighbour random walk on ℤ+. Denote the mean drift at x by µ(x) = E[X_{n+1} − X_n | X_n = x]. Lamperti's problem concerns the critical regime in which µ(x) = O(1/x) as x → ∞.
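
As a quick illustration (not from the slides; the jump probabilities and the reflection rule at 0 are assumptions of this sketch), a minimal Python simulation of a walk in Lamperti's regime, with drift c/x:

```python
import random

def lamperti_step(x, c=0.25):
    """One step of a nearest-neighbour walk on Z+ with mean drift c/x:
    from x >= 1, jump +1 with probability (1 + c/x)/2, else jump -1,
    so that mu(x) = 2*p_up - 1 = c/x; reflect from 0 to 1."""
    if x == 0:
        return 1
    p_up = 0.5 * (1.0 + c / x)
    return x + 1 if random.random() < p_up else x - 1

random.seed(1)
x = 0
for _ in range(10**5):
    x = lamperti_step(x)
print("position after 1e5 steps:", x)
```

The drift vanishes at infinity, yet (as the results below make precise) its size relative to the variance of the increments is exactly what decides recurrence versus transience.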

6-10 Non-homogeneous RW on semi-infinite strip. Let S be a finite non-empty set. Let Σ be a subset of ℝ+ × S that is locally finite, i.e., Σ ∩ ([0, r] × S) is finite for all r ∈ ℝ+. E.g. Σ = ℤ+ × S. Define for each k ∈ S the line Λ_k := {x ∈ ℝ+ : (x, k) ∈ Σ}, and suppose that for each k ∈ S the line Λ_k is unbounded.

11-12 Non-homogeneous RW on semi-infinite strip. Suppose that (X_n, η_n), n ∈ ℤ+, is a time-homogeneous, irreducible Markov chain on Σ, a locally finite subset of ℝ+ × S. Neither coordinate is assumed to be Markov on its own.
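
To make the model concrete, here is a minimal sketch (an assumed interface, not from the talk) of a simulator for such a chain on Σ = ℤ+ × S, driven by a user-supplied joint jump kernel:

```python
import random

def simulate(kernel, x0=0, eta0=0, n_steps=1000, seed=0):
    """Simulate (X_n, eta_n) on Z+ x S.

    `kernel(x, i)` must return a list of ((dx, j), prob) pairs: the joint
    distribution of the X-increment and the next internal state, given
    (X_n, eta_n) = (x, i).  This interface is an assumption of the sketch.
    """
    rng = random.Random(seed)
    x, eta, path = x0, eta0, [(x0, eta0)]
    for _ in range(n_steps):
        u, acc = rng.random(), 0.0
        for (dx, j), p in kernel(x, eta):
            acc += p
            if u < acc:
                x, eta = max(x + dx, 0), j   # keep X on the half-line
                break
        path.append((x, eta))
    return path
```

For example, `kernel = lambda x, i: [((+1, 1 - i), 0.5), ((-1, 1 - i), 0.5)]` gives a symmetric walk whose internal state alternates between 0 and 1.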

13-15 Motivating examples. We can view S as a set of internal states, influencing motion on the lines over ℝ+. E.g.:
- Operations research: modulated queues (S = states of the server)
- Economics: regime-switching processes (S contains market information)
- Physics: physical processes with internal degrees of freedom (S = energy/momentum states of a particle)

16 Classification of the random walk. Recall that (X_n, η_n) is a time-homogeneous irreducible Markov chain on the state space Σ ⊆ ℝ+ × S.
(i) If (X_n, η_n) is transient, then X_n → ∞ a.s.
(ii) If (X_n, η_n) is recurrent, then P(X_n ∈ A i.o.) = 1 for any bounded region A.
(iii) Define τ = min{n ≥ 0 : X_n ∈ A}. If (X_n, η_n) is positive recurrent, then E[τ] < ∞ for any bounded region A.
(iv) If (X_n, η_n) is recurrent but not positive recurrent, then we call it null recurrent.

17-18 Assumptions. Moments bound on the jumps of X_n:
(B_p) There exists C_p < ∞ such that E[|X_{n+1} − X_n|^p | (X_n, η_n) = (x, i)] ≤ C_p.
Notation for the moments of the displacements in the X-coordinate:
µ_i(x) = E[X_{n+1} − X_n | (X_n, η_n) = (x, i)]
σ_i(x) = E[(X_{n+1} − X_n)² | (X_n, η_n) = (x, i)]

19-20 Assumptions (cont.). η_n is close to being Markov when X_n is large. Define q_ij(x) = P[η_{n+1} = j | (X_n, η_n) = (x, i)].
(Q∞) q_ij = lim_{x→∞} q_ij(x) exists for all i, j ∈ S, and (q_ij) is irreducible.
Let π be the unique stationary distribution on S corresponding to (q_ij).
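
Numerically, π can be obtained from (q_ij) by solving πQ = π together with ∑_i π_i = 1; a small sketch using least squares (the example matrix is illustrative):

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary row vector pi of an irreducible stochastic matrix Q,
    found from (Q^T - I) pi = 0 together with sum(pi) = 1, via least squares."""
    n = Q.shape[0]
    A = np.vstack([Q.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

Q = np.array([[0.3, 0.7],
              [0.6, 0.4]])
print(stationary_distribution(Q))  # -> approximately (6/13, 7/13)
```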

21 Constant drift. Constant-type moment condition:
(M_C) There exist d_i ∈ ℝ for all i ∈ S such that µ_i(x) = d_i + o(1).

22-25 Constant drift. [Figure slides: illustrations of the constant-drift case; content not transcribed.]

26 Recurrence classification for constant drift. The following theorem is from Georgiou and Wade (2014), slightly extending earlier work of Malyshev (1972), Falin (1988), and Fayolle et al. (1995).
Theorem. Suppose that (B_p) holds for some p > 1 and that conditions (Q∞) and (M_C) hold. The following sufficient conditions apply.
If ∑_{i∈S} d_i π_i > 0, then (X_n, η_n) is transient.
If ∑_{i∈S} d_i π_i < 0, then (X_n, η_n) is positive recurrent.
Here π is the unique stationary distribution on S from (Q∞). What about ∑_{i∈S} d_i π_i = 0?
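
A hedged numeric companion to the theorem (reusing `stationary_distribution` from the sketch above; the example values are illustrative):

```python
import numpy as np

def classify_constant_drift(d, Q, tol=1e-12):
    """Constant-drift criterion: sum_i d_i pi_i > 0 => transient;
    < 0 => positive recurrent; = 0 => theorem is silent."""
    pi = stationary_distribution(Q)   # helper sketched earlier
    m = float(np.dot(d, pi))
    if m > tol:
        return "transient"
    if m < -tol:
        return "positive recurrent"
    return "boundary case: needs (generalized) Lamperti analysis"

Q = np.array([[0.3, 0.7], [0.6, 0.4]])     # pi = (6/13, 7/13)
print(classify_constant_drift(np.array([1.0, -1.0]), Q))  # -> positive recurrent
```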

27 Different drifts.
(i) ∑_{i∈S} d_i π_i ≠ 0: constant drift, µ_i(x) = d_i + o(1).
(ii) ∑_{i∈S} d_i π_i = 0 and d_i = 0 for all i: Lamperti drift,
µ_i(x) = c_i/x + o(x⁻¹), σ_i(x) = s_i² + o(1).
(iii) ∑_{i∈S} d_i π_i = 0 and d_i ≠ 0 for some i: generalized Lamperti drift,
µ_i(x) = d_i + c_i/x + o(x⁻¹), σ_i(x) = s_i² + o(1).

28-29 Lamperti drift. Lamperti-type moment conditions:
(M_L) There exist c_i, s_i ∈ ℝ for all i ∈ S (at least one s_i nonzero) such that µ_i(x) = c_i/x + o(x⁻¹) and σ_i(x) = s_i² + o(1).
When S is a singleton, this reduces to the classical Lamperti problem on ℝ+.

30-33 Lamperti drift. [Figure slides: illustrations of the Lamperti-drift case; content not transcribed.]

34 Recurrence classification for Lamperti drift.
Theorem (Georgiou and Wade, 2014). Suppose that (B_p) holds for some p > 2 and that conditions (Q∞) and (M_L) hold. The following sufficient conditions apply.
(i) If ∑_{i∈S} (2c_i − s_i²)π_i > 0, then (X_n, η_n) is transient.
(ii) If |∑_{i∈S} 2c_i π_i| < ∑_{i∈S} s_i² π_i, then (X_n, η_n) is null recurrent.
(iii) If ∑_{i∈S} (2c_i + s_i²)π_i < 0, then (X_n, η_n) is positive recurrent.
[With better error bounds in (Q∞) and (M_L) we can also show that the boundary cases are null recurrent.]
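
The three sufficient conditions translate directly into code; a sketch (again reusing `stationary_distribution`), returning no verdict when none of the strict inequalities holds:

```python
import numpy as np

def classify_lamperti(c, s2, Q):
    """Lamperti-drift criteria, with drift coefficients c_i and variances s_i^2."""
    pi = stationary_distribution(Q)   # helper sketched earlier
    if np.dot(2 * c - s2, pi) > 0:
        return "transient"
    if abs(np.dot(2 * c, pi)) < np.dot(s2, pi):
        return "null recurrent"
    if np.dot(2 * c + s2, pi) < 0:
        return "positive recurrent"
    return "boundary case: theorem gives no verdict"

Q = np.array([[0.3, 0.7], [0.6, 0.4]])
print(classify_lamperti(np.array([0.1, -0.2]), np.array([1.0, 1.0]), Q))
# -> null recurrent: |2 sum_i c_i pi_i| ~ 0.12 < sum_i s_i^2 pi_i = 1
```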

35-36 Idea of proof of the theorem. Our general analysis is based on the Lyapunov function f_ν : Σ → ℝ defined, for ν ∈ ℝ and constants b_i ∈ ℝ, by
f_ν(x, i) := x^ν + (ν/2) b_i x^{ν−2}.
For appropriate choices of ν, and selecting the right b_i depending on the drifts and the stationary distribution, we can show that f_ν(X_n, η_n) is a supermartingale, and then apply a semimartingale classification theorem. Hence we obtain the theorem above.
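
A sketch of the computation behind this claim (a second-order expansion under (M_L) and (Q∞); the careful error analysis is in the paper):

```latex
\mathbb{E}\big[f_\nu(X_{n+1},\eta_{n+1}) - f_\nu(X_n,\eta_n) \,\big|\, (X_n,\eta_n)=(x,i)\big]
  = \nu x^{\nu-1}\mu_i(x) + \tfrac{\nu(\nu-1)}{2}\,x^{\nu-2}\sigma_i(x)
    + \tfrac{\nu}{2}\,x^{\nu-2}\sum_{j\in S} q_{ij}(x)\,(b_j-b_i) + o(x^{\nu-2})
  = \tfrac{\nu}{2}\,x^{\nu-2}\Big[2c_i + (\nu-1)\,s_i^2
    + \sum_{j\in S} q_{ij}\,(b_j-b_i)\Big] + o(x^{\nu-2}).
```

Choosing the b_i so that the bracket does not depend on i (a Poisson equation for the chain (q_ij)) replaces 2c_i + (ν−1)s_i² by its π-average ∑_i [2c_i + (ν−1)s_i²]π_i; picking ν so that this average is negative yields the supermartingale property.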

37-42 Generalized Lamperti drift. Generalized Lamperti-type moment conditions. Define
µ_ij(x) = E[(X_{n+1} − X_n) 1{η_{n+1} = j} | (X_n, η_n) = (x, i)].
(M_CL) There exist d_i ∈ ℝ, c_i ∈ ℝ, d_ij ∈ ℝ and s_i² ∈ ℝ+, with at least one s_i² nonzero, such that all of the following are satisfied:
(i) for all i ∈ S, µ_i(x) = d_i + c_i/x + o(x⁻¹) as x → ∞;
(ii) for all i, j ∈ S, µ_ij(x) = d_ij + o(1) as x → ∞;
(iii) σ_i(x) = s_i² + o(1);
(iv) ∑_{i∈S} π_i d_i = 0.
Transition probability condition:
(Q_CL) There exist γ_ij ∈ ℝ such that q_ij(x) = q_ij + γ_ij/x + o(x⁻¹).

43-44 Generalized Lamperti drift. [Figure slides; content not transcribed.]

45 Recurrence classification for generalized Lamperti drift.
Theorem (L., Wade, 2015). Suppose that (B_p) holds for some p > 2, and that conditions (Q∞), (Q_CL) and (M_CL) hold. Define (a_i)_{i∈S} to be the solution, unique up to translation, of the system of equations
d_i + ∑_{j∈S} (a_j − a_i) q_ij = 0, for all i ∈ S.
The following sufficient conditions apply.
If ∑_{i∈S} [2c_i − s_i² + 2∑_{j∈S} a_j(γ_ij − d_ij)]π_i > 0, then (X_n, η_n) is transient.
If |∑_{i∈S} (2c_i + 2∑_{j∈S} a_j γ_ij)π_i| < ∑_{i∈S} (s_i² + 2∑_{j∈S} a_j d_ij)π_i, then (X_n, η_n) is null recurrent.
If ∑_{i∈S} [2c_i + s_i² + 2∑_{j∈S} a_j(γ_ij + d_ij)]π_i < 0, then (X_n, η_n) is positive recurrent.
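
A hedged sketch of using this theorem numerically: solve the (translation-degenerate) linear system for the a_i by pinning a_0 = 0, then evaluate the three conditions (reusing `stationary_distribution` from above; D and G hold the d_ij and γ_ij):

```python
import numpy as np

def solve_a(d, Q):
    """Solve d_i + sum_j (a_j - a_i) q_ij = 0 for all i, pinning a_0 = 0,
    since the solution is only unique up to translation."""
    n = len(d)
    M = Q - np.eye(n)                    # (M @ a)_i = sum_j (a_j - a_i) q_ij
    A = np.vstack([M, np.eye(1, n)])     # append the constraint a_0 = 0
    b = np.concatenate([-np.asarray(d), [0.0]])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def classify_generalized(c, s2, d, D, G, Q):
    """Generalized-Lamperti criteria; D = (d_ij), G = (gamma_ij) as matrices."""
    pi = stationary_distribution(Q)      # helper sketched earlier
    a = solve_a(d, Q)
    if np.dot(2 * c - s2 + 2 * (G - D) @ a, pi) > 0:
        return "transient"
    if abs(np.dot(2 * c + 2 * G @ a, pi)) < np.dot(s2 + 2 * D @ a, pi):
        return "null recurrent"
    if np.dot(2 * c + s2 + 2 * (G + D) @ a, pi) < 0:
        return "positive recurrent"
    return "boundary case: theorem gives no verdict"
```

Note that the system M a = −d is solvable precisely because ∑_i π_i d_i = 0, which is condition (iv) of (M_CL).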

46-49 Idea of proof of the theorem.
- Transform the process (X_n, η_n) with generalized Lamperti drift to the process (X̃_n, η_n) = (X_n + a_{η_n}, η_n).
- Find the appropriate choices of the real numbers a_i, i ∈ S, such that (X̃_n, η_n) has Lamperti drift, i.e., the constant components of the drifts are eliminated.
- Calculate the new increment moment estimates for the transformed process (X̃_n, η_n).
- Apply the results for the Lamperti drift case.
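
Sketch of the first two steps (error terms suppressed; assuming (M_CL) and (Q_CL)): the transformed increment is (X_{n+1} − X_n) + (a_{η_{n+1}} − a_{η_n}), so its conditional mean is

```latex
\tilde\mu_i(x) = \mu_i(x) + \sum_{j\in S}(a_j-a_i)\,q_{ij}(x)
  = \underbrace{d_i + \sum_{j\in S}(a_j-a_i)\,q_{ij}}_{=\,0\ \text{by the choice of } a}
    + \frac{1}{x}\Big(c_i + \sum_{j\in S}(a_j-a_i)\,\gamma_{ij}\Big) + o(x^{-1}).
```

Since the rows of q_ij(x) sum to 1, we have ∑_j γ_ij = 0, so ∑_j (a_j − a_i)γ_ij = ∑_j a_j γ_ij; the second-moment estimate is computed similarly, and the Lamperti-drift theorem then applies to (X̃_n, η_n).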

50-51 Summary of cases. [Slides summarizing the drift regimes; content not transcribed.]

52 Example: one-step correlated random walk. [Content not transcribed.]

53-54 Simulation results on one-step correlated RW. [Figure slides; content not transcribed.]

55 Example: two-step correlated random walk. [Content not transcribed.]

56-58 Simulation results on two-step correlated RW. [Figure slides; content not transcribed.]
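
The example slides are figure-only in this transcription. As a hedged reconstruction of the kind of model meant (a standard one-step correlated, or persistent, random walk on ℤ+; the persistence parameter and the reflection rule at 0 are chosen here for illustration), note that the internal state η_n = direction of the previous step puts this walk in the strip framework with S = {−1, +1}:

```python
import random

def correlated_rw(n_steps, p_persist=0.7, seed=0):
    """One-step correlated (persistent) RW on Z+: repeat the previous step's
    direction with probability p_persist, reverse it otherwise; reflect at 0.
    Returns the path of (X_n, eta_n), with eta_n the direction of the last step."""
    rng = random.Random(seed)
    x, eta, path = 0, +1, [(0, +1)]
    for _ in range(n_steps):
        step = eta if rng.random() < p_persist else -eta
        if x + step < 0:          # reflection at the origin (assumed rule)
            step = +1
        x, eta = x + step, step
        path.append((x, eta))
    return path

print(correlated_rw(10))
```

For this chain, away from the origin µ_{±1} = ±(2 p_persist − 1) and π is uniform on S, so ∑_i d_i π_i = 0 with d_i ≠ 0: the generalized Lamperti regime discussed above.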

59-62 Existence and non-existence of moments. We also obtained some quantitative information on the nature of recurrence, by studying moments of passage times. For x ∈ ℝ+, define the stopping time τ_x := min{n ≥ 0 : X_n ≤ x}. Our results give conditions determining for which s > 0 the moment E[τ_x^s] is finite and for which it is infinite. The proofs use similar techniques and ideas.
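
A crude Monte Carlo companion (illustration only; the walk, the levels and the truncation horizon are assumptions of this sketch): estimate E[τ_x^s] by simulating passage times into [0, x], truncating runs that exceed a horizon.

```python
import random

def step(x, c=0.25):
    """Nearest-neighbour Lamperti-type step on Z+ with drift c/x (assumed walk)."""
    if x == 0:
        return 1
    return x + 1 if random.random() < 0.5 * (1.0 + c / x) else x - 1

def passage_time(x0, x_level, horizon=10**5):
    """tau_x = min{n >= 0 : X_n <= x_level} from X_0 = x0, or None if > horizon."""
    x = x0
    for n in range(horizon):
        if x <= x_level:
            return n
        x = step(x)
    return None

random.seed(2)
taus = [passage_time(20, 5) for _ in range(100)]
hits = [t for t in taus if t is not None]
s = 0.5
print("hit fraction:", len(hits) / len(taus))
print("truncated estimate of E[tau^s]:", sum(t**s for t in hits) / max(len(hits), 1))
```

For heavy-tailed τ_x such truncated estimates are biased, which is exactly why theoretical criteria for finiteness of E[τ_x^s] are valuable.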

63 References
- C. H. Lo and A. R. Wade, Non-homogeneous random walks on a half strip with generalized Lamperti drifts.
- G. I. Falin, Ergodicity of random walks in the half-strip, Math. Notes 44 (1988); translated from Mat. Zametki 44 (1988) [in Russian].
- G. Fayolle, V. A. Malyshev and M. V. Menshikov, Topics in the Constructive Theory of Countable Markov Chains, Cambridge University Press, Cambridge, 1995.
- N. Georgiou and A. R. Wade, Non-homogeneous random walks on a semi-infinite strip, Stoch. Process. Appl. 124 (2014).
- J. Lamperti, Criteria for the recurrence and transience of stochastic processes I, J. Math. Anal. Appl. 1 (1960).
- V. A. Malyshev, Homogeneous random walks on the product of a finite set and a half-line, in Veroyatnostnye Metody Issledovania (Probability Methods of Investigation) 41, ed. A. N. Kolmogorov, Moscow State University, Moscow, 1972 [in Russian].
