MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10
Section 3: Regenerative Processes

Contents
3.1 Regeneration: The Basic Idea
3.2 A Necessary and Sufficient Condition for Recurrence
3.3 Branching Chains
3.4 Regenerative Processes
3.5 Positive Recurrent Discrete Time Markov Chains

3.1 Regeneration: The Basic Idea

We describe an approach to studying the long-run behavior of Markov chains that will work for:

- all discrete state space Markov chains (both finite and infinite state space)
- the (significant) majority of continuous state space Markov chains

The idea is to recognize that a discrete state space Markov chain regenerates every time it returns to a fixed state z ∈ S. These z-cycles are iid (a careful proof of this depends, not surprisingly, on the strong Markov property), so the path evolution of the chain is thereby split up into iid segments. As a consequence, we can exploit the extensive theory for iid sequences to study Markov chains/processes. By studying the chain on the regenerative time scale, the Markov dependence structure is simplified to the point where iid theory applies.

In order to exploit such an idea, we need to know that infinitely many z-cycles occur. In other words, we will need to determine whether

    P_x(X_n = z infinitely often) = 1

for x ∈ S or not.

Definition: The state z is said to be recurrent if P_z(X_n = z infinitely often) = 1. Otherwise, z is said to be transient.

Let N(z) = Σ_{k≥0} I(X_k = z) be the total number of visits to z. Observe that the strong Markov property implies that

    P_z(N(z) ≥ k) = P_z(τ(z) < ∞)^{k−1},    k ≥ 1,

so z is recurrent if and only if P_z(τ(z) < ∞) = 1, where τ(z) = inf{n ≥ 1 : X_n = z}. (Of course, z is transient if and only if P_z(τ(z) < ∞) < 1.)

It turns out that recurrence/transience is a class property. We need a definition.
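As a quick sanity check on the definition, one can estimate a return probability P_z(τ(z) < ∞) by simulation. The sketch below is not part of the notes: the walk, parameters, horizon, and function name are illustrative choices. It simulates a biased nearest-neighbor random walk on Z with up-probability p, for which the return probability to the origin is classically known to equal 1 − |2p − 1|, so the walk is transient for p ≠ 1/2:

```python
import random

def estimate_return_probability(p, trials=10000, horizon=1000, seed=0):
    """Estimate P_0(tau(0) < infinity) for the nearest-neighbor random walk
    on Z with P(Z_i = +1) = p, truncating each path at `horizon` steps."""
    rng = random.Random(seed)
    returns = 0
    for _ in range(trials):
        x = 0
        for _ in range(horizon):
            x += 1 if rng.random() < p else -1
            if x == 0:          # first return to the start state
                returns += 1
                break
    return returns / trials

# For p = 0.7 the classical formula gives 1 - |2p - 1| = 0.6 < 1 (transience).
est = estimate_return_probability(0.7)
```

The truncation at a finite horizon introduces a small downward bias, but with upward drift the paths that ever return do so quickly, so the estimate should sit close to 0.6.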
Definition: A Markov chain X = (X_n : n ≥ 0) is said to be irreducible if, for each x, y ∈ S, there exists n = n(x, y) such that P^n(x, y) > 0.

Proposition: Suppose X is irreducible. Then either all states are recurrent or all states are transient.

Hence, it makes sense to talk about an irreducible recurrent Markov chain or an irreducible transient Markov chain (as opposed to having to characterize recurrence state-by-state).

3.2 A Necessary and Sufficient Condition for Recurrence

Theorem: Suppose X is irreducible. Then X is recurrent if and only if there exist x, y ∈ S such that

    Σ_{n≥0} P^n(x, y) = ∞.

Proof: Note that for x ≠ y,

    E_x N(y) = Σ_{k≥1} k P_x(N(y) = k)
             = Σ_{k≥1} k P_x(τ(y) < ∞) P_y(τ(y) < ∞)^{k−1} (1 − P_y(τ(y) < ∞)).

Because X is irreducible, P_x(τ(y) < ∞) > 0. It follows that E_x N(y) < ∞ if and only if P_y(τ(y) < ∞) < 1 (i.e., y is transient). But

    E_x N(y) = E_x Σ_{k≥1} I(X_k = y) = Σ_{k≥1} P^k(x, y). ∎

A beautiful example with which to illustrate this result is that of mean-zero random walk.

Definition: A sequence (X_n : n ≥ 0) is said to be a random walk if X_n = X_0 + Z_1 + ... + Z_n, where the Z_i's are iid and independent of X_0. It is a mean-zero random walk if EZ_i = 0.

Definition: A nearest neighbor random walk on Z is a random walk for which X_0 ∈ Z and Z_i ∈ {−1, 1}. A nearest neighbor random walk on Z^d is a sequence (X_n : n ≥ 0), where X_n = (X_n(1), ..., X_n(d)) and the X(i) = (X_n(i) : n ≥ 0) are a collection of independent one-dimensional nearest neighbor random walks. The random walk is said to be symmetric if P(Z_i = 1) = P(Z_i = −1) = 1/2.

Use of Stirling's formula establishes that:
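The identity from the proof, E_x N(y) = P_x(τ(y) < ∞)/(1 − P_y(τ(y) < ∞)), can be verified numerically by summing P^k(x, y). The sketch below is an illustrative example, not from the notes: a three-state transient chain with an absorbing cemetery state, for which both sides of the identity are computable in closed form:

```python
# Transient chain: states 0, 1 and an absorbing cemetery state 2.
# From 0: to 1 w.p. 1/2, absorbed w.p. 1/2; from 1: to 0 w.p. 1/2, absorbed w.p. 1/2.
P = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 1.0]]

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# E_0 N(1) = sum_{k>=1} P^k(0, 1), truncated once the terms are negligible.
Pk = P
green = 0.0
for _ in range(200):
    green += Pk[0][1]
    Pk = mat_mult(Pk, P)

# Here P_0(tau(1) < inf) = 1/2 and P_1(tau(1) < inf) = 1/4 (a return requires
# the path 1 -> 0 -> 1), so the identity predicts E_0 N(1) = (1/2)/(1 - 1/4) = 2/3.
```

The truncated matrix-power sum converges geometrically here, so 200 terms reproduce the predicted value 2/3 to high accuracy.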
- Nearest neighbor symmetric random walk is recurrent in d = 1, 2.
- Nearest neighbor symmetric random walk is transient in d ≥ 3.

This result generalizes to much more general mean-zero increment distributions; see, for example, K. L. Chung's A Course in Probability Theory. This is one of the major results of 20th century probability.

Exercise: Suppose that X is a random walk on Z^d for which EZ ≠ 0. Prove that (X_n : n ≥ 0) is transient.

3.3 Branching Chains

The branching chain model arises in many applications (biological population modeling, nuclear reactions, etc.). The simplest version is a Markov chain X = (X_n : n ≥ 0) on Z_+ with dynamics described by

    X_{n+1} = Σ_{i=1}^{X_n} Z_{n+1,i},

where (Z_{ni} : n ≥ 1, i ≥ 1) is a collection of iid Z_+-valued rv's independent of X_0. A key quantity of interest is the probability of extinction

    u*(x) = P_x(T < ∞),

where T = inf{n ≥ 0 : X_n = 0}. Clearly, if P(Z = 0) > 0, then P_x(T < ∞) > 0. Because each individual's progeny evolve independently,

    u*(1) = Σ_{x≥0} P(Z = x) u*(1)^x = ϕ(u*(1)),    (3.3.1)

where ϕ(y) = E y^Z for y ≥ 0. Assume that EZ < ∞, and note that (3.3.1) asserts that u*(1) is a root of ϕ(x) = x. Of course, 1 is a root. But this non-linear equation may have multiple roots. Observe that ϕ(·) is non-decreasing and convex, with ϕ(0) = P(Z = 0) > 0 and ϕ′(1) = EZ. If:

- ϕ′(1) ≤ 1: the unique root of ϕ(x) = x on [0, 1] occurs at x = 1. So P_x(T < ∞) = 1.
- ϕ′(1) > 1: there is a unique root z̄ of ϕ(x) = x on (0, 1), with second root 1.

Note that both (z̄^x : x ≥ 0) and (1^x : x ≥ 0) are solutions of

    u(x) = P(Z = 0)^x + Σ_{y≥1} P(X_1 = y | X_0 = x) u(y);    (3.3.2)

(u*(x) : x ≥ 0) is the minimal non-negative solution of (3.3.2). So u*(x) = z̄^x (since z̄ < 1) when EZ > 1.

Exercise:
a) Prove that P_y(X_n = y infinitely often) = 0 for y ≥ 1.
b) Prove that X_n → ∞ a.s. on {T = ∞}.
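The minimal-root characterization suggests a simple numerical scheme: iterate s ← ϕ(s) starting from s = 0; since ϕ^n(0) = P_1(X_n = 0), the iterates increase monotonically to the extinction probability, i.e., to the minimal non-negative root of ϕ(x) = x. The sketch below is illustrative (the offspring distribution is my choice, not from the notes): Z ∈ {0, 2} with P(Z = 0) = 0.2 and P(Z = 2) = 0.8, so ϕ(s) = 0.2 + 0.8 s², EZ = 1.6 > 1, and the roots of ϕ(x) = x are 1/4 and 1:

```python
def extinction_probability(phi, iterations=200):
    """Fixed-point iteration s <- phi(s) from s = 0; the iterates
    phi^n(0) = P_1(X_n = 0) increase to the minimal non-negative
    root of phi(x) = x, which is the extinction probability u*(1)."""
    s = 0.0
    for _ in range(iterations):
        s = phi(s)
    return s

# Offspring pgf for P(Z = 0) = 0.2, P(Z = 2) = 0.8 (supercritical: EZ = 1.6).
phi = lambda s: 0.2 + 0.8 * s * s
q = extinction_probability(phi)   # converges to the root 1/4, not to 1
```

Starting from 0 (rather than, say, from 1) is what selects the minimal root when ϕ′(1) > 1.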
3.4 Regenerative Processes

We now formalize the notion of a regenerative process. Given an S-valued stochastic process X = (X(t) : t ≥ 0) with an associated increasing sequence of finite-valued random times (T(n) : n ≥ 0), define the cycles (W_n : n ≥ 0) via

    W_n(t) = X(T(n−1) + t)  for 0 ≤ t < τ_n,
    W_n(t) = Δ              for t ≥ τ_n,

where T(−1) = 0, τ_n = T(n) − T(n−1), and Δ ∉ S.

Definition: We say that X is regenerative (with respect to (T(n) : n ≥ 0)) if:

- W_0, W_1, ... are independent
- W_1, W_2, ... are identically distributed.

We say that X is non-delayed if T(0) = 0; otherwise, X is delayed. We say that X is positive recurrent if Eτ_1 < ∞.

Let N(t) = max{n ≥ −1 : T(n) ≤ t} be the index of the last cycle completed prior to time t.

Remark: When T(0) = 0, (N(t) : t ≥ 0) is a renewal counting process.

Proposition: N(t)/t → 1/Eτ_1 a.s. as t → ∞.

This is the strong law of large numbers for renewal counting processes.

Theorem (Strong Law for Regenerative Processes): Let X be regenerative with respect to (T(n) : n ≥ 0), and let f : S → R_+. If:

- ∫_0^{T(0)} f(X(s)) ds < ∞ a.s.
- E ∫_{T(0)}^{T(1)} f(X(s)) ds and Eτ_1 are not simultaneously infinite,

then

    (1/t) ∫_0^t f(X(s)) ds → E Y_1(f)/Eτ_1    a.s.

as t → ∞, where

    Y_i(f) = ∫_{T(i−1)}^{T(i)} f(X(s)) ds.
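A concrete instance of the strong law is an alternating renewal ("up/down") process: each cycle consists of an up period U_n followed by a down period D_n, and taking f to be the indicator of the up state gives E Y_1(f)/Eτ_1 = EU/(EU + ED). The sketch below is illustrative (the distributions, horizon, and function name are my choices): U ~ Uniform(0, 2) and D ~ Uniform(0, 1), so the long-run fraction of time up should be 1/1.5 = 2/3:

```python
import random

def fraction_up(t_end=100000.0, seed=1):
    """Simulate alternating Uniform(0,2) up / Uniform(0,1) down periods and
    return the fraction of [0, t_end] spent in the up state."""
    rng = random.Random(seed)
    t = up_time = 0.0
    while t < t_end:
        u = rng.uniform(0.0, 2.0)        # up period of the current cycle
        up_time += min(u, t_end - t)     # clip the final partial up period
        t += u
        t += rng.uniform(0.0, 1.0)       # down period of the current cycle
    return up_time / t_end

frac = fraction_up()   # should be close to EU/(EU + ED) = 2/3
```

With roughly t_end/Eτ ≈ 67,000 completed cycles, the fluctuation around 2/3 is of order a few parts in a thousand.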
Outline of Proof:

- N(t) → ∞ a.s. (since T(n) < ∞ a.s. for n ≥ 0)
- Use a path-by-path argument based on the squeeze

    Σ_{i=0}^{N(t)} Y_i(f) / Σ_{i=0}^{N(t)+1} τ_i  ≤  (1/t) ∫_0^t f(X(s)) ds  ≤  Σ_{i=0}^{N(t)+1} Y_i(f) / Σ_{i=0}^{N(t)} τ_i.

Proposition: EN(t)/t → 1/Eτ_1 as t → ∞.

This is the so-called elementary renewal theorem. The key to the proof is the Wald identity.

Wald Identity: Let (Z_n : n ≥ 1) be a sequence of independent non-negative rv's for which EZ_i = μ_i for i ≥ 1. If T is a stopping time adapted to (Z_n : n ≥ 1), then

    E Σ_{i=1}^{T} Z_i = E Σ_{i=1}^{T} μ_i.

Proof of Wald's Identity: Because T is a stopping time, I(T ≤ i−1) can be written as a deterministic function q_{i−1}(Z_1, ..., Z_{i−1}) of the first i−1 variables. Then

    E Σ_{i=1}^{T} Z_i = E Σ_{i=1}^{∞} Z_i I(T ≥ i)
      = Σ_{i=1}^{∞} E Z_i I(T ≥ i)                           (by Fubini)
      = Σ_{i=1}^{∞} E Z_i (1 − I(T ≤ i−1))
      = Σ_{i=1}^{∞} E Z_i (1 − q_{i−1}(Z_1, ..., Z_{i−1}))    (since T is a stopping time)
      = Σ_{i=1}^{∞} E Z_i · E(1 − q_{i−1}(Z_1, ..., Z_{i−1})) (independence)
      = Σ_{i=1}^{∞} μ_i E I(T ≥ i)
      = E Σ_{i=1}^{∞} μ_i I(T ≥ i)                            (by Fubini)
      = E Σ_{i=1}^{T} μ_i.

Corollary: If the Z_i's are also identically distributed, then E Σ_{i=1}^{T} Z_i = ET · EZ_1.
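Wald's identity, in its iid corollary form E Σ_{i=1}^T Z_i = ET · EZ_1, is easy to test by simulation. The sketch below is illustrative (the level b, the increment distribution, and the replication count are my choices): Z_i ~ Uniform(0, 1) and T = inf{n ≥ 1 : Z_1 + ... + Z_n > b}, a first-passage stopping time:

```python
import random

def wald_check(b=10.0, reps=40000, seed=2):
    """Estimate E S_T and E T for the first-passage time T over level b,
    where S_n = Z_1 + ... + Z_n and the Z_i are iid Uniform(0, 1)."""
    rng = random.Random(seed)
    mean_ST = mean_T = 0.0
    for _ in range(reps):
        s, n = 0.0, 0
        while s <= b:
            s += rng.random()   # Z_i ~ Uniform(0, 1), so EZ_1 = 1/2
            n += 1
        mean_ST += s            # s now equals S_T = Z_1 + ... + Z_T
        mean_T += n
    return mean_ST / reps, mean_T / reps

mean_ST, mean_T = wald_check()
# Wald: E S_T = ET * EZ_1 = ET / 2, so mean_ST and mean_T / 2 should nearly agree.
```

Note that T here "peeks" one step past the level crossing, which is exactly what makes it a stopping time; the identity would fail for, say, the last time the sum is below b.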
Outline of proof of Proposition:

- N(t) + 1 is a stopping time adapted to (τ_i : i ≥ 0). (Note that N(t) is not a stopping time.)
- Wald's identity implies that

    E Σ_{i=0}^{N(t)+1} τ_i = Eτ_0 + E(N(t) + 1) · Eτ_1.

- Since t ≤ Σ_{i=0}^{N(t)+1} τ_i, we get lim inf_{t→∞} t^{−1} EN(t) ≥ 1/Eτ_1 when Eτ_0 < ∞.
- For the lim sup, truncate the τ_i's to τ_i ∧ n and let N_n(t) + 1 = min{k ≥ 0 : Σ_{i=0}^{k} (τ_i ∧ n) > t}. Note that N(·) ≤ N_n(·) and

    Σ_{i=0}^{N_n(t)+1} (τ_i ∧ n) ≤ t + n.

- Apply Wald's identity and divide by t:

    lim sup_{t→∞} (1/t) EN_n(t) ≤ 1/E(τ_1 ∧ n).

  So lim sup_{t→∞} t^{−1} EN(t) ≤ 1/E(τ_1 ∧ n).
- Send n → ∞ and apply the Monotone Convergence Theorem (MCT).
- Deal separately with Eτ_0 = ∞.

Theorem: Let X be regenerative with respect to (T(n) : n ≥ 0) and let f : S → R_+. If E(Y_0(f) + Y_1(f)) < ∞, then

    (1/t) ∫_0^t E f(X(s)) ds → E Y_1(f)/Eτ_1

as t → ∞.

Outline of proof of Theorem:

- (1/t) ∫_0^t f(X(s)) ds ≤ (1/t) Σ_{i=0}^{N(t)+1} Y_i(f)
- The same type of argument as in the previous Theorem proves that

    (1/t) Σ_{i=0}^{N(t)+1} Y_i(f) → E Y_1(f)/Eτ_1    a.s. as t → ∞.

- Wald's identity proves that

    (1/t) E Σ_{i=0}^{N(t)+1} Y_i(f) = (1/t) E Y_0(f) + (1/t) E(N(t)+1) · E Y_1(f).
- The elementary renewal theorem then shows that (1/t) E Σ_{i=0}^{N(t)+1} Y_i(f) → E Y_1(f)/Eτ_1 as t → ∞, so ((1/t) Σ_{i=0}^{N(t)+1} Y_i(f) : t ≥ 0) is uniformly integrable.
- So ((1/t) ∫_0^t f(X(s)) ds : t ≥ 0) is uniformly integrable.
- So (1/t) E ∫_0^t f(X(s)) ds → E Y_1(f)/Eτ_1.

3.5 Positive Recurrent Discrete Time Markov Chains

Assume X = (X_n : n ≥ 0) is irreducible and recurrent. For y ∈ S, let

    f(x) = 1 if x = y,  f(x) = 0 otherwise.

Note that

    P_x(Y_0(f) ≥ k) = P_x(Σ_{i=0}^{τ(z)−1} I(X_i = y) ≥ k) = P_x(τ(y) < τ(z)) P_y(τ(y) < τ(z))^{k−1},

and P_y(τ(z) < τ(y)) > 0, so E_x Y_0(f) < ∞ and E_x Y_1(f) < ∞. Hence, the strong law for regenerative processes implies that

    (1/n) Σ_{i=0}^{n−1} I(X_i = y) → E_z Σ_{i=0}^{τ(z)−1} I(X_i = y) / E_z τ(z)    a.s.

as n → ∞, and its expected-value counterpart implies that

    (1/n) Σ_{i=1}^{n} P^i(x, y) → E_z Σ_{i=0}^{τ(z)−1} I(X_i = y) / E_z τ(z)    (3.5.1)

as n → ∞. Note that:

- the right-hand side of (3.5.1) is independent of z (since the left-hand side is)
- the limit is independent of x
- if E_z τ(z) = ∞, the limit is zero (i.e., "null")
- if E_z τ(z) < ∞, the limit is positive.

Definition: A recurrent state z ∈ S is said to be positive recurrent if E_z τ(z) < ∞ and null recurrent otherwise.

Since the right-hand side of (3.5.1) is independent of z, positive recurrence/null recurrence is a class property (i.e., it either holds for all states in S or for no states in S). When X is positive recurrent, set

    π(y) = E_z Σ_{i=0}^{τ(z)−1} I(X_i = y) / E_z τ(z).    (3.5.2)
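The cycle representation (3.5.2) is directly simulable: run iid z-cycles and take the ratio of the total visits to y to the total cycle length. The sketch below is illustrative, not from the notes: I use a hypothetical 3-state birth-death chain whose stationary distribution, by detailed balance, is (1/4, 1/2, 1/4):

```python
import random

# Illustrative 3-state birth-death chain (not from the notes); by detailed
# balance its stationary distribution is pi = (1/4, 1/2, 1/4).
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

def step(x, rng):
    u, cum = rng.random(), 0.0
    for y, p in enumerate(P[x]):
        cum += p
        if u < cum:
            return y
    return len(P) - 1

def pi_from_cycles(z=0, cycles=50000, seed=3):
    """Estimate pi(y) = E_z sum_{i < tau(z)} I(X_i = y) / E_z tau(z), as in
    (3.5.2), by simulating iid z-cycles and pooling visits over all cycles."""
    rng = random.Random(seed)
    visits = [0, 0, 0]
    total = 0
    for _ in range(cycles):
        x = z
        while True:
            visits[x] += 1      # count X_0, ..., X_{tau(z)-1} of this cycle
            total += 1
            x = step(x, rng)
            if x == z:          # cycle ends at the first return to z
                break
    return [v / total for v in visits]

pi_hat = pi_from_cycles()   # should be close to (0.25, 0.5, 0.25)
```

The ratio estimator pools numerator and denominator over all cycles, which is exactly how (3.5.2) is estimated in regenerative simulation.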
Then:

- π = (π(y) : y ∈ S) is a probability mass function on S.
- For f : S → R_+, E Y_1(f)/Eτ_1 = Σ_y π(y) f(y).
- If E_z Σ_{j=0}^{τ(z)−1} |f(X_j)| < ∞, then E_x Σ_{j=0}^{τ(z)−1} |f(X_j)| < ∞ for all x ∈ S (so E_x Y_1(|f|) < ∞ and E_x Y_0(|f|) < ∞ for each x ∈ S); the theorem then implies that

    (1/n) E_x Σ_{j=0}^{n−1} f(X_j) → Σ_y π(y) f(y)

as n → ∞.

Exercise: Suppose X is positive recurrent and Σ_y π(y)|f(y)| < ∞. Show, by example, that the convergence

    (1/n) E_μ Σ_{j=0}^{n−1} f(X_j) → Σ_y π(y) f(y)

as n → ∞ can fail if the initial distribution μ = (μ(x) : x ∈ S) does not have finite support.

For π as defined via (3.5.2), π satisfies the linear system π = πP. The proof is similar to that used in the compactness argument of Chapter 1; (3.5.1) implies that

    π(y) = lim_{n→∞} (1/n) Σ_{i=1}^{n} P^i(x, y) = lim_{n→∞} Σ_z (1/n) Σ_{i=0}^{n−1} P^i(x, z) P(z, y).

To bring the limit inside the sum over z: for ε > 0, select a finite set K ⊆ S such that Σ_{y∈K^c} π(y) < ε. Then select n_0 so that for n ≥ n_0,

    |(1/n) Σ_{i=0}^{n−1} P^i(x, z) − π(z)| < ε/|K|

for z ∈ K. It follows that

    Σ_{z∈K^c} (1/n) Σ_{i=0}^{n−1} P^i(x, z) P(z, y) ≤ 2ε

for n ≥ n_0. On the other hand,

    Σ_{z∈K} (1/n) Σ_{i=0}^{n−1} P^i(x, z) P(z, y) → Σ_{z∈K} π(z) P(z, y).

Since ε was arbitrary, π(y) = Σ_z π(z) P(z, y).
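For a finite irreducible aperiodic chain, the equation π = πP can be solved numerically by power iteration: repeatedly multiply any initial probability vector by P. The sketch below is illustrative (the chain is a hypothetical example of mine, not from the notes); for this birth-death chain detailed balance gives π = (1/4, 1/2, 1/4):

```python
# Illustrative irreducible aperiodic chain (not from the notes): a 3-state
# birth-death chain; detailed balance gives pi = (1/4, 1/2, 1/4).
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

def stationary_by_power_iteration(P, iterations=500):
    """Iterate mu <- mu P; for an irreducible aperiodic finite chain the
    iterates converge to the unique probability solution of pi = pi P."""
    n = len(P)
    mu = [1.0 / n] * n
    for _ in range(iterations):
        mu = [sum(mu[x] * P[x][y] for x in range(n)) for y in range(n)]
    return mu

pi = stationary_by_power_iteration(P)
# Check invariance: pi P should equal pi componentwise.
residual = max(abs(sum(pi[x] * P[x][y] for x in range(3)) - pi[y])
               for y in range(3))
```

Consistent with the remark following the summary theorem, π(0) = 1/4 corresponds to a mean return time E_0 τ(0) = 1/π(0) = 4.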
Exercise:

a) Prove that X is positive recurrent if and only if π = πP has a solution π within the class of probability mass functions.
b) Prove that if X is positive recurrent, the solution π of π = πP is unique within the class of probability mass functions.
c) Give an example showing that π = πP may have multiple linearly independent solutions when π is not restricted to be a probability mass function. (Note that your example must be irreducible and positive recurrent.)

We are now ready to state a theorem that summarizes the above discussion.

Theorem: Let X = (X_n : n ≥ 0) be an irreducible Markov chain taking values in S.

a) If X is null recurrent and f : S → R_+ is such that E Y_1(f) < ∞, then

    (1/n) Σ_{j=0}^{n−1} f(X_j) → 0    a.s.

as n → ∞. If, in addition, E_x Y_0(f) < ∞, then

    (1/n) E_x Σ_{j=0}^{n−1} f(X_j) → 0

as n → ∞.

b) X is positive recurrent if and only if there exists a probability mass function solution to π = πP; the solution π is unique within the class of probability mass functions.

c) If X is positive recurrent, then for each f : S → R_+,

    (1/n) Σ_{j=0}^{n−1} f(X_j) → Σ_y π(y) f(y)    P_x-a.s.    (3.5.3)

as n → ∞, and

    (1/n) E_x Σ_{j=0}^{n−1} f(X_j) → Σ_y π(y) f(y)    (3.5.4)

as n → ∞. If f : S → R and Σ_y π(y)|f(y)| < ∞, then (3.5.3) and (3.5.4) continue to hold.

Remark: One implication of (3.5.2) is that for a positive recurrent chain, π(z) = 1/E_z τ(z).

Remark: Suppose that X is null recurrent. Consider

    η(y) = E_z Σ_{j=0}^{τ(z)−1} I(X_j = y).

It turns out that η = (η(y) : y ∈ S) is an invariant measure, in the sense that η satisfies η = ηP; see Resnick, p. (Of course, η cannot be summable, since otherwise X would be positive recurrent.) Does this invariant measure have a probability interpretation?
The answer is: Yes! Suppose that at time n = 0 we independently assign to each state y ∈ S a Poisson number of particles with mean η(y). Let each of the particles (across all states) independently evolve according to the transition dynamics of the Markov chain. Then the distribution of the particles at time 1 (and at all future times) will be identical to the distribution at time n = 0.

Remark: We have seen that all recurrent Markov chains have invariant measures. A transient irreducible Markov chain may either have an invariant measure η (i.e., a positive solution η of η = ηP) or not. In particular, the existence of an invariant measure for X does not imply that X is recurrent.
IEOR 6711: Stochastic Models I, Fall 23, Professor Whitt Solutions to Final Exam: Thursday, December 18. Below are six questions with several parts. Do as much as you can. Show your work. 1. Two-Pump Gas
More informationLecture 5. 1 Chung-Fuchs Theorem. Tel Aviv University Spring 2011
Random Walks and Brownian Motion Tel Aviv University Spring 20 Instructor: Ron Peled Lecture 5 Lecture date: Feb 28, 20 Scribe: Yishai Kohn In today's lecture we return to the Chung-Fuchs theorem regarding
More informationµ n 1 (v )z n P (v, )
Plan More Examples (Countable-state case). Questions 1. Extended Examples 2. Ideas and Results Next Time: General-state Markov Chains Homework 4 typo Unless otherwise noted, let X be an irreducible, aperiodic
More informationStochastic Processes in Discrete Time
Stochastic Processes in Discrete Time May 5, 2005 Contents Basic Concepts for Stochastic Processes 3. Conditional Expectation....................................... 3.2 Stochastic Processes, Filtrations
More informationPoisson Processes. Stochastic Processes. Feb UC3M
Poisson Processes Stochastic Processes UC3M Feb. 2012 Exponential random variables A random variable T has exponential distribution with rate λ > 0 if its probability density function can been written
More informationHW 2 Solutions. The variance of the random walk is explosive (lim n Var (X n ) = ).
Stochastic Processews Prof Olivier Scaillet TA Adrien Treccani HW 2 Solutions Exercise. The process {X n, n } is a random walk starting at a (cf. definition in the course. [ n ] E [X n ] = a + E Z i =
More informationPart I Stochastic variables and Markov chains
Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)
More informationMath 6810 (Probability) Fall Lecture notes
Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),
More informationMATH 564/STAT 555 Applied Stochastic Processes Homework 2, September 18, 2015 Due September 30, 2015
ID NAME SCORE MATH 56/STAT 555 Applied Stochastic Processes Homework 2, September 8, 205 Due September 30, 205 The generating function of a sequence a n n 0 is defined as As : a ns n for all s 0 for which
More informationNon-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings. Krzysztof Burdzy University of Washington
Non-Essential Uses of Probability in Analysis Part IV Efficient Markovian Couplings Krzysztof Burdzy University of Washington 1 Review See B and Kendall (2000) for more details. See also the unpublished
More informationStochastic Processes. Theory for Applications. Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS
Stochastic Processes Theory for Applications Robert G. Gallager CAMBRIDGE UNIVERSITY PRESS Contents Preface page xv Swgg&sfzoMj ybr zmjfr%cforj owf fmdy xix Acknowledgements xxi 1 Introduction and review
More informationKesten s power law for difference equations with coefficients induced by chains of infinite order
Kesten s power law for difference equations with coefficients induced by chains of infinite order Arka P. Ghosh 1,2, Diana Hay 1, Vivek Hirpara 1,3, Reza Rastegar 1, Alexander Roitershtein 1,, Ashley Schulteis
More informationIndeed, if we want m to be compatible with taking limits, it should be countably additive, meaning that ( )
Lebesgue Measure The idea of the Lebesgue integral is to first define a measure on subsets of R. That is, we wish to assign a number m(s to each subset S of R, representing the total length that S takes
More informationIowa State University. Instructor: Alex Roitershtein Summer Homework #5. Solutions
Math 50 Iowa State University Introduction to Real Analysis Department of Mathematics Instructor: Alex Roitershtein Summer 205 Homework #5 Solutions. Let α and c be real numbers, c > 0, and f is defined
More information1 Gambler s Ruin Problem
1 Gambler s Ruin Problem Consider a gambler who starts with an initial fortune of $1 and then on each successive gamble either wins $1 or loses $1 independent of the past with probabilities p and q = 1
More informationApproximating irregular SDEs via iterative Skorokhod embeddings
Approximating irregular SDEs via iterative Skorokhod embeddings Universität Duisburg-Essen, Essen, Germany Joint work with Stefan Ankirchner and Thomas Kruse Stochastic Analysis, Controlled Dynamical Systems
More informationSupplementary Notes for W. Rudin: Principles of Mathematical Analysis
Supplementary Notes for W. Rudin: Principles of Mathematical Analysis SIGURDUR HELGASON In 8.00B it is customary to cover Chapters 7 in Rudin s book. Experience shows that this requires careful planning
More informationRANDOM WALKS IN ONE DIMENSION
RANDOM WALKS IN ONE DIMENSION STEVEN P. LALLEY 1. THE GAMBLER S RUIN PROBLEM 1.1. Statement of the problem. I have A dollars; my colleague Xinyi has B dollars. A cup of coffee at the Sacred Grounds in
More informationg 2 (x) (1/3)M 1 = (1/3)(2/3)M.
COMPACTNESS If C R n is closed and bounded, then by B-W it is sequentially compact: any sequence of points in C has a subsequence converging to a point in C Conversely, any sequentially compact C R n is
More informationStat 516, Homework 1
Stat 516, Homework 1 Due date: October 7 1. Consider an urn with n distinct balls numbered 1,..., n. We sample balls from the urn with replacement. Let N be the number of draws until we encounter a ball
More informationNote that in the example in Lecture 1, the state Home is recurrent (and even absorbing), but all other states are transient. f ii (n) f ii = n=1 < +
Random Walks: WEEK 2 Recurrence and transience Consider the event {X n = i for some n > 0} by which we mean {X = i}or{x 2 = i,x i}or{x 3 = i,x 2 i,x i},. Definition.. A state i S is recurrent if P(X n
More information4 Sums of Independent Random Variables
4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables
More informationNotes on uniform convergence
Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean
More informationBehaviour of Lipschitz functions on negligible sets. Non-differentiability in R. Outline
Behaviour of Lipschitz functions on negligible sets G. Alberti 1 M. Csörnyei 2 D. Preiss 3 1 Università di Pisa 2 University College London 3 University of Warwick Lars Ahlfors Centennial Celebration Helsinki,
More information