MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 5


Section 6: Harris Recurrence

Contents
6.1 Harris Recurrent Markov Chains
6.2 Stochastic Lyapunov Functions

6.1 Harris Recurrent Markov Chains

We have previously developed a fairly complete steady-state theory for discrete state space Markov chains. In particular, we established that when X = (X_n : n ≥ 0) is an irreducible Markov chain on a discrete state space, then X has a steady-state (in the sense of the law of large numbers) precisely when X has a stationary distribution. One problem with this theory is that we cannot establish stability without essentially solving the linear system π = πP. We now wish to develop a recurrence theory that permits us to establish stability even for models for which the stationary equations cannot be explicitly solved. Ideally, this theory will also permit us to verify stability for Markov chains evolving on non-compact state spaces.

Definition 6.1.1 A Markov chain X = (X_n : n ≥ 0) with state space S is said to be recurrent in the sense of Harris if there exist A ⊆ S, λ > 0, m ≥ 1, and a distribution ϕ on S such that:

i.) P_x(T_A < ∞) = 1 for x ∈ S, where T_A = inf{n ≥ 0 : X_n ∈ A};

ii.) P_x(X_m ∈ ·) ≥ λ ϕ(·) for x ∈ A.

Remark 6.1.1 If S is discrete, then X is Harris recurrent if and only if there exists z ∈ S such that P_x(τ(z) < ∞) = 1 for x ∈ S, where τ(z) = inf{n ≥ 1 : X_n = z}.

Note that condition i.) guarantees that X visits A infinitely often a.s. Using the same coin flip idea, condition ii.) ensures that every time X visits A (not having visited A in the previous m time steps), there is a probability λ of a successful coin toss (i.e. a "heads"). It follows that there will be infinitely many times T_1, T_2, ... at which X has distribution ϕ. Each such random time T_i initiates a cycle having an identical distribution. Furthermore, X_{T_i − m} is independent of X_{T_i}. Hence, if m = 1, the resulting cycles are iid, whereas if m > 1, the cycles are correlated. However, the degree of correlation is modest, and the following theorem is easily established.

Theorem 6.1.1 Suppose that X = (X_n : n ≥ 0) is Harris recurrent. Put τ_i = T_i − T_{i−1}. If Eτ_1 < ∞, then X possesses a unique stationary distribution π and, for each non-negative f,

    (1/n) ∑_{j=1}^{n} f(X_j) → ∫_S π(dx) f(x)  a.s.

as n → ∞.
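
As a quick numerical sanity check of the law of large numbers in Theorem 6.1.1, the following minimal sketch simulates the Gaussian autoregression X_{n+1} = ρX_n + Z_{n+1} with Z_n iid N(0, 1), a standard example of a Harris recurrent chain on R whose stationary distribution is N(0, 1/(1 − ρ²)); the chain, the parameter values, and the test function f(x) = x² are illustrative choices, not part of the theorem.

    import numpy as np

    # Sketch: the time average of f(X_n) = X_n^2 along one path of the AR(1) chain
    #   X_{n+1} = rho * X_n + Z_{n+1},  Z_n iid N(0,1),
    # should approach the stationary expectation E_pi[X^2] = 1/(1 - rho^2),
    # as Theorem 6.1.1 predicts, regardless of the starting point.
    rng = np.random.default_rng(0)
    rho, n = 0.8, 200_000
    x = 10.0                      # start deliberately far from equilibrium
    running_sum = 0.0
    for _ in range(n):
        x = rho * x + rng.standard_normal()
        running_sum += x**2

    print("time average of X_n^2:", running_sum / n)
    print("stationary E[X^2]    :", 1.0 / (1.0 - rho**2))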

Remark 6.1.2 A sufficient condition guaranteeing Eτ_1 < ∞ is to require that

    sup_{x ∈ A} E_x τ_A < ∞,    (6.1.1)

where τ_A = inf{n ≥ 1 : X_n ∈ A}.

To illustrate what is involved in verifying condition ii.), suppose that

    P_x(X_m ∈ B) = ∫_B p_m(x, y) ξ(dy)    (6.1.2)

for some distribution ξ(·) and transition density p_m. Suppose that

    ϕ(B) = ∫_B φ(y) ξ(dy)

for some density φ(·). (The Radon–Nikodym theorem actually guarantees that ϕ must take this form in the presence of condition ii.).) If p_m(·, y) is continuous and positive, with A compact, then we can take

    λ = ∫_S inf_{x ∈ A} p_m(x, z) ξ(dz)  and  φ(y) = λ^{-1} inf_{x ∈ A} p_m(x, y).

Hence, in some generality, it follows that any compact set A satisfies condition ii.). (Caution: The above analysis assumes that the m-step transition probabilities P_x(X_m ∈ ·) can be represented as in (6.1.2), with a transition density that is positive and continuous. This must be verified separately for each example. If this fails to be true, one must verify condition ii.) from first principles.)

6.2 Stochastic Lyapunov Functions

It remains to provide a technique for verifying condition i.) and condition (6.1.1).

Proposition 6.2.1 Let w : S → [1, ∞) be a function for which there exists r < 1 such that E_x w(X_1) ≤ r w(x) for x ∈ A^c. Then,

    E_x τ_A ≤ (1 − r)^{-1} w(x)

for x ∈ A^c.

Proof: Via use of operator/function norms with weight function w(·), we conclude that

    E_x ∑_{j=0}^{T_A − 1} w(X_j) ≤ (1 − r)^{-1} w(x)

for x ∈ A^c. But since w(x) ≥ 1 for x ∈ S (and T_A = τ_A for x ∈ A^c),

    τ_A ≤ ∑_{j=0}^{τ_A − 1} w(X_j),
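
As a concrete illustration of this recipe, the sketch below computes λ and the density φ numerically for the one-step (m = 1) Gaussian kernel of the AR(1) chain used earlier, with the compact set A = [−1, 1] and ξ taken to be Lebesgue measure; the chain and the choice of A are illustrative assumptions. For this particular kernel, λ also has the closed form 2Φ(−ρ), where Φ is the standard normal cdf, which the script uses as a check.

    import numpy as np
    from math import erf, sqrt

    # Minorization sketch (m = 1, xi = Lebesgue measure) for the AR(1) chain
    #   X_{n+1} = rho*X_n + Z_{n+1},  Z_n iid N(0,1),
    # whose transition density is p(x, y) = N(y; rho*x, 1).  Take A = [-1, 1].
    # For fixed y, p(x, y) is unimodal in x, so the infimum over x in A is
    # attained at an endpoint of A.
    rho = 0.8
    grid = np.linspace(-12.0, 12.0, 200_001)          # fine grid for y
    dy = grid[1] - grid[0]

    def p(x, y):
        return np.exp(-(y - rho * x) ** 2 / 2) / np.sqrt(2 * np.pi)

    phi_unnormalized = np.minimum(p(-1.0, grid), p(1.0, grid))   # inf over x in A
    lam = phi_unnormalized.sum() * dy                            # lambda = integral of the inf
    phi = phi_unnormalized / lam                                 # density of the distribution phi

    print("lambda (numerical)  :", lam)
    print("lambda (closed form):", 2 * 0.5 * (1 + erf(-rho / sqrt(2))))
    print("phi integrates to   :", phi.sum() * dy)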

yielding the result.

Using first transition analysis, we see that for x ∈ A,

    E_x τ_A = 1 + ∫_{A^c} P(x, dy) E_y τ_A.

In view of Proposition 6.2.1, we conclude that

    sup_{x ∈ A} E_x τ_A ≤ 1 + (1 − r)^{-1} sup_{x ∈ A} E_x w(X_1).

Proposition 6.2.2 Suppose that there exist A ⊆ S, λ > 0, m ≥ 1, a distribution ϕ, r < 1, and w : S → [1, ∞) such that:

i.) E_x w(X_1) ≤ r w(x), x ∈ A^c;

ii.) sup_{x ∈ A} E_x w(X_1) < ∞;

iii.) P_x(X_m ∈ ·) ≥ λ ϕ(·), x ∈ A.

Then, X possesses a unique stationary distribution π and, for each non-negative f,

    (1/n) ∑_{i=1}^{n} f(X_i) → ∫_S π(dx) f(x)  a.s.

as n → ∞.

Remark 6.2.1 The function w(·) is called a stochastic Lyapunov function.

Example 6.2.1 Suppose that (S_n : n ≥ 0) is a sequence of rv's describing reservoir storage in the presence of a linear release rule, so that

    S_{n+1} = S_n + Z_{n+1} − a S_{n+1}

for a > 0. Assume that (Z_n : n ≥ 1) is a sequence of iid non-negative rv's for which EZ_1 < ∞. Put w(x) = 1 + x for x ≥ 0. Then,

    E_x w(S_1) = 1 + E_x S_1 = 1 + E(x + Z_1)/(1 + a)
               ≤ (1 + a/2)^{-1} w(x) + 1 + EZ_1/(1 + a) − a x (1 + a)^{-1} (2 + a)^{-1}
               ≤ (1 + a/2)^{-1} w(x)

for x ≥ (1 + a)(2 + a)/a + EZ_1 (2 + a)/a. Put A = [0, (1 + a)(2 + a)/a + EZ_1 (2 + a)/a]. Condition ii.) of Proposition 6.2.2 is easily verified here. Hence, it remains only to show that condition iii.) is satisfied. But this is straightforward to carry out if we assume that Z_1 has a continuous positive density on [0, ∞).

A weaker Lyapunov condition is offered by our next result.
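
To make Example 6.2.1 concrete, the following sketch picks a = 1 and Z_n ~ Exponential(1) (so EZ_1 = 1 and Z_1 has a continuous positive density on [0, ∞)); these parameter choices are illustrative assumptions. It checks the drift inequality at a point outside A and compares a Monte Carlo estimate of E_x τ_A with the bound (1 − r)^{-1} w(x) from Proposition 6.2.1.

    import numpy as np

    # Illustrative check of Example 6.2.1 with a = 1 and Z_n ~ Exp(1), so that
    #   S_{n+1} = (S_n + Z_{n+1}) / (1 + a),
    # A = [0, (1+a)(2+a)/a + EZ_1*(2+a)/a] = [0, 9], and r = (1 + a/2)^{-1} = 2/3.
    rng = np.random.default_rng(1)
    a, EZ = 1.0, 1.0
    threshold = (1 + a) * (2 + a) / a + EZ * (2 + a) / a     # right endpoint of A
    r = 1.0 / (1.0 + a / 2.0)
    w = lambda x: 1.0 + x

    def hitting_time(x, rng):
        """Steps for the storage chain started at x > threshold to enter A."""
        n = 0
        while x > threshold:
            x = (x + rng.exponential(1.0)) / (1.0 + a)
            n += 1
        return n

    x0 = 20.0                                                # a starting point outside A
    # Drift condition i.): E_x w(S_1) = 1 + (x + EZ)/(1+a) should be <= r * w(x).
    print("E_x w(S_1)            :", 1 + (x0 + EZ) / (1 + a))
    print("r * w(x)              :", r * w(x0))
    # Proposition 6.2.1: E_x tau_A <= (1-r)^{-1} w(x).
    taus = [hitting_time(x0, rng) for _ in range(10_000)]
    print("estimated E_x tau_A   :", np.mean(taus))
    print("bound (1-r)^{-1} w(x) :", w(x0) / (1 - r))

The bounds are of course conservative; the point is only that they can be verified without solving for the stationary distribution.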

Theorem 6.2.1 Suppose that there exist A ⊆ S, λ > 0, m ≥ 1, a distribution ϕ, ε > 0, and g : S → [0, ∞) such that:

I.) E_x g(X_1) ≤ g(x) − ε, x ∈ A^c;

II.) sup_{x ∈ A} E_x g(X_1) < ∞;

III.) P_x(X_m ∈ ·) ≥ λ ϕ(·), x ∈ A.

Then, X possesses a unique stationary distribution π and, for each non-negative f,

    (1/n) ∑_{i=1}^{n} f(X_i) → ∫_S f(x) π(dx)  a.s.

as n → ∞.

Proof: Let B = (B(x, dy) : x, y ∈ A^c) be the restriction of P = (P(x, dy) : x, y ∈ S) to A^c, so that B(x, dy) = P(x, dy) for x, y ∈ A^c. Put e(x) = 1 for x ∈ A^c. Note that condition I.) implies that

    ∫_{A^c} B(x, dy) g(y) = ∫_{A^c} P(x, dy) g(y) ≤ ∫_S P(x, dy) g(y) = E_x g(X_1) ≤ g(x) − ε e(x),

so that Bg ≤ g − ε e. Hence,

    ε e ≤ g − Bg.

Since B is a non-negative operator, it follows that

    ε B^j e ≤ B^j g − B^{j+1} g.

Summing over j ∈ {0, 1, ..., n}, we get

    ε ∑_{j=0}^{n} B^j e ≤ g − B^{n+1} g ≤ g.

Sending n → ∞, we conclude that

    ε ∑_{j=0}^{∞} B^j e ≤ g.

But

    (B^j e)(x) = P_x(τ_A > j),

so

    ∑_{j=0}^{∞} (B^j e)(x) = ∑_{j=0}^{∞} P_x(τ_A > j) = E_x τ_A,

yielding the inequality

    E_x τ_A ≤ g(x)/ε

for x ∈ A^c. Consequently, P_x(τ_A < ∞) = 1 for x ∈ A^c (and hence for all x ∈ S). Furthermore,

    sup_{x ∈ A} E_x τ_A ≤ 1 + ε^{-1} sup_{x ∈ A} E_x g(X_1).

The conclusion of the theorem therefore follows as for Proposition 6.2.2.
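
Here is a small numerical illustration of Theorem 6.2.1 and of the bound E_x τ_A ≤ g(x)/ε derived in its proof. The chain below is a reflected random walk with uniform increments and g(x) = x; both are illustrative choices made here, not examples from the text. For x ≥ 2 the reflection at 0 is inactive, so E_x g(X_1) = g(x) + Eξ_1 = g(x) − 1/2, i.e. condition I.) holds with ε = 1/2 and A = [0, 2).

    import numpy as np

    # Sketch: reflected random walk X_{n+1} = max(X_n + xi_{n+1}, 0) with
    # xi_n iid Uniform(-2, 1) and Lyapunov function g(x) = x.  Condition I.)
    # holds with epsilon = 0.5 on A^c = [2, infinity), and the proof of
    # Theorem 6.2.1 gives E_x tau_A <= g(x)/epsilon = 2x there.
    rng = np.random.default_rng(2)
    eps = 0.5

    def tau_A(x, rng):
        """Steps until the reflected walk started at x first enters A = [0, 2)."""
        n = 0
        while x >= 2.0:
            x = max(x + rng.uniform(-2.0, 1.0), 0.0)
            n += 1
        return n

    x0 = 50.0
    samples = [tau_A(x0, rng) for _ in range(5_000)]
    print("estimated E_x tau_A :", np.mean(samples))
    print("bound g(x)/epsilon  :", x0 / eps)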

Remark 6.2.2 A nice physical way to think about g is to view g(x) as representing the potential energy associated with x. Condition I.) asserts that, in expectation, the potential energy has a tendency to decrease by ε on A^c. Hence, since the system wishes to move to points of lower potential energy, it follows that the system should eventually enter A.

Remark 6.2.3 A high level of ingenuity may be required to find a suitable Lyapunov function. One approach to finding a suitable g is to try some candidate functions. When S = R^d, typical candidates to try are:

a.) ‖x‖^p;

b.) exp(a‖x‖^p);

c.) (log(1 + ‖x‖))^p.
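
One way to carry out this "try some candidate functions" step in practice is to estimate the one-step drift E_x g(X_1) − g(x) by Monte Carlo at a few test points and see whether it is negative, and bounded away from zero, once ‖x‖ is large. The sketch below does this for an illustrative R²-valued chain X_{n+1} = X_n/2 + Z_{n+1} with Z_n iid N(0, I) and for the candidates ‖x‖ and ‖x‖²; the chain and the candidates are assumptions made purely for illustration.

    import numpy as np

    # Sketch of numerically probing candidate Lyapunov functions: estimate the
    # drift E_x g(X_1) - g(x) for the chain X_{n+1} = X_n/2 + Z_{n+1} in R^2
    # at test points of increasing norm.  A good candidate shows a drift that
    # is negative and bounded away from zero outside a compact set
    # (condition I.) of Theorem 6.2.1).
    rng = np.random.default_rng(3)
    candidates = {
        "||x||":   lambda x: np.linalg.norm(x, axis=-1),
        "||x||^2": lambda x: np.linalg.norm(x, axis=-1) ** 2,
    }

    n_samples = 100_000
    for name, g in candidates.items():
        print("candidate g(x) =", name)
        for radius in (1.0, 5.0, 20.0):
            x = np.array([radius, 0.0])                   # a test point with ||x|| = radius
            x_next = x / 2.0 + rng.standard_normal((n_samples, 2))
            drift = g(x_next).mean() - g(x)               # estimate of E_x g(X_1) - g(x)
            print(f"  ||x|| = {radius:5.1f}   estimated drift = {drift:8.3f}")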