Random Walks Conditioned to Stay Positive

1 Random Walks Conditioned to Stay Positive

Bob Keener

Let $S_n$ be a random walk formed by summing i.i.d. integer-valued random variables $X_i$, $i \ge 1$: $S_n = X_1 + \cdots + X_n$. If the drift $EX_i$ is negative, then $S_n \to -\infty$ as $n \to \infty$. If $A_n$ is the event that $S_k \ge 0$ for $k = 1, \ldots, n$, then $P(A_n) \to 0$ as $n \to \infty$. In this talk we consider conditional distributions for the random walk given $A_n$. The main result shows that the finite-dimensional distributions of the random walk given $A_n$ converge to those of a time-homogeneous Markov chain on $\{0, 1, \ldots\}$.
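This setup is easy to check by simulation. The sketch below uses a hypothetical step law with $P(X = 1) = 0.4$ and $P(X = -1) = 0.6$ (the function name and parameters are illustrative, not from the talk) to estimate $P(A_n)$ by Monte Carlo and watch it shrink as $n$ grows:

```python
import random

def prob_stay_nonnegative(n, p_up=0.4, trials=20000, seed=0):
    """Monte Carlo estimate of P(A_n) = P(S_k >= 0 for k = 1..n) for a
    walk with i.i.d. steps +1 (prob p_up) and -1 (prob 1 - p_up);
    p_up < 1/2 gives negative drift.  All parameters are illustrative."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 0
        for _ in range(n):
            s += 1 if rng.random() < p_up else -1
            if s < 0:
                break          # the walk went negative: A_n fails
        else:
            hits += 1          # stayed nonnegative through step n
    return hits / trials
```

With negative drift the estimate decreases toward 0 as $n$ increases, as the slide asserts.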

2 Exponential Families

Let $h$ denote the mass function of an integer-valued random variable $X$ under $P_0$. Assume that $E_0 X = \sum_x x h(x) = 0$, and define

$$e^{\psi(\omega)} = E_0 e^{\omega X} = \sum_x e^{\omega x} h(x). \qquad (1)$$

Then $f_\omega(x) = h(x) \exp[\omega x - \psi(\omega)]$ is a probability mass function whenever $\omega \in \Omega = \{\omega : \psi(\omega) < \infty\}$. Let $X, X_1, \ldots$ be i.i.d. under $P_\omega$ with marginal mass function $f_\omega$. Differentiating (1),

$$\psi'(\omega) e^{\psi(\omega)} = \sum_x x e^{\omega x} h(x),$$

and so $E_\omega X = \psi'(\omega)$. Similarly, $\operatorname{Var}_\omega(X) = \psi''(\omega)$. Note that since $\psi'' > 0$, $\psi'$ is increasing, and since $\psi'(0) = E_0 X = 0$, $\psi'(\omega) < 0$ when $\omega < 0$.
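These identities are straightforward to sanity-check numerically. A minimal sketch, assuming a concrete centered mass function $h$ on $\{-1, 0, 1\}$ (my choice, purely illustrative):

```python
import math

# A hypothetical centered mass function h on {-1, 0, 1}, so E_0 X = 0.
h = {-1: 0.3, 0: 0.4, 1: 0.3}

def psi(w):
    """Cumulant generating function: e^psi(w) = sum_x e^{wx} h(x)."""
    return math.log(sum(math.exp(w * x) * p for x, p in h.items()))

def tilted_mean(w):
    """E_w X under the tilted mass function f_w(x) = h(x) e^{wx - psi(w)}."""
    return sum(x * math.exp(w * x - psi(w)) * p for x, p in h.items())

def tilted_var(w):
    """Var_w(X) under f_w."""
    m = tilted_mean(w)
    return sum((x - m) ** 2 * math.exp(w * x - psi(w)) * p
               for x, p in h.items())
```

A numerical derivative of `psi` agrees with `tilted_mean`, confirming $E_\omega X = \psi'(\omega)$, and `tilted_mean(w) < 0` for $w < 0$ as the slide states.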

3 Classical Large Deviation Theory

Exponential Tilting: Let $S_n = X_1 + \cdots + X_n$ and $\bar X_n = S_n / n$. Since $X_1, \ldots, X_n$ have joint mass function

$$\prod_{i=1}^n \big( h(x_i) \exp[\omega x_i - \psi(\omega)] \big) = \Big[ \prod_{i=1}^n h(x_i) \Big] \exp[\omega s_n - n\psi(\omega)],$$

$$E_\omega f(X_1, \ldots, X_n) = E_0 \big( f(X_1, \ldots, X_n) \exp[\omega S_n - n\psi(\omega)] \big).$$

In particular,

$$P_\omega(S_n \ge 0) = e^{-n\psi(\omega)} E_0[e^{\omega S_n}; S_n \ge 0].$$

For notation, $E[Y; A] \stackrel{\text{def}}{=} E[Y 1_A]$. Using this, it is easy to argue that if $\omega < 0$,

$$\frac{1}{n} \log P_\omega(S_n \ge 0) \to -\psi(\omega), \quad \text{or} \quad P_\omega(S_n \ge 0) = e^{-n\psi(\omega)} e^{o(n)}.$$

Also, for any $\epsilon > 0$, $P_\omega(\bar X_n > \epsilon \mid \bar X_n \ge 0) \to 0$ as $n \to \infty$. [For regularity, need $0 \in \Omega^o$.]
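The tilting identity holds exactly for every $n$, so it can be verified by brute-force convolution. A sketch, again assuming the illustrative step law $h$ on $\{-1, 0, 1\}$:

```python
import math

# Exact check of P_w(S_n >= 0) = e^{-n psi(w)} E_0[e^{w S_n}; S_n >= 0]
# for a hypothetical centered step law h on {-1, 0, 1}.
h = {-1: 0.3, 0: 0.4, 1: 0.3}
w = -0.7
psi = math.log(sum(math.exp(w * x) * p for x, p in h.items()))
f = {x: p * math.exp(w * x - psi) for x, p in h.items()}  # tilted step law

def walk_dist(step, n):
    """Exact distribution of S_n = X_1 + ... + X_n by repeated convolution."""
    cur = {0: 1.0}
    for _ in range(n):
        nxt = {}
        for s, ps in cur.items():
            for x, px in step.items():
                nxt[s + x] = nxt.get(s + x, 0.0) + ps * px
        cur = nxt
    return cur

n = 12
lhs = sum(p for s, p in walk_dist(f, n).items() if s >= 0)
rhs = math.exp(-n * psi) * sum(math.exp(w * s) * p
                               for s, p in walk_dist(h, n).items() if s >= 0)
```

The two sides agree to floating-point precision, and $\psi(\omega) > 0$ here, consistent with the exponential decay rate on the slide.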

4 Refinements

Local Central Limit Theorem: If the distribution of $X$ is lattice with span 1, then as $n \to \infty$,

$$P_0[S_n = k] \approx \frac{1}{\sqrt{2\pi n \psi''(0)}}.$$

Using this, for $\omega < 0$,

$$P_\omega(S_n \ge 0) = e^{-n\psi(\omega)} E_0[e^{\omega S_n}; S_n \ge 0] \approx e^{-n\psi(\omega)} \sum_{k=0}^\infty \frac{e^{\omega k}}{\sqrt{2\pi n \psi''(0)}} = \frac{e^{-n\psi(\omega)}}{(1 - e^\omega)\sqrt{2\pi n \psi''(0)}},$$

and

$$P_\omega(S_n = k \mid S_n \ge 0) \to (1 - e^\omega) e^{\omega k},$$

the mass function of a geometric distribution with success probability $1 - e^\omega$.
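The geometric limit shows up numerically at moderate $n$: with a hypothetical span-1 step law on $\{-1, 0, 1\}$ (same illustrative $h$ as before) and $w < 0$, the exact conditional law of $S_n$ given $S_n \ge 0$ is already close to $(1 - e^w)e^{wk}$:

```python
import math

# Compare the exact conditional law of S_n given S_n >= 0 under P_w
# with the limiting geometric law (1 - e^w) e^{wk}.
h = {-1: 0.3, 0: 0.4, 1: 0.3}   # hypothetical centered step law, span 1
w = -0.7
psi = math.log(sum(math.exp(w * x) * p for x, p in h.items()))
f = {x: p * math.exp(w * x - psi) for x, p in h.items()}  # tilted step law

def walk_dist(step, n):
    """Exact distribution of S_n by repeated convolution."""
    cur = {0: 1.0}
    for _ in range(n):
        nxt = {}
        for s, ps in cur.items():
            for x, px in step.items():
                nxt[s + x] = nxt.get(s + x, 0.0) + ps * px
        cur = nxt
    return cur

n = 400
dist = walk_dist(f, n)
z = sum(p for s, p in dist.items() if s >= 0)
cond = {s: p / z for s, p in dist.items() if s >= 0}   # P_w(S_n = k | S_n >= 0)
geom = {k: (1 - math.exp(w)) * math.exp(w * k) for k in range(10)}
```

At $n = 400$ the pointwise difference between `cond` and `geom` is already well under $0.01$.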

5 Goal, Sufficiency, and Notation

Let $\tau = \inf\{n : S_n < 0\}$ and note that $\tau > n$ if and only if $S_j \ge 0$ for $j = 1, \ldots, n$.

Goal: Study the behavior of the random walk given $\tau > n$ for large $n$.

Sufficiency: Under $P_\omega$, the conditional distribution of $X_1, \ldots, X_n$ given $S_n$ does not depend on $\omega$.

New Measures: Under $P_\omega^{(a)}$, the summands $X_1, X_2, \ldots$ are still i.i.d. with common mass function $f_\omega$, but $S_n = a + X_1 + \cdots + X_n$. Finally, $P_n^{(a,b)}$ denotes conditional probability under $P_\omega^{(a)}$ given $S_n = b$. Under this measure, $S_k$ goes from $a$ to $b$ in $n$ steps.

6 Positive Drift

If $\omega > 0$, then $E_\omega X > 0$. In this case $P_\omega^{(a)}(\tau = \infty) > 0$, and conditioning on $\tau = \infty$ is simple. For $x \ge 0$,

$$P_\omega^{(a)}(S_1 = x \mid \tau = \infty) = \frac{P_\omega^{(a)}(S_1 = x, \tau = \infty)}{P_\omega^{(a)}(\tau = \infty)} = P_\omega^{(a)}(S_1 = x) \frac{P_\omega^{(x)}(\tau = \infty)}{P_\omega^{(a)}(\tau = \infty)}.$$

Given $\tau = \infty$, the process $S_n$, $n \ge 0$, is a time-homogeneous Markov chain, and the conditional transition kernel is an h-transform of the original transition kernel for $S_n$.
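As a concrete instance (my specialization to the simple walk, not a slide from the talk): for the simple walk with up-probability $p > 1/2$, gambler's ruin gives $P^{(a)}(\tau = \infty) = 1 - (q/p)^{a+1}$, and the conditioned kernel is the corresponding h-transform. A sketch, with function names of my own choosing:

```python
def h(a, p):
    """P^{(a)}(tau = infinity) for a simple walk with up-probability p > 1/2:
    the gambler's-ruin chance 1 - (q/p)^{a+1} of never hitting -1 from a."""
    return 1.0 - ((1 - p) / p) ** (a + 1)

def conditioned_kernel(a, x, p):
    """Transition kernel of the walk given tau = infinity:
    an h-transform of the unconditioned +/-1 kernel."""
    step = p if x == a + 1 else (1 - p if x == a - 1 else 0.0)
    return step * h(x, p) / h(a, p)
```

Two checks confirm the h-transform structure: `h` is harmonic for the killed walk ($p\,h(a+1) + q\,h(a-1) = h(a)$, with $h(-1) = 0$), and each row of the conditioned kernel sums to 1.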

7 Simple Random Walk

If $X_i = \pm 1$ with $p = P_\omega(X_i = 1)$, $q = 1 - p = P_\omega(X_i = -1)$, and $\omega = \frac{1}{2}\log(p/q)$, then $S_n$, $n \ge 0$, is a simple random walk.

Theorem: For a simple random walk, as $n \to \infty$,

$$P_n^{(a,b)}(\tau > n) \sim \frac{2(a+1)(b+1)}{n}.$$

Proof: By the reflection principle, $P_0^{(a)}(\tau < n, S_n = b) = P_0^{(a)}(S_n = -b - 2)$. Dividing by $P_0^{(a)}(S_n = b)$ (a binomial probability),

$$P_n^{(a,b)}(\tau < n) = \frac{\left(\frac{n+b-a}{2}\right)! \left(\frac{n-b+a}{2}\right)!}{\left(\frac{n-a-b-2}{2}\right)! \left(\frac{n+a+b+2}{2}\right)!}.$$

The result follows from Stirling's formula.
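Both the reflection identity and the asymptotic rate can be checked directly. The sketch below counts paths exhaustively for small $n$ to verify the reflection principle, and compares the exact bridge probability (in closed form via the reflection principle) with $2(a+1)(b+1)/n$:

```python
from itertools import product
from math import comb

def reflection_counts(a, b, n):
    """Count length-n paths a -> b that dip below 0, and length-n paths
    a -> -b-2; the reflection principle says the counts agree."""
    dip = reflected = 0
    for steps in product((-1, 1), repeat=n):
        s, lo = a, a
        for x in steps:
            s += x
            lo = min(lo, s)
        dip += (s == b and lo < 0)
        reflected += (s == -b - 2)
    return dip, reflected

def bridge_stay_nonneg(a, b, n):
    """Exact P_n^{(a,b)}(tau > n) for the simple walk via reflection:
    1 - C(n, u_ref) / C(n, u_good)."""
    u_good = (n + b - a) // 2        # up-steps in a path a -> b
    u_ref = (n - b - a - 2) // 2     # up-steps in a path a -> -b-2
    bad = comb(n, u_ref) if 0 <= u_ref <= n else 0
    return 1.0 - bad / comb(n, u_good)
```

For large $n$, `bridge_stay_nonneg(a, b, n)` is within a fraction of a percent of the theorem's $2(a+1)(b+1)/n$.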

8 Result for Simple Random Walks

Let $Y_n$, $n \ge 0$, be a Markov chain on $\{0, 1, \ldots\}$ with $Y_0 = 0$ and transition matrix

$$\begin{pmatrix}
0 & 1 & 0 & 0 & 0 & \cdots \\
1/4 & 0 & 3/4 & 0 & 0 & \cdots \\
0 & 2/6 & 0 & 4/6 & 0 & \cdots \\
0 & 0 & 3/8 & 0 & 5/8 & \cdots \\
\vdots & & & & & \ddots
\end{pmatrix}$$

Theorem: For a simple random walk with $p < 1/2$, as $n \to \infty$,

$$P_\omega(S_1 = y_1, \ldots, S_j = y_j \mid \tau > n) \to P(Y_1 = y_1, \ldots, Y_j = y_j).$$
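The matrix entries are exactly the h-transform of the symmetric walk's kernel with $h(y) = y + 1$ (for the simple walk $S_\tau = -1$, so $E^{(y)}(y - S_\tau) = y + 1$). A quick check of this reading:

```python
def kernel(y, z):
    """Transition kernel of Y: the h-transform, with h(y) = y + 1, of the
    simple symmetric walk killed on going negative:
    p(y, z) = (1/2) * (z + 1) / (y + 1) for |z - y| = 1, z >= 0."""
    if z < 0 or abs(z - y) != 1:
        return 0.0
    return 0.5 * (z + 1) / (y + 1)
```

The first four rows reproduce the matrix above, and every row sums to 1.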

9 Proof

Let $B = \{S_1 = y_1, \ldots, S_j = y_j\}$ with $y_j = y$, $y_1 = 1$, $y_i \ge 0$, and $|y_{i+1} - y_i| = 1$. Then

$$P_n^{(0,b)}(B, \tau > n) = \frac{P_0(B, \tau > n, S_n = b)}{P_0(S_n = b)} = \frac{(1/2)^j P_{n-j}^{(y,b)}(\tau > n - j)\, P_0(S_{n-j} = b - y)}{P_0(S_n = b)} \approx \frac{2(y+1)(b+1)}{n}\, 2^{-j}.$$

Use this in

$$P_\omega(B \mid \tau > n) = \frac{\sum_b P_n^{(0,b)}(B, \tau > n)\, P_\omega(S_n = b)}{\sum_b P_n^{(0,b)}(\tau > n)\, P_\omega(S_n = b)}.$$

10 General Results

For the general case, let $Y_n$, $n \ge 0$, be a time-homogeneous Markov chain with $Y_0 = 0$ and

$$P(Y_{n+1} = z \mid Y_n = y) = P_0(X = z - y)\, \frac{E_0^{(z)}(z - S_\tau)}{E_0^{(y)}(y - S_\tau)}.$$

Remark: the transition kernel for $Y$ is an h-transform of the kernel for the random walk under $P_0$.

Theorem: For $\omega < 0$, as $n \to \infty$,

$$P_\omega(S_1 = y_1, \ldots, S_k = y_k \mid \tau > n) \to P(Y_1 = y_1, \ldots, Y_k = y_k).$$

Theorem: Let $\tau^+(b) = \inf\{n : S_n > b\}$. For $\omega < 0$,

$$P_\omega(S_n = b \mid \tau > n) \to \frac{e^{b\omega} E_0 S_{\tau^+(b)}}{\sum_{k=0}^\infty e^{k\omega} E_0 S_{\tau^+(k)}}.$$

11 Approximate Reflection: $P_n^{(a,b)}(\tau > n)$, $b$ of order $\sqrt n$

Proposition: If $0 \le b = O(\sqrt n)$,

$$P_n^{(a,b)}(\tau > n) = \frac{2b}{n\psi''(0)} E_0^{(a)}(a - S_\tau) + o\big(1/\sqrt n\big).$$

Proof: Let $g_k$ denote the $P_0$ mass function of $S_k$, and define

$$L_k(x) = \frac{P_n^{(a,b)}(S_k = x)}{P_n^{(a,-b)}(S_k = x)} = \frac{g_{n-k}(b - x)\, g_n(-b - a)}{g_{n-k}(-b - x)\, g_n(b - a)}.$$

Then $L_k(S_k)$ is $dP_n^{(a,b)} / dP_n^{(a,-b)}$ restricted to $\sigma(S_k)$ or $\sigma(X_1, \ldots, X_k)$, and

$$P_n^{(a,b)}(\tau \le n) = E_n^{(a,-b)} L_\tau(S_\tau) = E_n^{(a,-b)} \Big[ 1 - \frac{2b}{n\psi''(0)}(a - S_\tau) + o\big(1/\sqrt n\big) \Big].$$

Finish by arguing that $E_n^{(a,-b)} S_\tau \to E_0^{(a)} S_\tau$.

12 Approximate Reflection: $P_n^{(a,b)}(\tau > n)$, $b$ of order one

Corollary: As $n \to \infty$,

$$P_n^{(a,b)}(\tau > n) = \frac{2}{n\psi''(0)} E_0^{(a)}(a - S_\tau)\, E_0^{(b)}(b - S_\tau) + o(1/n).$$

Proof: Take $m = n/2$. Then

$$P_n^{(a,b)}(\tau > n) = \sum_{c=0}^\infty P_n^{(a,b)}(S_m = c)\, P_m^{(a,c)}(\tau > m)\, P_{n-m}^{(b,c)}(\tau > n - m).$$

The result follows from the proposition, since $S_m$ under $P_n^{(a,b)}$ is approximately normal.

13 References

Keener (1992). Limit theorems for random walks conditioned to stay positive. Ann. Probab.
Iglehart (1974). Functional central limit theorems for random walks conditioned to stay positive. Ann. Probab.
Durrett (1980). Conditioned limit theorems for random walks with negative drift. Z. Wahrsch. verw. Gebiete.
Bertoin and Doney (1994). On conditioning a random walk to stay nonnegative. Ann. Probab.