Random Walks and Brownian Motion                    Tel Aviv University Spring 2011
Lecture 8                                           Lecture date: Apr 2011
Instructor: Ron Peled                               Scribe: Uri Grupel

In this lecture we compute asymptotic estimates for the Green's function and apply them to the problem of exiting annuli. We also define capacity and polar sets, prove the Benjamini-Pemantle-Peres theorem on Martin capacity for Markov chains [3], and apply it to the problem of intersections of random walks.

Tags for today's lecture: Green's function, exiting annuli, capacity, polar set, Martin capacity, intersection of random walks.

Asymptotics for the Green's function

Here we derive asymptotics for the Green's function and apply them to the problem of exiting annuli in dimension $d \ge 3$.

Reminders:

Local Central Limit Theorem (LCLT):
$$\sup_{\substack{x \in \mathbb{Z}^d \\ x,\, n \text{ of the same parity}}} \left| P(S_n = x) - 2\left(\frac{d}{2\pi n}\right)^{d/2} e^{-\frac{d|x|^2}{2n}} \right| = O\big(n^{-\frac{d+2}{2}}\big) \quad \text{as } n \to \infty.$$
We will denote
$$\bar p(n,x) = 2\left(\frac{d}{2\pi n}\right)^{d/2} e^{-\frac{d|x|^2}{2n}} \qquad \text{and} \qquad E(n,x) = P(S_n = x) - \bar p(n,x).$$

Large deviations: there exist $C_d, c_d > 0$ such that for all $r, n$,
$$P(|S_n| \ge r\sqrt{n}) \le C_d\, e^{-c_d r^2}.$$

Green's function: in dimension $d \ge 3$ the Green's function is defined by
$$G(x,y) = G(y-x) = E_x(\text{number of visits to } y) = \sum_{n=0}^{\infty} P_x(S_n = y).$$
In dimension $d = 1, 2$ this sum is always infinite. There, the potential kernel plays a similar role:
$$a(x) = \sum_{n=0}^{\infty} \big[ P(S_n = 0) - P(S_n = x) \big] = \text{``}G(0) - G(x)\text{''}.$$
The use of quotation marks is due to the fact that in dimension $d = 1, 2$ the sum in the definition of the Green's function is infinite.
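These reminders can be sanity-checked numerically. Below is a minimal sketch (my addition, not part of the original notes) comparing the exact distribution of the one-dimensional SRW with the LCLT approximation; in $d = 1$, $P(S_n = x) = \binom{n}{(n+x)/2}\, 2^{-n}$ for $x$ of the same parity as $n$.

```python
import math

def srw_prob(n, x):
    """Exact P(S_n = x) for simple random walk on Z; zero if parities differ."""
    if (n + x) % 2 != 0 or abs(x) > n:
        return 0.0
    return math.comb(n, (n + x) // 2) / 2 ** n

def lclt_approx(n, x):
    """LCLT approximation 2 (d/(2 pi n))^{d/2} exp(-d|x|^2/(2n)) with d = 1."""
    return 2 * (1 / (2 * math.pi * n)) ** 0.5 * math.exp(-x ** 2 / (2 * n))

n = 1000
for x in (0, 10, 20, 40):          # same parity as n
    exact, approx = srw_prob(n, x), lclt_approx(n, x)
    print(x, exact, approx, abs(exact - approx))
```

At $n = 1000$ the discrepancies are well below the $n^{-3/2}$ scale of the error term in the LCLT for $d = 1$.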
Theorem 1. For $d \ge 3$, $G(x) \sim a_d |x|^{2-d}$ as $|x| \to \infty$, where
$$a_d = \frac{d}{2}\,\Gamma\!\Big(\frac d2 - 1\Big)\,\pi^{-d/2} = \frac{2}{(d-2)\,\omega_d},$$
and $\omega_d = \mathrm{vol}(B^d)$ is the volume of the unit ball in $\mathbb R^d$. More precisely, for every $\alpha < d$,
$$G(x) = a_d\,|x|^{2-d} + o(|x|^{-\alpha}) \quad \text{as } |x| \to \infty.$$

Proof. Fix $x \ne 0$ of even parity, so that $G(x) = \sum_{n=1}^{\infty} P(S_n = x)$. Note first that
$$\sum_{n \text{ even}} \bar p(n,x) = \sum_{n \text{ even}} 2\left(\frac{d}{2\pi n}\right)^{d/2} e^{-\frac{d|x|^2}{2n}} = \int_0^{\infty} \left(\frac{d}{2\pi t}\right)^{d/2} e^{-\frac{d|x|^2}{2t}}\,dt + O(|x|^{-d}) = \frac{d}{2}\,\Gamma\!\Big(\frac d2 - 1\Big)\,\pi^{-d/2}\,|x|^{2-d} + O(|x|^{-d}),$$
where the $O(|x|^{-d})$ is with respect to $|x| \to \infty$. (The error in replacing the sum by the integral is of the order of the largest summand, which occurs for $n \asymp |x|^2$ and is of order $|x|^{-d}$.)

Now $G(x) = \sum_{n} \bar p(n,x) + \sum_{n} E(n,x)$, so we need only show that for any $\alpha < d$, $\sum_{n=1}^{\infty} E(n,x) = o(|x|^{-\alpha})$. To see this, pick a threshold $n_0$ close to $|x|^2$, e.g. $n_0 = c|x|^2/\log|x|$, and write
$$\sum_{n=1}^{\infty} E(n,x) = \sum_{n=1}^{n_0} E(n,x) + \sum_{n = n_0 + 1}^{\infty} E(n,x).$$
The first sum is $O(|x|^{-d})$ by the large deviation bound and a suitable choice of $c$; the second is $O(n_0^{-d/2}) = o(|x|^{-\alpha})$ for any $\alpha < d$ by the LCLT.

For $x$ of odd parity, note that $G(x)$ is harmonic for $x \ne 0$, so $G(x) = \frac{1}{2d}\sum_{i=1}^{2d} G(x + e_i)$ (where $e_1, \dots, e_{2d}$ are the signed unit vectors), giving the result. $\square$

Remark 2.
1. In the continuum, for Brownian motion in $\mathbb R^d$ ($d \ge 3$), the Green function is exactly $a_d |x|^{2-d}$; this function is harmonic except at $x = 0$, where its distributional Laplacian is a multiple of $\delta_0$.
2. The error in the previous theorem is in fact $O(|x|^{-d})$.

Application (exiting annuli, or quantitative transience). In $d \ge 3$ denote $A_{n,m} = \{x \in \mathbb Z^d : n < |x| < m\}$ and $\tau_{n,m} = \min\{j : S_j \notin A_{n,m}\}$. Then for $x \in A_{n,m}$,
$$P_x\big(|S_{\tau_{n,m}}| \le n\big) = \frac{|x|^{2-d} - m^{2-d} + O(n^{1-d})}{n^{2-d} - m^{2-d}}.$$
In particular, taking $m \to \infty$, we get
$$P_x\big(\exists j : |S_j| \le n\big) = \frac{|x|^{2-d}}{n^{2-d}} + O(n^{-1}).$$

Proof. Since $G$ is harmonic except at $0$, $M_j := G(S_{j \wedge \tau})$ (with $\tau = \tau_{n,m}$) is a bounded martingale, so by optional sampling:
$$G(x) = E_x\, G(S_\tau) = P_x(|S_\tau| \le n)\, E_x\big[G(S_\tau) \,\big|\, |S_\tau| \le n\big] + \big(1 - P_x(|S_\tau| \le n)\big)\, E_x\big[G(S_\tau) \,\big|\, |S_\tau| \ge m\big].$$
Noting that $G(y) = a_d |y|^{2-d} + O(|y|^{1-d})$ (weaker than the error in the theorem), the two conditional expectations are $a_d n^{2-d} + O(n^{1-d})$ and $a_d m^{2-d} + O(m^{1-d})$ respectively. Isolating $P_x(|S_\tau| \le n)$ we get the result. $\square$

Similarly one can prove:

Proposition 3. In $d \ge 3$, letting $C_n = \{x \in \mathbb Z^d : |x| < n\}$ and $\tau = \tau_n = \min\{j : S_j \notin C_n \text{ or } S_j = 0\}$, for $x \in C_n$, as $|x| \to \infty$,
$$P_x(S_\tau = 0) = \frac{a_d}{G(0)}\big(|x|^{2-d} - n^{2-d}\big) + O(|x|^{1-d}).$$

Reminder (second type of Green's function): for any $A \subseteq \mathbb Z^d$,
$$G_A(x,y) = E_x(\text{number of visits to } y \text{ before exiting } A).$$

Question 4. How do we estimate $G_A(x,y)$?

Definition 5. For $A \subseteq \mathbb Z^d$ we define the boundary of $A$ as
$$\partial A = \{x \in \mathbb Z^d : x \notin A,\ x \sim y \text{ for some } y \in A\}.$$

Proposition 6. For finite $A \subseteq \mathbb Z^d$ and $x, z \in A$,
$$G_A(x,z) = G(x,z) - \sum_{y \in \partial A} H_A(x,y)\, G(y,z),$$
where $H_A(x,y) = P_x(y \text{ is the first exit point of } A)$.

Proof. Let $\tau = \min\{j : S_j \notin A\}$. Then
$$G_A(x,z) = E_x \sum_{j=0}^{\tau - 1} \mathbf 1_{S_j = z} = E_x \sum_{j=0}^{\infty} \mathbf 1_{S_j = z} - E_x \sum_{j=\tau}^{\infty} \mathbf 1_{S_j = z}.$$
The first term is $G(x,z)$, and by the strong Markov property the second is the same as the sum in the proposition. $\square$

Combining Proposition 3 and Proposition 6 we have the following proposition.

Proposition 7. $G_{C_n}(x,0) = a_d\big(|x|^{2-d} - n^{2-d}\big) + O(|x|^{1-d})$.

Proof. Exercise. $\square$

Analogous results hold in dimension $d = 1, 2$ [1], [2].
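The proof of Proposition 6 uses only the strong Markov property, so the identity holds for any transient Markov chain. Before turning to low dimensions, here is a numerical sanity check (my addition, not from the notes) on a right-biased walk on $\mathbb Z$, where $G$ is explicit: $G(x,y) = (q/p)^{\max(x-y,\,0)}/(p-q)$.

```python
# Check Proposition 6, G_A(x,z) = G(x,z) - sum_{y in bd(A)} H_A(x,y) G(y,z),
# on a right-biased walk on Z (step +1 w.p. p, -1 w.p. q). The chain is
# transient and its Green's function G is explicit.
p, q = 0.7, 0.3
r = q / p

def G(x, y):
    """Expected number of visits to y starting from x (explicit for this walk)."""
    return r ** max(x - y, 0) / (p - q)

A = range(0, 5)            # A = {0,...,4}, boundary points -1 and 5
x, z = 2, 3

# G_A(x, z): expected visits to z before leaving A, via value iteration
# for the substochastic kernel inside A (converges geometrically).
v = {s: 0.0 for s in A}
for _ in range(5000):
    v = {s: (1.0 if s == z else 0.0)
            + p * v.get(s + 1, 0.0) + q * v.get(s - 1, 0.0) for s in A}

# H_A(x, y): exit-point distribution, from the gambler's-ruin formula
H_right = (1 - r ** (x - (-1))) / (1 - r ** (5 - (-1)))   # P_x(exit at 5)
H_left = 1 - H_right                                      # P_x(exit at -1)

lhs = v[x]
rhs = G(x, z) - H_left * G(-1, z) - H_right * G(5, z)
print(lhs, rhs)
```

Both sides agree to high precision, which is a direct check of the proposition on this chain.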
In $d = 1$: $a(x) = |x|$.

In $d = 2$: there exists $k$ such that for every $\alpha < 2$,
$$a(x) = \frac{2}{\pi}\log|x| + k + o(|x|^{-\alpha}),$$
with $k = \frac{2\gamma + \log 8}{\pi}$, where $\gamma = \lim_{n}\big(\sum_{j=1}^{n} \frac 1j - \log n\big)$ is the Euler constant.

For finite $A$ and $x, z \in A$,
$$G_A(x,z) = \sum_{y \in \partial A} H_A(x,y)\, a(y - z) - a(z - x).$$

For $d = 2$, in the annulus $A_{n,m}$, for $x \in A_{n,m}$ (quantitative recurrence):
$$P_x(|S_\tau| \le n) = \frac{\log m - \log|x| + O(n^{-1})}{\log m - \log n}.$$

For $d = 2$, $x \in C_n$ and every $\alpha < 2$:
$$G_{C_n}(x,0) = \frac{2}{\pi}\big(\log n - \log|x|\big) + o(|x|^{-\alpha}) + O(n^{-1}).$$

Capacity

In this section we define capacity and polar sets, introduce Kakutani's theorem characterizing polar sets, and prove the Benjamini-Pemantle-Peres theorem, which gives a quantitative connection between the capacity of a set and the probability that a Markov chain hits the set. At the end of the section we see some applications to the intersection of random walks. This section follows [3].

Definition 8. Given a measurable space $(\Lambda, \mathcal F)$, a measurable function $F : \Lambda \times \Lambda \to [0, \infty]$ (a kernel) and a finite measure $\mu$ on $\Lambda$, the $F$-energy of $\mu$ is
$$I_F(\mu) = \iint F(x,y)\, d\mu(x)\, d\mu(y).$$

Remark 9. It is useful to think of $\Lambda \subseteq \mathbb R^3$, $\mu$ the charge density of a certain material and $F(x,y) = \frac{1}{|x-y|}$; then $I_F(\mu)$ is the electrostatic energy.

Definition 10. The capacity of $\Lambda$ with respect to $F$ is
$$\mathrm{cap}_F(\Lambda) = \Big(\inf_{\mu} I_F(\mu)\Big)^{-1},$$
where the infimum is over probability measures $\mu$ on $\Lambda$, and we use the convention $\frac{1}{\infty} = 0$.
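For a finite $\Lambda$, the infimum in Definition 10 can be approximated directly by minimizing the energy over probability vectors. A small sketch (my addition; the two-point kernel matrix below is made up purely for illustration):

```python
# Capacity of a two-point set under a symmetric kernel F, by brute-force
# minimization of the energy I_F(mu) over mu = (t, 1-t) on a grid of [0,1].
def energy(F, mu):
    """Energy I_F(mu) = sum_{i,j} F[i][j] mu_i mu_j of a probability vector."""
    k = len(mu)
    return sum(F[i][j] * mu[i] * mu[j] for i in range(k) for j in range(k))

def capacity_two_points(F, steps=20000):
    best = min(energy(F, (t / steps, 1 - t / steps)) for t in range(steps + 1))
    return 1.0 / best

F = [[2.0, 1.0],
     [1.0, 3.0]]                  # a hypothetical symmetric kernel
cap = capacity_two_points(F)
print(cap)

# Monotonicity (Remark below Definition 10): a single point {x} has
# I_F = F(x,x), so cap = 1/F(x,x), and adding points can only help.
assert cap >= 1.0 / F[0][0] and cap >= 1.0 / F[1][1]
```

Here $I_F(\mu) = 3t^2 - 4t + 3$ for $\mu = (t, 1-t)$, minimized at $t = 2/3$ with energy $5/3$, so $\mathrm{cap}_F = 3/5$.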
Remark 11. Capacity is monotone increasing in the set. It is a measure of the size of $\Lambda$ with respect to $F$.

Remark 12. For us, $\Lambda$ will be a countable set with $\mathcal F = \{\text{all subsets of } \Lambda\}$. Sometimes we will also mention $\Lambda \subseteq \mathbb R^d$ with $\mathcal F$ the Borel $\sigma$-field.

Kakutani's theorem (1944):

Definition 13. A Borel set $A \subseteq \mathbb R^d$ is called polar if $P_x(\exists t > 0 : B(t) \in A) = 0$ for all $x \in \mathbb R^d$, for a Brownian motion $B(t)$.

Theorem 14 (Kakutani). A Borel set $A \subseteq \mathbb R^d$ is polar if and only if $\mathrm{cap}_F(A) = 0$ for
$$F(x,y) = \begin{cases} \log\frac{1}{|x-y|} & d = 2, \\ |x-y|^{2-d} & d \ge 3. \end{cases}$$

Now let $\{X_n\}$ be a Markov chain on a countable set $Y$, with transition probabilities $p(x,y)$. Let
$$G(x,y) = E_x(\#\text{ visits to } y) = \sum_{n=0}^{\infty} p^{(n)}(x,y).$$

Theorem 15 (Benjamini, Pemantle, Peres). If $\{X_n\}$ is a transient chain (every state is visited only finitely many times a.s.), then for every starting point $\rho \in Y$ and every $\Lambda \subseteq Y$,
$$\tfrac 12\, \mathrm{cap}_K(\Lambda) \le P_\rho(\exists n : X_n \in \Lambda) \le \mathrm{cap}_K(\Lambda),$$
where $K$ is the Martin kernel $K(x,y) = \frac{G(x,y)}{G(\rho,y)}$. Furthermore, letting
$$\mathrm{cap}_K^{(\infty)}(\Lambda) := \inf_{\Lambda_0 \subseteq \Lambda \text{ finite}} \mathrm{cap}_K(\Lambda \setminus \Lambda_0),$$
we have
$$\tfrac 12\, \mathrm{cap}_K^{(\infty)}(\Lambda) \le P(X_n \in \Lambda \text{ i.o.}) \le \mathrm{cap}_K^{(\infty)}(\Lambda).$$

Proof. For the upper bound it is enough to exhibit one good $\mu$. Let $\tau = \min\{n : X_n \in \Lambda\}$ ($\tau = \infty$ if $\Lambda$ is not hit), and for $x \in \Lambda$ set
$$\nu(x) = P_\rho(\tau < \infty,\ X_\tau = x), \qquad \nu(\Lambda) = P_\rho(\tau < \infty).$$
If $\nu(\Lambda) = 0$ there is nothing to prove, so assume $\nu(\Lambda) > 0$. Note that for $y \in \Lambda$,
$$\int G(x,y)\, d\nu(x) = \sum_{x \in \Lambda} \nu(x) \sum_{n=0}^{\infty} P_x(X_n = y) = G(\rho,y),$$
since every visit to $y \in \Lambda$ occurs at or after time $\tau$, and the strong Markov property applies at $\tau$,
so $\int K(x,y)\, d\nu(x) = 1$ for every $y \in \Lambda$. Hence, for the probability measure $\mu = \nu/\nu(\Lambda)$,
$$I_K(\mu) = \frac{1}{\nu(\Lambda)^2} \iint K(x,y)\, d\nu(x)\, d\nu(y) = \frac{1}{\nu(\Lambda)},$$
and so $\mathrm{cap}_K(\Lambda) \ge \nu(\Lambda) = P_\rho(\exists n : X_n \in \Lambda)$.

For the lower bound, use the second moment method. Given a probability measure $\mu$ on $\Lambda$, let
$$Z = \int \frac{\sum_{n=0}^{\infty} \mathbf 1_{X_n = y}}{G(\rho,y)}\, d\mu(y).$$
By Fubini,
$$E_\rho Z = \int \frac{E_\rho\big(\sum_{n=0}^{\infty} \mathbf 1_{X_n = y}\big)}{G(\rho,y)}\, d\mu(y) = \int \frac{G(\rho,y)}{G(\rho,y)}\, d\mu(y) = 1,$$
and
$$E_\rho Z^2 = E_\rho \iint \sum_{m,n \ge 0} \frac{\mathbf 1_{X_m = z}\, \mathbf 1_{X_n = y}}{G(\rho,z)\, G(\rho,y)}\, d\mu(y)\, d\mu(z) \le 2\, E_\rho \iint \sum_{0 \le m \le n} \frac{\mathbf 1_{X_m = z,\ X_n = y}}{G(\rho,z)\, G(\rho,y)}\, d\mu(y)\, d\mu(z).$$
For each $m$, $E_\rho \sum_{n \ge m} \mathbf 1_{X_m = z,\ X_n = y} = P_\rho(X_m = z)\, G(z,y)$ by the Markov property. Taking the sum over $m$,
$$E_\rho Z^2 \le 2 \iint \frac{G(\rho,z)\, G(z,y)}{G(\rho,z)\, G(\rho,y)}\, d\mu(y)\, d\mu(z) = 2 \iint \frac{G(z,y)}{G(\rho,y)}\, d\mu(z)\, d\mu(y) = 2\, I_K(\mu).$$
By Cauchy-Schwarz,
$$P_\rho(\exists n : X_n \in \Lambda) \ge P_\rho(Z > 0) \ge \frac{(E_\rho Z)^2}{E_\rho Z^2} \ge \frac{1}{2\, I_K(\mu)},$$
and optimizing over $\mu$ proves the first part of the theorem.

For the second part, note that by transience,
$$\{X_n \in \Lambda \text{ i.o.}\} \stackrel{\text{mod } 0}{=} \bigcap_{\Lambda_0 \subseteq \Lambda \text{ finite}} \{\text{the chain visits } \Lambda \setminus \Lambda_0\}.$$
Hence if $\Lambda_n$ is a sequence of finite sets increasing to $\Lambda$, then by monotonicity
$$P(\text{visit } \Lambda \text{ i.o.}) = \lim_n P_\rho(\text{visit } \Lambda \setminus \Lambda_n) \qquad \text{and} \qquad \mathrm{cap}_K^{(\infty)}(\Lambda) = \lim_n \mathrm{cap}_K(\Lambda \setminus \Lambda_n),$$
and the first part gives the result. $\square$

Corollary 16. In $\mathbb Z^d$, $d \ge 3$, for any $A \subseteq \mathbb Z^d$ we deduce
$$\tfrac 12\, \mathrm{cap}_K^{(\infty)}(A) \le P(A \text{ is hit i.o.}) \le \mathrm{cap}_K^{(\infty)}(A).$$
Since $P(A \text{ is hit i.o.}) \in \{0,1\}$ by the Hewitt-Savage 0-1 law, $A$ is hit i.o. a.s. if and only if $\mathrm{cap}_K^{(\infty)}(A) > 0$. Now note that this remains true if we replace $K$ by
$$F(x,y) = \frac{|y|^{d-2}}{1 + |x-y|^{d-2}},$$
since by the asymptotics of $G(x)$ we have $c_1 F(x,y) \le K(x,y) \le c_2 F(x,y)$ for some $c_1, c_2 > 0$.
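As an illustration of Theorem 15 (my addition, not from the notes), take the right-biased walk on $\mathbb Z$ with $p = 0.7$, $q = 0.3$, started at $\rho = 0$, and $\Lambda = \{-3, -6\}$. The chain is transient, $G$ is explicit, and the walk hits $\Lambda$ exactly when it ever reaches $-3$, which happens with probability $(q/p)^3$. Note also that for a single point, $\mathrm{cap}_K(\{y\}) = 1/K(y,y) = G(\rho,y)/G(y,y) = P_\rho(\text{hit } y)$, so the upper bound is sharp at singletons.

```python
# Checking the B-P-P bounds cap_K/2 <= P_rho(hit L) <= cap_K on a
# right-biased walk on Z, with rho = 0 and L = {-3, -6}.
p, q = 0.7, 0.3
r = q / p

def G(x, y):
    """Explicit Green's function of the biased walk."""
    return r ** max(x - y, 0) / (p - q)

rho = 0
L = [-3, -6]
K = [[G(x, y) / G(rho, y) for y in L] for x in L]   # Martin kernel on L

def energy(t):
    """Energy I_K(mu) of mu = (t, 1-t) on L."""
    mu = (t, 1 - t)
    return sum(K[i][j] * mu[i] * mu[j] for i in range(2) for j in range(2))

steps = 20000
cap = 1.0 / min(energy(t / steps) for t in range(steps + 1))

p_hit = r ** 3          # exact P_0(hit L) = P_0(ever reach -3)
print(cap, p_hit)
```

In this example the energy-minimizing $\mu$ sits entirely on $-3$, and $\mathrm{cap}_K(\Lambda) = (q/p)^3 = P_0(\text{hit } \Lambda)$, so both inequalities of the theorem hold, the upper one with equality.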
Remark 17. In a different application, the B-P-P theorem recovers a theorem of Lyons: the chance that some path from the root to the leaves survives a general percolation process on a tree (keep each edge with probability $p_e$, independently) is given, up to a factor of 2, by a certain capacity on the set of leaves. This follows by looking at the Markov chain of surviving leaves from left to right.

Remark 18.
1. The theorem still applies if the chain is not transient but $G(x,y) < \infty$ for all $x, y$.
2. One can replace $K$ by its symmetrized version $\frac 12\big(K(x,y) + K(y,x)\big)$ in the theorem, since this does not affect the capacity.

Intersections of Random Walks

Define $S[n,m] := \{S_j : n \le j \le m\}$.

Theorem 19. There exist $c_1, c_2$ and $\varphi$ such that
$$c_1\, \varphi(n) \le P\big(S[0,n] \cap S[2n,3n] \ne \emptyset\big) \le P\big(S[0,n] \cap S[2n,\infty) \ne \emptyset\big) \le c_2\, \varphi(n),$$
where
$$\varphi(n) = \begin{cases} 1 & d \le 3, \\ (\log n)^{-1} & d = 4, \\ n^{(4-d)/2} & d \ge 5. \end{cases}$$

Proof. The upper bound is trivial for $d \le 3$, and the lower bound for $d = 1, 2$ follows from the lower bound for $d = 3$, so we may assume $d \ge 3$. The proof is by the second moment method, with an additional trick for $d = 4$. Denote
$$J_n = \sum_{j=0}^{n} \sum_{k=2n}^{3n} \mathbf 1_{S_j = S_k}, \qquad K_n = \sum_{j=0}^{n} \sum_{k=2n}^{\infty} \mathbf 1_{S_j = S_k}.$$
By the Markov property and the LCLT we get (when $k - j$ is even)
$$P(S_j = S_k) = P(S_{k-j} = 0) \ge c\, n^{-d/2} \qquad \text{for } j \le n,\ 2n \le k \le 3n,$$
so $E J_n \ge c\, n^{(4-d)/2}$. With a bit more calculation we can even get
$$E J_n^2 \le \begin{cases} c\, n & d = 3, \\ c \log n & d = 4, \\ c\, n^{(4-d)/2} & d \ge 5, \end{cases}$$
and we get the lower bound for all $d$ since
$$P\big(S[0,n] \cap S[2n,3n] \ne \emptyset\big) \ge P(J_n > 0) \ge \frac{(E J_n)^2}{E J_n^2} \ge c\, \varphi(n).$$
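As an aside (my addition, not part of the notes), the second-moment inequality just used, $P(J_n > 0) \ge (E J_n)^2 / E J_n^2$, can be watched in action by simulation in $d = 3$; the parameters $n = 40$ and 400 trials are arbitrary. The inequality also holds for the empirical moments, by Cauchy-Schwarz applied to the sample.

```python
import random

# Empirical illustration of the second-moment bound for intersections of a
# walk's early range S[0,n] and late range S[2n,3n] in Z^3.
random.seed(0)
n, trials = 40, 400
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

j_vals = []
for _ in range(trials):
    pos, path = (0, 0, 0), [(0, 0, 0)]
    for _ in range(3 * n):
        s = random.choice(STEPS)
        pos = (pos[0] + s[0], pos[1] + s[1], pos[2] + s[2])
        path.append(pos)
    early = {}
    for j in range(n + 1):               # multiplicities of early sites
        early[path[j]] = early.get(path[j], 0) + 1
    # J = number of pairs (j, k), j <= n, 2n <= k <= 3n, with S_j = S_k
    J = sum(early.get(path[k], 0) for k in range(2 * n, 3 * n + 1))
    j_vals.append(J)

m1 = sum(j_vals) / trials                      # estimates E J_n
m2 = sum(v * v for v in j_vals) / trials       # estimates E J_n^2
p_pos = sum(v > 0 for v in j_vals) / trials    # estimates P(J_n > 0)
print(m1, m2, p_pos)
```

In $d = 3$ both $P(J_n > 0)$ and the ratio $(E J_n)^2 / E J_n^2$ stay bounded away from $0$ as $n$ grows, matching $\varphi(n) = 1$.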
We get the upper bound for all $d \ge 5$ since
$$P\big(S[0,n] \cap S[2n,\infty) \ne \emptyset\big) = P(K_n \ge 1) \le E K_n \le c\, n^{(4-d)/2}.$$
For the upper bound in $d = 4$ we need to show that
$$E(K_n \mid K_n \ge 1) \ge c \log n:$$
this conditional expectation is comparable to the expected number of pairs of intersection times for two SRWs started at the origin, which is what the picture looks like after the first intersection of the two stretches of the walk. Then, since $E K_n \le c$ in $d = 4$,
$$c \ge E K_n = P(K_n \ge 1)\, E(K_n \mid K_n \ge 1) \ge P(K_n \ge 1)\, c' \log n,$$
so $P(K_n \ge 1) \le c''/\log n$. $\square$

Remark 20. By taking $n \to \infty$ we get that for Brownian motion in $\mathbb R^d$,
$$P\big(B[0,1] \cap B[2,3] \ne \emptyset\big) \begin{cases} > 0 & d = 1, 2, 3, \\ = 0 & d \ge 4. \end{cases}$$
From the other side, it is interesting to consider
$$q(n) = P\big(S^1[0,n] \cap S^2(0,n] = \emptyset\big) \qquad \text{for } d = 2, 3, 4,$$
where $S^1, S^2$ are independent SRWs started at the origin. Defining
$$Y_n = P\big(S^1[0,n] \cap S^2(0,n] = \emptyset \,\big|\, S^1[0,n]\big),$$
we have $E Y_n = q(n)$, and it is possible to calculate
$$E Y_n^2 = P\big(S^1[0,n] \cap \big(S^2(0,n] \cup S^3(0,n]\big) = \emptyset\big)$$
up to a constant:
$$E Y_n^2 \asymp \begin{cases} (\log n)^{-1} & d = 4, \\ n^{(d-4)/2} & d = 2, 3. \end{cases}$$
Since $0 \le Y_n \le 1$ we have $(E Y_n)^2 \le E Y_n^2 \le E Y_n$. Finally, one can show that in $d = 4$ the upper inequality is sharp:
$$q(n) \asymp \frac{1}{\log n} \qquad \text{in } d = 4.$$
Known: $q(n) \approx n^{-f_d}$, where $f_1 = 1$ and $f_2 = 5/8$; for $d = 3$ the exponent is an open question, though its value is known approximately from simulations.

References

[1] G. Lawler. Intersections of Random Walks. Birkhäuser Boston, 1996.
[2] G. Lawler, V. Limic. Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics 123, Cambridge University Press, 2010.
[3] I. Benjamini, R. Pemantle, Y. Peres. Martin capacity for Markov chains. Ann. Probab. 23(3), 1995.
More information{σ x >t}p x. (σ x >t)=e at.
3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ
More informationC.7. Numerical series. Pag. 147 Proof of the converging criteria for series. Theorem 5.29 (Comparison test) Let a k and b k be positive-term series
C.7 Numerical series Pag. 147 Proof of the converging criteria for series Theorem 5.29 (Comparison test) Let and be positive-term series such that 0, for any k 0. i) If the series converges, then also
More informationUseful Probability Theorems
Useful Probability Theorems Shiu-Tang Li Finished: March 23, 2013 Last updated: November 2, 2013 1 Convergence in distribution Theorem 1.1. TFAE: (i) µ n µ, µ n, µ are probability measures. (ii) F n (x)
More informationLecture 23. Random walks
18.175: Lecture 23 Random walks Scott Sheffield MIT 1 Outline Random walks Stopping times Arcsin law, other SRW stories 2 Outline Random walks Stopping times Arcsin law, other SRW stories 3 Exchangeable
More informationExpansion and Isoperimetric Constants for Product Graphs
Expansion and Isoperimetric Constants for Product Graphs C. Houdré and T. Stoyanov May 4, 2004 Abstract Vertex and edge isoperimetric constants of graphs are studied. Using a functional-analytic approach,
More information1 Stat 605. Homework I. Due Feb. 1, 2011
The first part is homework which you need to turn in. The second part is exercises that will not be graded, but you need to turn it in together with the take-home final exam. 1 Stat 605. Homework I. Due
More informationON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT
Elect. Comm. in Probab. 10 (2005), 36 44 ELECTRONIC COMMUNICATIONS in PROBABILITY ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT FIRAS RASSOUL AGHA Department
More informationON THE SHAPE OF THE GROUND STATE EIGENFUNCTION FOR STABLE PROCESSES
ON THE SHAPE OF THE GROUND STATE EIGENFUNCTION FOR STABLE PROCESSES RODRIGO BAÑUELOS, TADEUSZ KULCZYCKI, AND PEDRO J. MÉNDEZ-HERNÁNDEZ Abstract. We prove that the ground state eigenfunction for symmetric
More informationCS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions
CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some
More informationReal Analysis Problems
Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.
More informationLecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321
Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process
More informationSelf-similar fractals as boundaries of networks
Self-similar fractals as boundaries of networks Erin P. J. Pearse ep@ou.edu University of Oklahoma 4th Cornell Conference on Analysis, Probability, and Mathematical Physics on Fractals AMS Eastern Sectional
More informationOn the convergence of sequences of random variables: A primer
BCAM May 2012 1 On the convergence of sequences of random variables: A primer Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM May 2012 2 A sequence a :
More information(z 0 ) = lim. = lim. = f. Similarly along a vertical line, we fix x = x 0 and vary y. Setting z = x 0 + iy, we get. = lim. = i f
. Holomorphic Harmonic Functions Basic notation. Considering C as R, with coordinates x y, z = x + iy denotes the stard complex coordinate, in the usual way. Definition.1. Let f : U C be a complex valued
More information