On the range of a two-dimensional conditioned simple random walk

arXiv v1 [math.PR], 1 April 2018

On the range of a two-dimensional conditioned simple random walk

Nina Gantert (1), Serguei Popov (2), Marina Vachkovskaia (2)

April 3, 2018

(1) Technische Universität München, Fakultät für Mathematik, Boltzmannstr. 3, Garching, Germany. E-mail: gantert@ma.tum.de
(2) Department of Statistics, Institute of Mathematics, Statistics and Scientific Computation, University of Campinas (UNICAMP), rua Sérgio Buarque de Holanda 651, Campinas SP, Brazil. E-mails: {popov,marinav}@ime.unicamp.br

Abstract

We consider the two-dimensional simple random walk conditioned on never hitting the origin. This process is a Markov chain, namely it is the Doob h-transform of the simple random walk with respect to the potential kernel. It is known to be transient and we show that it is "almost recurrent" in the sense that each infinite set is visited infinitely often, almost surely. We prove that, for a "large" set, the proportion of its sites visited by the conditioned walk is approximately a Uniform[0,1] random variable. Also, given a set $G\subset\mathbb R^2$ that does not "surround" the origin, we prove that a.s. there is an infinite number of $k$'s such that $kG\cap\mathbb Z^2$ is unvisited. These results suggest that the range of the conditioned walk has "fractal" behavior.

Keywords: random interlacements, range, transience, simple random walk, Doob's h-transform

AMS 2010 subject classifications: Primary 60J10. Secondary 60G50, 82C41.

1 Introduction and results

We start by introducing some basic notation and defining the conditioned random walk $\hat S$, the main object of study in this paper. Besides being interesting on its own, this random walk is the main ingredient in the construction of the two-dimensional random interlacements of [3, 4].

Write $x\sim y$ if $x$ and $y$ are neighbours in $\mathbb Z^2$. Let $(S_n, n\ge 0)$ be the two-dimensional simple random walk, i.e., the discrete-time Markov chain with state space $\mathbb Z^2$ and transition probabilities defined in the following way:
$$P_{xy} = \begin{cases}\frac14, & \text{if } x\sim y,\\ 0, & \text{otherwise.}\end{cases} \qquad(1)$$
We assume that all random variables in this paper are constructed on a common probability space with probability measure $P$ and we denote by $E$ the corresponding expectation. When no confusion can arise, we will write $P_x$ and $E_x$ for the law and expectation of the random walk (the simple one, or the conditioned one defined below) started from $x$. Let
$$\tau_0(A) = \inf\{k\ge 0: S_k\in A\}, \qquad(2)$$
$$\tau_1(A) = \inf\{k\ge 1: S_k\in A\} \qquad(3)$$
be the entrance time and the hitting time of the set $A$ by the simple random walk $S$ (we use the convention $\inf\emptyset=+\infty$). For a singleton $A=\{x\}$, we will write $\tau_i(A)=\tau_i(x)$, $i=0,1$, for short. One of the key objects needed to understand the two-dimensional simple random walk is the potential kernel $a$, defined by
$$a(x) = \sum_{k=0}^{\infty}\big(P_0[S_k=0]-P_x[S_k=0]\big). \qquad(4)$$
It can be shown that the above series indeed converges and we have $a(0)=0$, $a(x)>0$ for $x\neq 0$. It is straightforward to check that the function $a$ is harmonic outside the origin, i.e.,
$$\frac14\sum_{y:\,y\sim x}a(y) = a(x) \quad\text{for all } x\neq 0. \qquad(5)$$
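To make the definition (4) concrete, here is a small numerical sketch (not part of the paper): it approximates $a(x)$ by truncating the series (4), computing the exact $k$-step distributions of the simple random walk by dynamic programming, and then checks the harmonicity property (5) at a test point. The function name and truncation level are ad hoc choices; the truncation introduces an error of order $1/K$.

```python
# Illustrative sketch (not from the paper): approximate the potential kernel a(x)
# of (4) by truncating the series at K terms.  The k-step distributions of the
# simple random walk started at the origin are computed exactly by dynamic
# programming on a grid that the walk cannot leave within K steps.
import numpy as np

def potential_kernel(x, K=300):
    """Truncated series (4): sum_{k=0}^{K} (P_0[S_k=0] - P_x[S_k=0])."""
    L = K + abs(x[0]) + abs(x[1]) + 1          # the walk stays inside [-L, L]^2
    p = np.zeros((2 * L + 1, 2 * L + 1))
    p[L, L] = 1.0                              # distribution of S_0 under P_0
    total = 0.0
    for _ in range(K + 1):
        # P_x[S_k = 0] = P_0[S_k = -x] by translation invariance
        total += p[L, L] - p[L - x[0], L - x[1]]
        q = np.zeros_like(p)                   # one step of the simple random walk
        q[1:, :] += 0.25 * p[:-1, :]
        q[:-1, :] += 0.25 * p[1:, :]
        q[:, 1:] += 0.25 * p[:, :-1]
        q[:, :-1] += 0.25 * p[:, 1:]
        p = q
    return total

x = (3, 2)
nbrs = [(x[0] + 1, x[1]), (x[0] - 1, x[1]), (x[0], x[1] + 1), (x[0], x[1] - 1)]
print("a(1,0)        ~", potential_kernel((1, 0)))     # close to 1, cf. (6) below
print("a(3,2)        ~", potential_kernel(x))
print("neighbour avg ~", np.mean([potential_kernel(y) for y in nbrs]))  # harmonicity (5)
```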

Also, using (4) and the Markov property, one can easily obtain that
$$\frac14\sum_{x\sim 0}a(x) = 1,$$
which implies by symmetry that
$$a(x) = 1 \quad\text{for all } x\sim 0. \qquad(6)$$
Observe that (5) immediately implies that $a(S_{k\wedge\tau_0(0)})$ is a martingale; we will repeatedly use this fact in the sequel. Further, one can show that, with $\gamma$ the Euler-Mascheroni constant,
$$a(x) = \frac{2}{\pi}\ln\|x\| + \frac{2\gamma+3\ln 2}{\pi} + O(\|x\|^{-2}) \qquad(7)$$
as $\|x\|\to\infty$, cf. [7].

Let us define another random walk $(\hat S_n, n\ge 0)$ on $\mathbb Z^2\setminus\{0\}$ in the following way: its transition probability matrix equals (compare to (1))
$$\hat P_{xy} = \begin{cases}\dfrac{a(y)}{4a(x)}, & \text{if } x\sim y,\ x\neq 0,\\[2pt] 0, & \text{otherwise.}\end{cases} \qquad(8)$$
It is immediate to see from (5) that the random walk $\hat S$ is indeed well defined. The walk $\hat S$ is the Doob h-transform of the simple random walk, under the condition of not hitting the origin (see Lemma 3.3 of [4] and its proof). Let $\hat\tau_0,\hat\tau_1$ be defined as in (2)-(3), but with $\hat S$ in place of $S$. We summarize the basic properties of the walk $\hat S$ in the following

Proposition 1.1. The following statements hold:
(i) The walk $\hat S$ is reversible, with reversible measure $\mu_x := a^2(x)$.
(ii) In fact, it can be represented as a random walk on the two-dimensional lattice with conductances $\big(a(x)a(y),\ x,y\in\mathbb Z^2,\ x\sim y\big)$.
(iii) Let $\mathcal N$ be the set of the four neighbours of the origin. Then the process $1/a\big(\hat S_{k\wedge\hat\tau_0(\mathcal N)}\big)$ is a martingale.
(iv) The walk $\hat S$ is transient.
(v) Moreover, for all $x\neq 0$,
$$P_x\big[\hat\tau_1(x)<\infty\big] = 1 - \frac{1}{2a(x)}, \qquad(9)$$

and for all $x\neq y$, $x,y\neq 0$,
$$P_x\big[\hat\tau_0(y)<\infty\big] = P_x\big[\hat\tau_1(y)<\infty\big] = \frac{a(x)+a(y)-a(x-y)}{2a(x)}. \qquad(10)$$

The statements of Proposition 1.1 are not novel (they appear already in [4]), but we found it useful to collect them here for the sake of completeness and also for future reference. We will prove Proposition 1.1 in the next section.

It is curious to observe that (10) implies that, for any $x$, $P_x[\hat\tau_1(y)<\infty]$ converges to $\frac12$ as $\|y\|\to\infty$. As noted in [4], this is related to the remarkable fact that if one conditions on a very distant site being vacant, then this reduces the intensity near the origin of the two-dimensional random interlacement process by a factor of four.

Let $\|\cdot\|$ be the Euclidean norm. Define the discrete ball $B(x,r)=\{y\in\mathbb Z^2:\|y-x\|\le r\}$ (note that $x$ and $r$ need not be integer), and abbreviate $B(r):=B(0,r)$. The internal boundary of $A\subset\mathbb Z^2$ is defined by
$$\partial A = \{x\in A : \text{there exists } y\in\mathbb Z^2\setminus A \text{ such that } x\sim y\}.$$

Now we introduce some more notation and state the main results. For a set $T\subset\mathbb Z_+$ (thought of as a set of time moments) let
$$\hat S[T] = \big\{\hat S_m : m\in T\big\}$$
be the range of the walk $\hat S$ with respect to that set. For simplicity, we assume in the following that the walk $\hat S$ starts at a fixed neighbour $x_0$ of the origin, and we write $P$ for $P_{x_0}$; it is, however, clear that our results hold for any fixed starting position of the walk. For a nonempty and finite set $A\subset\mathbb Z^2$, let us consider the random variables
$$\mathcal R(A) = \frac{|A\cap\hat S[0,\infty)|}{|A|}, \qquad \mathcal V(A) = \frac{|A\setminus\hat S[0,\infty)|}{|A|} = 1-\mathcal R(A);$$
that is, $\mathcal R(A)$ (respectively, $\mathcal V(A)$) is the proportion of visited (respectively, unvisited) sites of $A$ by the walk $\hat S$. Let us also abbreviate, for $M_0>0$,
$$\ell_A = |A|^{-1}\max_{y\in A}\Big|A\cap B\Big(y,\frac{n}{\ln^{M_0}n}\Big)\Big|. \qquad(11)$$
Our main result is the following

Theorem 1.2. Let $M_0>0$ be a fixed constant, and assume that $A\subset B(n)\setminus B(n\ln^{-M_0}n)$. Then, for all $s\in[0,1]$, we have, with positive constants $c_{1,2}$ depending only on $M_0$,
$$\Big|P[\mathcal V(A)\le s]-s\Big| \le c_1\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3} + c_2\,\ell_A\Big(\frac{\ln n}{\ln\ln n}\Big)^{2/3}, \qquad(12)$$
and the same result holds with $\mathcal R$ in place of $\mathcal V$.

The above result means that if $A\subset B(n)\setminus B(\varepsilon_0 n)$ is big enough and well distributed, then the proportion of visited sites has approximately the Uniform[0,1] distribution. In particular, one can obtain the following

Corollary 1.3. Assume that $D\subset\mathbb R^2$ is a bounded open set. Then both sequences $\big(\mathcal R(nD\cap\mathbb Z^2), n\ge 1\big)$ and $\big(\mathcal V(nD\cap\mathbb Z^2), n\ge 1\big)$ converge in distribution to the Uniform[0,1] random variable.

Indeed, it is straightforward to obtain this from Theorem 1.2, since $|nD\cap\mathbb Z^2|$ is of order $n^2$ as $n\to\infty$ (note that $D$ contains a disk), and so $\ell_{nD\cap\mathbb Z^2}$ will be of order $\ln^{-2M_0}n$. We can then choose $M_0$ large enough so that the right-hand side of (12) goes to 0.

Also, we prove that the range of $\hat S$ contains many "big holes". To formulate this result, we need the following

Definition 1.4. We say that a set $G\subset\mathbb R^2$ does not surround the origin, if
- there exists $c_1>0$ such that $G\subset B(c_1)$, i.e., $G$ is bounded;
- there exist $c_{2,3}>0$ and a function $f=(f_1,f_2):[0,1]\to\mathbb R^2$ such that $f(0)=0$, $\|f(1)\|=c_1$, $|f'_{1,2}(s)|\le c_2$ for all $s\in[0,1]$, and
$$\inf_{s\in[0,1],\,y\in G}\big\|(f_1(s),f_2(s))-y\big\|\ge c_3,$$
i.e., one can escape from the origin to infinity along a path which stays uniformly away from $G$.

Then, we have

Theorem 1.5. Let $G\subset\mathbb R^2$ be a set that does not surround the origin. Then,
$$P\big[nG\cap\hat S[0,\infty)=\emptyset\ \text{for infinitely many } n\big] = 1. \qquad(13)$$
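Before moving on, here is a self-contained sketch (not part of the paper) of how one can sample the conditioned walk $\hat S$ of (8) in practice and experiment with its range. It uses the asymptotic expression (7) for the potential kernel, with the exact value $a=1$ at the four neighbours of the origin (cf. (6)), so it is only an approximate version of the true Doob h-transform; the function names are ad hoc.

```python
# Sketch (not from the paper): sample the conditioned walk S-hat of (8), with the
# potential kernel a replaced by its asymptotic expression (7); the O(|x|^{-2})
# error of (7) is ignored, except that a = 1 is used exactly at the neighbours of 0.
import math, random

EULER_GAMMA = 0.5772156649015329

def a_approx(x):
    """Approximate potential kernel, cf. (7); exact (=1) at neighbours of the origin."""
    r = math.hypot(x[0], x[1])
    if r == 0.0:
        return 0.0
    if r == 1.0:
        return 1.0
    return (2.0 / math.pi) * math.log(r) + (2.0 * EULER_GAMMA + 3.0 * math.log(2.0)) / math.pi

def step_conditioned(x, rng=random):
    """One step of S-hat from x != 0: neighbour y is chosen with weight a(y)."""
    nbrs = [(x[0] + 1, x[1]), (x[0] - 1, x[1]), (x[0], x[1] + 1), (x[0], x[1] - 1)]
    weights = [a_approx(y) for y in nbrs]   # a(0) = 0, so the origin is never entered
    return rng.choices(nbrs, weights=weights, k=1)[0]

# Sanity check: by (5) the neighbour weights at x sum to 4 a(x), so the transition
# probabilities a(y)/(4 a(x)) sum to 1; with the asymptotic a this holds approximately.
x = (7, -3)
print(sum(a_approx(y) for y in [(8, -3), (6, -3), (7, -2), (7, -4)]) / (4 * a_approx(x)))

# A short trajectory started next to the origin; it never steps on (0,0).
pos, visited = (1, 0), set()
for _ in range(10**5):
    visited.add(pos)
    pos = step_conditioned(pos)
print("origin visited:", (0, 0) in visited, "  sites visited:", len(visited))
```

With larger experiments one can, for instance, tabulate the visited proportion of a fixed annulus over many independent runs and compare its empirical distribution with the Uniform[0,1] limit of Corollary 1.3; note, however, that any finite-time simulation only sees a truncated version of the infinite-time range.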

We also establish some additional properties of the conditioned walk $\hat S$, which will be important for the proof of Theorem 1.5 and are of independent interest. Consider an irreducible Markov chain. Recall that a set is called recurrent with respect to the Markov chain if it is visited infinitely many times almost surely; a set is called transient if it is visited only finitely many times almost surely. It is clear that any nonempty set is recurrent with respect to a recurrent Markov chain, and every finite set is transient with respect to a transient Markov chain. Note that, in general, a set can be neither recurrent nor transient (think e.g. of the simple random walk on a binary tree: fix a neighbour of the root and consider the set of vertices of the tree connected to the root through this fixed neighbour). In many situations it is possible to characterize completely the recurrent and transient sets, as well as to answer the question whether any set must be either recurrent or transient. For example, for the simple random walk in $\mathbb Z^d$, $d\ge 3$, each set is either recurrent or transient and the characterization is provided by Wiener's test (see e.g. [7]), formulated in terms of capacities of intersections of the set with exponentially growing annuli. Now, for the conditioned two-dimensional walk $\hat S$ the characterization of recurrent and transient sets is particularly simple:

Theorem 1.6. A set $A\subset\mathbb Z^2$ is recurrent with respect to $\hat S$ if and only if $A$ is infinite.

Next, we recall that a Markov chain has the Liouville property (see e.g. Chapter IV of [12]) if all bounded functions which are harmonic with respect to that Markov chain are constant. Since Theorem 1.6 implies that every set must be recurrent or transient, we obtain the following result as its corollary:

Theorem 1.7. The conditioned two-dimensional walk $\hat S$ has the Liouville property.

These two results, besides being of interest on their own, will also be operational in the proof of Theorem 1.5.

2 Some auxiliary facts and proof of Proposition 1.1

For $A\subset\mathbb Z^d$, recall that $\partial A$ denotes its internal boundary. We abbreviate $\tau_1(R):=\tau_1(\partial B(R))$ and $\hat\tau_1(R):=\hat\tau_1(\partial B(R))$. We will consider, with a slight abuse of notation, the function
$$a(r) = \frac{2}{\pi}\ln r + \frac{2\gamma+3\ln 2}{\pi}$$

of a real argument $r\ge 1$. To explain why this notation is convenient, observe that, due to (7), we may write, as $r\to\infty$,
$$\sum_{y\in\partial B(x,r)}\nu(y)\,a(y) = a(r) + O\Big(\frac{\|x\|+1}{r}\Big) \qquad(14)$$
for any probability measure $\nu$ on $\partial B(x,r)$. For all $x\in\mathbb Z^2$ and $R\ge 1$ such that $x,y\in B(R)$ and $x\neq y$, we have
$$P_x\big[\tau_1(\partial B(R))<\tau_1(y)\big] = \frac{a(x-y)}{a(R)+O\big(\frac{\|y\|+1}{R}\big)} \qquad(15)$$
as $R\to\infty$. This is an easy consequence of the optional stopping theorem applied to the martingale $a(S_{k\wedge\tau_0(0)})$, together with (14). Also, an application of the optional stopping theorem to the martingale $1/a\big(\hat S_{k\wedge\hat\tau_0(\mathcal N)}\big)$ yields
$$P_x\big[\hat\tau_1(\partial B(R))<\hat\tau_1(\partial B(r))\big] = \frac{\big(a(r)\big)^{-1}-\big(a(\|x\|)\big)^{-1}+O(R^{-1})}{\big(a(r)\big)^{-1}-\big(a(R)\big)^{-1}+O(r^{-1})} \qquad(16)$$
for $1<r<\|x\|<R<\infty$. Sending $R$ to infinity in (16) we see that
$$P_x\big[\hat\tau_1(\partial B(r))=\infty\big] = 1 - \frac{a(r)+O(r^{-1})}{a(\|x\|)}. \qquad(17)$$
We need the fact that the walks $S$ and $\hat S$ are almost indistinguishable on a set which is distant from the origin. For $A\subset\mathbb Z^2$, denote by $\Gamma^x_A$ the set of all finite nearest-neighbour trajectories that start at $x\in A\setminus\{0\}$ and end when entering $\partial A$ for the first time. For $V\subset\Gamma^x_A$ write $S\in V$ if there exists $k$ such that $(S_0,\dots,S_k)\in V$ (and the same for the conditioned walk $\hat S$). We write $\Gamma^x_{0,R}$ for $\Gamma^x_{B(R)}$.

Lemma 2.1. Assume that $V\subset\Gamma^x_{0,R}$; then we have
$$P_x\big[S\in V\mid\tau_1(\partial B(R))<\tau_1(0)\big] = P_x\big[\hat S\in V\big]\big(1+O((R\ln R)^{-1})\big). \qquad(18)$$

Proof. This is Lemma 3.3 (i) of [4].

If $A\subset A'$ are finite subsets of $\mathbb Z^2$, then the excursions between $\partial A$ and $\partial A'$ are pieces of nearest-neighbour trajectories that begin on $\partial A$ and end on $\partial A'$; see Figure 1, which is, hopefully, self-explanatory. We refer to Section 3.4 of [4] for formal definitions.
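As a quick sanity check (not from the paper), the closed-form expressions above can be evaluated numerically. The short sketch below plugs the real-argument kernel $a(r)$ into (10) and (17), dropping the $O(\cdot)$ corrections; it illustrates both the convergence of the hitting probability to $1/2$ and the slow decay of the escape probability.

```python
# Numerical illustration (not from the paper) of (10) and (17), with the O(.)
# corrections dropped and a(.) evaluated through the real-argument expression.
import math

EULER_GAMMA = 0.5772156649015329

def a(r):
    return (2 / math.pi) * math.log(r) + (2 * EULER_GAMMA + 3 * math.log(2)) / math.pi

# (10): P_x[tau-hat_1(y) < infinity] ~ (a(x) + a(y) - a(x - y)) / (2 a(x)).
# With |x| = 5 fixed and y moving away along the same ray, this tends to 1/2.
for r in (10.0, 1e3, 1e6, 1e12):
    print(f"|y| = {r:>8.0e}:  hitting prob ~ {(a(5) + a(r) - a(r - 5)) / (2 * a(5)):.4f}")

# (17): starting from |x| = n ln^2 n, the probability of never re-entering B(n);
# it decays only like ln ln n / ln n.
for n in (1e4, 1e8, 1e16):
    print(f"n = {n:.0e}:  escape prob ~ {1 - a(n) / a(n * math.log(n) ** 2):.4f}")
```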

Figure 1: Excursions (pictured as bold pieces of trajectories of random walks) between $\partial A$ and $\partial A'$.

Proof of Proposition 1.1. It is straightforward to check (i)-(iii) directly; we leave this task to the reader. Item (iv) (the transience) follows from (iii) and the corresponding result in [8]. As for (v), we first observe that (9) is a consequence of (10), although it is of course also possible to prove it directly, see Proposition 2.2 of [4]. Indeed, using (8) and then (10), (5) and (6), one can write
$$P_x\big[\hat\tau_1(x)<\infty\big] = \frac{1}{4a(x)}\sum_{y\sim x}a(y)\,P_y\big[\hat\tau_1(x)<\infty\big] = \frac{1}{4a(x)}\sum_{y\sim x}\frac{a(y)+a(x)-a(y-x)}{2} = 1-\frac{1}{2a(x)}$$
(in the last step we used that $\sum_{y\sim x}a(y)=4a(x)$ by (5) and $\sum_{y\sim x}a(y-x)=4$ by (6)).

Now, to prove (10), we essentially use the approach of Lemma 3.7 of [4], although here the calculations are simpler. Let us define (note that all the probabilities below are for the simple random walk $S$)
$$h_1 = P_x\big[\tau_1(0)<\tau_1(\partial B(R))\big], \qquad h_2 = P_x\big[\tau_1(y)<\tau_1(\partial B(R))\big], \qquad q_{12} = P_0\big[\tau_1(y)<\tau_1(\partial B(R))\big],$$

Figure 2: Trajectories for the probabilities of interest.

$$q_{21} = P_y\big[\tau_1(0)<\tau_1(\partial B(R))\big], \qquad p_1 = P_x\big[\tau_1(0)<\tau_1(\partial B(R))\wedge\tau_1(y)\big], \qquad p_2 = P_x\big[\tau_1(y)<\tau_1(\partial B(R))\wedge\tau_1(0)\big],$$
see Figure 2. Using (15) (and, in addition, the Markov property and (5) for (21)) we have, for $x,y\neq 0$, $x\neq y$,
$$h_1 = 1 - \frac{a(x)}{a(R)+O(R^{-1})}, \qquad(19)$$
$$h_2 = 1 - \frac{a(x-y)}{a(R)+O\big(\frac{\|y\|+1}{R}\big)}, \qquad(20)$$
$$q_{12} = 1 - \frac{a(y)}{a(R)+O\big(\frac{\|y\|+1}{R}\big)}, \qquad(21)$$
$$q_{21} = 1 - \frac{a(y)}{a(R)+O(R^{-1})}, \qquad(22)$$
which implies that
$$\lim_{R\to\infty}(1-h_1)\,a(R) = a(x), \qquad(23)$$

$$\lim_{R\to\infty}(1-h_2)\,a(R) = a(x-y), \qquad(24)$$
$$\lim_{R\to\infty}(1-q_{12})\,a(R) = a(y), \qquad(25)$$
$$\lim_{R\to\infty}(1-q_{21})\,a(R) = a(y). \qquad(26)$$
Observe that, due to the Markov property, it holds that
$$h_1 = p_1 + p_2 q_{21}, \qquad h_2 = p_2 + p_1 q_{12}.$$
Solving these equations with respect to $p_1, p_2$, we obtain
$$p_1 = \frac{h_1 - h_2 q_{21}}{1-q_{12}q_{21}}, \qquad(27)$$
$$p_2 = \frac{h_2 - h_1 q_{12}}{1-q_{12}q_{21}}. \qquad(28)$$
Let us denote $\bar h_1 = 1-h_1$, $\bar h_2 = 1-h_2$, $\bar q_{12} = 1-q_{12}$, $\bar q_{21} = 1-q_{21}$. Next, using Lemma 2.1, we have that
$$P_x\big[\hat\tau_1(y)<\hat\tau_1(\partial B(R))\big] = P_x\big[\tau_1(y)<\tau_1(\partial B(R))\mid\tau_1(\partial B(R))<\tau_1(0)\big]\big(1+o(R^{-1})\big)
= \frac{P_x[\tau_1(y)<\tau_1(\partial B(R))<\tau_1(0)]}{P_x[\tau_1(\partial B(R))<\tau_1(0)]}\big(1+o(R^{-1})\big)
= \frac{p_2(1-q_{21})}{1-h_1}\big(1+o(R^{-1})\big)
= \frac{(h_2-h_1q_{12})(1-q_{21})}{(1-q_{12}q_{21})(1-h_1)}\big(1+o(R^{-1})\big)
= \frac{\big(\bar h_1+\bar q_{12}-\bar h_2-\bar h_1\bar q_{12}\big)\,\bar q_{21}}{\big(\bar q_{12}+\bar q_{21}-\bar q_{12}\bar q_{21}\big)\,\bar h_1}\big(1+o(R^{-1})\big). \qquad(29)$$
Since $P_x[\hat\tau_1(y)<\infty] = \lim_{R\to\infty}P_x[\hat\tau_1(y)<\hat\tau_1(\partial B(R))]$, using (23)-(26) we obtain (10) (observe that the product terms in (29) are of smaller order and disappear in the limit).

We now use the ideas contained in the last proof to obtain some refined bounds on the hitting probabilities for excursions of the conditioned walk. Let us assume that $\|x\|\ge n\ln^{-M_0}n$ and $y\in A$, where the set $A$ is as in Theorem 1.2. Also, abbreviate $R = n\ln^2 n$.
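To get a feeling for the scales involved here (this numerical aside is not part of the paper), one can evaluate the real-argument kernel $a$ at the three radii $n\ln^{-M_0}n$, $n$ and $R = n\ln^2 n$: the three values agree up to an additive $O(\ln\ln n)$ term, while each is of order $\ln n$, and $a(R)-a(n)$ equals exactly $\frac{4}{\pi}\ln\ln n$ for the closed-form kernel. This logarithmically small gap is what drives the estimates that follow.

```python
# Quick numerical look (not from the paper) at the scales entering the refined
# bounds: A sits inside B(n) \ B(n / ln^{M0} n) and R = n ln^2 n.
import math

EULER_GAMMA = 0.5772156649015329
def a(r):
    return (2 / math.pi) * math.log(r) + (2 * EULER_GAMMA + 3 * math.log(2)) / math.pi

n, M0 = 10**8, 2.0
ln_n, ln_ln_n = math.log(n), math.log(math.log(n))
radii = {"inner  n/ln^M0 n": n / ln_n**M0, "middle n        ": n, "outer  n ln^2 n ": n * ln_n**2}
for name, r in radii.items():
    print(f"{name}: a(r) = {a(r):7.3f}")
print("a(R) - a(n) =", a(n * ln_n**2) - a(n), "  vs (4/pi) ln ln n =", 4 / math.pi * ln_ln_n)
```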

Figure 3: Excursions $\mathrm{Ex}_0,\mathrm{Ex}_1,\mathrm{Ex}_2,\dots$ between $\partial B(n)$ and $\partial B(n\ln^2 n)$ and their visits to the set $A\subset B(n)\setminus B(n\ln^{-M_0}n)$.

Lemma 2.2. In the above situation, we have
$$P_x\big[\hat\tau_1(y)<\hat\tau_1(\partial B(R))\big] = \big(1+O(\ln^{-3}n)\big)\,\frac{a(x)a(R)+a(y)a(R)-a(x-y)a(R)-a(x)a(y)}{a(x)\big(2a(R)-a(y)\big)}. \qquad(30)$$

Proof. Analogously to (29), using (19)-(22) together with Lemma 2.1, we obtain that
$$P_x\big[\hat\tau_1(y)<\hat\tau_1(\partial B(R))\big] = P_x\big[\tau_1(y)<\tau_1(\partial B(R))\mid\tau_1(\partial B(R))<\tau_1(0)\big]\big(1+O(n^{-1})\big) = B_1B_2^{-1}\big(1+O(n^{-1})\big),$$
where
$$B_1 = \frac{a(y)}{a(R)+O(R^{-1})}\Big(\frac{a(x)}{a(R)+O(R^{-1})}+\frac{a(y)}{a(R)+O(\frac{\|y\|}{R})}-\frac{a(x-y)}{a(R)+O(\frac{\|y\|}{R})}-\frac{a(x)a(y)}{\big(a(R)+O(R^{-1})\big)\big(a(R)+O(\frac{\|y\|}{R})\big)}\Big)
= \big(1+O(\ln^{-3}n)\big)\,\frac{a(y)\big(a(x)a(R)+a(y)a(R)-a(x-y)a(R)-a(x)a(y)\big)}{a^3(R)}$$

and
$$B_2 = \frac{a(x)}{a(R)+O(R^{-1})}\Big(\frac{a(y)}{a(R)+O(\frac{\|y\|}{R})}+\frac{a(y)}{a(R)+O(R^{-1})}-\frac{a^2(y)}{\big(a(R)+O(R^{-1})\big)\big(a(R)+O(\frac{\|y\|}{R})\big)}\Big)
= \big(1+O(\ln^{-3}n)\big)\,\frac{a(x)\big(2a(y)a(R)-a^2(y)\big)}{a^3(R)}.$$
Gathering the pieces, we obtain (30).

3 Proofs of the main results

We start with

Proof of Theorem 1.2. First, we describe informally the idea of the proof. We consider the visits to the set $A$ during excursions of the walk from $\partial B(n)$ to $\partial B(n\ln^2 n)$, see Figure 3. The crucial argument is the following: the randomness of $\mathcal V(A)$ comes from the number of excursions and not from the excursions themselves. If the number of excursions is around $c\frac{\ln n}{2\ln\ln n}$, then it is possible to show, using a standard weak-LLN argument, that the proportion of uncovered sites in $A$ is concentrated around $e^{-c}$. On the other hand, that number of excursions can be modeled roughly as $Y\frac{\ln n}{2\ln\ln n}$, where $Y$ is an Exponential(1) random variable. Then, $P[\mathcal V(A)\le s]\approx P[e^{-Y}\le s] = P[Y\ge\ln s^{-1}] = s$, as required.

We now give a rigorous argument. Let $\hat H$ be the conditional entrance measure for the conditioned walk $\hat S$, i.e.,
$$\hat H_A(x,y) = P_x\big[\hat S_{\hat\tau_1(A)}=y\mid\hat\tau_1(A)<\infty\big]. \qquad(31)$$
Let us first denote the initial piece of the trajectory by $\mathrm{Ex}_0 = \hat S[0,\hat\tau_1(\partial B(n))]$. Then, we consider a Markov chain $(\mathrm{Ex}_k, k\ge 1)$ of excursions between $\partial B(n)$ and $\partial B(n\ln^2 n)$, defined in the following way: for $k\ge 2$ the initial site of $\mathrm{Ex}_k$ is chosen according to the measure $\hat H_{B(n)}(z_{k-1},\cdot)$, where $z_{k-1}\in\partial B(n\ln^2 n)$ is the last

site of the excursion $\mathrm{Ex}_{k-1}$; also, the initial site of $\mathrm{Ex}_1$ is the last site of $\mathrm{Ex}_0$. The weights of the trajectories are chosen according to (8), i.e., each excursion is an $\hat S$-walk trajectory. It is important to observe that one may couple $(\mathrm{Ex}_k, k\ge 1)$ with the true excursions of the walk $\hat S$ in an obvious way: one just picks the excursions subsequently, each time tossing a coin to decide if the walk returns to $\partial B(n)$. Let
$$\psi_n = \min_{x\in\partial B(n\ln^2 n)}P_x\big[\hat\tau_1(\partial B(n))=\infty\big]$$
be the minimal probability to avoid $\partial B(n)$, starting at sites of $\partial B(n\ln^2 n)$. Using (17) it is straightforward to obtain that
$$P_x\big[\hat\tau_1(\partial B(n))=\infty\big] = \frac{2\ln\ln n}{\ln n}\Big(1+O\Big(\frac{\ln\ln n}{\ln n}\Big)\Big)$$
for any $x\in\partial B(n\ln^2 n)$, and so it also holds that
$$\psi_n = \frac{2\ln\ln n}{\ln n}\Big(1+O\Big(\frac{\ln\ln n}{\ln n}\Big)\Big). \qquad(32)$$
Let us consider a sequence of i.i.d. random variables $(\eta_k, k\ge 0)$ such that $P[\eta_k=1] = 1-P[\eta_k=0] = \psi_n$. Let $\hat N = \min\{k:\eta_k=1\}$, so that $\hat N$ is a Geometric random variable with mean $\psi_n^{-1}$. Now, (32) implies that $P_x[\hat\tau_1(\partial B(n))=\infty]-\psi_n \le O\big(\frac{(\ln\ln n)^2}{\ln^2 n}\big)$ for any $x\in\partial B(n\ln^2 n)$, so it is clear that $\hat N$ can be coupled with the actual number $N$ of excursions in such a way that $N\le\hat N$ a.s. and
$$P[N\neq\hat N] \le O\Big(\frac{\ln\ln n}{\ln n}\Big). \qquad(33)$$
(Here we use the following elementary fact: let $(Z_n, n\ge 1)$ be a sequence of $\{0,1\}$-valued random variables adapted to a filtration $(\mathcal F_n, n\ge 1)$ and such that $P[Z_{n+1}=1\mid\mathcal F_n]\in[p,p+\varepsilon]$ a.s.; then the total variation distance between $\min\{k:Z_k=1\}$ and the Geometric random variable with mean $p^{-1}$ is bounded above by $O(\varepsilon/p)$.)
Note that this construction preserves the independence of $\hat N$ from the excursion sequence $(\mathrm{Ex}_k, k\ge 1)$ itself. Define
$$\mathcal R^{(k)} = \frac{|A\cap(\mathrm{Ex}_0\cup\mathrm{Ex}_1\cup\dots\cup\mathrm{Ex}_k)|}{|A|} \quad\text{and}\quad \mathcal V^{(k)} = \frac{|A\setminus(\mathrm{Ex}_0\cup\mathrm{Ex}_1\cup\dots\cup\mathrm{Ex}_k)|}{|A|} = 1-\mathcal R^{(k)}$$

to be the proportions of visited and unvisited sites of $A$ with respect to the first $k$ excursions together with the initial piece $\mathrm{Ex}_0$. Now, it is straightforward to check that (30) implies that, for any $x\in\partial B(n)$ and $y\in A$,
$$P_x\big[\hat\tau_1(y)<\hat\tau_1(\partial B(n\ln^2 n))\big] = \frac{2\ln\ln n}{\ln n}\Big(1+O\Big(\frac{\ln\ln n}{\ln n}\Big)\Big), \qquad(34)$$
and, for $y,z\in B(n)\setminus B(n\ln^{-M_0}n)$ such that $\|y-z\|\ge n/b$ with $b\le 2\ln^{M_0}n$,
$$P_z\big[\hat\tau_1(y)<\hat\tau_1(\partial B(n\ln^2 n))\big] = \frac{2\ln\ln n+\ln b}{\ln n}\Big(1+O\Big(\frac{\ln\ln n}{\ln n}\Big)\Big). \qquad(35)$$
For $y\in A$ and a fixed $k\ge 1$ consider the random variable
$$\xi^{(k)}_y = \mathbf 1\{y\notin\mathrm{Ex}_0\cup\mathrm{Ex}_1\cup\dots\cup\mathrm{Ex}_k\},$$
so that $\mathcal V^{(k)} = |A|^{-1}\sum_{y\in A}\xi^{(k)}_y$. Now, (34) implies that, for all $j\ge 1$,
$$P[y\notin\mathrm{Ex}_j] = 1 - \frac{2\ln\ln n}{\ln n}\Big(1+O\Big(\frac{\ln\ln n}{\ln n}\Big)\Big),$$
and (35) implies that
$$P[y\notin\mathrm{Ex}_0\cup\mathrm{Ex}_1] = 1 - O\Big(\frac{\ln\ln n}{\ln n}\Big)$$
for any $y\in A$. Let $\mu^{(k)}_y = E\xi^{(k)}_y$. Then we have
$$\mu^{(k)}_y = P[y\notin\mathrm{Ex}_0\cup\mathrm{Ex}_1\cup\dots\cup\mathrm{Ex}_k] = \exp\Big(-\frac{2k\ln\ln n}{\ln n}\Big(1+O\Big(k^{-1}+\frac{\ln\ln n}{\ln n}\Big)\Big)\Big). \qquad(36)$$
Next, we need to estimate the covariance of $\xi^{(k)}_y$ and $\xi^{(k)}_z$ in the case $\|y-z\|\ge n\ln^{-M_0}n$. First note that, for any $x\in\partial B(n)$,
$$P_x\big[\{y,z\}\cap\mathrm{Ex}_1=\emptyset\big] = 1 - P_x[y\in\mathrm{Ex}_1] - P_x[z\in\mathrm{Ex}_1] + P_x\big[\{y,z\}\subset\mathrm{Ex}_1\big] = 1 - O\Big(\frac{\ln\ln n}{\ln n}\Big) + P_x\big[\{y,z\}\subset\mathrm{Ex}_1\big]$$

by (34), and
$$P_x\big[\{y,z\}\subset\mathrm{Ex}_1\big] = P_x\big[\max\{\hat\tau_1(y),\hat\tau_1(z)\}<\hat\tau_1(\partial B(n\ln^2 n))\big]
= P_x\big[\hat\tau_1(y)<\hat\tau_1(z)<\hat\tau_1(\partial B(n\ln^2 n))\big] + P_x\big[\hat\tau_1(z)<\hat\tau_1(y)<\hat\tau_1(\partial B(n\ln^2 n))\big]
\le P_x\big[\hat\tau_1(y)<\hat\tau_1(\partial B(n\ln^2 n))\big]\,P_y\big[\hat\tau_1(z)<\hat\tau_1(\partial B(n\ln^2 n))\big] + P_x\big[\hat\tau_1(z)<\hat\tau_1(\partial B(n\ln^2 n))\big]\,P_z\big[\hat\tau_1(y)<\hat\tau_1(\partial B(n\ln^2 n))\big]
= O\Big(\frac{M_0(\ln\ln n)^2}{\ln^2 n}\Big).$$
Therefore, similarly to (36) we obtain
$$E\big(\xi^{(k)}_y\xi^{(k)}_z\big) = \exp\Big(-\frac{4k\ln\ln n}{\ln n}\Big(1+O\Big(k^{-1}+\frac{\ln\ln n}{\ln n}\Big)\Big)\Big),$$
which, together with (36), implies after some elementary calculations that, for all $y,z\in A$ such that $\|y-z\|\ge n\ln^{-M_0}n$,
$$\mathrm{cov}\big(\xi^{(k)}_y,\xi^{(k)}_z\big) = O\Big(\Big(\frac{\ln\ln n}{\ln n}\Big)^2 + k\Big(\frac{\ln\ln n}{\ln n}\Big)^2\exp\Big(-\frac{2k\ln\ln n}{\ln n}\Big)\Big) = O\Big(\frac{\ln\ln n}{\ln n}\Big),$$
uniformly in $k$ (since $u e^{-u}$ is bounded). Recall the notation $\ell_A$ from (11). Now, using Chebyshev's inequality, we write
$$P\Big[\Big|\,|A|^{-1}\sum_{y\in A}\big(\xi^{(k)}_y-\mu^{(k)}_y\big)\Big|>\varepsilon\Big] \le \varepsilon^{-2}|A|^{-2}\,\mathrm{Var}\Big(\sum_{y\in A}\xi^{(k)}_y\Big) = \varepsilon^{-2}|A|^{-2}\sum_{y,z\in A}\mathrm{cov}\big(\xi^{(k)}_y,\xi^{(k)}_z\big)
= \varepsilon^{-2}|A|^{-2}\Big(\sum_{\substack{y,z\in A:\ \|y-z\|<n\ln^{-M_0}n}}\mathrm{cov}\big(\xi^{(k)}_y,\xi^{(k)}_z\big) + \sum_{\substack{y,z\in A:\ \|y-z\|\ge n\ln^{-M_0}n}}\mathrm{cov}\big(\xi^{(k)}_y,\xi^{(k)}_z\big)\Big) \qquad(37)$$

$$\le \varepsilon^{-2}|A|^{-2}\sum_{y\in A}\Big|A\cap B\Big(y,\frac{n}{\ln^{M_0}n}\Big)\Big| + \varepsilon^{-2}O\Big(\frac{\ln\ln n}{\ln n}\Big) \le \varepsilon^{-2}\ell_A + \varepsilon^{-2}O\Big(\frac{\ln\ln n}{\ln n}\Big). \qquad(38)$$
Let
$$\Phi_s = \min\big\{k:\mathcal V^{(k)}\le s\big\}$$
be the number of excursions necessary to make the unvisited proportion of $A$ at most $s$. We have
$$P[\mathcal V(A)\le s] = P[\Phi_s\le N] = P[\Phi_s\le N,\ N=\hat N] + P[\Phi_s\le N,\ N\neq\hat N] = P[\Phi_s\le\hat N] + P[\Phi_s\le N,\ N\neq\hat N] - P[\Phi_s\le\hat N,\ N\neq\hat N],$$
so, recalling (33),
$$\Big|P[\mathcal V(A)\le s] - P[\Phi_s\le\hat N]\Big| \le P[N\neq\hat N] \le O\Big(\frac{\ln\ln n}{\ln n}\Big). \qquad(39)$$
Next, we write
$$P[\Phi_s\le\hat N] = E\,P[\hat N\ge\Phi_s\mid\Phi_s] = E(1-\psi_n)^{\Phi_s}, \qquad(40)$$
and concentrate on obtaining lower and upper bounds on the expectation on the right-hand side of (40). For this, assume that $s\in(0,1)$ is fixed and abbreviate
$$\delta_n = \Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}, \qquad k_n^- = \Big\lfloor\frac{(1-\delta_n)\ln n}{2\ln\ln n}\ln s^{-1}\Big\rfloor, \qquad k_n^+ = \Big\lceil\frac{(1+\delta_n)\ln n}{2\ln\ln n}\ln s^{-1}\Big\rceil;$$
we also assume that $n$ is sufficiently large so that $\delta_n\in(0,\frac12)$ and $1<k_n^-<k_n^+$. Now, according to (36),
$$\mu^{(k^\pm_n)}_y = \exp\Big(-(1\pm\delta_n)\ln s^{-1}\Big(1+O\Big((k^\pm_n)^{-1}+\frac{\ln\ln n}{\ln n}\Big)\Big)\Big) = s\exp\Big(\ln s^{-1}\Big(\mp\delta_n+O\Big((k^\pm_n)^{-1}+\frac{\ln\ln n}{\ln n}\Big)\Big)\Big) \qquad(41)$$

$$= s\Big(1+O\Big(\delta_n\ln s^{-1}+\frac{\ln\ln n}{\ln n}\ln s^{-1}\Big)\Big),$$
so in both cases it holds that (observe that $s\ln s^{-1}\le 1/e$ for all $s\in[0,1]$)
$$\mu^{(k^\pm_n)}_y = s + O(\delta_n). \qquad(42)$$
With a similar calculation, one can also observe that $(1-\psi_n)^{k^\pm_n} = s+O(\delta_n)$. We then write, using (41),
$$P[\Phi_s>k^+_n] = P\big[\mathcal V^{(k^+_n)}>s\big] = P\Big[|A|^{-1}\sum_{y\in A}\xi^{(k^+_n)}_y>s\Big] = P\Big[|A|^{-1}\sum_{y\in A}\big(\xi^{(k^+_n)}_y-\mu^{(k^+_n)}_y\big)>s-|A|^{-1}\sum_{y\in A}\mu^{(k^+_n)}_y\Big] = P\Big[|A|^{-1}\sum_{y\in A}\big(\xi^{(k^+_n)}_y-\mu^{(k^+_n)}_y\big)>O(\delta_n)\Big]. \qquad(43)$$
Then, (38) implies that
$$P[\Phi_s>k^+_n] \le O\Big(\ell_A\Big(\frac{\ln n}{\ln\ln n}\Big)^{2/3}+\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\Big). \qquad(44)$$
Quite analogously, one can also obtain that
$$P[\Phi_s<k^-_n] \le O\Big(\ell_A\Big(\frac{\ln n}{\ln\ln n}\Big)^{2/3}+\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\Big). \qquad(45)$$
Using (42) and (44), we then write
$$E(1-\psi_n)^{\Phi_s} \ge E\big((1-\psi_n)^{\Phi_s}\mathbf 1\{\Phi_s\le k^+_n\}\big) \ge (1-\psi_n)^{k^+_n}\,P[\Phi_s\le k^+_n] \ge \big(s-O(\delta_n)\big)\Big(1-O\Big(\ell_A\Big(\frac{\ln n}{\ln\ln n}\Big)^{2/3}+\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\Big)\Big) \ge s - O\Big(\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}+\ell_A\Big(\frac{\ln n}{\ln\ln n}\Big)^{2/3}\Big), \qquad(46)$$

and, using (42) and (45),
$$E(1-\psi_n)^{\Phi_s} = E\big((1-\psi_n)^{\Phi_s}\mathbf 1\{\Phi_s\ge k^-_n\}\big) + E\big((1-\psi_n)^{\Phi_s}\mathbf 1\{\Phi_s<k^-_n\}\big) \le (1-\psi_n)^{k^-_n} + P[\Phi_s<k^-_n] \le s + O\Big(\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}+\ell_A\Big(\frac{\ln n}{\ln\ln n}\Big)^{2/3}\Big). \qquad(47)$$
Therefore, using also (39)-(40), we obtain (12), thus concluding the proof of Theorem 1.2.

Next, we prove Theorems 1.6 and 1.7, since the latter will be needed in the course of the proof of Theorem 1.5.

Proof of Theorem 1.6. Clearly, we only need to prove that every infinite subset of $\mathbb Z^2$ is recurrent for $\hat S$. Basically, this is a consequence of the fact that, due to (10),
$$\lim_{\|y\|\to\infty}P_{x_0}\big[\hat\tau_1(y)<\infty\big] = \frac12 \qquad(48)$$
for any $x_0\in\mathbb Z^2$. Indeed, let $\hat S_0=x_0$; since $A$ is infinite, by (48) one can find $y_0\in A$ and $R_0$ such that $\{x_0,y_0\}\subset B(R_0)$ and
$$P_{x_0}\big[\hat\tau_1(y_0)<\hat\tau_1(\partial B(R_0))\big] \ge \tfrac13.$$
Then, for any $x_1\in\partial B(R_0)$, we can find $y_1\in A$ and $R_1>R_0$ such that $y_1\in B(R_1)\setminus B(R_0)$ and
$$P_{x_1}\big[\hat\tau_1(y_1)<\hat\tau_1(\partial B(R_1))\big] \ge \tfrac13.$$
Continuing in this way, we can construct a sequence $R_0<R_1<R_2<\dots$ (depending on the set $A$) such that, for each $k\ge 0$, the walk $\hat S$ hits $A$ on its way from $\partial B(R_k)$ to $\partial B(R_{k+1})$ with probability at least $\frac13$, regardless of the past. This clearly implies that $A$ is a recurrent set.

Proof of Theorem 1.7. Indeed, Theorem 1.6 implies that every subset of $\mathbb Z^2$ must be either recurrent or transient, and then Proposition 3.8 in Chapter 2 of [10] implies the Liouville property. Still, for the reader's convenience, we include the

proof here. Assume that $h:\mathbb Z^2\setminus\{0\}\to\mathbb R$ is a bounded function which is harmonic for $\hat S$. Let us prove that
$$\liminf_{\|y\|\to\infty}h(y) = \limsup_{\|y\|\to\infty}h(y), \qquad(49)$$
that is, $h$ must have a limit at infinity. Indeed, assume that (49) does not hold, which means that there exist two constants $b_1<b_2$ and two infinite sets $B_1,B_2\subset\mathbb Z^2$ such that $h(y)\le b_1$ for all $y\in B_1$ and $h(y)\ge b_2$ for all $y\in B_2$. Now, on one hand, $h(\hat S_n)$ is a bounded martingale, so it must converge a.s. to some limit; on the other hand, Theorem 1.6 implies that both $B_1$ and $B_2$ will be visited infinitely often by $\hat S$, and so $h(\hat S_n)$ cannot converge to any limit, thus yielding a contradiction. This proves (49). Now, if $\lim_{\|y\|\to\infty}h(y)=c$, then it is easy to obtain from the Maximum Principle that $h(x)=c$ for all $x$. This concludes the proof of Theorem 1.7.

Finally, we are able to prove that there are big holes in the range of $\hat S$:

Proof of Theorem 1.5. Clearly, if $G$ does not surround the origin in the sense of Definition 1.4, then $G\subset B(c_1)\setminus B(c_3)$. For the sake of simplicity, let us assume that $G\subset B(1)\setminus B(1/2)$; the general case can be treated in a completely analogous way. Consider the two sequences of events
$$E'_n = \big\{\hat\tau_1(\partial B(2^{3n}))<\hat\tau_1(2^{3n-1}G),\ \|\hat S_j\|>2^{3n-1}\text{ for all }j\ge\hat\tau_1(\partial B(2^{3n}))\big\},$$
$$E_n = \big\{\|\hat S_j\|>2^{3n-1}\text{ for all }j\ge\hat\tau_1(\partial B(2^{3n}))\big\},$$
and note that $E'_n\subset E_n$ and $2^{3n-1}G\cap\hat S[0,\infty)=\emptyset$ on $E'_n$. Our goal is to show that a.s. an infinite number of the events $(E'_n, n\ge 1)$ occurs. Observe, however, that the events in each of the above two sequences are not independent, so the basic second Borel-Cantelli lemma will not work. In the following, we use a generalization of the second Borel-Cantelli lemma, known as the Kochen-Stone theorem [6]: for any sequence of events $A_1,A_2,A_3,\dots$ it holds that
$$P\Big[\sum_{k=1}^{\infty}\mathbf 1\{A_k\}=\infty\Big] \ge \limsup_{k\to\infty}\frac{\big(\sum_{i=1}^{k}P[A_i]\big)^2}{\sum_{i,j=1}^{k}P[A_i\cap A_j]}. \qquad(50)$$
Let us prove that there exists a positive constant $c_4$ such that
$$P[E'_n] \ge \frac{c_4}{n} \quad\text{for all } n\ge 1. \qquad(51)$$

Indeed, since $G\subset B(1)\setminus B(1/2)$ does not surround the origin, by comparison with Brownian motion it is elementary to obtain that, for some $c_5>0$,
$$P_x\big[\tau_1(\partial B(2^{3n}))<\tau_1(2^{3n-1}G)\big] > c_5$$
for all $x\in B(2^{3n-2})$. Lemma 2.1 then implies that, for some $c_6>0$,
$$P_x\big[\hat\tau_1(\partial B(2^{3n}))<\hat\tau_1(2^{3n-1}G)\big] > c_6 \qquad(52)$$
for all $x\in B(2^{3n-2})$. Let us denote, recalling (7), $\gamma' = \frac{2\gamma+3\ln 2}{2\ln 2}$, so that $a(2^k) = \frac{2\ln 2}{\pi}(k+\gamma')+O(2^{-2k})$. Using (17), we then obtain
$$P_z\big[\|\hat S_j\|>2^{3n-1}\text{ for all }j\ge 0\big] = 1 - \frac{a(2^{3n-1})+O(2^{-3n})}{a(2^{3n})+O(2^{-3n})} = \frac{1+o(2^{-3n})}{3n+\gamma'} \qquad(53)$$
for any $z\in\partial B(2^{3n})$. The inequality (51) follows from (52) and (53).

Now, we need an upper bound for $P[E'_m\cap E'_n]$, $m<n$. Clearly, $E'_m\cap E'_n\subset E_m\cap E_n$, and note that the event $E_m\cap E_n$ means that the particle hits $\partial B(2^{3n})$ before $\partial B(2^{3m-1})$ starting from a site on $\partial B(2^{3m})$, and then never hits $\partial B(2^{3n-1})$ starting from a site on $\partial B(2^{3n})$. So, again using (16), (17) and Lemma 2.1, we write, analogously to (53) (and omitting a couple of lines of elementary calculations),
$$P[E'_m\cap E'_n] \le P[E_m\cap E_n] = \frac{\big(a(2^{3m-1})\big)^{-1}-\big(a(2^{3m})\big)^{-1}+O(2^{-3m})}{\big(a(2^{3m-1})\big)^{-1}-\big(a(2^{3n})\big)^{-1}+O(2^{-3m})}\cdot\Big(1-\frac{a(2^{3n-1})+O(2^{-3n})}{a(2^{3n})+O(2^{-3n})}\Big) = \frac{1+o(2^{-3m})}{\big(3(n-m)+1\big)\big(3m+\gamma'\big)}. \qquad(54)$$
Now, (51) implies that $\sum_{i=1}^{k}P[E'_i]\ge c_9\ln k$, and (54) implies (after some elementary calculations) that $\sum_{i,j=1}^{k}P[E'_i\cap E'_j]\le c_{10}\ln^2 k$. So, applying (50) to the sequence of events $(E'_n, n\ge 1)$, we obtain that
$$P\Big[\sum_{k=1}^{\infty}\mathbf 1\{E'_k\}=\infty\Big] \ge c_{11}>0.$$
Now, note that, again due to Proposition 3.8 in Chapter 2 of [10], the Liouville property implies that every tail event must have probability 0 or 1, and so the probability in the above display must be equal to 1. This concludes the proof of Theorem 1.5.
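To see numerically why the Kochen-Stone bound (50) gives something nontrivial here (this sketch is not part of the paper), one can plug in model bounds of the form $P[E'_n]\ge c_4/n$ and $P[E'_m\cap E'_n]\le C/\big((3(n-m)+1)(3m+\gamma')\big)$, as in (51) and (54): the squared numerator of (50) grows like $\ln^2 k$ and so does the denominator, so the ratio stays bounded away from zero, which is all that (50) needs. The constants c4, C and g below are arbitrary placeholders, not values from the paper.

```python
# Numerical illustration (not from the paper) of the Kochen-Stone ratio in (50)
# under the model bounds P[E'_n] >= c4/n and P[E'_m cap E'_n] <= C/((3(n-m)+1)(3m+g))
# for m < n, with the trivial bound C/m on the diagonal.  c4, C, g are placeholders.
import math

c4, C, g = 0.3, 2.0, 1.5

def kochen_stone_ratio(k):
    s1 = sum(c4 / i for i in range(1, k + 1))                 # lower bound for sum of P[E'_i]
    s2 = sum(C / i for i in range(1, k + 1))                   # diagonal terms
    s2 += 2 * sum(C / ((3 * (n - m) + 1) * (3 * m + g))        # off-diagonal terms, m < n
                  for m in range(1, k + 1) for n in range(m + 1, k + 1))
    return s1 * s1 / s2

for k in (10, 100, 1000, 3000):
    print(k, round(kochen_stone_ratio(k), 4))   # stays bounded away from 0 as k grows
```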

References

[1] J. Černý, A. Teixeira (2012) From random walk trajectories to random interlacements. Ensaios Matemáticos [Mathematical Surveys] 23. Sociedade Brasileira de Matemática, Rio de Janeiro.

[2] F. Comets, C. Gallesco, S. Popov, M. Vachkovskaia (2013) On large deviations for the cover time of two-dimensional torus. Electron. J. Probab. 18, article 96.

[3] F. Comets, S. Popov (2017) The vacant set of two-dimensional critical random interlacement is infinite. Ann. Probab. 45.

[4] F. Comets, S. Popov, M. Vachkovskaia (2016) Two-dimensional random interlacements and late points for random walks. Commun. Math. Phys. 343.

[5] A. Drewitz, B. Ráth, A. Sapozhnikov (2014) An Introduction to Random Interlacements. Springer.

[6] S. Kochen, C. Stone (1964) A note on the Borel-Cantelli lemma. Illinois J. Math. 8(2).

[7] G. Lawler, V. Limic (2010) Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics 123. Cambridge University Press, Cambridge.

[8] M. Menshikov, S. Popov, A. Wade (2017) Non-homogeneous Random Walks: Lyapunov Function Methods for Near-Critical Stochastic Systems. Cambridge University Press, Cambridge.

[9] S. Popov, A. Teixeira (2015) Soft local times and decoupling of random interlacements. J. Eur. Math. Soc.

[10] D. Revuz (1984) Markov Chains. North-Holland Publishing Co., Amsterdam, 2nd edition.

[11] A.-S. Sznitman (2010) Vacant set of random interlacements and percolation. Ann. Math. (2) 171(3).

[12] W. Woess (2009) Denumerable Markov Chains. Generating Functions, Boundary Theory, Random Walks on Trees. EMS Textbooks in Mathematics. European Mathematical Society (EMS), Zürich.
