
Biased random walks on subcritical percolation clusters with an infinite open path

Application for Transfer to DPhil Student Status

Dissertation

Annika Heckel

March 3, 2012

Abstract

We consider subcritical percolation on the half-plane $\mathbb{Z} \times \mathbb{Z}^+_0$ and additionally open the bonds along the line $\mathbb{Z} \times \{0\}$. A biased random walk starting at the origin which can move along the open bonds shows anomalous behaviour due to trapping: if the bias is small enough, the speed tends to a positive deterministic limit, but if the bias is large enough, the speed tends to 0. We will also give a generalisation of this result, attaching other types of subgraphs of $\mathbb{Z}^d$ to an infinite line.

1 Setting and main result

Consider subcritical bond percolation with $p < p_c = \frac{1}{2}$ on the half-plane $\mathbb{Z} \times \mathbb{Z}^+_0$ with nearest-neighbour edges, and additionally open all bonds along the line $\mathbb{Z} \times \{0\}$. Let $G = (V, E)$ denote the open cluster containing the line $\mathbb{Z} \times \{0\}$. Since the percolation is subcritical, $G$ is almost surely an infinite line with finite clusters attached to it.

We now start a biased random walk $Z_n = (X_n, Y_n)$ with bias $\beta > 1$ towards the right at the origin $(0, 0)$. More specifically, we assign to each edge $e = (x, y)(x', y')$ the weight
$$w_e = \beta^{x \vee x'}\, \mathbf{1}_{\{e \text{ is open}\}},$$
where $x \vee x'$ denotes the maximum of $x$ and $x'$. For $z \in \mathbb{Z}^2$, let $\pi(z)$ be the sum of the edge weights $w_e$ of all edges incident to $z$. Then the next step of the random walk is distributed proportionally to the edge weights. In other words, if $P^\beta_\omega$ denotes the law of this random walk for some bond configuration $\omega$, and $z$, $z'$ are vertices joined by an edge $e$ such that $\pi(z) > 0$, then
$$P^\beta_\omega(Z_{n+1} = z' \mid Z_n = z) = \frac{w_e}{\pi(z)}.$$
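To make the transition rule concrete, here is a small sketch (an illustration added to this write-up, not part of the dissertation; the function names are ad hoc) that computes the one-step distribution $w_e/\pi(z)$ at a vertex from its set of open incident edges.

```python
# Illustrative sketch of the one-step rule P(Z_{n+1} = z' | Z_n = z) = w_e / pi(z).
# All names here (edge_weight, step_distribution) are ad hoc, not from the dissertation.

def edge_weight(z, z_prime, beta):
    """Weight beta^(x v x') of the open edge between z and z'."""
    (x1, _), (x2, _) = z, z_prime
    return beta ** max(x1, x2)

def step_distribution(z, open_neighbours, beta):
    """Distribution of the next position given the open neighbours of z."""
    weights = {z2: edge_weight(z, z2, beta) for z2 in open_neighbours}
    pi_z = sum(weights.values())            # pi(z): total weight of edges incident to z
    return {z2: w / pi_z for z2, w in weights.items()}

# Example: at (3, 0) with the line bonds to (2, 0) and (4, 0) open and the bond to (3, 1) open,
# the walk steps right with probability beta/(beta+2) and left/up with probability 1/(beta+2) each.
print(step_distribution((3, 0), [(2, 0), (4, 0), (3, 1)], beta=2.0))
```

For a vertex on the line with both line bonds and one upward bond open, this reproduces the probabilities $\beta/(\beta+2)$ to the right and $1/(\beta+2)$ to the left and upwards, in line with the description that follows.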

This means that if the random walk is at a vertex $v = (x, y) \in V$ with degree $d(v)$ (in $G$) and the bond $(x, y)(x+1, y)$ is open, it jumps to $(x+1, y)$ with probability $\frac{\beta}{\beta + d(v) - 1}$ and to each of the other neighbours of $v$ with probability $\frac{1}{\beta + d(v) - 1}$. If the bond $(x, y)(x+1, y)$ is closed, the next step is distributed uniformly amongst the neighbours of $v$.

Let $P^\beta$ denote the joint law of the percolation configuration and the random walk. We sometimes omit the indices $\beta$ or $\omega$ in $P^\beta$ and $P^\beta_\omega$ for the sake of simpler notation if the context is clear. We will prove the following theorem:

Theorem 1. There are constants $1 < \beta_l \le \beta_u < \infty$ which only depend on $p$ such that

a) if $1 < \beta < \beta_l$, then there is a positive constant $v = v(p, \beta)$ such that $\lim_{n \to \infty} X_n/n = v$ $P^\beta$-almost surely, i.e. the random walk is ballistic;

b) if $\beta > \beta_u$, then $X_n/n \to 0$ $P^\beta$-almost surely.

Furthermore, if $\beta_l \le \beta \le \beta_u$, then $X_n/n \to v = v(p, \beta) \ge 0$ $P^\beta$-almost surely.

The described anomalous behaviour, where a small but existing bias yields a positive speed while under a large bias the asymptotic speed is zero, is due to the random walk spending a large amount of time in traps or dead ends in the random structure. In our case, the traps are finite percolation clusters extending very far towards the right. Once the random walk moves into one of the traps, a large bias means that it takes a long time to move out of the trap against the direction of the bias.

Lyons, Pemantle and Peres [6] considered the case of a random walk on Galton–Watson trees with positive extinction probability and showed this behaviour for a random walk with a bias which is directed away from the root. For a biased random walk on an infinite percolation cluster on $\mathbb{Z}^d$, which is a model for transport in an inhomogeneous medium under an external field first introduced by Barma and Dhar [1], this behaviour was proved by Berger, Gantert and Peres [2] and by Sznitman [8]. Recent progress has been made on this model in [3] and [4].

Later, in Section 3, we will consider the case of an infinite line with more general subgraphs of $\mathbb{Z}^d$ attached to it which may act as traps, and give a criterion for the ballisticity or non-ballisticity of a biased random walk in these environments. Unlike many other results in this area, our proof does not require regeneration times, and as a consequence the criterion and the derived formula for the asymptotic speed do not depend on such regeneration times and can in many cases be easily calculated.
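The dichotomy in Theorem 1 can be illustrated by simulation. The sketch below is my own illustration, not part of the dissertation: the window size, the bond probability $p = 0.25$ and the two bias values are arbitrary choices and say nothing about the actual values of $\beta_l$ and $\beta_u$. It samples the environment on a finite window of the half-plane with the line bonds forced open, runs the walk until it first reaches a fixed $x$-coordinate, and reports an empirical speed. For the small bias the walk crosses quickly, while for the large bias it is dramatically slowed down by traps and may not cross at all within the step budget; a finite run of course only hints at the almost-sure limits in the theorem.

```python
import random

# Monte Carlo illustration of the trapping dichotomy (my own sketch; parameters are arbitrary).
# Environment: half-plane box {-20,...,200} x {0,...,25}, bonds open with probability p < 1/2,
# except that every bond on the line y = 0 is open.

def sample_environment(p, x_min, x_max, height, rng):
    """Return the set of open bonds, each stored as a frozenset of its two endpoints."""
    open_bonds = set()
    for x in range(x_min, x_max):
        for y in range(0, height + 1):
            # horizontal bond (x,y)-(x+1,y): always open on the line y = 0
            if y == 0 or rng.random() < p:
                open_bonds.add(frozenset({(x, y), (x + 1, y)}))
    for x in range(x_min, x_max + 1):
        for y in range(0, height):
            # vertical bond (x,y)-(x,y+1)
            if rng.random() < p:
                open_bonds.add(frozenset({(x, y), (x, y + 1)}))
    return open_bonds

def crossing_time(beta, p, target_x=200, height=25, max_steps=2_000_000, seed=0):
    """Steps until the walk started at (0,0) first reaches x = target_x (None if it times out)."""
    rng = random.Random(seed)
    open_bonds = sample_environment(p, -20, target_x, height, rng)
    z = (0, 0)
    for step in range(1, max_steps + 1):
        x, y = z
        neighbours, weights = [], []
        for z2 in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if frozenset({z, z2}) in open_bonds:
                neighbours.append(z2)
                # w_e = beta^(x v x'); the common factor beta^x is dropped to avoid overflow
                weights.append(beta ** (max(x, z2[0]) - x))
        z = rng.choices(neighbours, weights=weights)[0]
        if z[0] >= target_x:
            return step
    return None

if __name__ == "__main__":
    p = 0.25
    for beta in (1.2, 8.0):          # "small" and "large" bias, chosen for illustration only
        t = crossing_time(beta, p)
        if t is None:
            print(f"beta = {beta}: did not reach x = 200 within the step budget")
        else:
            print(f"beta = {beta}: empirical speed roughly {200 / t:.4f}")
```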

This dissertation is organised in the following way. The main result will be proved in Section 2. It will turn out that the key point for the ballisticity or non-ballisticity of the random walk is the expected duration of its first visit to a section of the connected component containing the origin, weighted with the inverse of the length of the section (roughly speaking, a section is a finite cluster attached to the line $\mathbb{Z} \times \{0\}$ together with the line segment under it). The term section will be specified in Section 2.1, and some basic properties of the random walk will be discussed there. In Section 2.2, we will prove some lemmas about the distribution of the size of sections. In Section 2.3, the weighted durations of visits to a section will be studied. In Section 2.3.1, it will be shown that there are values $1 < \beta_l \le \beta_u < \infty$ such that for $\beta < \beta_l$, the expected weighted duration of the first visit to a section is finite, whereas for $\beta > \beta_u$, it is infinite. These will also be the values $\beta_l$ and $\beta_u$ in Theorem 1. In Section 2.3.2, this will be extended to the weighted overall time spent within a section. Next, in Section 2.3.3, it will be shown that the sequence of weighted overall times the random walk spends in the section containing $(x, 0)$, $x \in \mathbb{Z}$, is ergodic. This will be needed later on to show that their average tends to their expectation almost surely. Finally, the proof of Theorem 1 will be concluded in Section 2.4. In Section 3, a generalisation of our result will be discussed and our methods will be applied to two other special cases.

2 Proof of the main result

2.1 Preliminaries and transience of the random walk

The idea behind the term section is just a finite open cluster attached to the line $\mathbb{Z} \times \{0\}$ together with the line segment and smaller clusters below it. More formally, consider the infinite connected open cluster $G = (V, E)$ which contains the vertices in $\mathbb{Z} \times \{0\}$ together with the open bonds $E$. We call a bond $b$ between some sites $(x, 0)$ and $(x+1, 0)$ on the line $\mathbb{Z} \times \{0\}$ a bridge if $(V, E \setminus \{b\})$ is disconnected. Suppose we cut all the bridges, i.e. we look at the graph $G' = (V, E \setminus \{b : b \text{ is a bridge}\})$. Then the components of $G'$ are called sections.

Cutting the bridges separates $\mathbb{Z} \times \{0\}$ into components, and there cannot be any open paths between different components of $\mathbb{Z} \times \{0\}$ using only vertices in the upper half-plane, because otherwise one of the adjacent bonds of the components would not be a bridge. So every section has a corresponding connected line segment. All these line segments are almost surely finite: otherwise, suppose w.l.o.g. that $\{(x_1, 0), (x_1+1, 0), \ldots\}$ is contained in the same section. Then for every $y \ge x_1$, $(y, 0)(y+1, 0)$ is not a bridge, so there is an open path from a vertex $(l, 0)$ to a vertex $(r, 0)$, $l \le y < r$, which apart from its end points only uses vertices of the upper half-plane. Since any two paths from $(l_1, 0)$ to $(r_1, 0)$ and from $(l_2, 0)$ to $(r_2, 0)$, respectively, intersect if $l_1 \le l_2 \le r_1 \le r_2$, it follows that

there are infinitely many integers $r \ge x_1$ such that there is an open path from a vertex $(x, 0)$, $x \le x_1$, to $(r, 0)$ in the upper half-plane. In particular, $|C((r, 0))| \ge r - x_1$, where $C((r, 0))$ denotes the open cluster containing $(r, 0)$ in the original percolation configuration. Since we are considering subcritical percolation, the cluster size distribution decays exponentially, so there is a constant $\eta > 0$ such that for every $r$, $P(|C((r, 0))| \ge r - x_1) \le e^{-\eta (r - x_1)}$. But then $\sum_{r \ge x_1} P(|C((r, 0))| \ge r - x_1) \le \frac{1}{1 - e^{-\eta}} < \infty$, so by the Borel-Cantelli lemma, there are almost surely only finitely many $r \ge x_1$ with $|C((r, 0))| \ge r - x_1$. So all line segments are almost surely finite.

Let $S(x)$ denote the section containing the vertex $(x, 0)$. For a section $S$, let $l_S, r_S \in \mathbb{Z}$ be such that $(l_S, 0)$ and $(r_S, 0)$ are the leftmost and rightmost vertices of $S \cap (\mathbb{Z} \times \{0\})$, respectively. Then $S$ contains $\{l_S, l_S+1, \ldots, r_S\} \times \{0\}$ completely and no other vertices from $\mathbb{Z} \times \{0\}$. By the section definition, $(l_S-1, 0)(l_S, 0)$ and $(r_S, 0)(r_S+1, 0)$ are bridges. Let $l(S) = r_S - l_S + 1$ be the length of the section $S$ on the line $\mathbb{Z} \times \{0\}$.

Since every section corresponds to a finite connected line segment on $\mathbb{Z} \times \{0\}$, the sections appear linearly one after the other along the line $\mathbb{Z} \times \{0\}$. Let $S_0 := S(0)$ and number the sections $S_1, S_2, \ldots$ and $S_{-1}, S_{-2}, \ldots$ along the line $\mathbb{Z} \times \{0\}$. Denote by $i_x$ the index of the section $S(x)$, i.e. $S(x) = S_{i_x}$.

[Figure 1: Sections $(S_i)_{i \in \mathbb{Z}}$ along the line $\mathbb{Z} \times \{0\}$, with border vertices $l_{S_i}$ and $r_{S_i}$.]

We will often consider the corresponding electric network where each edge $e$ is assigned the conductance $w_e$, or equivalently the resistance $w_e^{-1}$, and use the terms weight and conductance interchangeably. For the close relationship between reversible random walks and electric networks, see [7]. The random walk is transient if and only if the conductance between the origin and infinity is positive (or equivalently, the resistance is finite). It can easily be seen, by bounding the conductance from below by the positive conductance along the open path $\mathbb{Z} \times \{0\}$, that the random walk is transient. Since $G$ is almost surely a concatenation of finite sections, this implies that for almost all configurations $\omega$,
$$\lim_{n \to \infty} X_n = \infty \quad \text{or} \quad \lim_{n \to \infty} X_n = -\infty \qquad P^\beta_\omega\text{-almost surely.}$$
On the subgraph $G \cap \{(x, y) : x \le 0\}$, for all $\omega$ such that all sections are finite, the

resistance between the origin and infinity can be bounded from below by the sum of the resistances of the bridges, which is infinite. Therefore, the random walk is recurrent on the subgraph $G \cap \{(x, y) : x \le 0\}$, and hence, for almost all $\omega$,
$$\lim_{n \to \infty} X_n = \infty \qquad P^\beta_\omega\text{-a.s.} \tag{1}$$

For technical reasons, we will consider a slightly modified version of our random walk from now on: instead of starting at the origin $(0, 0)$, we will start at the left border vertex $(l_{S(0)}, 0)$ of the section $S(0)$. Both the original and the new version of the random walk almost surely go to infinity and therefore reach the vertex $(r_{S(0)}, 0)$ eventually. Hence we can couple them so that they coincide from the first time they visit $(r_{S(0)}, 0)$. Since this almost surely only takes a finite amount of time and since the distance difference $|l_{S(0)}|$ is almost surely finite, this does not affect the overall asymptotic speed of the random walk. Therefore we can prove Theorem 1 for the random walk started at $(l_{S(0)}, 0)$ instead.

Throughout the proof we will need the following lemma about the probability of never returning to a vertex $z = (x, 0)$ on the line $\mathbb{Z} \times \{0\}$.

Lemma 2. For all $n \in \mathbb{N}_0$, configurations $\omega$, and $x \in \mathbb{Z}$,
$$P_\omega(\forall k > n : Z_k \ne (x, 0) \mid Z_n = (x, 0)) \ \ge\ \frac{\beta - 1}{\beta + 2} =: p_{\mathrm{esc}}$$

Proof. The escape probability from a vertex $z$, which is just the probability that the random walk started at $z$ never returns to $z$, is given by $C^{\mathrm{eff}}_{z, \infty} / \pi(z)$, where $C^{\mathrm{eff}}_{z, \infty}$ denotes the effective conductance between $z$ and infinity in the corresponding electric network where each edge $e$ is assigned the conductance $w_e$, and where $\pi(x)$ denotes the sum of the weights or conductances of all edges adjacent to $x$ (see for example Theorem 19.25 in [5]).

[Figure 2: Lower bound for $C^{\mathrm{eff}}_{z, \infty}$ along the direct path to infinity; the consecutive edges to the right of $(x, 0)$ have conductances $\beta^{x+1}, \beta^{x+2}, \beta^{x+3}, \beta^{x+4}, \ldots$]

If $z = (x, 0)$ is on the line $\mathbb{Z} \times \{0\}$, $C^{\mathrm{eff}}_{z, \infty}$ can be bounded from below by the effective conductance of the direct path along the $x$-axis from $z$ to infinity, which can be calculated with the series rule:
$$C^{\mathrm{eff}}_{z, \infty} \ \ge\ \left( \frac{1}{\beta^{x+1}} + \frac{1}{\beta^{x+2}} + \cdots \right)^{-1} = \left( \frac{1}{\beta^{x+1}} \sum_{i=0}^{\infty} \beta^{-i} \right)^{-1} = \beta^{x+1} (1 - \beta^{-1})$$
Therefore, the escape probability can be bounded from below as well:

$$\frac{C^{\mathrm{eff}}_{z, \infty}}{\pi(z)} \ \ge\ \frac{\beta^{x+1}(1 - \beta^{-1})}{\beta^{x+1} + 2\beta^x} = \frac{\beta - 1}{\beta + 2} \in (0, 1)$$

Since for almost every environment $\omega$ all sections are finite and $X_n \to \infty$ almost surely, the universal lower bound for the escape probability implies the next lemma, which can be seen by considering the points $(l_{S_i}, 0)$, $i \in \mathbb{N}_0$.

Lemma 3. For almost every environment $\omega$, it almost surely happens infinitely many times that the random walk visits a section for the first time and never returns to previous sections. More precisely, for almost every $\omega$ there are $P^\beta_\omega$-almost surely times $0 \le t_0 < t_1 < t_2 < \ldots$ such that for every $j$, there is an $i_j \in \mathbb{N}$ such that $Z_{t_j} \in S_{i_j}$, for all $t < t_j$, $Z_t \notin S_{i_j}$, and for all $t \ge t_j$, $Z_t \notin \bigcup_{i < i_j} S_i$.

2.2 Sections and the distribution of their size

We will need some results about the distribution of the size of the sections later.

Lemma 4. For $x \in \mathbb{Z}$, let $\mathrm{diam}(S(x))$ be the diameter of the section $S(x)$, i.e.
$$\mathrm{diam}(S(x)) = \max\{ |x_1 - x_2| + |y_1 - y_2| : (x_1, y_1), (x_2, y_2) \in S(x) \}$$
Then there is a constant $\eta = \eta(p) > 0$ which does not depend on $x$ such that for all $n \ge 1$,
$$P(\mathrm{diam}(S(x)) \ge n) \le e^{-\eta n}$$

Proof. Note that the cluster $C := C((l_{S(x)}, 0))$ in the original percolation configuration is the same as $C((r_{S(x)}, 0))$. To see this, suppose that $(r_{S(x)}, 0) \notin C$. Let $(r, 0)$ be the rightmost vertex in $C \cap (\mathbb{Z} \times \{0\}) \subseteq S$, so $l_{S(x)} \le r < r_{S(x)}$. Since $(r, 0)(r+1, 0)$ is not a bridge, there must be a path in the original percolation configuration from a vertex $(v_1, 0)$ with $v_1 \le r$ to a vertex $(v_2, 0)$ with $v_2 \ge r+1$. Since $(l_{S(x)}-1, 0)(l_{S(x)}, 0)$ is a bridge, $v_1$ must be at least $l_{S(x)}$. But any open path in the upper half-plane from $[l_{S(x)}, r] \times \{0\}$ to $[r+1, \infty) \times \{0\}$ must intersect $C$. So $(v_2, 0) \in C \subseteq S$ and $v_2 > r$, a contradiction. Therefore, $C$ spans the line segment $[l_{S(x)}, r_{S(x)}]$ and the entire section $S(x)$. So for any $(x_1, y_1), (x_2, y_2) \in S(x)$, we have $|x_1 - x_2| + |y_1 - y_2| \le |C|$ and therefore $\mathrm{diam}(S(x)) \le |C|$.

Let $n \ge 2$, and suppose $\mathrm{diam}(S(x)) \ge n$, hence $|C| \ge n$. Since $C = C((l_{S(x)}, 0))$ also contains $(r_{S(x)}, 0)$, we have $|C((l_{S(x)}, 0))| \ge r_{S(x)} - l_{S(x)} \ge x - l_{S(x)}$, so $|C((l_{S(x)}, 0))| \ge (x - l_{S(x)}) \vee n$. Therefore,
$$P(\mathrm{diam}(S(x)) \ge n) \ \le\ P\Big( \bigcup_{y \le x} \big\{ |C((y, 0))| \ge (x - y) \vee n \big\} \Big)$$

7 Since we consider subcritical percolation, the cluster size distribution decays exponentially, so there is a positive constant η such that for any z Z Z + 0 and any n 2, P( C(z) n) e η n. Therefore, the probability above can be bounded by P ( C((y, 0)) n) + P ( C((y, 0)) x y) x n<y x ne η n + y Z:y x n y Z:y x n e η n e η (x y) = ne η n + e η Since P(diam(S(x)) n) < for all n, this implies that there is a constant η = η(p) > 0 such that for all n, P(diam(S(x)) n) e ηn Since the number S(x) of vertices in S(x) is bounded by (diam(s(x)) + ) 2, it follows immediately that P( S(x) n) e η( n ). In particular, E S(x) = P( S(x) n) < (2) n= Recall that i x denotes the index of the section S(x), i.e. S(x) = S ix. Then the index behaves roughly linear in x, which is proved in the following lemma. Lemma 5. i x lim x x = lim x x x y=0 l(s(y)) = E < almost surely. Proof. It is a known fact and can easily be checked that if (X i ) i Z is an ergodic sequence of random variables with values in (E, E) and φ : (E N, E N ) (E, E ) is measurable, then the sequence (φ((x i+j ) i Z )) j Z is also ergodic. Therefore, the sequence ( l(s(x)) ) x Z is ergodic: Note that is a measurable function in the independent bond states, and these can be seen as an i.i.d. (and hence ergodic) sequence of bond state vectors. Shifting the bond configuration to the left x times and applying the same measurable function gives l(s(x)). The second equality now follows from Birkhoff s ergodic theorem, and since, the expectation is finite. For the first equality, note that the terms l(s(y)) for all y such that S(y) = S(x) add up to. So the sum x y=0 l(s(y)) counts the number of sections until (x, 0), except possibly a part of S 0 and of S ix, so i x x y=0 l(s(y)) i x + 7

8 Dividing by x and taking the limit gives the first equality. Lemma 6. n lim n n i=0 x l(s i ) = lim = x i x E Proof. Note that Hence, almost surely, n r Sn l(s i ) r Sn + l(s 0 ) i=0 n r Sn lim l(s i ) = lim n n n n i=0 = lim r Sn n i rsn i x = lim x x since r Sn as n. The second equality follows from the previous lemma. 2.3 Durations of visits to a section and their expectations We will see later that the ballisticity of the random walk will depend on the expected amount of time the random walk spends within a section during its first visit to this section, divided by the length of the section. We will first investigate when this expectation is finite or infinite for different values of β Expected duration of the first visit to a section and definition of β l and β u Let T i (x) be the time the random walk spends in S(x) during its i th visit to S(x). If there is no i th visit to S(x), let T i (x) = 0. More specifically, for x Z, let s x 0 = 0, and for i, define inductively: t x i = inf{t s x i Z t S(x)} s x i = inf{t t x i Z t / S(x)} Now for all x Z, i N, let T i (x) = s x i tx i if t x i < and let T i (x) = 0 if t x n i =. Since by (), X n almost surely, the random walk almost surely visits S(x) at least once for all x 0, so t x < and T (x) > 0. Let T (x) = i=0 T i(x) denote the overall amount of time spent in S(x). Recall that we are considering a modified version of the random walk that starts at the vertex (l S(0), 0). It will turn out that the key to the ballisticity of this random walk (and therefore of the original random walk) will be the expectation 8

9 of T (0) divided by the length of the section S(0), i.e. the expectation E T (0). We will later show that the speed of the random walk is positive if and only if this expectation is finite. To simplify notation later, let W i (x) = weighted visit lengths. T i(x) l(s(x)) and W (x) = T (x) l(s(x)) be the The aim of this section is to show that there are values < β l β u < such that for < β < β l, EW (0) is finite, whereas it is infinite for β > β u. These will then also be the constants β l and β u in Theorem. Whenever it needs to be made clear that an expectation corresponds to P β for a specific β, it is denoted by E β. Usually the index β is omitted. We want to study EW (0) more closely using expected return times to get bounds for the critical values in a similar way as it was done for β u in 2. However, since the possible exits from our traps (the sections) do not have the same x-coordinate, they may have very different weights, and this complicates things considerably. We must consider random walks started at the left border vertex and at the right border vertex of a section separately. So consider the time T L the random walk, started at time 0 at (l S(0), 0), takes to exit S(0) either via (l S(0), 0) or via (r S(0) +, 0), i.e. the smallest t such that Z t = (l S(0), 0) or Z t = (r S(0) +, 0). Let W L = T L. Then, since the random walk starts at (l S0, 0), EW (0) = EW L (3) Also consider the time T R it takes the random walk, started at (r S(0), 0), to exit section S(0) either via (l S(0), 0) or via (r S(0) +, 0), and let W R = T R. Let A be a possible configuration for S(0), i.e. finite percolation clusters and a corresponding line segment that form a possible shape for the section S(0), so that the leftmost point of A (Z {0}) is the origin. Let p A = P(S(0) A), where S(0) A means that S(0) has the same shape as A but is shifted along the x-axis (see Figure 3). Let l(a) be the length of A on the x-axis. Then (l(a), 0) is the rightmost vertex of A (Z {0}). Let A denote the union of A and the adjacent vertices (0, ) and (l(a), 0) together with the edges (, 0)(0, 0) and (l(a), 0)(l(A), 0). For x A, let π (x) be the sum of all edge weights of edges incident to x within A. Let π (A ) denote the sum of all weights π (x) of all vertices x in A. For a configuration A, consider a weighted random walk moving on A started at (0, 0). Let EW L A = E T L l(a) A denote the expected weighted time it takes this random walk to reach the exit vertices (, 0) or (l(a), 0). Now consider the same for a random walk started at (l(a), 0), and denote the corresponding 9

10 S(0) (l S(0), 0) (r S(0), 0) A (, 0) (l(a), 0) (0, 0) (l(a), 0) Figure 3: S(0) is of shape A. A is A with the exit vertices (, 0) and (l(a), 0). expectation by EW R A = E T R l(a) A. Then we have EW L = A EW R = A p A EW L A p A EW R A Lemma 7. For all β >, E β W L < E β W R < Proof. For every bond configuration ω, the probability that a random walk started at (l S(0), 0) never returns to (l S(0), 0) is at least p esc by Lemma 2. Since X n almost surely for almost every bond configuration ω, and since the random walk needs to pass through (r S(0), 0) if X n, it follows that the probability of its first step being up or towards the right and of then visiting (r S(0), 0) before returning to (l S(0), 0) (if ever) is at least p esc. Since this is true for almost every bond configuration (all configurations where all sections are well-defined and finite), it is true for every possible finite configuration A. So for every finite possible configuration A, the probability that the random walk started at (0, 0) takes it first step up or towards the right and visits (l(a), 0) before it returns to (0, 0) is at least p esc. For it to reach the vertices (, 0) or (l(a), 0), it needs to pass through (0, 0) or (l(a), 0) first. Hence, with the strong Markov property, the expected time for it to reach (, 0) or (l(a), 0) is at least p esc times the expected time it takes for the random walk 0

11 to reach (, 0) or (l(a), 0) if started from (l(a), 0). Therefore, ET L A p esc ET R A and hence, EW R = A T R p A E l(a) A T L p A E p esc l(a) A A = p esc EW L So if EW L <, it follows that EW R < as well since p esc > 0. Note that the converse statement (E β W R < E β W L < ), if it is true at all, can not be deduced in the same manner since there is no positive lower bound for the probability to reach (0, 0) before returning to (l(a), 0) which holds for all finite possible configurations A. Finally, consider a weighted random walk on A where both exit vertices (, 0) and (l(a), 0) are merged into one new vertex s which inherits the edges (, 0)(0, 0) and (l(a), 0)(l(A), 0) and their weights from (, 0) and (l(a), 0), and start the random walk at the new vertex s. The weight of each vertex x s in this new graph is still π (x), and let π (s) = π ((, 0))+π ((l(a), 0)) = + β l(a) be the weight of the new vertex s. The sum of all vertex weights is still π (A ). Let T be the return time of the random walk to s if it is started at s at time 0, and denote the corresponding weighted time by W, then EW A = E T l(a) A = l(a) ET A. Note that π defines an invariant measure for the Markov chain given by the transition probabilities of the random walk on A. So π divided by the overall weight π(a ) is the unique invariant distribution of the Markov chain. The expected return time ET A is then given by the inverse of this invariant distribution at s (see for example Theorem 7.5 in 5), so ET A = π (A ) π (s) = π (A ). Therefore, we have +β l(a) EW A = π (A ) l(a)( + β l(a) ) (4) Starting at s, the random walk moves to (0, 0) with probability and to +β l(a) (l(a), 0) with probability βl(a), and then moves from there according to +β l(a) the edge weights until it returns to s. Hence, And therefore, ET A = + + β l(a) ET L A + βl(a) + β l(a) ET R A EW A = l(a) + + β l(a) EW L A + βl(a) + β l(a) EW R A (5)

12 It follows with (4) that π (A ) l(a) = + βl(a) l(a) + EW L A + β l(a) EW R A EW L A (6) Let EW = A π (A ) p A l A ( + β l(a) ) = A p A EW A (7) Then we have the following lemma: Lemma 8. For any β >, E β W = E β W L = Proof. Suppose EW L <, then by Lemma 7, EW R <. For any configuration A, by (5), EW A + EW L A + EW R A, and therefore EW + EW L + EW R <. Hence, EW = implies EW L =. Now we can finally prove the two lemmas which are the aim of this section: Lemma 9. There is a constant β l = β l (p) > such that for all < β < β l, T (0) E β W (0) = E < Proof. By (3), E β W (0) = E β W L. By (6), for every A, E β W L A π (A ) l(a). Hence, it suffices to show that there is a β l such that for all < β < β l, A p A π (A ) l(a) <. If A is a configuration for S(0), then A (diam(a) + ) 2 (with diam(a) defined as in Lemma 4). Also, the weight π (x) of any vertex within A is bounded by 4β diam(a)+. So, since l(a), p A π (A ) p A ((diam(a) + ) 2 + 2) 4β diam(a)+ l(a) A A = p A ((n + ) 2 + 2) 4β n+ n A:diam(A)=n = (n 2 + 2n + 3) 4β n+ p A n n A:diam(A)=n 24n 2 β n+ P(diam(S(0)) = n) 24β n n 2 ( βe η) n, using Lemma 4 in the last step. This sum is convergent for < β < e η. Lemma 0. There is a constant β u = β u (p) such that for all β > β u, T (0) E β W (0) = E β = 2

13 Proof. Since by (3), E β W (0) = E β W L, and by Lemma 8, E β W = implies E β W L =, it suffices to show that there is a β u such that for all β > β u, E β W =. Let A n be the section in the form of a hook consisting of the open edges ((0, 0), (0, )) and ((0, ), (, )),..., ((n, ), (n, )), with l(a n ) = (see Figure 4). β β 2... β n β... (0, 0) (n, 0) Figure 4: The hook A n and the box B n. Then π (A n) l(a n )( + β l(an) ) = 2 e E(A ) w e + β = βn+ β 2 > 2 βn+ β 2 = 2 ( + β + + β + β β n ) + β = (8) We want to give a lower bound for p An = P(S(0) A). For this, consider the box B n = {0,..., n} {0, } as in Figure 4. Note that S(0) A n is equivalent to the bonds ((0, 0), (0, )) and ((0, ), (, )),..., ((n, ), (n, )) being open and all other bonds in B n being closed (so that S(0) has the shape A n ) and that the boundary bonds are closed and there is no open path from a vertex (i, 0), i < 0, to a vertex (j, 0), j > n, which contains none of the vertices in B n. This ensures that the bonds (, 0)(0, 0) and (0, 0)(, 0) are bridges, so S(0) is indeed hook-shaped and the hook is not just part of a larger section. For n N, let D n be the event that there is an open path from a vertex (i, 0), i < 0, to a vertex (j, 0), j > n, which contains none of the vertices in B n. Then D n is independent from the bond states inside and on the boundary of B n and therefore, p An = p n+ ( p) 2n+3 P(D C n ) D n implies that there is a j n such that C((j, 0)) j, where C((j, 0)) is the open cluster containing (j, 0) in the original percolation configuration. Since we are considering subcritical percolation, this probability decays exponentially in j, so P( C((j, 0)) j) e η j for some η > 0, and hence P(D n ) P { C((j, 0)) j} e η j = j n j n 3 e ηn e η n 0

14 Let N = N(p) N be large enough so that for all n N, the last term is at most 2. So for all n N, p An = p n+ ( p) 2n+3 P(D C n ) 2 pn+ ( p) 2n+3. (9) Then E β W (7) = (8),(9) = π (A ) p A l(a)( + β l(a) ) π (A p n) An l(a A n N n )( + β l(an) ) ( ) 2 pn+ ( p) 2n+3 2 βn+ β 2 n N p( p) 3 β 2 p n ( p) 2n (β n+ ) n N The last sum is infinite for any β p ( p) Expected total visit length We will now prove that the expected total weighted visit length W (0) to S(0) is finite if and only if the expectation of the weighted first visit length EW (0) is finite. Since W (0) W (0), it suffices to show the following lemma. Lemma. If E β W (0) = E β T (0) <, then E β W (0) <. Proof. Let β > be such that E β W (0) <. This also means that the expected weighted visit length EW R if the random walk is started at (r S(0), 0), averaged over all possible configurations for S(0) as defined in the previous section, is finite (see Lemma 7). Let C = C(p, β) < be an upper bound for both of these expectations. Every time the random walk enters the section S(0), it goes to (r S(0), 0) with probability at least p esc (if it is not already there), and then with probability at least p esc exits towards the right from (r S(0), 0) and never returns to S(0). So the probability of never returning to section S(0) after the current visit is ( 2 at least p 2 esc = β β+2) for any environment where S(0) is finite. So if J denotes the total number of visits to S(0), for all finite possible configurations A and j, Hence, since J, P(J j + J j, S(0) A) p 2 esc P(J j S(0) A) ( p 2 esc) j (0) 4

15 If we only condition on J j, the j th visit is not distributed as the first visit starting from one of the border vertices (without conditions) because the condition J j gives information on the environment and therefore on the distribution of the visit length. However, this is not a problem when also conditioning on a fixed configuration A for S(0) and we can use the upper bound EW j (0) J j, S(0) A EW L A + EW R A, where W L and W R are the weighted visit lengths when starting at the (l S(0), 0) and (r S(0), 0), respectively. As mentioned above, we know that A p A(EW L A+EW R A) 2C. Hence, E W (0) = A = A (0) j p A E W (0) S(0) A = p A E W j (0) J j S(0) A A j p A E W j (0) J j, S(0) A P(J j S(0) A) j ( p 2 esc ) j A p A (E W L A + E W R A ) 2C p 2 esc < Ergodicity Lemma We want to use the convergence or divergence of the expectation of W (x) later by taking averages which tend to EW (x). To make this possible, we need the ergodicity of the sequence (W (x)) x N0. ( ) Lemma 2. The sequence (W (x)) x N0 = T (x) l(s(x)) is ergodic if the random x N 0 walk is started at the left border vertex (l S(0), 0) of S(0). Proof. The sequence is stationary: We started the random walk at (l S(0), 0) and the first visit to any section S(x), x 0 starts at (l S(x), 0). Now consider the sequence (W (x)) x. Since X n almost surely, the random walk visits S() eventually and starts its first visit at (l S(), 0). The distribution of (W (x)) x now only depends on (S(x + )) x Z in the same way that the distribution of (W (x)) x 0 depends on (S(x)) x Z, and as (S(x + )) x Z and (S(x)) x Z have the same distribution, so do (W (x)) x and (W (x)) x 0. So (W (x)) x N0 is stationary. So we only need to prove that the σ-algebra I of shift-invariant events (in the sequence (W (x)) x N0 ) is almost surely trivial. More specifically, consider the space ( Q N 0, F = σ ( )) (W (x)) x N0 with the shift operator τ : (xn ) n N0 (x n+ ) n N0. Let P be the probability measure on this space generated by the sequence (W (x)) x N0, that is for every A F, P(A) := P β ( (W (x)) x N0 A ). Then the σ-algebra of shift-invariant events is I = {A F τ (A) = A} 5

16 Let A I, then we need to prove P(A) = P β ( (W (x)) x N0 A ) {0, }. It is possible to generate the bond environment ω and the random walk on it in the following way: Assign to each bond i.i.d. Bernoulli random variables which describe the bond states as usual. Independently from this, assign to each vertex 2 4 independent infinite sequences of i.i.d. random variables: Each sequence corresponds to a possible configuration of open bonds incident to this vertex (except when the vertex is isolated), and each entry is a random variable valued {left, right, up, down} which is distributed according to the corresponding bond configuration. We can then piece together the values of all these random variables to generate the random walk: First work out the environment, then for every vertex take the infinite sequence which corresponds to its local bond configuration and use the n th entry for the decision if the random walk visits this vertex for the n th time, which way does it jump?. Let d denote a possible outcome of all jump decision variables and ω a bond state configuration as usual. It is almost surely the case that all sections are finite and that the random walk tends to infinity, and by Lemma 3 it almost surely happens infinitely many times that the random walk visits a section for the first time and never returns to previous sections afterwards. Let E be the event that all these events occur (so E has probability ), and let E be the event that E holds and we cannot reach a configuration (ω, d) / E by changing only finitely many of the bond states and jump decisions. Then E also holds almost surely otherwise, since we only have countably many bond states and jump decisions and each of them can only take finitely many values, there would be a finite set of bond state and jump decision random variables so that conditional on them taking certain values (which has positive probability), E has probability strictly less than. But this contradicts P(E ) =. So we have P(E) =. Now let A = { (ω, d) (W (x)) x N0 A } E Then, since P β (E) =, we have P β (A ) = P β ( (W (x)) x N0 A ) = P(A). We claim that A is a tail event in the independent bond state and jump decision random variables. This suffices to prove the claim since then by Kolmogorov s 0- law, P(A ) {0, }. To see this, suppose (ω, d) A, and suppose in (ω, d ) finitely many of the outcomes of the bond states and jump decision variables are changed. Then (ω, d ) E, so all sections are still finite, the random walk tends to positive infinity and it happens infinitely many times that it visits a section for the first time and never goes back. Denote by (Z t ) t N0 the original and by (Z t) t N0 the new random walk, and by S (x), T (x) the new sections and visit lengths, and T (x) l(s (x)) let W (x) =. Changing finitely many bond states affects at most finitely many sections (which may be altered, split, merged). Let x 0 N 0 such that (Z t ) t N0 never returns to previous sections after its first visit to S(x 0 ), and such that S(x 0 ) is the first such section after the 6

17 changes to the bond states and jump decision variables. This means that (S (x 0 ), S (x 0 + ),...) = (S(x 0 ), S(x 0 + )...) and the jump decision variables for all z x x 0 S(x) remain unchanged. Then, as (Z t) t N0 also reaches the section S(x 0 ) eventually, from this point onwards Z and Z agree. Hence for all x x 0, T (x) = T (x) and l(s (x)) = l(s(x)), so (W (x)) x x0 = ( W (x) ) x x 0 Since A is shift-invariant, τ x 0 (A) = A, so (W (x)) x 0 A (W (x)) x x0 A Hence (W (x)) x 0 A, and since we also have (ω, d ) E, it follows that (ω, d ) A. So A is a tail event in the independent bond state and jump decision random variables, and as described before, the claim follows with Kolmogorov s 0- law. With Birkhoff s ergodic theorem, we have the following corollary: Corollary 3. x T (y) T (0) lim x x l(s(y)) = E y=0 almost surely 2.4 Finishing the proof Let T i be the total time spent in section S i, so (T i ) i N0 is a (random) subsequence of (T (x)) x N0 where repetitions have been eliminated. Then T (x) = T ix. For all x, the terms W (y) = T (y) l(s(y)) with y such that S(y) = S(x) add up to T (x). Let (r i, 0) denote the right border vertex of a section S i. Then Hence, r n + n r n y=0 y=0 n W (y) i=0 r n W (y) n r n + n i=0 r n T i T 0 + y=0 W (y) T i T 0 n + r n + n r n r n + y=0 W (y) r By Lemma 6, we have lim nn x n = lim x i x = E almost surely, so with Corollary 3, since T 0 < almost surely, n E T (0) lim T i = a.s. () n n E i=0 7

18 Looking at () and Lemma 6, the following proposition seems immediate (but a formal proof requires some more work): Proposition 4. X n lim n n = E T (0) almost surely where := 0. Of course this proposition, together with Lemmas 9 and 0, suffices to prove Theorem. For the proof of Proposition 4, we need two more technical lemmas. The first one ensures that sections do not extend too far over their border vertices. Lemma 5. Let u > 0, and for n N 0, let G n be the event that S n > un or S n > un. Then ( ) P lim sup G n = P n n=0 k n G k = 0, i.e. there are almost surely only finitely many n such that G n holds. Proof. First note that for every u > 0, x= P( S(x) > u x ) = x= P( S(0) > u x ) ( ) u + P( S(0) x ) ( ) u u + P( S(0) x) x= = + u + 2 ( u + ) E S(0) < x= by (2). So by the Borel-Cantelli Lemma there almost surely only finitely many x such that S(x) > u x. Note that we cannot simply use the same argument for the sequence (S i ) i Z because the S i are not identically distributed. By r Lemma 6, Sn =: C > 0 almost surely. So there almost surely n E is a (random) n 0 that for all n n 0, r Sn < 2Cn. If S n > un infinitely often, this implies S(r Sn ) > un > u 2C r n for infinitely many n > n 0, and hence S(x) > u 2C x for infinitely many x. By the previous argument for u = u 2C, this has probability 0. So almost surely S n > un only finitely many times. The corresponding statement for S n follows because (S n ) n N has the same distribution as (S n ) n N. 8

19 The second lemma ensures that the random walk does not go back and forth too much between distant sections. Lemma 6. Let q >. For n N 0, let H n be the event that the random walk visits the section S n again after its first visit to S qn. Then ( ) P lim sup H n = P H k = 0, n n=0 k n i.e. there are almost surely only finitely many n such that H n occurs. Proof. Condition on a bond configuration ω such that all sections are finite (which has probability ). For any k Z, if the random walk is started at (l k, 0) or (r k, 0), with probability at least p esc it never returns to its starting point (see Lemma 2). Since by (), X n almost surely, this implies that the random walk almost surely does not enter S k again (since it would then have to pass through (l k, 0) and (r k, 0) again in order to go to + later). Hence, the probability that the random walks ever enters S k when started at (l k, 0) or (r k, 0) is at most p esc. Whenever the random walk enters a section S k from the right, it does so via (r k, 0). Therefore, we can conclude via induction with the strong Markov property of the random walk (conditioned on the bond configuration ω) that for any k, l N, l < k, the probability of returning to S l after the first visit to S k is at most ( p esc ) k l. In particular, P ω (H n ) ( p esc ) qn n ( p esc ) qn n, and therefore, P ω (H n ) ( p esc ) (q )n = ( p esc )( ( p esc ) q ) < n=0 n=0 Hence by the Borel-Cantelli lemma, P ω (lim sup n H n ) = 0 for almost every ω, and therefore, P (lim sup n H n ) = 0. Now we have all the ingredients to prove Proposition 4. Proof of Proposition 4. For n N 0, denote by F n the first time and by L n the last time the random walk is in section S n. Condition on the probability event that all clusters are finite and that X n. Then for all n 0, F n and L n are almost surely well-defined and finite, and F 0 < F <..., L 0 < L <.... If T denotes the total amount of time spent in sections S k with k < 0, then F n L n n T i + T + (2) i=0 n T i (3) i=0 9

20 Now fix u > 0 and q > and let N be a (random) integer large enough so that for all n N, G n and H n from Lemmas 5 and 6 do not hold. N < exists almost surely. Let t 0 = max{f Nq, L N } <, and let t t 0. Then there are (random) indices n, n 2 such that F n t < F n + and L n2 t < L n2 +. By the definition of t 0, we have n Nq and n 2 N. Note that n, n 2 as t. Since n Nq, H n q does not hold because n q Nq q = N. So the random walk does not return to any sections S n with n < n q after time F n because it would have to pass through S n q to get to any of the earlier sections. Also, for any section S n with n q n n, G n does not hold, so S n un < un. So for all vertices (x, y) in sections S n with n q n n, x l n q un. Hence, X t l n q un n q i= and so with (2), since t < F n +, X t t n q i=0 l(s i ) l(s 0 ) un n i=0 T i + T + l(s i ) un = n q i=0 l(s i ) l(s 0 ) un n q n n q n q i=0 l(s i ) l(s 0) n u = n n i=0 T i + T + n (4) With Lemma 6 and (), since n as t and l(s 0 ) and T are almost surely finite, the right-hand side tends to ( qe ) u T (0) E E = q ue E T (0) almost surely as t. Since (4) holds for all t t 0, we have almost surely: lim inf t X t t q ue Since u > 0 and q > were arbitrary and E almost surely, lim inf t X t t E E T (0) T (0) <, it follows that (5) For the other direction, since L n2 t < L n2 +, the walk never returns to S n2 after time t. Since n 2 N, H n2 + does not hold, so at time t the walk must be in a section S n with n 2 n < q(n 2 + ) because it will return to S n2 + at 20

21 time L n2 + > t. For any such n, G n does not hold, so S n un uq(n 2 + ). Let a = q(n 2 + ), then So with (3), since t L n2, a X t r a + uq(n 2 + ) l(s i ) + uq(n 2 + ) i=0 X t t a n2 i=0 l(s i) + uq(n 2 + ) i=0 T i = a n 2 + a a n 2 + i=0 l(s i) + uq n2 i=0 T i n 2 + (6) With Lemma 6 and (), since n 2 as t, and the right-hand side tends to q E T (0) E E lim sup t + uq X t t = q + uqe E T (0) a n 2 + = q(n 2+) n 2 + q, almost surely as t. Since (6) holds for all t t 0, we have almost surely: q + uqe Since u > 0 and q > were arbitrary and E almost surely, lim sup t X t t E E Together with (5), this implies the claim. T (0) T (0) <, it follows that 3 Generalisations The same proof, except for some of the results in Section 2.2 which were specific to subcritical percolation clusters and the definition of β u and β l, can be used to show a more general result. Only the ergodicity lemma in Section needs to be proved slightly differently for this more general case. Take the line Z {0} and attach subgraphs of Z d to the vertices they may overlap each other, or we can take them to be disjoint (and not embed the whole graph into Z d ). Suppose this is done in an ergodic way (the sequence of environments of (x, 0), x Z needs to be ergodic). Define sections as before, and suppose that almost surely every section is finite and the expectation of the size of the section containing the origin is finite. 2

22 Then the proof we used for the case of subcritical percolation on the half-plane carries over to this more general case we only need to slightly adjust the proof of the ergodicity lemma in Section by proving it in two steps, first conditioning on a fixed environment ω and proceeding as before, then showing that the set of all environments ω such that P ω ((W (x)) x Z A) = is in turn shift-invariant and therefore has probability 0 or. It follows that a random walk with bias β in the direction of the line Z {0} started at the leftmost vertex of S(0) (Z {0}) (and hence a walk started at the origin) is ballistic if and only if E <, where S(0), T (0) and so T (0) on are defined as before. The speed almost surely tends to T (0) E We will now apply our results and methods to two other special cases to determine the asymptotic behaviour of the speed of the biased random walk for different β in these contexts. In these cases, the critical value for the bias can be calculated explicitly. Example 7. We attach to each vertex (x, 0) the most basic kind of trap consisting of a hook of length L(x), i.e. a graph with the edges (0, 0)(0, ), (0, )(, ), (, )(2, ),... (L(x) 2, L(x) ), as in Figure 5. We attach them so they do not overlap. Let the L(x) be i.i.d. (or alternatively, they could also be chosen in an ergodic way), and let EL(0) <. Then the critical value for a biased random walk on the resulting graph is the radius of convergence r of the probability generating function G L(0) (z) = Ez L(0) of L(0). In other words, if < β < r, the speed of the random walk tends to a positive deterministic limit, while for β > r, it tends to 0. At the critical point, the speed tends to 0 if and only if G L(0) (r) =.. L(0) (0, 0) Figure 5: Line with i.i.d. hook-shaped traps. Proof. In this example the section S(0) consists exactly of (0, 0) and the hook attached to it. In particular, =, so the weighted visit lengths to S(x) are just the normal visit lengths. Every time the random walk is at (0, 0), it exits S(0) with probability β+ β+2 > 0. So the number of times it is at (0, 0) during one visit is geometrically distributed and has finite expectation. In between visits to (0, 0), the random walk makes independent excursions within 22

23 {0,..., L(0) } {}. So the overall first visit length to S(x) is finite if and only if the expected length of each excursion is. This is just the expected return time to (0, 0), which as before is given by the overall weight of S(0) divided by the weight of (0, 0) within S(0): + + β + β β L(0) E = E and this is convergent if and only if G L(0) (β) is. + βl(0) β = G L(0)(β) β + β 2 β The reason the calculations worked out so easily in this example is that there was just one attachment vertex for the traps and sections. In this case, the model is equivalent to a random walk on Z with waiting times whose distribution at each vertex is determined randomly first and which change in a certain way with β. Once there are several attachment vertices in S(0) (Z {0}), calculations quickly get more complicated. In some cases the critical point can still be calculated with a little more effort, as in the next example. Example 8. Consider bond percolation on the ladder Z {0, } where all the bonds (x, 0)(x +, 0) are always open, the bonds (x, 0)(x, ) are open with probability p and the bonds (x, )(x+, ) are open with probability q, 0 < p, q <, independently. Then β c := ( p)q is the critical value for the biased random walk starting at the origin with bias β in positive x-direction: If < β < β c, the speed tends to a positive deterministic limit, and if β β c, the speed tends to 0 almost surely. Sketch proof. It can easily be checked that indeed the sequence of environments of (x, 0), x Z, is ergodic and almost surely every section is finite and the expectation of the size of the section containing (0, 0) is finite, so our result applies. So if we start the random walk at the leftmost vertex of S(0) (Z {0}) (as before, this can be done without loss of generality and the result carries over to starting the walk at (0, 0)), the walk is ballistic if and only if E T (0) <. As in Figure 6, let (l, 0) and (r, 0) be the border vertices of S(0) (Z {0}), = r l+ the length of the section, (x l, ) and (x r, ) the border vertices of S(0) Z ({}). Note that the definition of sections implies x l l r x r and that {l, l+,..., r} {0} and {x l, x l +,..., x r } {} are completely contained in S(0) and (l, 0), (l, ) and (r, 0), (r, ) are open. Otherwise there would be bridges between (l, 0) and (r, 0) or S(0) would not be connected. Now T (0) is bounded by the sum of + T r, where T r is the first time the random walk either exits S(0) or visits one of the vertices in {(r, 0), (r, )}, and the expected time it takes the random walk to exit S(0) when started on the vertices (r, 0) and (r, ), respectively. If we only look at the first coordinate of the random walk on ({x l,..., r} {0, }) S(0), the random walk may stay where it is for an amount of time which 23

24 (x l, ) (x r, ) (l, 0) (0, 0) (r, 0) Figure 6: The section S(0) on a ladder. is geometrically distributed with finite mean m (if it has the option to move up and down in the second coordinate), and then it moves left with probability +β and right with probability β +β, or it moves right with probability if it cannot go left. So we can couple the walk with a walk on Z which jumps left with probability β +β and right with probability +β by making the same left/right jumps after the waiting times except when our original walk can only go right, in that case the new walk on Z may still go left. Then the random walk on Z has speed β β+ = v, so the time T r this walk takes to reach r (or (l, 0)), divided by r l, has expectation at most β+ β. Since T r r l, the +T r expectation of = +T r r l+ T r r l is also at most β β+. Our original random walk moves to the right whenever the new one does, and taking the possible waiting times with mean at most m into account, which are independent of the left/right movements, this expectation changes by a factor of at most + m. So E +Tr β+ ( + m) β <. Therefore, if the expected visit length when started at one of the vertices (r, 0) or (r, ) divided by is finite, so is E T (0). The converse statement is also true due to the positive escape probability and is proved in the same way as the corresponding Lemma 7 for subcritical percolation clusters. So start a random walk at s {(r, 0), (r, )}. Every time the random walk is at s, it can exit the section S(0) in at most two steps. Hence, the probability that it is there for the last time before exiting the section is at least β+2 β β+2 a lower bound which does not depend on the shape of the section. So if the expected weighted return time to s within S(0) is finite, so is the expected weighted total visit length when starting the random walk at s. If the random walk takes a step to the left and stays within {x l, x l +,..., r} {0, }, the expected weighted return time is the overall weight of this subgraph of S(0) divided by the weight of s and by, which is bounded from above by a constant. So if the expected weighted return time to s when the random walk stays in S(0) ({r, r +,..., x r } {0, }) is finite, so is the overall weighted visit length when starting the random walk at s. For the converse statement, note that whenever x r > r, when starting at s the probability of stepping into r +, x r {} and staying within S(0) (r, x r {0, }) during the entire visit is bounded by a constant from below. So if the expected overall weighted visit length is finite, so is the weighted return time 24

to $s$ in the subgraph $[r, x_r] \times \{1\}$. This is just
$$E\left[\frac{2(\beta^r + \beta^{r+1} + \cdots + \beta^{x_r})}{\pi(s)\, l(S(0))}\right] = E\left[\frac{2\beta^r(\beta^{x_r - r + 1} - 1)}{\pi(s)(\beta - 1)\, l(S(0))}\right],$$
where the weight $\pi(s)$ within $S(0) \cap ([r, x_r] \times \{0, 1\})$ is either $\beta^r$ or $\beta^r + \beta^{r+1}$.

Recapitulating, this means that $E\left[\frac{T_1(0)}{l(S(0))}\right] < \infty$, and therefore the random walk is ballistic, if and only if
$$E\left[\frac{2\beta^r(\beta^{x_r - r + 1} - 1)}{\pi(s)(\beta - 1)\, l(S(0))}\right] < \infty \qquad \text{for } s \in \{(r, 0), (r, 1)\}.$$
Since $\pi(s) \in \{\beta^r, \beta^r + \beta^{r+1}\}$, this is equivalent to $E\left[\frac{\beta^{x_r - r}}{l(S(0))}\right] < \infty$, and this is a quantity we can calculate explicitly:
$$E\left[\frac{\beta^{x_r - r}}{l(S(0))}\right] = \sum_{i,j,m,n \ge 0} \frac{\beta^j}{i + m + 1}\, P(l = -m,\ x_l = -(m+n),\ r = i,\ x_r = i + j)$$
$$= \sum_{i,j,m,n \ge 0} \frac{q^{i+j+n+m}(1-q)^2 p^2 (1-p)^{j+n} \beta^j}{i + m + 1} = p^2 (1-q)^2 \sum_{j \ge 0} \big(q(1-p)\beta\big)^j \sum_{i,m,n \ge 0} \frac{q^{i+m+n}(1-p)^n}{i + m + 1}.$$
The last double sum is a finite constant not depending on $\beta$, so this is convergent if and only if $\beta < \frac{1}{(1-p)q}$.

It seems reasonable to conjecture that also for subcritical percolation clusters there is just one critical point $\beta_c$. The proof in the last example does not directly carry over to this case, though, since the expected weighted time to reach the right border vertex if started at the left border vertex of a section may not be finite.

References

[1] M. Barma and D. Dhar. Directed diffusion in a percolation network. Journal of Physics C: Solid State Physics, 16:1451-1458, 1983.

[2] N. Berger, N. Gantert, and Y. Peres. The speed of biased random walk on percolation clusters. Probability Theory and Related Fields, 126:221-242, 2003.

[3] A. Fribergh. The speed of a biased random walk on a percolation cluster at high density. The Annals of Probability, 38(5):1717-1782, 2010.

[4] A. Fribergh and A. Hammond. Phase transition for the speed of the biased random walk on the supercritical percolation cluster. Preprint, 2011.

[5] A. Klenke. Probability Theory. Springer, London, 2008.

[6] R. Lyons, R. Pemantle, and Y. Peres. Biased random walks on Galton-Watson trees. Probability Theory and Related Fields, 106:249-264, 1996.

[7] R. Lyons with Y. Peres. Probability on Trees and Networks. Cambridge University Press, 2012. In preparation; current version available online.

[8] A.-S. Sznitman. On the anisotropic walk on the supercritical percolation cluster. Communications in Mathematical Physics, 240:123-148, 2003.


CONSTRAINED PERCOLATION ON Z 2

CONSTRAINED PERCOLATION ON Z 2 CONSTRAINED PERCOLATION ON Z 2 ZHONGYANG LI Abstract. We study a constrained percolation process on Z 2, and prove the almost sure nonexistence of infinite clusters and contours for a large class of probability

More information

The range of tree-indexed random walk

The range of tree-indexed random walk The range of tree-indexed random walk Jean-François Le Gall, Shen Lin Institut universitaire de France et Université Paris-Sud Orsay Erdös Centennial Conference July 2013 Jean-François Le Gall (Université

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

Part III. 10 Topological Space Basics. Topological Spaces

Part III. 10 Topological Space Basics. Topological Spaces Part III 10 Topological Space Basics Topological Spaces Using the metric space results above as motivation we will axiomatize the notion of being an open set to more general settings. Definition 10.1.

More information

The concentration of the chromatic number of random graphs

The concentration of the chromatic number of random graphs The concentration of the chromatic number of random graphs Noga Alon Michael Krivelevich Abstract We prove that for every constant δ > 0 the chromatic number of the random graph G(n, p) with p = n 1/2

More information

Percolation by cumulative merging and phase transition for the contact process on random graphs.

Percolation by cumulative merging and phase transition for the contact process on random graphs. Percolation by cumulative merging and phase transition for the contact process on random graphs. LAURENT MÉNARD and ARVIND SINGH arxiv:502.06982v [math.pr] 24 Feb 205 Abstract Given a weighted graph, we

More information

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1

Lecture 9. d N(0, 1). Now we fix n and think of a SRW on [0,1]. We take the k th step at time k n. and our increments are ± 1 Random Walks and Brownian Motion Tel Aviv University Spring 011 Lecture date: May 0, 011 Lecture 9 Instructor: Ron Peled Scribe: Jonathan Hermon In today s lecture we present the Brownian motion (BM).

More information

Stochastic Processes (Week 6)

Stochastic Processes (Week 6) Stochastic Processes (Week 6) October 30th, 2014 1 Discrete-time Finite Markov Chains 2 Countable Markov Chains 3 Continuous-Time Markov Chains 3.1 Poisson Process 3.2 Finite State Space 3.2.1 Kolmogrov

More information

Graph Theory. Thomas Bloom. February 6, 2015

Graph Theory. Thomas Bloom. February 6, 2015 Graph Theory Thomas Bloom February 6, 2015 1 Lecture 1 Introduction A graph (for the purposes of these lectures) is a finite set of vertices, some of which are connected by a single edge. Most importantly,

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

Bootstrap Percolation on Periodic Trees

Bootstrap Percolation on Periodic Trees Bootstrap Percolation on Periodic Trees Milan Bradonjić Iraj Saniee Abstract We study bootstrap percolation with the threshold parameter θ 2 and the initial probability p on infinite periodic trees that

More information

arxiv:math/ v1 [math.pr] 29 Nov 2002

arxiv:math/ v1 [math.pr] 29 Nov 2002 arxiv:math/0211455v1 [math.pr] 29 Nov 2002 Trees and Matchings from Point Processes Alexander E. Holroyd Yuval Peres February 1, 2008 Abstract A factor graph of a point process is a graph whose vertices

More information

Necessary and sufficient conditions for strong R-positivity

Necessary and sufficient conditions for strong R-positivity Necessary and sufficient conditions for strong R-positivity Wednesday, November 29th, 2017 The Perron-Frobenius theorem Let A = (A(x, y)) x,y S be a nonnegative matrix indexed by a countable set S. We

More information

Hitting Probabilities

Hitting Probabilities Stat25B: Probability Theory (Spring 23) Lecture: 2 Hitting Probabilities Lecturer: James W. Pitman Scribe: Brian Milch 2. Hitting probabilities Consider a Markov chain with a countable state space S and

More information

Stochastic modelling of epidemic spread

Stochastic modelling of epidemic spread Stochastic modelling of epidemic spread Julien Arino Centre for Research on Inner City Health St Michael s Hospital Toronto On leave from Department of Mathematics University of Manitoba Julien Arino@umanitoba.ca

More information

arxiv: v3 [math.pr] 10 Nov 2017

arxiv: v3 [math.pr] 10 Nov 2017 Harmonic measure for biased random walk in a supercritical Galton Watson tree Shen LIN LPMA, Université Pierre et Marie Curie, Paris, France -mail: shenlinmath@gmailcom arxiv:70708v3 mathpr 0 Nov 207 November

More information

Lecture Notes Introduction to Ergodic Theory

Lecture Notes Introduction to Ergodic Theory Lecture Notes Introduction to Ergodic Theory Tiago Pereira Department of Mathematics Imperial College London Our course consists of five introductory lectures on probabilistic aspects of dynamical systems,

More information

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018

Math 456: Mathematical Modeling. Tuesday, March 6th, 2018 Math 456: Mathematical Modeling Tuesday, March 6th, 2018 Markov Chains: Exit distributions and the Strong Markov Property Tuesday, March 6th, 2018 Last time 1. Weighted graphs. 2. Existence of stationary

More information

Counting Clusters on a Grid

Counting Clusters on a Grid Dartmouth College Undergraduate Honors Thesis Counting Clusters on a Grid Author: Jacob Richey Faculty Advisor: Peter Winkler May 23, 2014 1 Acknowledgements There are a number of people who have made

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Ω = e E d {0, 1} θ(p) = P( C = ) So θ(p) is the probability that the origin belongs to an infinite cluster. It is trivial that.

Ω = e E d {0, 1} θ(p) = P( C = ) So θ(p) is the probability that the origin belongs to an infinite cluster. It is trivial that. 2 Percolation There is a vast literature on percolation. For the reader who wants more than we give here, there is an entire book: Percolation, by Geoffrey Grimmett. A good account of the recent spectacular

More information

Zdzis law Brzeźniak and Tomasz Zastawniak

Zdzis law Brzeźniak and Tomasz Zastawniak Basic Stochastic Processes by Zdzis law Brzeźniak and Tomasz Zastawniak Springer-Verlag, London 1999 Corrections in the 2nd printing Version: 21 May 2005 Page and line numbers refer to the 2nd printing

More information

General Glivenko-Cantelli theorems

General Glivenko-Cantelli theorems The ISI s Journal for the Rapid Dissemination of Statistics Research (wileyonlinelibrary.com) DOI: 10.100X/sta.0000......................................................................................................

More information

Probability and Measure

Probability and Measure Probability and Measure Robert L. Wolpert Institute of Statistics and Decision Sciences Duke University, Durham, NC, USA Convergence of Random Variables 1. Convergence Concepts 1.1. Convergence of Real

More information

1 Gambler s Ruin Problem

1 Gambler s Ruin Problem 1 Gambler s Ruin Problem Consider a gambler who starts with an initial fortune of $1 and then on each successive gamble either wins $1 or loses $1 independent of the past with probabilities p and q = 1

More information

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ).

Connectedness. Proposition 2.2. The following are equivalent for a topological space (X, T ). Connectedness 1 Motivation Connectedness is the sort of topological property that students love. Its definition is intuitive and easy to understand, and it is a powerful tool in proofs of well-known results.

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

Sharpness of second moment criteria for branching and tree-indexed processes

Sharpness of second moment criteria for branching and tree-indexed processes Sharpness of second moment criteria for branching and tree-indexed processes Robin Pemantle 1, 2 ABSTRACT: A class of branching processes in varying environments is exhibited which become extinct almost

More information

E-Companion to The Evolution of Beliefs over Signed Social Networks

E-Companion to The Evolution of Beliefs over Signed Social Networks OPERATIONS RESEARCH INFORMS E-Companion to The Evolution of Beliefs over Signed Social Networks Guodong Shi Research School of Engineering, CECS, The Australian National University, Canberra ACT 000, Australia

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

Graph coloring, perfect graphs

Graph coloring, perfect graphs Lecture 5 (05.04.2013) Graph coloring, perfect graphs Scribe: Tomasz Kociumaka Lecturer: Marcin Pilipczuk 1 Introduction to graph coloring Definition 1. Let G be a simple undirected graph and k a positive

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2

MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 MATH 56A: STOCHASTIC PROCESSES CHAPTER 2 2. Countable Markov Chains I started Chapter 2 which talks about Markov chains with a countably infinite number of states. I did my favorite example which is on

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

A note on the Green s function for the transient random walk without killing on the half lattice, orthant and strip

A note on the Green s function for the transient random walk without killing on the half lattice, orthant and strip arxiv:1608.04578v1 [math.pr] 16 Aug 2016 A note on the Green s function for the transient random walk without killing on the half lattice, orthant and strip Alberto Chiarini Alessandra Cipriani Abstract

More information

Propp-Wilson Algorithm (and sampling the Ising model)

Propp-Wilson Algorithm (and sampling the Ising model) Propp-Wilson Algorithm (and sampling the Ising model) Danny Leshem, Nov 2009 References: Haggstrom, O. (2002) Finite Markov Chains and Algorithmic Applications, ch. 10-11 Propp, J. & Wilson, D. (1996)

More information

Laplacian Integral Graphs with Maximum Degree 3

Laplacian Integral Graphs with Maximum Degree 3 Laplacian Integral Graphs with Maximum Degree Steve Kirkland Department of Mathematics and Statistics University of Regina Regina, Saskatchewan, Canada S4S 0A kirkland@math.uregina.ca Submitted: Nov 5,

More information

On the intersection of infinite matroids

On the intersection of infinite matroids On the intersection of infinite matroids Elad Aigner-Horev Johannes Carmesin Jan-Oliver Fröhlich University of Hamburg 9 July 2012 Abstract We show that the infinite matroid intersection conjecture of

More information

arxiv: v2 [math.pr] 22 Aug 2017

arxiv: v2 [math.pr] 22 Aug 2017 Submitted to the Annals of Probability PHASE TRANSITION FOR THE ONCE-REINFORCED RANDOM WALK ON Z D -LIKE TREES arxiv:1604.07631v2 math.pr] 22 Aug 2017 By Daniel Kious and Vladas Sidoravicius, Courant Institute

More information

Ring Sums, Bridges and Fundamental Sets

Ring Sums, Bridges and Fundamental Sets 1 Ring Sums Definition 1 Given two graphs G 1 = (V 1, E 1 ) and G 2 = (V 2, E 2 ) we define the ring sum G 1 G 2 = (V 1 V 2, (E 1 E 2 ) (E 1 E 2 )) with isolated points dropped. So an edge is in G 1 G

More information

Random graphs: Random geometric graphs

Random graphs: Random geometric graphs Random graphs: Random geometric graphs Mathew Penrose (University of Bath) Oberwolfach workshop Stochastic analysis for Poisson point processes February 2013 athew Penrose (Bath), Oberwolfach February

More information

Gärtner-Ellis Theorem and applications.

Gärtner-Ellis Theorem and applications. Gärtner-Ellis Theorem and applications. Elena Kosygina July 25, 208 In this lecture we turn to the non-i.i.d. case and discuss Gärtner-Ellis theorem. As an application, we study Curie-Weiss model with

More information

Lecture 19: November 10

Lecture 19: November 10 CS294 Markov Chain Monte Carlo: Foundations & Applications Fall 2009 Lecture 19: November 10 Lecturer: Prof. Alistair Sinclair Scribes: Kevin Dick and Tanya Gordeeva Disclaimer: These notes have not been

More information

5 Birkhoff s Ergodic Theorem

5 Birkhoff s Ergodic Theorem 5 Birkhoff s Ergodic Theorem Birkhoff s Ergodic Theorem extends the validity of Kolmogorov s strong law to the class of stationary sequences of random variables. Stationary sequences occur naturally even

More information

Isomorphism of free G-subflows (preliminary draft)

Isomorphism of free G-subflows (preliminary draft) Isomorphism of free G-subflows (preliminary draft) John D. Clemens August 3, 21 Abstract We show that for G a countable group which is not locally finite, the isomorphism of free G-subflows is bi-reducible

More information

MA131 - Analysis 1. Workbook 4 Sequences III

MA131 - Analysis 1. Workbook 4 Sequences III MA3 - Analysis Workbook 4 Sequences III Autumn 2004 Contents 2.3 Roots................................. 2.4 Powers................................. 3 2.5 * Application - Factorials *.....................

More information

Lecture 20 : Markov Chains

Lecture 20 : Markov Chains CSCI 3560 Probability and Computing Instructor: Bogdan Chlebus Lecture 0 : Markov Chains We consider stochastic processes. A process represents a system that evolves through incremental changes called

More information

On improving matchings in trees, via bounded-length augmentations 1

On improving matchings in trees, via bounded-length augmentations 1 On improving matchings in trees, via bounded-length augmentations 1 Julien Bensmail a, Valentin Garnero a, Nicolas Nisse a a Université Côte d Azur, CNRS, Inria, I3S, France Abstract Due to a classical

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction

More information

Random Geometric Graphs

Random Geometric Graphs Random Geometric Graphs Mathew D. Penrose University of Bath, UK Networks: stochastic models for populations and epidemics ICMS, Edinburgh September 2011 1 MODELS of RANDOM GRAPHS Erdos-Renyi G(n, p):

More information

The speed of biased random walk among random conductances

The speed of biased random walk among random conductances The speed of biased random walk among random conductances Noam Berger, Nina Gantert, Jan Nagel April 28, 207 Abstract We consider biased random walk among iid, uniformly elliptic conductances on Z d, and

More information

IEOR 6711: Professor Whitt. Introduction to Markov Chains

IEOR 6711: Professor Whitt. Introduction to Markov Chains IEOR 6711: Professor Whitt Introduction to Markov Chains 1. Markov Mouse: The Closed Maze We start by considering how to model a mouse moving around in a maze. The maze is a closed space containing nine

More information

Almost sure asymptotics for the random binary search tree

Almost sure asymptotics for the random binary search tree AofA 10 DMTCS proc. AM, 2010, 565 576 Almost sure asymptotics for the rom binary search tree Matthew I. Roberts Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI Case courrier 188,

More information

25.1 Ergodicity and Metric Transitivity

25.1 Ergodicity and Metric Transitivity Chapter 25 Ergodicity This lecture explains what it means for a process to be ergodic or metrically transitive, gives a few characterizes of these properties (especially for AMS processes), and deduces

More information

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

Maximising the number of induced cycles in a graph

Maximising the number of induced cycles in a graph Maximising the number of induced cycles in a graph Natasha Morrison Alex Scott April 12, 2017 Abstract We determine the maximum number of induced cycles that can be contained in a graph on n n 0 vertices,

More information

arxiv: v1 [math.co] 13 May 2016

arxiv: v1 [math.co] 13 May 2016 GENERALISED RAMSEY NUMBERS FOR TWO SETS OF CYCLES MIKAEL HANSSON arxiv:1605.04301v1 [math.co] 13 May 2016 Abstract. We determine several generalised Ramsey numbers for two sets Γ 1 and Γ 2 of cycles, in

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

Logarithmic scaling of planar random walk s local times

Logarithmic scaling of planar random walk s local times Logarithmic scaling of planar random walk s local times Péter Nándori * and Zeyu Shen ** * Department of Mathematics, University of Maryland ** Courant Institute, New York University October 9, 2015 Abstract

More information

Markov Processes Hamid R. Rabiee

Markov Processes Hamid R. Rabiee Markov Processes Hamid R. Rabiee Overview Markov Property Markov Chains Definition Stationary Property Paths in Markov Chains Classification of States Steady States in MCs. 2 Markov Property A discrete

More information

Lecture 06 01/31/ Proofs for emergence of giant component

Lecture 06 01/31/ Proofs for emergence of giant component M375T/M396C: Topics in Complex Networks Spring 2013 Lecture 06 01/31/13 Lecturer: Ravi Srinivasan Scribe: Tianran Geng 6.1 Proofs for emergence of giant component We now sketch the main ideas underlying

More information

A mathematical model for a copolymer in an emulsion

A mathematical model for a copolymer in an emulsion J Math Chem (2010) 48:83 94 DOI 10.1007/s10910-009-9564-y ORIGINAL PAPER A mathematical model for a copolymer in an emulsion F. den Hollander N. Pétrélis Received: 3 June 2007 / Accepted: 22 April 2009

More information

Sergey Norin Department of Mathematics and Statistics McGill University Montreal, Quebec H3A 2K6, Canada. and

Sergey Norin Department of Mathematics and Statistics McGill University Montreal, Quebec H3A 2K6, Canada. and NON-PLANAR EXTENSIONS OF SUBDIVISIONS OF PLANAR GRAPHS Sergey Norin Department of Mathematics and Statistics McGill University Montreal, Quebec H3A 2K6, Canada and Robin Thomas 1 School of Mathematics

More information

Math 564 Homework 1. Solutions.

Math 564 Homework 1. Solutions. Math 564 Homework 1. Solutions. Problem 1. Prove Proposition 0.2.2. A guide to this problem: start with the open set S = (a, b), for example. First assume that a >, and show that the number a has the properties

More information

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero

We are going to discuss what it means for a sequence to converge in three stages: First, we define what it means for a sequence to converge to zero Chapter Limits of Sequences Calculus Student: lim s n = 0 means the s n are getting closer and closer to zero but never gets there. Instructor: ARGHHHHH! Exercise. Think of a better response for the instructor.

More information

The expansion of random regular graphs

The expansion of random regular graphs The expansion of random regular graphs David Ellis Introduction Our aim is now to show that for any d 3, almost all d-regular graphs on {1, 2,..., n} have edge-expansion ratio at least c d d (if nd is

More information

Weak quenched limiting distributions of a one-dimensional random walk in a random environment

Weak quenched limiting distributions of a one-dimensional random walk in a random environment Weak quenched limiting distributions of a one-dimensional random walk in a random environment Jonathon Peterson Cornell University Department of Mathematics Joint work with Gennady Samorodnitsky September

More information

Prime numbers and Gaussian random walks

Prime numbers and Gaussian random walks Prime numbers and Gaussian random walks K. Bruce Erickson Department of Mathematics University of Washington Seattle, WA 9895-4350 March 24, 205 Introduction Consider a symmetric aperiodic random walk

More information

Nonamenable Products are not Treeable

Nonamenable Products are not Treeable Version of 30 July 1999 Nonamenable Products are not Treeable by Robin Pemantle and Yuval Peres Abstract. Let X and Y be infinite graphs, such that the automorphism group of X is nonamenable, and the automorphism

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

Sample Spaces, Random Variables

Sample Spaces, Random Variables Sample Spaces, Random Variables Moulinath Banerjee University of Michigan August 3, 22 Probabilities In talking about probabilities, the fundamental object is Ω, the sample space. (elements) in Ω are denoted

More information