Cut-off and Escape Behaviors for Birth and Death Chains on Trees


ALEA, Lat. Am. J. Probab. Math. Stat. 8 (2011)

Olivier Bertoncini
Department of Physics, University of Aveiro, Aveiro, Portugal.
E-mail address: olivier@ua.pt

Abstract. We consider families of discrete time birth and death chains on trees, and show that in the presence of a drift toward the root of the tree, the chains exhibit cut-off behavior along the drift and escape behavior in the opposite direction.

1. Introduction

Although they originally come from different research fields and appear to be very different phenomena, cut-off and escape behaviors have been related at the level of hitting times for birth and death chains on the line (see Barrera et al., 2009; Bertoncini, 2007; Bertoncini et al., 2008). Cut-off behavior refers to the famous cut-off phenomenon first discovered and studied by Aldous and Diaconis in the 80's (see Aldous, 1983; Aldous and Diaconis, 1986, 1987), which is characterized by an almost deterministic, asymptotically abrupt convergence to equilibrium for families of Markov processes. In opposition, escape behavior is usually associated with the exit from metastability for a system trapped in a local minimum of the energy profile. In that case, the transition to equilibrium occurs at unpredictable exponential times (see Cassandro et al., 1984). We refer to the introduction of Barrera et al. (2009) for a comparative historical discussion of the two phenomena. In that reference, which deals with birth and death chains on the line with drift toward the origin, cut-off and escape phenomena are characterized by their distinct hitting-time behavior. It is shown that, under suitable drift hypotheses, the chains exhibit both cut-off behavior toward zero and escape behavior for excursions in the opposite direction. Furthermore, as the evolutions are reversible, the law of the final escape trajectory coincides with the time reverse of the law of cut-off paths.
In the present work, we extend this study to birth and death chains on trees with drift toward the root of the tree. Our chains are the discrete time counterpart of the birth and death processes studied by Martínez and Ycart (2001). Under a uniform drift condition, they prove that the hitting times of zero starting from any state a, denoted by T_{a→0}, exhibit cut-off at mean times when a goes to infinity (Theorem 5. in Martínez and Ycart, 2001). Their drift condition is expressed as an exponentially decaying tail of the invariant probability measure π, in the sense that, for all a, the quantity π(B_a)/π(a) is uniformly bounded by some constant K. In comparison, our drift condition concerns only the branch which contains the state a (see formulas (2.13) and (2.15)), and we allow the upper bound K_a to go to infinity, as long as (2.14) remains valid (see

Received by the editors August 3, 2010; accepted February 8, 2011.
Mathematics Subject Classification. 60J10, 05C81.
Key words and phrases. Random walks, First Passage Times, Cut-off/Escape behavior.

Section 2.4). With this assumption, we can prove the cut-off behavior of T_{a→0} (Proposition 2.2), and that the typical time scale E[T_{a→0}] of this convergence is negligible compared to the mean escape time from zero to a (Proposition 2.3). If in addition we have a control on the trajectories outside the branch which contains a (conditions (2.17)), we get the escape behavior of T_{0→a} (Theorem 2.4). Note that as long as the drift condition is satisfied, our results apply to any tree: no particular assumptions are made on the degrees, which can be non-homogeneous and unbounded.

The paper is organized as follows. We first recall the general definition of the two types of behavior in Section 1.1. Then we define our birth and death chain model on trees (Sections 2.1, 2.2 and 2.3), and the strong drift toward the root in Section 2.4. The main results are given in Section 2.5, and we discuss the basic example of regular trees in Section 2.6. Section 3 contains exact expressions for the mean hitting times and their second moments. Finally, the main results are proven in Section 4.

1.1. Cut-off and escape behaviors. Cut-off and escape behaviors will be studied at the level of hitting times. Both types of behavior are asymptotic, in the sense that they are characterized by what happens when a certain parameter a diverges. Let us recall here the relevant definitions.

Definition 1.1.
(i) A family of random variables (U_a) exhibits cut-off behavior at mean times if

    U_a / E[U_a] → 1 in probability as a → ∞,    (1.1)

equivalently, lim_{a→∞} P(U_a > c E[U_a]) = 1 for c < 1 and 0 for c > 1.
(ii) A family of random variables (V_a) exhibits escape-time behavior at mean times if

    V_a / E[V_a] → Exp(1) in law as a → ∞.    (1.2)

We refer to Barrera et al. (2009) for a discussion of the motivations behind these definitions, as well as general sufficient conditions for them to occur.

2. Model and results

In order to study cut-off and escape behaviors, we will consider families of random walks X_a(t) defined on a tree I_a.
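Before proceeding, here is a quick numerical illustration of Definition 1.1 (a sketch, not part of the original paper): for a nearest-neighbour walk on the half-line with drift toward 0, the hitting time of 0 concentrates at its mean, while the hitting time of a distant state is roughly exponential. The chain, parameter values and thresholds below are illustrative choices, not taken from the paper.

```python
import random

def step(x, p):
    # Biased nearest-neighbour walk on {0, 1, 2, ...}: from x > 0 move to
    # x - 1 with probability p (drift toward 0), else to x + 1; from 0,
    # move to 1 with probability 1 - p, else wait.
    if x == 0:
        return 1 if random.random() < 1 - p else 0
    return x - 1 if random.random() < p else x + 1

def hitting_time(start, target, p):
    x, t = start, 0
    while x != target:
        x, t = step(x, p), t + 1
    return t

def cv(samples):
    # Coefficient of variation, std/mean: near 0 for concentrated times,
    # near 1 for (approximately) exponential times.
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5 / m

random.seed(0)
p = 0.75                                              # strong drift toward 0
cut = [hitting_time(25, 0, p) for _ in range(200)]    # T_{a -> 0}, a = 25
esc = [hitting_time(0, 7, p) for _ in range(200)]     # T_{0 -> a}, a = 7
print(cv(cut), cv(esc))   # small vs. close to 1
```

The small coefficient of variation of the first family and the near-unit one of the second are exactly the signatures of (1.1) and (1.2).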
Each of the chains we define below is the discrete time counterpart of the birth and death processes on trees studied in Martínez and Ycart (2001).

2.1. The trees. A tree is an undirected connected graph G = (I, E) with no nontrivial closed loop, where I is the set of vertices (or nodes) and E the set of edges (or links). We define a partial order on I by choosing a node, denoted by 0, to be the root of the tree: for x, y ∈ I, we say that x is before y (x ≤ y) if x = y or if there exists a path from y to zero containing x. Thus each element x of the tree except the root has a unique parent, denoted by p(x):

    ∀x ∈ J, ∃! p(x) s.t. p(x) ≤ x and (p(x), x) ∈ E,    (2.1)

where J := I \ {0}. There also exists a unique path from x to zero; we denote by d(x) its length (the depth of x) and by l(x) the set of vertices on that path except 0:

    ∀x ∈ J, l(x) := {x_0, x_1, ..., x_{d(x)−1}},    (2.2)

where x_i := p^i(x) for i = 0, ..., d(x). Note that according to this notation x_0 = x and x_{d(x)} = 0. For the sake of notation, we denote by α(x) the last state before zero on that path: α(x) = x_{d(x)−1}. Let us also define σ(x), the set of the children of x, and B_x, the branch stemming from x:

    σ(x) := {y : x = p(y)},  and  B_x := {y : x ≤ y}.    (2.3)

In Section 3, we will make use of the following decompositions, for any k = 0, ..., d(a), of the branch B_{a_k} stemming from the k-th parent a_k = p^k(a) of some a ∈ J, and of its complement B̄_{a_k} (see Figure 2.1):

    B_{a_k} = B_a ∪ ⋃_{l=1}^{k} ( {a_l} ∪ ⋃_{c∈σ(a_l), c≠a_{l−1}} B_c ),    (2.4)

    B̄_{a_k} = ⋃_{l=k+1}^{d(a)} ( {a_l} ∪ ⋃_{c∈σ(a_l), c≠a_{l−1}} B_c ).    (2.5)

We also need to define the set C_{a_k}:

    C_{a_k} := ⋃_{l=k+1}^{d(a)} ⋃_{c∈σ(a_l), c≠a_{l−1}} B_c.    (2.6)

Note that C_{a_k} is the complement of the path from a_k to zero in B̄_{a_k}: C_{a_k} = B̄_{a_k} \ ⋃_{l=k+1}^{d(a)} {a_l}.

[Figure 2.1. The path from a to zero: sites, transitions and branches.]

2.2. The chains. We consider irreducible discrete time birth and death chains X(t) on I, defined by transition probabilities λ_x from parent to child and μ_x from child to parent, for each x ∈ J. That is, λ_x := P(p(x), x) and μ_x := P(x, p(x)), where P(x, y) := P(X(t+1) = y | X(t) = x). The λ's and μ's sum to 1 at each site:

    μ_x + Σ_{y∈σ(x)} λ_y = 1 for all x ∈ J,  and  Σ_{x∈σ(0)} λ_x = 1,    (2.7)

and since the chain is irreducible they are supposed to be non-null. Note that, as we will consider families of chains X_a(t), the λ's and μ's may vary with a. As it doesn't change our results and computations, we can also add waiting probabilities κ_x := P(x, x) (at least one of them non-null) in order to make the chain aperiodic. In that case, equations (2.7) become μ_x + κ_x + Σ_{y∈σ(x)} λ_y = 1 for all x ∈ J, and κ_0 + Σ_{x∈σ(0)} λ_x = 1. If the tree is finite, then the chain is positive recurrent and admits a unique reversible invariant

probability measure (i.p.m.) π, which can be expressed for each x ∈ J in terms of the transition probabilities:

    π(x) = π(0) ∏_{y∈l(x)} λ_y / μ_y.    (2.8)

This expression is a consequence of the reversibility of π, which ensures that for each x ∈ J we have π(p(x)) λ_x = π(x) μ_x. Iteration along the path l(x) from x to zero then gives (2.8), where the normalization constant π(0) is chosen such that Σ_{x∈I} π(x) = 1, that is:

    π(0) = ( Σ_{x∈I} ∏_{y∈l(x)} λ_y / μ_y )^{−1}.    (2.9)

In the case where the tree is not finite, we have to suppose that the quantity Σ_{x∈I} ∏_{y∈l(x)} λ_y/μ_y is finite; then the chain is positive recurrent and (2.8) gives the unique i.p.m. π.

2.3. Hitting times. As mentioned above, cut-off and escape behaviors are studied at the level of hitting times. More precisely, the families of random variables under consideration are the hitting times (also called first passage times in the literature) of zero starting from some a ∈ J (resp. of a starting from zero), denoted by T_{a→0} (resp. T_{0→a}), with

    T_{x→y} = min { t ≥ 0 : X_x(t) = y },    (2.10)

where the index x means that x is the initial state of the chain: X_x(0) = x. Our results rely on exact expressions for the first and second moments of T_{p^j(a)→p^n(a)}, for any 0 ≤ j, n ≤ d(a), which will be given in Section 3. Let us just mention now, for all x ∈ J, the expression in terms of the invariant measure of the mean time E[T_{x→p(x)}] from child to parent (see Lemma 3.1):

    E[T_{x→p(x)}] = π(B_x) / (μ_x π(x)).    (2.11)

This expression is a key ingredient in our computations and also allows us to interpret our drift condition as an exponential decay of π (see next section).

2.4. The drift condition. As seen in Section 1.1, cut-off and escape behaviors are defined for families of variables indexed by some diverging parameter. In the present work, we take as asymptotic parameter a state a ∈ J such that the length d(a) of the path l(a) from a to zero tends to infinity. We write a → ∞ for the limit d(a) → ∞, and we define l_∞, the path from zero to infinity followed by a when d(a) → ∞: l_∞ = lim_{a→∞} l(a).
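The reversibility identity π(p(x))λ_x = π(x)μ_x behind (2.8) can be checked mechanically on a toy example. The following sketch (the tree and the rates are hypothetical, chosen only so that every row sums to one) builds π from (2.8)–(2.9) with exact arithmetic:

```python
from fractions import Fraction as F

# Hypothetical toy tree: 0 is the root; child -> parent map and its inverse.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}

# Transition probabilities: lam[x] = P(p(x), x), mu[x] = P(x, p(x)).
lam = {1: F(1, 4), 2: F(1, 4), 3: F(1, 8), 4: F(1, 8), 5: F(1, 4)}
mu = {x: F(1, 2) for x in parent}

# Waiting probabilities kappa_x = 1 - row absorb the slack (must be >= 0).
for x in children:
    row = sum(lam[c] for c in children[x]) + (mu[x] if x else 0)
    assert row <= 1

def weight(x):
    # Unnormalised pi: product of lam/mu along the path from x to 0, as in (2.8).
    w = F(1)
    while x != 0:
        w, x = w * lam[x] / mu[x], parent[x]
    return w

Z = sum(weight(x) for x in children)      # normalisation constant, as in (2.9)
pi = {x: weight(x) / Z for x in children}

# Reversibility: pi(p(x)) lam_x = pi(x) mu_x for every non-root x.
for x in parent:
    assert pi[parent[x]] * lam[x] == pi[x] * mu[x]
print(pi[0])   # prints 2/5
```

Because the weights cancel the normalisation, detailed balance holds exactly, which is all that the derivation of (2.8) uses.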
Let us now give the definition of strong drift, which applies to families of the previously defined chains X_a(t) on trees I_a. When necessary, we mark with an upper index a the quantities corresponding to X_a(t).

Definition 2.1. The family of birth and death chains (X_a(t)) has a strong drift toward 0 along the path l_∞ (we write (0, l_∞)-SD) if

(i) the transition probabilities μ_x^a satisfy

    inf_{b∈B_{α(a)}} μ_b^a =: K_μ > 0,  for all a ∈ l_∞;    (2.12)

(ii) the constant

    K_a := sup_{b∈B_{α(a)}} μ_b^a E^a[T_{b→p(b)}]    (2.13)

satisfies

    K_a² / E^a[T_{a→0}] → 0  as a → ∞.    (2.14)

Condition (i) ensures that transitions toward the root remain non-null asymptotically in a. It also provides a useful bound (see Lemma 4.1). The drift is given by (ii), and can be interpreted in two ways. Note first that by (2.11), K_a can be written

    K_a = sup_{b∈B_{α(a)}} π^a(B_b) / π^a(b).    (2.15)

Thus, condition (ii) can be seen as an exponential decay of the i.p.m. π^a along the branch B_{α(a)} which contains a: we have π^a(B_b) ≤ e^{−γ_a d(b)}, with γ_a = −log(1 − 1/K_a), valid for all b ∈ B_{α(a)}, that is, for any state of any subbranch of the branch containing a, since by definition B_{α(a)} = ⋃_{x∈l(a)} B_x. In terms of energy profile, γ_a also provides a lower bound on the energy barrier along any subbranch of the branch containing a. We refer to Barrera et al. (2009) for a more detailed discussion of these interpretations of our drift condition.

2.5. Results. For the fixed path l_∞ which contains the sequence of a's going to infinity, the (0, l_∞)-SD condition is a sufficient condition for the cut-off behavior of the hitting times T_{a→0}:

Proposition 2.2. If the family (X_a(t)) of irreducible discrete time birth and death chains on I_a satisfies the (0, l_∞)-SD condition, then

    lim_{a→∞} Var^a(T_{a→0}) / E^a[T_{a→0}]² = 0.

As a consequence, the random variables T_{a→0} exhibit cut-off behavior at mean times.

Moreover, under this drift condition the typical time scale E[T_{a→0}] of the abrupt convergence toward zero is negligible compared to the mean time E[T_{0→a}] of the escape from zero to a:

Proposition 2.3. If the family (X_a(t)) of irreducible discrete time birth and death chains on I_a satisfies the (0, l_∞)-SD condition, then

    lim_{a→∞} E^a[T_{a→0}] / E^a[T_{0→a}] = 0.

This two-time-scales property is the key element for the presence of escape behavior for T_{0→a}. But, as seen in Section 3 of Barrera et al. (2009), escape behavior also requires that, starting from any state of the state space, the mean hitting times of zero be negligible compared to E[T_{0→a}], and that the sequence T_{0→a}/E^a[T_{0→a}] be uniformly integrable.
The requirement that sup_{x∈I_a} E[T_{x→0}]/E[T_{0→a}] → 0 is obtained if we suppose, in addition to the (0, l_∞)-SD condition, that the mean hitting times of zero are comparable, that is, if the ratio sup_{x∈I_a} E[T_{x→0}]/E[T_{a→0}] remains bounded for all a. For the uniform integrability of T_{0→a}/E[T_{0→a}], we need a control on E[T²_{0→a}] (see Corollary 3.5). A sufficient condition is to have inf_{b∈I_a} μ_b^a > 0 uniformly in a, and K̄_a²/E[T_{0→a}] → 0, where

    K̄_a := sup_{b∈I_a} μ_b^a E^a[T_{b→p(b)}] = sup_{b∈I_a} π^a(B_b) / π^a(b).    (2.16)

As above, the second condition is obtained if, in addition to the (0, l_∞)-SD condition, the ratio K̄_a²/E[T_{a→0}] remains bounded for all a (see Section 4.3).

Theorem 2.4. If the family (X_a(t)) of irreducible discrete time birth and death chains on I_a satisfies the (0, l_∞)-SD condition, and if there exist K_1, K_2 and K_μ such that

    sup_{x∈I_a} E[T_{x→0}] / E[T_{a→0}] ≤ K_1,   K̄_a² / E[T_{a→0}] ≤ K_2,   and   inf_{b∈I_a} μ_b^a ≥ K_μ > 0,   for all a ∈ l_∞,    (2.17)

then:
– the random variables T_{a→0} exhibit cut-off behavior at mean times;
– the random variables T_{0→a} exhibit escape-time behavior at mean times.
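Anticipating the regular-tree example of Section 2.6, the quantities entering the drift condition can be evaluated in closed form there, and the drift ratio checked numerically. A sketch (the parameter values are illustrative: λ > r gives the localized, strong-drift case, λ < r the delocalized one):

```python
def sd_ratio(lam, r, depth):
    # Biased walk on a regular tree truncated at `depth`:
    # pi(B_x)/pi(x) = sum_{k=0}^{depth - d(x)} (r/lam)^k  (a geometric tail),
    # K_a is its supremum over the branch (attained at depth 1), and
    # E[T_{a->0}] = ((lam + r)/lam) * sum of the tails along the path.
    tail = lambda n: sum((r / lam) ** k for k in range(n + 1))
    K = tail(depth - 1)
    ET = (lam + r) / lam * sum(tail(depth - j) for j in range(1, depth + 1))
    return K * K / ET   # the ratio appearing in the strong-drift condition

print([round(sd_ratio(3.0, 2, d), 3) for d in (5, 20, 80)])  # decays: strong drift
print([round(sd_ratio(2.0, 3, d), 3) for d in (5, 10, 15)])  # grows: no strong drift
```

For λ > r the ratio decays like 1/d(a) (K_a stays bounded while the mean hitting time grows linearly); for λ < r both quantities are exponential and the ratio blows up.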

Moreover, as the quantity K_a is a supremum taken over all sites of the branch B_{α(a)} which contains a, these assumptions are sufficient to have cut-off and escape behaviors between zero and every state b of B_{α(a)} for which E[T_{b→0}] diverges with E[T_{a→0}]. Let us define

    L(a) := {b ∈ B_{α(a)} : ∃ C > 0 s.t. E[T_{b→0}] ≥ C E[T_{a→0}]}.    (2.18)

Corollary 2.5. Theorem 2.4 holds for T_{b→0} and T_{0→b} for all b ∈ L(a).

As we will see in Section 4 (Lemmas 4.2 and 4.3), the conditions needed to obtain the two time scales E[T_{a→0}]/E[T_{0→a}] → 0, which are typical of escape behavior, are actually weaker than those which imply cut-off behavior. But unlike the case where the chain is defined on N, where the drift condition implies both cut-off and escape behaviors (see Bertoncini et al., 2008), here the (0, l_∞)-SD condition implies only the cut-off behavior directly. As the chain started at zero could escape to branches other than B_{α(a)}, we need in addition a control on that part of the trajectories from zero to a (conditions (2.17)) to get the escape behavior of T_{0→a}.

The assumption that sup_{x∈I_a} E[T_{x→0}]/E[T_{0→a}] → 0 is the analogue of the corresponding condition in Barrera et al. (2009) for chains on Z, where it is a central argument of the proof of the sufficient conditions for escape behavior of T_{0→a}. The idea is that there exists an intermediate time scale between E[T_{a→0}] and E[T_{0→a}] such that the walk will, almost surely in the limit a → ∞, visit zero within an interval of that length. Thus, the walk started at zero makes a great number of unfruitful attempts to escape the attraction of zero before eventually hitting a. The exponential character (randomness without memory) of T_{0→a} is a consequence of this situation: each trial is followed by a return to zero, and then, by Markovianness, the walk starts again in the same situation.

Remark 2.6.
If the tree I_a is reduced to the branch B_{α(a)} which contains a, the conditions of Theorem 2.4 reduce to the (0, l_∞)-SD condition plus the boundedness of sup_{x∈B_{α(a)}} E[T_{x→0}]/E[T_{a→0}] for all a.

Remark 2.7. Propositions 2.2 and 2.3 above also apply when the chains X_a(t) are defined on an infinite tree I (i.e. when I_a = I is infinite for all a). We can also apply Theorem 2.4 in that case by considering the restrictions of the chains to I_a := {x ∈ I : d(x) ≤ d(a)}.

Another feature of the complementarity between the two phenomena is that the successful final excursion which leads to a is cut-off like (see Section 7 of Barrera et al., 2009): if we consider the hitting time T̂_{0→a} of a after the last visit to zero, and the hitting time T̂_{a→0} of zero after the last visit to a, we can show by a reversibility argument that they have the same law. Thus, cut-off and escape behaviors are related by time inversion.

2.6. Example: biased random walk on a regular tree. A regular tree is a tree I in which each node has the same number r of children. In its infinite version, also called the Bethe lattice, the degree (number of neighbors) of each node is r + 1, except the root, which is of degree r. When we consider the finite version I_a := {x ∈ I : d(x) ≤ d(a)} for some a, the boundary nodes (sites x s.t. d(x) = d(a)), also called the leaves of the tree, are dead ends of degree 1. In that case, one speaks of a Cayley tree. We refer to Dorogovtsev et al. (2008); Dorogovtsev (2010) for discussions of the differences between the Bethe lattice and the Cayley tree, and to Chapter 5 of Aldous and Fill for the biased random walk on a regular tree.

We consider biased random walks X_a(t) on I_a whose transitions are defined by

    λ_x := 1/(λ + r)  and  μ_x := λ/(λ + r),    (2.19)

with λ > 0, for each site x ≠ 0. As the transitions must sum to one at each site, we also add waiting probabilities at the root and at the leaves: κ_0 := λ/(λ + r) and κ_x := r/(λ + r) when d(x) = d(a).

The parameter λ represents the bias of the random walk: at each step, the chain moves toward the root of the tree with probability λ/(λ+r) and in the opposite direction with probability r/(λ+r). Thus, there is a concentration phenomenon near the root when λ > r (localization), and near the leaves when λ < r (delocalization). Due to the regular structure of the tree, we can compute the quantities involved in the (0, l_∞)-SD condition, and determine when it is satisfied or not.

Let us start with the invariant measure π. By (2.8), for all x we have

    π(x) = π(0) ∏_{y∈l(x)} λ_y/μ_y = π(0) λ^{−d(x)},    (2.20)

with π(0) = ( Σ_{x∈I_a} λ^{−d(x)} )^{−1}.

Remark 2.8. The computation of mean return times using the Kac formula E[T_{x→x}] = 1/π(x), where T_{x→x} := inf{t > 0 : X_x(t) = x} is the first return time to x, illustrates the localization phenomenon for the biased random walk on a regular tree. Localization is related to positive recurrence of the walk when it takes place on an infinite tree: Lyons showed that the biased walk is recurrent (return times to any state almost surely finite) when the bias λ exceeds the branching number of the tree, here λ > r, and transient when λ < r (see Lyons, 1990). Moreover, when λ > r, the chain is positive recurrent (finite mean return times to any state). Using the Kac formula, and since for each k there are r^k sites y in I such that d(y) = k, we get

    E[T_{x→x}] = λ^{d(x)} Σ_{y∈I} λ^{−d(y)} = λ^{d(x)} Σ_{k≥0} (r/λ)^k < ∞,  if λ > r.    (2.21)

In the finite case, although the walk on I_a is always positive recurrent, the localization phenomenon is reflected in very different asymptotic behaviors of the mean return times to zero and to a respectively. When λ > r, the mean return time to zero remains bounded as a goes to infinity: by the Kac formula, E[T_{0→0}] ≈ λ/(λ−r) for large a. In opposition, the mean return time to a diverges with a: E[T_{a→a}] ≈ λ^{d(a)+1}/(λ−r) for large a.
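The two régimes of this remark are easy to reproduce numerically. A small sketch (λ = 3, r = 2 is an illustrative localized case; the function below just evaluates the geometric sums behind the Kac formula):

```python
def mean_return_times(lam, r, depth):
    # Kac formula on the regular tree truncated at `depth`: pi(x) is
    # proportional to lam**(-d(x)) and there are r**k sites at depth k, so
    # E[T_{0->0}] = 1/pi(0) = sum_k (r/lam)**k, and
    # E[T_{a->a}] = 1/pi(a) = lam**depth times that same sum.
    Z = sum((r / lam) ** k for k in range(depth + 1))
    return Z, lam ** depth * Z

for depth in (5, 10, 20):
    ret0, reta = mean_return_times(3.0, 2, depth)
    print(depth, round(ret0, 4), reta)
# E[T_{0->0}] stays below lam/(lam - r) = 3, while E[T_{a->a}] blows up with depth.
```

The bounded return time to the root versus the exploding return time to a leaf is exactly the localization picture described above.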
Let us now compute the key quantity π(B_x)/π(x), which appears both in the expression (2.15) of K_a and in that of E[T_{a→0}] (see Proposition 3.3 below). We write

    π(B_x)/π(x) = Σ_{b∈B_x} π(b)/π(x) = λ^{d(x)} Σ_{b∈B_x} λ^{−d(b)} = Σ_{k=0}^{d(a)−d(x)} (r/λ)^k,    (2.22)

since there are r^k sites b in B_x such that d(b) = d(x)+k. When λ ≠ r, this is equal to (λ/(λ−r)) (1 − (r/λ)^{d(a)−d(x)+1}), and thus

    K_a = sup_{x∈l(a)} sup_{b∈B_x} π(B_b)/π(b) = (λ/(λ−r)) (1 − (r/λ)^{d(a)}),    (2.23)

and

    E[T_{a→0}] = Σ_{x∈l(a)} π(B_x)/(μ_x π(x)) = ((λ+r)/λ) Σ_{x∈l(a)} Σ_{k=0}^{d(a)−d(x)} (r/λ)^k
               = ((λ+r)/(λ−r)) [ d(a) − (r/(λ−r)) (1 − (r/λ)^{d(a)}) ].    (2.24)

In the localization case λ > r, we have E[T_{a→0}] ~ ((λ+r)/(λ−r)) d(a), and K_a → K := λ/(λ−r) as a → ∞. The (0, l_∞)-SD condition is then satisfied, and we have cut-off and escape behavior between the root and any leaf of I_a, since for symmetry reasons the above computations are the same for any leaf of I_a. This result leads to a better understanding of what happens in that case: since T_{a→0} exhibits cut-off behavior at mean times E[T_{a→0}] ~ d(a), the walk started at a goes in an almost deterministic way to zero before eventually returning to a. Moreover, the typical time scale of cut-off trajectories is negligible compared to that of escape trajectories: computations give E[T_{0→a}] of order λ^{d(a)} for large a, which is of the same order as the mean return time E[T_{a→a}].

In the delocalization case λ < r, K_a and E[T_{a→0}] are both exponential, of order (r/λ)^{d(a)} for large a, and then there is no strong drift. The (0, l_∞)-SD condition is also not satisfied when λ = r: computations in that case show that K_a = d(a) and E[T_{a→0}] = ((λ+r)/(2λ)) d(a)(d(a)+1), and thus the ratio K_a²/E[T_{a→0}] does not converge to zero when a goes to infinity.

3. Mean hitting times

We already mentioned in Section 2.3, without proving it, that the expression (2.11) of E[T_{x→p(x)}] in terms of the invariant measure π is a key ingredient in our computations. However, we will also need exact expressions for the first and second moments of the hitting times between sites along the path from zero to some fixed state a ∈ J. We give a list of these expressions in Section 3.1. We start with the first and second moments of T_{x→p(x)} (Lemma 3.1) and of T_{p(x)→x} (Lemma 3.2). Proposition 3.3 gives the mean hitting times between sites a_i ∈ l(a) for some a ∈ J, and Proposition 3.4 their second moments. Finally, we give an idea of the proofs in Section 3.2.

3.1. Exact expressions.

Lemma 3.1. For any x ∈ J,

    E[T_{x→p(x)}] = π(B_x)/(μ_x π(x)).    (3.1)

Lemma 3.2.
For any x ∈ J,

    E[T²_{x→p(x)}] = (2/(μ_x π(x))) Σ_{b∈B_x} π(B_b)²/(μ_b π(b)) − E[T_{x→p(x)}],    (3.2)

    E[T_{p(x)→x}] = π(B̄_x)/(μ_x π(x)),    (3.3)

    E[T²_{p(x)→x}] = (2/(μ_x π(x))) [ Σ_{c∈C_x} π(B_c)²/(μ_c π(c)) + Σ_{b∈l(x)} π(B̄_b)²/(μ_b π(b)) ] − E[T_{p(x)→x}],    (3.4)

where B̄_x and C_x are the sets of Section 2.1 taken relative to x.

Proposition 3.3. For a ∈ J, recall that we denote by a_i the i-th parent of a (a_i = p^i(a)). Let 0 ≤ j < n ≤ d(a); we have

    E[T_{a_j→a_n}] = Σ_{k=j}^{n−1} π(B_{a_k})/(μ_{a_k} π(a_k)),    (3.5)

    E[T_{a_n→a_j}] = Σ_{k=j}^{n−1} π(B̄_{a_k})/(μ_{a_k} π(a_k)),    (3.6)

and thus

    E[T_{a_j→a_n}] + E[T_{a_n→a_j}] = Σ_{k=j}^{n−1} 1/(μ_{a_k} π(a_k)).    (3.7)

Proposition 3.4. For 0 ≤ j < n ≤ d(a), we have

    E[T²_{a_j→a_n}] = Σ_{k=j}^{n−1} (2/(μ_{a_k} π(a_k))) [ Σ_{b∈B_{a_k}} π(B_b)²/(μ_b π(b)) + π(B_{a_k}) E[T_{a_{k+1}→a_n}] ] − E[T_{a_j→a_n}],    (3.8)

and

    E[T²_{a_n→a_j}] = Σ_{k=j}^{n−1} (2/(μ_{a_k} π(a_k))) [ Σ_{c∈C_{a_k}} π(B_c)²/(μ_c π(c)) + Σ_{b∈l(a_k)} π(B̄_b)²/(μ_b π(b)) + π(B̄_{a_k}) E[T_{a_k→a_j}] ] − E[T_{a_n→a_j}].    (3.9)

Corollary 3.5. For 0 ≤ j < n ≤ d(a), we have

    E[T²_{a_n→a_j}] / E[T_{a_n→a_j}]² ≤ 1 + 2 ( K̄_a²/K_μ + E[T_{0→a_j}] ) / E[T_{a_n→a_j}].    (3.10)

Proof: This is a consequence of equation (3.9). We first use the fact that B̄_b ⊆ B̄_{a_k} for all b ∈ l(a_k) to write

    Σ_{b∈l(a_k)} π(B̄_b)²/(μ_b π(b)) ≤ π(B̄_{a_k}) Σ_{b∈l(a_k)} π(B̄_b)/(μ_b π(b)) = π(B̄_{a_k}) E[T_{0→a_k}].    (3.11)

Then, as C_{a_k} ⊆ B̄_{a_k}, and since by definition K̄_a = sup_{b∈I_a} π(B_b)/π(b) and K_μ = inf_{b∈I_a} μ_b, we have

    Σ_{c∈C_{a_k}} π(B_c)²/(μ_c π(c)) ≤ (K̄_a²/K_μ) Σ_{c∈C_{a_k}} π(c) ≤ (K̄_a²/K_μ) π(B̄_{a_k}).    (3.12)

Finally, from equation (3.9),

    E[T²_{a_n→a_j}] ≤ Σ_{k=j}^{n−1} (2 π(B̄_{a_k})/(μ_{a_k} π(a_k))) [ K̄_a²/K_μ + E[T_{0→a_k}] + E[T_{a_k→a_j}] ] − E[T_{a_n→a_j}].    (3.13)

Hence the result, since E[T_{0→a_k}] + E[T_{a_k→a_j}] = E[T_{0→a_j}], and then using equation (3.6).

3.2. Proofs. The above formulas can be proven via the standard method of difference equations, which relies on the following decomposition of E[F(T_{a_k→a_n})] for any function F:

Lemma 3.6. For all 0 < k < d(a), k ≠ n, we have

    E[F(T_{a_k→a_n})] = μ_{a_k} E[F(T_{a_{k+1}→a_n} + 1)] + Σ_{c∈σ(a_k)} λ_c E[F(T_{c→a_n} + 1)].    (3.14)

Proof: This is obtained by decomposing according to the first step of the chain,

    E[F(T_{a_k→a_n})] = μ_{a_k} E[F(T_{a_k→a_n}) | X(1) = a_{k+1}] + Σ_{c∈σ(a_k)} λ_c E[F(T_{a_k→a_n}) | X(1) = c],    (3.15)

and by Markovianness, for x = a_{k+1} and c ∈ σ(a_k), we have

    E[F(T_{a_k→a_n}) | X(1) = x] = Σ_{l≥0} F(l) P(T_{a_k→a_n} = l | X(1) = x) = Σ_{l≥0} F(l) P(T_{x→a_n} = l − 1) = E[F(T_{x→a_n} + 1)].    (3.16)

Hence the result.

Together with Lemma 3.1, which is the discrete analogue of a lemma of Martínez and Ycart (2001) for continuous chains, Lemma 3.6 allows us to prove Propositions 3.3 and 3.4: we fix 0 ≤ n ≤ d(a) and apply equation (3.14) with F(x) = x (resp. F(x) = x²) to obtain recursive equations for E[T_{a_k→a_n}] (resp. E[T²_{a_k→a_n}]) with 0 < k < d(a). We can then iterate these equations in both directions up to boundary conditions, which are particular cases of (3.14) for k = 0 and k = d(a), and solve them using reversibility and equations (3.1) and (3.2). Another way of proving both propositions is to use, together with Lemmas 3.1 and 3.2, the fact that T_{a_j→a_n} (resp. T_{a_n→a_j}) is a sum of independent passage times T_{x→p(x)} (resp. T_{p(x)→x}):

    T_{a_j→a_n} = Σ_{k=j}^{n−1} T_{a_k→p(a_k)}  and  T_{a_n→a_j} = Σ_{k=j}^{n−1} T_{p(a_k)→a_k}.    (3.17)

We now give a sketch of those proofs.

First moments: Although they can be proven directly by difference equations, formulas (3.1) and (3.3) are proven in Woess (2009, Proposition 9.8) by an alternative method. Formulas (3.5) and (3.6) of Proposition 3.3 then follow from (3.17).

Second moments: The proof of (3.2) follows the same lines as that of the corresponding lemma in Martínez and Ycart (2001), where the application of equation (3.14) with F(x) = x gives the recursive equation μ_x t_x = 1 + Σ_{c∈σ(x)} λ_c t_c for t_x = E[T_{x→p(x)}]. We now apply (3.14) with F(x) = x², a_k = x and a_n = p(x), to obtain the recursive equation for E[T²_{x→p(x)}]:

    μ_x E[T²_{x→p(x)}] = 1 + 2(1 − μ_x) E[T_{x→p(x)}] + Σ_{c∈σ(x)} λ_c ( E[T²_{c→p(c)}] + 2 E[T_{c→p(c)}] (1 + E[T_{x→p(x)}]) ).    (3.18)

Then we prove by induction that formula (3.2) satisfies the above equation. As in Martínez and Ycart (2001), we first have to suppose that the tree is finite, since the first step of the induction is to consider a leaf x, that is, a site with σ(x) = ∅.
For the infinite case, we consider the limit n → ∞ of the restrictions to the truncated trees I_n := {x ∈ I : d(x) ≤ n} of level n. For equation (3.4), we prove by induction that it satisfies the recursive equation obtained from (3.14) with F(x) = x², a_k = p(x) and a_n = x. We don't need to restrict ourselves to finite trees in that case, since the first step of the induction is to consider x such that p(x) = 0. Finally, (3.8) and (3.9) of Proposition 3.4 follow from (3.17).
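Lemma 3.1 can also be checked mechanically: on any finite tree, the first-step recursion μ_x t_x = 1 + Σ_{c∈σ(x)} λ_c t_c determines t_x = E[T_{x→p(x)}] from the leaves up, and the result can be compared with π(B_x)/(μ_x π(x)). A small self-contained sketch (the tree and the rates are hypothetical):

```python
from fractions import Fraction as F

# Hypothetical toy tree: 0 is the root; child -> parent map and its inverse.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}
lam = {1: F(1, 4), 2: F(1, 4), 3: F(1, 8), 4: F(1, 8), 5: F(1, 4)}  # P(p(x) -> x)
mu = {x: F(1, 2) for x in parent}                                   # P(x -> p(x))

def t(x):
    # First-step recursion mu_x t_x = 1 + sum_{c in sigma(x)} lam_c t_c,
    # solved from the leaves up (it holds with or without waiting probabilities).
    return (1 + sum(lam[c] * t(c) for c in children[x])) / mu[x]

def weight(x):
    # Unnormalised invariant weight: product of lam/mu along the path to 0.
    w = F(1)
    while x != 0:
        w, x = w * lam[x] / mu[x], parent[x]
    return w

def subtree(x):
    # The branch B_x: x together with all of its descendants.
    out = [x]
    for c in children[x]:
        out += subtree(c)
    return out

# Lemma 3.1: E[T_{x -> p(x)}] = pi(B_x)/(mu_x pi(x)); the normalisation of pi
# cancels in the ratio, so unnormalised weights suffice.
for x in parent:
    assert t(x) == sum(weight(b) for b in subtree(x)) / (mu[x] * weight(x))
print(t(1), t(2))   # prints: 3 3
```

The exact agreement (in rational arithmetic) for every non-root site is a direct numerical confirmation of the formula, on this toy example at least.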

4. Proofs of the main results

The quantities involved in this section are defined with respect to the chain X_a(t) on I_a, but for the sake of notation we will omit the upper index a. We will prove Propositions 2.2 and 2.3 through Lemmas 4.2 and 4.3, which are a little stronger. They involve the quantities

    Q(x) := (1/π(B_x)) Σ_{b∈B_x} π(B_b)²/(μ_b π(b));   Q_a := sup_{x∈l(a)} Q(x),    (4.1)

and

    R_a := Σ_{b∈l(a)} π(B_b)²/(μ_b π(b)),    (4.2)

which for all a satisfy the inequalities

Lemma 4.1.
    R_a ≤ Q_a ≤ K_a²/K_μ.    (4.3)

Proof of Lemma 4.1: As l(a) ⊆ B_{α(a)}, we have

    R_a ≤ Σ_{b∈B_{α(a)}} π(B_b)²/(μ_b π(b)) = π(B_{α(a)}) Q(α(a)),    (4.4)

and thus R_a ≤ Q_a, since π(B_{α(a)}) ≤ 1 and Q(α(a)) ≤ Q_a. For the right-hand side of (4.3), we write

    Q(x) = (1/π(B_x)) Σ_{b∈B_x} π(B_b)²/(μ_b π(b)) ≤ (K(x)²/K_μ) (1/π(B_x)) Σ_{b∈B_x} π(b) = K(x)²/K_μ,    (4.5)

where K(x) := sup_{b∈B_x} π(B_b)/π(b). We conclude by taking the supremum over x ∈ l(a), since by (2.15), K_a = sup_{x∈l(a)} K(x).

4.1. Proof of Proposition 2.2. We recall that

    lim_{a→∞} Var(T_{a→0}) / E[T_{a→0}]² = 0    (4.6)

is a sufficient condition for the cut-off behavior of the variables T_{a→0} (see Barrera et al., 2009, Section 3). Since Q_a ≤ K_a²/K_μ (Lemma 4.1 above), and since by the (0, l_∞)-SD hypothesis the ratio K_a²/E[T_{a→0}] tends to zero, the following result implies Proposition 2.2.

Lemma 4.2. If

    Q_a / E[T_{a→0}] → 0,    (4.7)

then Var(T_{a→0})/E[T_{a→0}]² → 0.

Proof of Lemma 4.2: For all a ∈ J, we have

    Var(T_{a→0}) = Σ_{x∈l(a)} Var(T_{x→p(x)}) ≤ Σ_{x∈l(a)} E[T²_{x→p(x)}] ≤ Σ_{x∈l(a)} (2/(μ_x π(x))) Σ_{b∈B_x} π(B_b)²/(μ_b π(b)),    (4.8)

where the second inequality comes from equation (3.2) of Lemma 3.2. Thus

    Var(T_{a→0}) ≤ 2 Σ_{x∈l(a)} (π(B_x)/(μ_x π(x))) Q(x) ≤ 2 Q_a Σ_{x∈l(a)} π(B_x)/(μ_x π(x)),    (4.9)

and by (3.5),

    Var(T_{a→0}) / E[T_{a→0}]² ≤ 2 Q_a / E[T_{a→0}].    (4.10)

4.2. Proof of Proposition 2.3. As above, by Lemma 4.1 (R_a ≤ K_a²/K_μ) together with the (0, l_∞)-SD condition, the following lemma implies Proposition 2.3.

Lemma 4.3. If

    R_a / E[T_{a→0}] → 0,    (4.11)

then E[T_{a→0}]/E[T_{0→a}] → 0.

Proof of Lemma 4.3: It is equivalent to show that

    Γ_a := E[T_{a→0}] / ( E[T_{a→0}] + E[T_{0→a}] ) → 0.    (4.12)

By (3.5) and (3.7), we have

    Γ_a = ( Σ_{x∈l(a)} π(B_x)/(μ_x π(x)) ) / ( Σ_{x∈l(a)} 1/(μ_x π(x)) ).    (4.13)

We now consider the probability measure on l(a) defined by the expectations

    E[F] := ( Σ_{x∈l(a)} F(x)/(μ_x π(x)) ) / ( Σ_{x∈l(a)} 1/(μ_x π(x)) ),    (4.14)

and the inequality (E[F])² ≤ E[F²] for F(x) = π(B_x) implies that Γ_a² ≤ R_a / E[T_{a→0}], which concludes the proof.

4.3. Proof of Theorem 2.4. The cut-off behavior of T_{a→0} was already proven in Proposition 2.2. For the escape behavior of T_{0→a}, we need to prove that sup_{x∈I_a} E[T_{x→0}]/E[T_{0→a}] tends to zero and that the sequence T_{0→a}/E[T_{0→a}] is uniformly integrable (see Barrera et al., 2009). For the first point, we have

    sup_{x∈I_a} E[T_{x→0}] / E[T_{0→a}] = ( sup_{x∈I_a} E[T_{x→0}] / E[T_{a→0}] ) ( E[T_{a→0}] / E[T_{0→a}] ) ≤ K_1 E[T_{a→0}] / E[T_{0→a}] → 0,    (4.15)

as a consequence of (2.17) and Proposition 2.3. For the uniform integrability, we apply Corollary 3.5 with n = d(a) and j = 0 to get

    E[T²_{0→a}] / E[T_{0→a}]² ≤ 1 + 2 ( K̄_a²/K_μ + E[T_{a→0}] ) / E[T_{0→a}] = 1 + o(1).    (4.16)

The last equality is due to (2.17) and Proposition 2.3, and thus the sequence T_{0→a}/E[T_{0→a}] is uniformly integrable.

4.4. Proof of Corollary 2.5. The key element of this result is that K_b is equal to K_a, as B_{α(b)} = B_{α(a)} for b ∈ L(a). Then if the (0, l_∞)-SD condition holds for a, it also holds for every state b of L(a):

    K_b² / E[T_{b→0}] = K_a² / E[T_{b→0}] ≤ K_a² / (C E[T_{a→0}]) → 0.    (4.17)

Since equation (4.3) is valid for any state, we can apply Lemma 4.2, and Proposition 2.2 holds for all b: that is, T_{b→0} exhibits cut-off behavior for every b ∈ L(a). It remains to prove the escape behavior of T_{0→b}. Let b ∈ L(a); using (2.17) we have

    sup_{x∈I_a} E[T_{x→0}] / E[T_{0→b}] = ( sup_{x∈I_a} E[T_{x→0}] / E[T_{a→0}] ) ( E[T_{a→0}] / E[T_{b→0}] ) ( E[T_{b→0}] / E[T_{0→b}] ) ≤ (K_1/C) E[T_{b→0}] / E[T_{0→b}],    (4.18)

and this tends to zero since, by Lemma 4.3, Proposition 2.3 holds for b. Now by Corollary 3.5, we have

    E[T²_{0→b}] / E[T_{0→b}]² ≤ 1 + 2 ( K̄_a²/K_μ + E[T_{b→0}] ) / E[T_{0→b}] = 1 + o(1),    (4.19)

since by (2.17) and Proposition 2.3

    K̄_a² / E[T_{0→b}] = ( K̄_a² / E[T_{a→0}] ) ( E[T_{a→0}] / E[T_{b→0}] ) ( E[T_{b→0}] / E[T_{0→b}] ) ≤ (K_2/C) E[T_{b→0}] / E[T_{0→b}] → 0.    (4.20)

Thus T_{0→b}/E[T_{0→b}] is uniformly square integrable, and T_{0→b} exhibits escape behavior.

Acknowledgments

The author wishes to thank an anonymous referee for the careful reading of the manuscript and for helpful comments and suggestions to improve the paper.

References

D. Aldous. Random walks on finite groups and rapidly mixing Markov chains.
In Séminaire de probabilités XVII, volume 986 of Lecture Notes in Math. Springer, Berlin (1983).

D. Aldous and P. Diaconis. Shuffling cards and stopping times. Amer. Math. Monthly 93 (5), 333–348 (1986).

D. Aldous and P. Diaconis. Strong uniform times and finite random walks. Adv. in Appl. Math. 8, 69–97 (1987).

D. Aldous and J. Fill. Reversible Markov chains and random walks on graphs. Monograph in preparation.

J. Barrera, O. Bertoncini and R. Fernández. Abrupt convergence and escape behavior for birth and death chains. J. Stat. Phys. 137 (4), 595–623 (2009).

O. Bertoncini. Convergence abrupte et métastabilité. Ph.D. thesis, Université de Rouen (2007).

O. Bertoncini, J. Barrera and R. Fernández. Cut-off and exit from metastability: two sides of the same coin. C. R. Math. Acad. Sci. Paris 346 (2008).

M. Cassandro, A. Galves, E. Olivieri and M. E. Vares. Metastable behavior of stochastic dynamics: a pathwise approach. J. Statist. Phys. 35 (5-6), 603–634 (1984).

S. N. Dorogovtsev. Lectures on Complex Networks. Oxford Master Series in Statistical, Computational, and Theoretical Physics. Oxford University Press, Oxford (2010).

S. N. Dorogovtsev, A. V. Goltsev and J. F. F. Mendes. Critical phenomena in complex networks. Reviews of Modern Physics 80 (4), 1275–1335 (2008).

R. Lyons. Random walks and percolation on trees. Ann. Probab. 18 (3), 931–958 (1990).

S. Martínez and B. Ycart. Decay rates and cutoff for convergence and hitting times of Markov chains with countably infinite state space. Adv. in Appl. Probab. 33, 188–205 (2001).

W. Woess. Denumerable Markov chains. EMS Textbooks in Mathematics. European Mathematical Society (EMS), Zürich (2009). Generating functions, boundary theory, random walks on trees.


More information

Lecture 9 Classification of States

Lecture 9 Classification of States Lecture 9: Classification of States of 27 Course: M32K Intro to Stochastic Processes Term: Fall 204 Instructor: Gordan Zitkovic Lecture 9 Classification of States There will be a lot of definitions and

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT

ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT Elect. Comm. in Probab. 10 (2005), 36 44 ELECTRONIC COMMUNICATIONS in PROBABILITY ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT FIRAS RASSOUL AGHA Department

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321

Lecture 11: Introduction to Markov Chains. Copyright G. Caire (Sample Lectures) 321 Lecture 11: Introduction to Markov Chains Copyright G. Caire (Sample Lectures) 321 Discrete-time random processes A sequence of RVs indexed by a variable n 2 {0, 1, 2,...} forms a discretetime random process

More information

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES CHAPTER 1 MATH 56A: STOCHASTIC PROCESSES CHAPTER. Finite Markov chains For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter

More information

Erdős-Renyi random graphs basics

Erdős-Renyi random graphs basics Erdős-Renyi random graphs basics Nathanaël Berestycki U.B.C. - class on percolation We take n vertices and a number p = p(n) with < p < 1. Let G(n, p(n)) be the graph such that there is an edge between

More information

Properties of an infinite dimensional EDS system : the Muller s ratchet

Properties of an infinite dimensional EDS system : the Muller s ratchet Properties of an infinite dimensional EDS system : the Muller s ratchet LATP June 5, 2011 A ratchet source : wikipedia Plan 1 Introduction : The model of Haigh 2 3 Hypothesis (Biological) : The population

More information

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL

STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL First published in Journal of Applied Probability 43(1) c 2006 Applied Probability Trust STABILIZATION OF AN OVERLOADED QUEUEING NETWORK USING MEASUREMENT-BASED ADMISSION CONTROL LASSE LESKELÄ, Helsinki

More information

MARKOV PROCESSES. Valerio Di Valerio

MARKOV PROCESSES. Valerio Di Valerio MARKOV PROCESSES Valerio Di Valerio Stochastic Process Definition: a stochastic process is a collection of random variables {X(t)} indexed by time t T Each X(t) X is a random variable that satisfy some

More information

A COMPOUND POISSON APPROXIMATION INEQUALITY

A COMPOUND POISSON APPROXIMATION INEQUALITY J. Appl. Prob. 43, 282 288 (2006) Printed in Israel Applied Probability Trust 2006 A COMPOUND POISSON APPROXIMATION INEQUALITY EROL A. PEKÖZ, Boston University Abstract We give conditions under which the

More information

Math Homework 5 Solutions

Math Homework 5 Solutions Math 45 - Homework 5 Solutions. Exercise.3., textbook. The stochastic matrix for the gambler problem has the following form, where the states are ordered as (,, 4, 6, 8, ): P = The corresponding diagram

More information

SIMILAR MARKOV CHAINS

SIMILAR MARKOV CHAINS SIMILAR MARKOV CHAINS by Phil Pollett The University of Queensland MAIN REFERENCES Convergence of Markov transition probabilities and their spectral properties 1. Vere-Jones, D. Geometric ergodicity in

More information

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is

Lecture 7. We can regard (p(i, j)) as defining a (maybe infinite) matrix P. Then a basic fact is MARKOV CHAINS What I will talk about in class is pretty close to Durrett Chapter 5 sections 1-5. We stick to the countable state case, except where otherwise mentioned. Lecture 7. We can regard (p(i, j))

More information

The range of tree-indexed random walk

The range of tree-indexed random walk The range of tree-indexed random walk Jean-François Le Gall, Shen Lin Institut universitaire de France et Université Paris-Sud Orsay Erdös Centennial Conference July 2013 Jean-François Le Gall (Université

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 MATH 56A SPRING 2008 STOCHASTIC PROCESSES 65 2.2.5. proof of extinction lemma. The proof of Lemma 2.3 is just like the proof of the lemma I did on Wednesday. It goes like this. Suppose that â is the smallest

More information

SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES

SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES Communications on Stochastic Analysis Vol. 4, No. 3 010) 45-431 Serials Publications www.serialspublications.com SOLUTIONS OF SEMILINEAR WAVE EQUATION VIA STOCHASTIC CASCADES YURI BAKHTIN* AND CARL MUELLER

More information

Revisiting the Contact Process

Revisiting the Contact Process Revisiting the Contact Process Maria Eulália Vares Universidade Federal do Rio de Janeiro World Meeting for Women in Mathematics - July 31, 2018 The classical contact process G = (V,E) graph, locally finite.

More information

Introduction to self-similar growth-fragmentations

Introduction to self-similar growth-fragmentations Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

Characterization of cutoff for reversible Markov chains

Characterization of cutoff for reversible Markov chains Characterization of cutoff for reversible Markov chains Yuval Peres Joint work with Riddhi Basu and Jonathan Hermon 23 Feb 2015 Joint work with Riddhi Basu and Jonathan Hermon Characterization of cutoff

More information

Almost sure asymptotics for the random binary search tree

Almost sure asymptotics for the random binary search tree AofA 10 DMTCS proc. AM, 2010, 565 576 Almost sure asymptotics for the rom binary search tree Matthew I. Roberts Laboratoire de Probabilités et Modèles Aléatoires, Université Paris VI Case courrier 188,

More information

Logarithmic scaling of planar random walk s local times

Logarithmic scaling of planar random walk s local times Logarithmic scaling of planar random walk s local times Péter Nándori * and Zeyu Shen ** * Department of Mathematics, University of Maryland ** Courant Institute, New York University October 9, 2015 Abstract

More information

process on the hierarchical group

process on the hierarchical group Intertwining of Markov processes and the contact process on the hierarchical group April 27, 2010 Outline Intertwining of Markov processes Outline Intertwining of Markov processes First passage times of

More information

The Moran Process as a Markov Chain on Leaf-labeled Trees

The Moran Process as a Markov Chain on Leaf-labeled Trees The Moran Process as a Markov Chain on Leaf-labeled Trees David J. Aldous University of California Department of Statistics 367 Evans Hall # 3860 Berkeley CA 94720-3860 aldous@stat.berkeley.edu http://www.stat.berkeley.edu/users/aldous

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

STAT STOCHASTIC PROCESSES. Contents

STAT STOCHASTIC PROCESSES. Contents STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5

More information

An almost sure invariance principle for additive functionals of Markov chains

An almost sure invariance principle for additive functionals of Markov chains Statistics and Probability Letters 78 2008 854 860 www.elsevier.com/locate/stapro An almost sure invariance principle for additive functionals of Markov chains F. Rassoul-Agha a, T. Seppäläinen b, a Department

More information

Modern Discrete Probability Branching processes

Modern Discrete Probability Branching processes Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson

More information

Markov Chains for Everybody

Markov Chains for Everybody Markov Chains for Everybody An Introduction to the theory of discrete time Markov chains on countable state spaces. Wilhelm Huisinga, & Eike Meerbach Fachbereich Mathematik und Informatik Freien Universität

More information

1 Continuous-time chains, finite state space

1 Continuous-time chains, finite state space Université Paris Diderot 208 Markov chains Exercises 3 Continuous-time chains, finite state space Exercise Consider a continuous-time taking values in {, 2, 3}, with generator 2 2. 2 2 0. Draw the diagramm

More information

Lectures on Markov Chains

Lectures on Markov Chains Lectures on Markov Chains David M. McClendon Department of Mathematics Ferris State University 2016 edition 1 Contents Contents 2 1 Markov chains 4 1.1 The definition of a Markov chain.....................

More information

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking

Lecture 7: Simulation of Markov Processes. Pasi Lassila Department of Communications and Networking Lecture 7: Simulation of Markov Processes Pasi Lassila Department of Communications and Networking Contents Markov processes theory recap Elementary queuing models for data networks Simulation of Markov

More information

Applied Stochastic Processes

Applied Stochastic Processes Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................

More information

Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms

Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Simultaneous drift conditions for Adaptive Markov Chain Monte Carlo algorithms Yan Bai Feb 2009; Revised Nov 2009 Abstract In the paper, we mainly study ergodicity of adaptive MCMC algorithms. Assume that

More information

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing

Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes with Killing Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 2, pp. 401 412 (2013) http://campus.mst.edu/adsa Weighted Sums of Orthogonal Polynomials Related to Birth-Death Processes

More information

88 CONTINUOUS MARKOV CHAINS

88 CONTINUOUS MARKOV CHAINS 88 CONTINUOUS MARKOV CHAINS 3.4. birth-death. Continuous birth-death Markov chains are very similar to countable Markov chains. One new concept is explosion which means that an infinite number of state

More information

Oberwolfach workshop on Combinatorics and Probability

Oberwolfach workshop on Combinatorics and Probability Apr 2009 Oberwolfach workshop on Combinatorics and Probability 1 Describes a sharp transition in the convergence of finite ergodic Markov chains to stationarity. Steady convergence it takes a while to

More information

Homework set 3 - Solutions

Homework set 3 - Solutions Homework set 3 - Solutions Math 495 Renato Feres Problems 1. (Text, Exercise 1.13, page 38.) Consider the Markov chain described in Exercise 1.1: The Smiths receive the paper every morning and place it

More information

Introduction to Markov Chains and Riffle Shuffling

Introduction to Markov Chains and Riffle Shuffling Introduction to Markov Chains and Riffle Shuffling Nina Kuklisova Math REU 202 University of Chicago September 27, 202 Abstract In this paper, we introduce Markov Chains and their basic properties, and

More information

6 Markov Chain Monte Carlo (MCMC)

6 Markov Chain Monte Carlo (MCMC) 6 Markov Chain Monte Carlo (MCMC) The underlying idea in MCMC is to replace the iid samples of basic MC methods, with dependent samples from an ergodic Markov chain, whose limiting (stationary) distribution

More information

A slow transient diusion in a drifted stable potential

A slow transient diusion in a drifted stable potential A slow transient diusion in a drifted stable potential Arvind Singh Université Paris VI Abstract We consider a diusion process X in a random potential V of the form V x = S x δx, where δ is a positive

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

arxiv: v1 [math.pr] 6 Jan 2014

arxiv: v1 [math.pr] 6 Jan 2014 Recurrence for vertex-reinforced random walks on Z with weak reinforcements. Arvind Singh arxiv:40.034v [math.pr] 6 Jan 04 Abstract We prove that any vertex-reinforced random walk on the integer lattice

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

INTRODUCTION TO MARKOV CHAIN MONTE CARLO

INTRODUCTION TO MARKOV CHAIN MONTE CARLO INTRODUCTION TO MARKOV CHAIN MONTE CARLO 1. Introduction: MCMC In its simplest incarnation, the Monte Carlo method is nothing more than a computerbased exploitation of the Law of Large Numbers to estimate

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Biased activated random walks

Biased activated random walks Joint work with Leonardo ROLLA LAGA (Université Paris 3) Probability seminar University of Bristol 23 April 206 Plan of the talk Introduction of the model, results 2 Elements of proofs 3 Conclusion Plan

More information

25.1 Markov Chain Monte Carlo (MCMC)

25.1 Markov Chain Monte Carlo (MCMC) CS880: Approximations Algorithms Scribe: Dave Andrzejewski Lecturer: Shuchi Chawla Topic: Approx counting/sampling, MCMC methods Date: 4/4/07 The previous lecture showed that, for self-reducible problems,

More information

The cutoff phenomenon for random walk on random directed graphs

The cutoff phenomenon for random walk on random directed graphs The cutoff phenomenon for random walk on random directed graphs Justin Salez Joint work with C. Bordenave and P. Caputo Outline of the talk Outline of the talk 1. The cutoff phenomenon for Markov chains

More information

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING

INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING INTRODUCTION TO MARKOV CHAINS AND MARKOV CHAIN MIXING ERIC SHANG Abstract. This paper provides an introduction to Markov chains and their basic classifications and interesting properties. After establishing

More information

Stochastic Processes (Week 6)

Stochastic Processes (Week 6) Stochastic Processes (Week 6) October 30th, 2014 1 Discrete-time Finite Markov Chains 2 Countable Markov Chains 3 Continuous-Time Markov Chains 3.1 Poisson Process 3.2 Finite State Space 3.2.1 Kolmogrov

More information

Mixing Times and Hitting Times

Mixing Times and Hitting Times Mixing Times and Hitting Times David Aldous January 12, 2010 Old Results and Old Puzzles Levin-Peres-Wilmer give some history of the emergence of this Mixing Times topic in the early 1980s, and we nowadays

More information

Modern Discrete Probability Spectral Techniques

Modern Discrete Probability Spectral Techniques Modern Discrete Probability VI - Spectral Techniques Background Sébastien Roch UW Madison Mathematics December 22, 2014 1 Review 2 3 4 Mixing time I Theorem (Convergence to stationarity) Consider a finite

More information

Characterization of cutoff for reversible Markov chains

Characterization of cutoff for reversible Markov chains Characterization of cutoff for reversible Markov chains Yuval Peres Joint work with Riddhi Basu and Jonathan Hermon 3 December 2014 Joint work with Riddhi Basu and Jonathan Hermon Characterization of cutoff

More information

The coupling method - Simons Counting Complexity Bootcamp, 2016

The coupling method - Simons Counting Complexity Bootcamp, 2016 The coupling method - Simons Counting Complexity Bootcamp, 2016 Nayantara Bhatnagar (University of Delaware) Ivona Bezáková (Rochester Institute of Technology) January 26, 2016 Techniques for bounding

More information

Biased random walks on subcritical percolation clusters with an infinite open path

Biased random walks on subcritical percolation clusters with an infinite open path Biased random walks on subcritical percolation clusters with an infinite open path Application for Transfer to DPhil Student Status Dissertation Annika Heckel March 3, 202 Abstract We consider subcritical

More information

1 Sequences of events and their limits

1 Sequences of events and their limits O.H. Probability II (MATH 2647 M15 1 Sequences of events and their limits 1.1 Monotone sequences of events Sequences of events arise naturally when a probabilistic experiment is repeated many times. For

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes

Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes Front. Math. China 215, 1(4): 933 947 DOI 1.17/s11464-15-488-5 Central limit theorems for ergodic continuous-time Markov chains with applications to single birth processes Yuanyuan LIU 1, Yuhui ZHANG 2

More information

Connection to Branching Random Walk

Connection to Branching Random Walk Lecture 7 Connection to Branching Random Walk The aim of this lecture is to prepare the grounds for the proof of tightness of the maximum of the DGFF. We will begin with a recount of the so called Dekking-Host

More information

Intersecting curves (variation on an observation of Maxim Kontsevich) by Étienne GHYS

Intersecting curves (variation on an observation of Maxim Kontsevich) by Étienne GHYS Intersecting curves (variation on an observation of Maxim Kontsevich) by Étienne GHYS Abstract Consider the graphs of n distinct polynomials of a real variable intersecting at some point. In the neighborhood

More information

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes

Lecture Notes 7 Random Processes. Markov Processes Markov Chains. Random Processes Lecture Notes 7 Random Processes Definition IID Processes Bernoulli Process Binomial Counting Process Interarrival Time Process Markov Processes Markov Chains Classification of States Steady State Probabilities

More information

TREES OF SELF-AVOIDING WALKS. Keywords: Self-avoiding walk, equivalent conductance, random walk on tree

TREES OF SELF-AVOIDING WALKS. Keywords: Self-avoiding walk, equivalent conductance, random walk on tree TREES OF SELF-AVOIDING WALKS VINCENT BEFFARA AND CONG BANG HUYNH Abstract. We consider the biased random walk on a tree constructed from the set of finite self-avoiding walks on a lattice, and use it to

More information

An Introduction to Entropy and Subshifts of. Finite Type

An Introduction to Entropy and Subshifts of. Finite Type An Introduction to Entropy and Subshifts of Finite Type Abby Pekoske Department of Mathematics Oregon State University pekoskea@math.oregonstate.edu August 4, 2015 Abstract This work gives an overview

More information

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE

LIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION

More information

Random Times and Their Properties

Random Times and Their Properties Chapter 6 Random Times and Their Properties Section 6.1 recalls the definition of a filtration (a growing collection of σ-fields) and of stopping times (basically, measurable random times). Section 6.2

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices

Discrete time Markov chains. Discrete Time Markov Chains, Limiting. Limiting Distribution and Classification. Regular Transition Probability Matrices Discrete time Markov chains Discrete Time Markov Chains, Limiting Distribution and Classification DTU Informatics 02407 Stochastic Processes 3, September 9 207 Today: Discrete time Markov chains - invariant

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Power Series and Analytic Function

Power Series and Analytic Function Dr Mansoor Alshehri King Saud University MATH204-Differential Equations Center of Excellence in Learning and Teaching 1 / 21 Some Reviews of Power Series Differentiation and Integration of a Power Series

More information

ON COMPOUND POISSON POPULATION MODELS

ON COMPOUND POISSON POPULATION MODELS ON COMPOUND POISSON POPULATION MODELS Martin Möhle, University of Tübingen (joint work with Thierry Huillet, Université de Cergy-Pontoise) Workshop on Probability, Population Genetics and Evolution Centre

More information

Markov Chain Monte Carlo (MCMC)

Markov Chain Monte Carlo (MCMC) Markov Chain Monte Carlo (MCMC Dependent Sampling Suppose we wish to sample from a density π, and we can evaluate π as a function but have no means to directly generate a sample. Rejection sampling can

More information

MAJORIZING MEASURES WITHOUT MEASURES. By Michel Talagrand URA 754 AU CNRS

MAJORIZING MEASURES WITHOUT MEASURES. By Michel Talagrand URA 754 AU CNRS The Annals of Probability 2001, Vol. 29, No. 1, 411 417 MAJORIZING MEASURES WITHOUT MEASURES By Michel Talagrand URA 754 AU CNRS We give a reformulation of majorizing measures that does not involve measures,

More information

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains

8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8. Statistical Equilibrium and Classification of States: Discrete Time Markov Chains 8.1 Review 8.2 Statistical Equilibrium 8.3 Two-State Markov Chain 8.4 Existence of P ( ) 8.5 Classification of States

More information

L n = l n (π n ) = length of a longest increasing subsequence of π n.

L n = l n (π n ) = length of a longest increasing subsequence of π n. Longest increasing subsequences π n : permutation of 1,2,...,n. L n = l n (π n ) = length of a longest increasing subsequence of π n. Example: π n = (π n (1),..., π n (n)) = (7, 2, 8, 1, 3, 4, 10, 6, 9,

More information

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions

CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions CS145: Probability & Computing Lecture 18: Discrete Markov Chains, Equilibrium Distributions Instructor: Erik Sudderth Brown University Computer Science April 14, 215 Review: Discrete Markov Chains Some

More information

Convergence Rate of Markov Chains

Convergence Rate of Markov Chains Convergence Rate of Markov Chains Will Perkins April 16, 2013 Convergence Last class we saw that if X n is an irreducible, aperiodic, positive recurrent Markov chain, then there exists a stationary distribution

More information

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974 LIMITS FOR QUEUES AS THE WAITING ROOM GROWS by Daniel P. Heyman Ward Whitt Bell Communications Research AT&T Bell Laboratories Red Bank, NJ 07701 Murray Hill, NJ 07974 May 11, 1988 ABSTRACT We study the

More information

Path Decomposition of Markov Processes. Götz Kersting. University of Frankfurt/Main

Path Decomposition of Markov Processes. Götz Kersting. University of Frankfurt/Main Path Decomposition of Markov Processes Götz Kersting University of Frankfurt/Main joint work with Kaya Memisoglu, Jim Pitman 1 A Brownian path with positive drift 50 40 30 20 10 0 0 200 400 600 800 1000-10

More information

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk

Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk Stability and Rare Events in Stochastic Models Sergey Foss Heriot-Watt University, Edinburgh and Institute of Mathematics, Novosibirsk ANSAPW University of Queensland 8-11 July, 2013 1 Outline (I) Fluid

More information

Stochastic Process (ENPC) Monday, 22nd of January 2018 (2h30)

Stochastic Process (ENPC) Monday, 22nd of January 2018 (2h30) Stochastic Process (NPC) Monday, 22nd of January 208 (2h30) Vocabulary (english/français) : distribution distribution, loi ; positive strictement positif ; 0,) 0,. We write N Z,+ and N N {0}. We use the

More information

Advanced Computer Networks Lecture 2. Markov Processes

Advanced Computer Networks Lecture 2. Markov Processes Advanced Computer Networks Lecture 2. Markov Processes Husheng Li Min Kao Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville Spring, 2016 1/28 Outline 2/28 1 Definition

More information

Lectures on Stochastic Stability. Sergey FOSS. Heriot-Watt University. Lecture 4. Coupling and Harris Processes

Lectures on Stochastic Stability. Sergey FOSS. Heriot-Watt University. Lecture 4. Coupling and Harris Processes Lectures on Stochastic Stability Sergey FOSS Heriot-Watt University Lecture 4 Coupling and Harris Processes 1 A simple example Consider a Markov chain X n in a countable state space S with transition probabilities

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Yaglom-type limit theorems for branching Brownian motion with absorption. by Jason Schweinsberg University of California San Diego

Yaglom-type limit theorems for branching Brownian motion with absorption. by Jason Schweinsberg University of California San Diego Yaglom-type limit theorems for branching Brownian motion with absorption by Jason Schweinsberg University of California San Diego (with Julien Berestycki, Nathanaël Berestycki, Pascal Maillard) Outline

More information

215 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that

215 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that 15 Problem 1. (a) Define the total variation distance µ ν tv for probability distributions µ, ν on a finite set S. Show that µ ν tv = (1/) x S µ(x) ν(x) = x S(µ(x) ν(x)) + where a + = max(a, 0). Show that

More information