Zhan Shi. Université Paris VI. This version: June 21, Updated version available at:

Size: px
Start display at page:

Download "Zhan Shi. Université Paris VI. This version: June 21, Updated version available at:"

Transcription

1 RANDOM WALKS & TREES Zhan Shi Université Paris VI This version: June 2, 200 Updated version available at:

2 Preface These notes provide an elementary and self-contained introduction to branching random walks. Chapter gives a brief overview of Galton Watson trees, whereas Chapter 2 presents the classical law of large numbers for branching random walks. These two short chapters are not exactly indispensable, but they introduce the idea of using size-biased trees, thus giving motivations and an avant-goût to the main part, Chapter 3, where branching random walks are studied from a deeper point of view, and are connected to the model of directed polymers on a tree. Tree-related random processes form a rich and exciting research subject. These notes cover only special topics. For a general account, we refer to the St-Flour lecture notes of Peres [47] and to the forthcoming book of Lyons and Peres [42], as well as to Duquesne and Le Gall [23] and Le Gall [37] for continuous random trees. I am grateful to the organizers of the Symposium for the kind invitation, and to my co-authors for sharing the pleasure of random climbs.

3 Contents Galton Watson trees Galton Watson trees and extinction probabilities Size-biased Galton Watson trees Proof of the Kesten Stigum theorem The Seneta Heyde norming Notes Branching random walks and the law of large numbers 3 Warm-up Law of large numbers Proof of the law of large numbers: lower bound Size-biased branching random walk and martingale convergence Proof of the law of large numbers: upper bound Notes Branching random walks and the central limit theorem 23 Central limit theorem Directed polymers on a tree Small moments of partition function: upper bound in Theorem Small moments of partition function: lower bound in Theorem Partition function: all you need to know about exponents 3β and Central limit theorem: the 3 limit Central limit theorem: the limit Partition function: exponent β A pathological case The Seneta Heyde norming for the branching random walk Branching random walks with selection, I

4 2 Branching random walks with selection, II Notes Solution to the exercises 4 Bibliography 45

5 Chapter Galton Watson trees We start by studying a few basic properties of supercritical Galton Watson trees. The main aim of this chapter is to introduce the notion of size-biased trees. In particular, we see in Section 3 how this allows us to prove the well-known Kesten Stigum theorem. This notion of size-biased trees will be developed in forthcoming chapters to study more complicated models.. Galton Watson trees and extinction probabilities We are interested in processes involving (rooted) trees. The simplest rooted tree is the regular rooted tree, where each vertex has a fixed number (say m, with m > ) of offspring. For example, here is a rooted binary tree: root first generation second generation Figure : First generations in a rooted binary tree third generation

6 2 Chapter. Galton Watson trees Let Z n denote the number of vertices (also called particles or individuals) in the n-th generation, then Z n = m n, n 0. In probability theory, we often encounter trees where the number of offspring of a vertex is random. The easiest case is when these random numbers are i.i.d., which leads to a Galton Watson tree. A Galton Watson tree starts with one initial ancestor (sometimes, it is possible to have several or even a random number of initial ancestors, in which case it will be explicitly stated). It produces a certain number of offspring according to a given probability distribution. The new particles form the first generation. Each of the new particles produces offspring according to the same probability distribution, independently of each other and of everything else in the generation. And the system regenerates. We write p i for the probability that a given particle has i children, i 0; thus i=0 p i =. [In the case of a regular m-ary tree, p m = and p i = 0 for i m.] To avoid trivial discussions, we assume throughout that p 0 +p <. As before, we write Z n for the number of particles in the n-th generation. It is clear that if Z n = 0 for a certain n, then Z j = 0 for all j n. Figure 2: First generations in a Galton Watson tree In Figure 2, we have Z 0 =, Z = 2, Z 2 = 4, Z 3 = 7. One of the first questions we ask is about the extinction probability (.) q := P{Z n = 0 eventually}. Sometimes also referred to as a Bienaymé Galton Watson tree.

7 Galton Watson trees and extinction probabilities 3 It turns out that the expected number of offspring plays an important role. Let (.2) m := E(Z ) = ip i (0, ]. i=0 Theorem. Let q be the extinction probability defined in (.). (i) The extinction probability q is the smallest root of the equation f(s) = s for s [0, ], where f(s) := s i p i, (0 0 := ) is the generating function of the reproduction law. (ii) In particular, q = if m, and q < if m >. i=0 Proof. By definition, f(s) = E(s Z ). Conditioning on Z n, Z n is the sum of Z n i.i.d. random variables having the common distribution which is that of Z ; thus E(s Zn Z n ) = f(s) Z n, which implies E(s Zn ) = E(f(s) Z n ). By induction, E(s Zn ) = f n (s) for any n, where f n denotes the n-th fold composition of f. In particular, P(Z n = 0) = f n (0). The event {Z n = 0} being non-decreasing in n (i.e., {Z n = 0} {Z n+ = 0}), we have ( ) q = P {Z n = 0} = lim P(Z n = 0) = lim f n (0). n n n Let us look at the graph of the function f on [0, ]. The function is (strictly) increasing and strictly convex, with f(0) = p 0 0 and f() =. In particular, it has at most two fixed points. If m, then p 0 > 0, and f(s) > s for all s [0, ), which implies f n (0). In other words, q = is the unique root of f(s) = s. Assume now m (, ]. This time, f n (0) converges increasingly to the unique root of f(s) = s, s [0, ). In particular, q <. Theorem. tells us that in the subcritical case (i.e., m < ) and in the critical case (m = ), the Galton Watson process dies out with probability, whereas in the supercritical case (m > ), the Galton Watson process survives with (strictly) positive probability. In the rest of the text, we will be mainly interested in the supercritical case m >. But for the time being, let us introduce W n := Z n mn, n 0,

8 4 Chapter. Galton Watson trees f(s) f(s) p 0 p 0 0 q < s 0 q = Figure 3a: m > Figure 3b: m s which is well-defined as long as m <. It is clear that W n is a martingale (with respect to the natural filtration of (Z n ), for example). Since it is non-negative, we have W n W, a.s., for some non-negative random variable W. [We recall that if (ξ n ) is a sub-martingale with sup n E(ξ n + ) <, then ξ n converges almost surely to a finite random variable. Apply this to ( W n ).] By Fatou s lemma, E(W) liminf n E(W n ) =. It is, however, possible that W = 0. So it is important to know when W is non-degenerate. We make the trivial remark that W = 0 if the system dies out. In particular, by Theorem., we have W = 0 a.s. if m. What happens if m >? We start with two simple observations. The first says that in general, P(W = 0) equals q or, whereas the second tells us that W is non-degenerate if the reproduction law admits a finite second moment. Observation.2 Assume m <. Then P(W = 0) equals either q or. Proof. There is nothing to prove if m. So let us assume < m <. By definition, 2 Z n+ = Z i= Z(i) n, where Z n (i), i, are copies of Z n, independent of each other and Z. Dividing on both sides by m n and letting n, it follows that mw is 2 Usual notation: := 0.

9 2 Size-biased Galton Watson trees 5 distributed as Z i= W(i), where W (i), i, are copies of W, independent of each other and Z. In particular, P(W = 0) = E[P(W = 0) Z ] = f(p(w = 0)), i.e., P(W = 0) is a root of f(s) = s for s [0, ]. In words, P(W = 0) = q or. Exercise.3 If E(Z 2 ) < and m >, then sup ne(w 2 n ) <. Exercise.4 If E(Z 2 ) < and m >, then E(W) =, and P(W = 0) = q. It turns out that the second moment condition in Exercise.4 can be weakened to an XlogX-type integrability condition. Let log + x := logmax{x, }. Theorem.5 (Kesten and Stigum [35]) Assume < m <. Then E(W) = P(W > 0 non-extinction) = E(Z log + Z ) <. Remark. (i) The conclusion in the Kesten Stigum theorem can also be stated as E(W) = P(W = 0) = q i= p iilogi <. (ii) The condition E(Z log + Z ) < may look technical. We will see in the next section why this is a natural condition. 2. Size-biased Galton Watson trees In order to introduce size-biased Galton Watson processes, we need to view the tree as a random element in a probability space (Ω, F, P). Let U := { } k= (N ) k, where N := {,2, }. If u, v U, we denote by uv the concatenated element, with u = u = u. A tree ω is a subset of U satisfying: (i) ω; (ii) if uj ω for some j N, then u ω; (iii) if u ω, then uj ω if and only if j N u (ω) for some non-negative integer N u (ω). In the language of trees, if u U is an element of the tree ω, u is a vertex of the tree, and N u (ω) the number of children. Vertices of ω are labeled by their line of descent: if u = i i n U, then u is the i n -th child of the i n -th child of... of the i -th child of the initial ancestor. Let Ω be the space of all trees. We now endow it with a sigma-algebra. For any u U, let Ω u := {ω Ω : u ω} denote the subspace of Ω consisting of all the trees containing

10 6 Chapter. Galton Watson trees Figure 4: Vertices of a tree as elements of U u as a vertex. [In particular, Ω = Ω because all the trees contain the root as a vertex, according to part (i) of the definition.] The promised sigma-algebra associated with Ω is defined by F := σ{ω u, u U }. Let T : Ω Ω be the identity application. Let (p k, k 0) be a probability. According to Neveu [44], there exists a probability P on Ω such that the law of T under P is the law of the Galton Watson tree with reproduction distribution (p k ). Let F n := σ{ω u, u U, u n}, where u is the length of u (representing the generation of the vertex u in the language of trees). Note that F is the smallest sigma-field containing every F n. For any tree ω Ω, let Z n (ω) be the number of individuals in the n-th generation. 3 It is easily checked that for any n, Z n is a random variable taking values in N := {0,,2 }. Let P be the probability on (Ω, F) such that for any n, P Fn = W n P Fn, i.e., P(A) = A W ndp for any A F n. Here, P Fn and P Fn are the restrictions of P and P on F n, respectively. Since W n is a martingale, the existence of P is guaranteed by Kolmogorov s extension theorem. 3 The rigorous definition is Z n (ω) := #{u U : u ω, u = n}.

11 2 Size-biased Galton Watson trees 7 For any n, P(Z n > 0) = E[ {Zn>0}W n ] = E[W n ] =. Therefore, P(Z n > 0, n) =. In other words, there is almost surely non-extinction of the Galton Watson tree T under the new probability P. The Galton Watson tree T under P is called a size-biased Galton Watson tree. Let us give a description of its paths. Let N := N. If N, then there are N individuals in the first generation. We write T, T 2,, T N for the N subtrees rooted at each of the N individual in the first generation. 4 Exercise 2. Let k. If A, A 2,, A k are elements of F, then (2.) P(N = k, T A,,T k A k ) = kp k k P(A ) P(A i ) P(A i )P(A i+ ) P(A k ). m k i= Equation (2.) tells us the following fact about the size-biased Galton Watson tree: The root has the biased distribution, i.e., having k children with probability kp k; among the m individuals in the first generation, one of them is chosen randomly (according to the uniform distribution) such that the subtree rooted at this vertex is a size-biased Galton Watson tree, whereas the subtrees rooted at all other vertices in the first generation are independent copies of the usual Galton Watson tree. Iterating the procedure, we obtain a decomposition of the size-biased Galton Watson tree with an (infinite) spine and with i.i.d. copies of the usual Galton Watson tree: The root =: w 0 has the biased distribution, i.e., having k children with probability kp k. Among the m children of the root, one of them is chosen randomly (according to the uniform distribution) as the element of the spine in the first generation (denoted by w ). We attach subtrees rooted at all other children; these subtrees are independent copies of the usual Galton Watson tree. The vertex w has the biased distribution. Among the children of w, we choose at random one of them as the element of the spine in the second generation (denoted by w 2 ). Independent copies of the usual Galton Watson tree are attached as subtrees rooted at all other children of w, whereas w 2 has the biased distribution. And so on. See Figure 5. From technical point of view, it is more convenient to connect size-biased Galton Watson trees with Galton Watson branching processes with immigration, described as follows. A Galton Watson branching processes with immigration starts with no individual (say), and is characterized by a reproduction law and an immigration law. At generation n (for 4 The rigorous definition of T u (ω) if u ω is a vertex of ω: T u (ω) := {v U : uv ω}.

12 8 Chapter. Galton Watson trees w 0 = w GW GW GW w 2 GW GW w 3 GW GW Figure 5: A size-biased Galton Watson tree n ), Y n new individuals immigrate into the system, while all individuals regenerate independently and following the same reproduction law; we assume that (Y n, n ) is a collection of i.i.d. random variables following the same immigration law, and independent of everything else in that generation. Our description of the size-biased Galton Watson tree can be reformulated in the following way: (Z n, n 0) under P is a Galton Watson branching process with immigration, whose immigration law is that of N, with P( N = k) := kp k m 3. Proof of the Kesten Stigum theorem for k. We prove the Kesten Stigum theorem, by means of size-biased Galton Watson trees. Let us start with a few elementary results. Exercise 3. Let X, X, X 2, be i.i.d. non-negative random variables. (i) If E(X) <, then Xn n 0 a.s. (ii) If E(X) =, then limsup n X n n = a.s. For thenext elementary results, let (F n ) beafiltration, andlet P and P beprobabilities 5 on (Ω, F ). Assume that for any n, P Fn is absolutely continuous with respect to P Fn. Let ξ n := d P Fn dp Fn, and let ξ := limsup n ξ n. 5 We denote by F, the smallest sigma-algebra containing all F n.

13 3 Proof of the Kesten Stigum theorem 9 Exercise 3.2 Prove that (ξ n ) is a P-martingale. Prove that ξ n ξ P-a.s., and that ξ < P-a.s. Exercise 3.3 Prove that (3.) P(A) = E(ξ A )+ P(A {ξ = }), A F. Hint: You can first prove the identity assuming P P and using Lévy s martingale convergence theorem 6. Exercise 3.4 Prove that (3.2) P P ξ <, P-a.s. E(ξ) =, (3.3) P P ξ =, P-a.s. E(ξ) = 0. The Kesten Stigum theorem will be a consequence of Seneta [5] s theorem for branching processes with immigration. Exercise 3.5 (Seneta s theorem) Let Z n denote the number of individuals in the n-th generation of a branching process with immigration (Y n ). Assume that < m <, where m denotes the expectation of the reproduction law. (i) If E(log + Y ) <, then lim n Z n m n exists and is finite a.s. (ii) If E(log + Y ) =, then limsup n Z n m n =, a.s. Proof of the Kesten Stigum theorem. Assume < m <. If i= p iilogi <, then E(log + N) <. By Seneta s theorem, limn W n exists P-a.s. and is finite P-a.s. By (3.2), E(W) = ; in particular, P(W = 0) < and thus P(W = 0) = q (Observation.2). If i= p iilogi =, then E(log + N) =. By Seneta s theorem, limn W n exists P-a.s. and is infinite P-a.s. By (3.3), E(W) = 0, and thus P(W = 0) =. 6 That is, if η is P-integrable, then E(η F n ) converges in L (P) and P-almost surely, when n, to E(η F ).

14 0 Chapter. Galton Watson trees 4. The Seneta Heyde norming In the supercritical case, the Kesten Stigum theorem (Theorem.5) tells us that under the condition E(Z log + Z Z ) <, n converges almost surely to a limit (i.e., W), which m n vanishes precisely on the set of extinction. It turns out that even without the condition E(Z log + Z ) <, theconclusion still holdstrueif we areallowed tomodifythenormalizing function. Theorem 4. (Seneta [50], Heyde [3]) Assume < m <. Then there exists a sequence (c n ) of positive constants such that c n+ c n m and that Zn c n has a (finite) almost sure limit vanishing precisely on the set of extinction. Proof. We note that f is well-defined on [p 0, ]. Let s 0 (q, ) and define by induction s n := f (s n ), n 0. Clearly, s n. Conditioning on F n, Z n is the sum of Z n i.i.d. random variables having the common distribution which is that of Z ; thus E(s Zn n F n ) = f(s n ) Z n, which implies that s Zn n is a martingale. Since it is also bounded, it converges almost surely and in L to a limit 7, say Y, with E(Y) = E(s Z 0 0 ) = s 0. Let c n := log(/s n), n 0. By definition, szn n = e Zn/cn Z, so that lim n n c n exists a.s., and lies in [0, ]. By l Hôpital s rule, lim s logf(s) logs f (s)s = lim = m. s f(s) This yields c n+ c n m. Consider the set A := {lim n Z n c n = 0}. Let as before T,, T Z denote the subtrees rooted at each of the individuals in the first generation. Then 8 P(A) = P(T A) = E{P(T A Z )} E{P(T A,,T Z A Z )}, the inequality being a consequence of the fact that c n+ c n m. Since T,, T Z are i.i.d. given Z, we have P(T A,,T Z A Z ) = [P(A)] Z ; therefore, P(A) E{[P(A)] Z } = f(p(a)). 7 We use the fact that if (ξ n ) is a martingale with E(sup n ξ n ) <, then it converges almost surely and in L. 8 In the literature, we say that the property A is inherited. What we have proved here says that an inherited property has probability either 0 or given non-extinction.

15 5 Notes On the other hand, P(A) q. Thus P(A) {q, }, and P(A non-extinction) {0, }. Since E(Y) = s 0 <, this yields P(A non-extinction) = 0; in other words, {lim n Z n c n = 0} = {extinction} almost surely. Similarly, we can pose B := {lim n Z n c n < } and check that P(B) f(p(b)). Since P(B) q, we have P(B non-extinction) {0, }. Now, E(Y) = s 0 > q, we obtain P(B non-extinction) =. Remark. Of course 9, in the Seneta Heyde theorem, we have c n m n if and only if E(Z log + Z ) <. 5. Notes Section concerns elementary properties of Galton Watson processes. For a general account, werefertostandardbookssuchasasmussen andhering[3], Athreya andney[4], Harris[30]. The formulation of branching processes described at the beginning of Section 2 is due to Neveu [44]; the idea of viewing Galton Watson branching processes as tree-valued random variables can be found in Harris [30]. The idea of size-biased branching processes, which goes back at least to Kahane and Peyrière[33], has been used by several authors in various contexts. Its presentation in Section 2, as well as its use to prove the Kesten Stigum theorem, comes from Lyons, Pemantle and Peres [4]. Size-biased branching processes can actually be used to prove the corresponding results of the Kesten Stigum theorem in the critical and subcritical cases. See [4] for more details. The short proof of Seneta s theorem (Exercise 3.5) is borrowed from Asmussen and Hering [3], pp The Seneta Heyde theorem (Theorem 4.) was first proved by Seneta for convergence in distribution, and then by Heyde for almost sure convergence. The short proof is from Lyons and Peres [42]. For another simple proof, see Grey [25]. 9 By a n b n, we mean 0 < liminf n a n bn limsup n a n bn <.

16 2 Chapter. Galton Watson trees

17 Chapter 2 Branching random walks and the law of large numbers The Galton Watson branching process simply counts the number of particles in each generation. In this chapter, we make an extension in the spatial sense by associating each individual of the Galton Watson process with a random variable. This results to a branching random walk. We study, in a first section, a simple example of branching random walk; in particular, the idea of using the Cramér Chernoff large deviation theorem appears in a natural way. We then put this idea into a general setting, and prove a law of large numbers for the branching random walk. Our basic technique relies, once again, on (a spatial version of) size-biased trees. In particular, this technique also gives a spatial version of the Kesten Stigum theorem, namely, the Biggins martingale convergence theorem.. Warm-up We study a simple example of branching random walk in this section. Let T be a binary tree rooted at. Let (ξ x, x T) be a collection of i.i.d. random variables indexed by all the vertices of T. To simplify the situation, we assume that the common distribution of ξ x is the uniform distribution on [0, ]. For any vertex x T, let [, x] denote the shortest path connecting to x. If x is in the n-th generation, then [, x] is composed of n+ vertices, each one is the parent of the next vertex whereas the last vertex is simply x. Let ], x] := [, x]\{ }. We define By an abuse of notation, we keep using T to denote the vertices of T. 3

18 4 Chapter 2. Branching random walks and the law of large numbers V( ) := 0 and V(x) := y ]],x]] ξ y, x T\{ }. Then (V(x), x T) is anexample of branching random walk which we study in its generality in the next section. For the moment, we continue with our example. For any x in the n-th generation, V(x) is by definition sum of n i.i.d. uniform-[0, ] random variables, so by the usual law of large numbers, V(x) would be approximatively n 2 when n is large. We are interested in the asymptotic behaviour, when n, of 2 n inf x =n V(x). If, with probability one, γ := lim n n inf x =nv(x) exists and is a constant, then γ 2 according to the above discussion. A little more thought will convince us that the inequality would be strict: γ <. Here is why. 2 Let s (0, ). For any x = n, the probability P{V(x) sn} goes to 0 when n 2 (weak law of large numbers); this probability is exponentially small (the Cramér Chernoff theorem 3 ): P{V(x) sn} exp( I(s)n), for some constant I(s) > 0 depending on s which is explicitly known. Let N(s,n) be the number of x in the n-th generation such that V(x) sn. Then E(N(s,n)) is approximatively 2 n exp( I(s)n). In particular, if I(s) < log2, then E(N(s,n)), and one would expect that N(s,n) would be at least one when n is sufficiently large. If indeed N(s,n), then inf x =n V(x) sn, which would yield γ s <. On the other hand, if I(s) > log2, then by Chebyshev s inequality 2 P{N(s,n) } E(N(s,n)), np{n(s,n) } <, so a Borel Cantelli argument tells that γ s. 2 If the heuristics were true, then one would get γ = inf{s < : I(s) < log2} = sup{s < 2 : I(s) > log2} (admitting that the last identity holds). In the next sections, we will develop the heuristics into a rigorous argument. We will prove the Biggins martingale convergence theorem (which is a spatial version of the Kesten Stigum theorem) by means of size-biased branching random walks; the martingale convergence theorem allows us to avoid discussing the technical point about whether N(s,n) when E(N(s, n)) is very large. 2 Throughout, x denote the generation of the vertex x. 3 The Cramér Chernofftheorem saysthat if X, X 2, arei.i.d. real-valued randomvariables, then under some mild general assumptions, P{ n i= X i sn} (for s < E(X )) and P{ n i= X i s n} (for s > E(X )) decay exponentially when n. This can be formulated in a more general setting, known as the large deviation theory; see Dembo and Zeitouni [20].

19 2 Law of large numbers 5 2.Law of large numbers The (discrete-time one-dimensional) branching random walk is a natural extension of the Galton Watson tree in the spatial sense; its distribution is governed by a point process which we denote by Θ. An initial ancestor is born at the origin of the real line. Its children, who form the first generation, are positioned according to the point process Θ. Each of the individuals in the first generation produces children who are thus in the second generation and are positioned (with respect to their born places) according to the same point process Θ. And so on. We assume that each individual reproduces independently of each other and of everything else. The resulting system is called a branching random walk. It is clear that if we only count the number of individuals in each generation, we get a Galton Watson process, with #Θ being its reproduction distribution. The example insectioncorresponds tothespecial casethatthepoint process Θconsists of two independent uniform-[0, ] random variables. Let (V(x), x = n) denote the positions of the individuals in the n-th generation. We are interested in the asymptotic behaviour of inf x =n V(x). (2.) Let us introduce the (log-)laplace transform of the point process ( ψ(t) := loge x = Throughout this chapter, we assume: ψ(t) < for some t > 0; ψ(0) > 0. e tv(x) ) (, ], t 0. The assumption ψ(0) > 0 is equivalent to E(#Θ) >, i.e., the associated Galton Watson tree is supercritical. The main result of this chapter is as follows. Theorem 2. If ψ(0) > 0 and ψ(t) < for some t > 0, then almost surely on the set of non-extinction, where (2.2) lim n γ := inf{a R : J(a) > 0}, n inf V(x) = γ, x =n J(a) := inf[ta+ψ(t)], a R. t 0

20 6 Chapter 2. Branching random walks and the law of large numbers If instead we want to know about sup x =n V(x), we only need to replace the point process Θ by Θ. Theorem 2. is proved in Section 3 for the lower bound, and in Section 5 for the upper bound. We close this section with a few elementary properties of the functions J( ) and ψ( ). We assume that ψ(t) < for some t > 0. Clearly, J is concave, being the infimum of concave (linear) function; in particular, it is continuous on the interior of {a R : J(a) > }. Also, it is obvious that J is non-decreasing. We recall that a function f is said to be lower semi-continuous at point t if for any sequence t n t, liminf n f(t n ) f(t), and that f is lower semi-continuous if it is lower semi-continuous at all points. It is well-known and easily checked by definition that f is lower semi-continuous if and only if for any a R, {t : f(t) a} is closed. Exercise 2.2 Prove that ψ is convex and lower semi-continuous on [0, ). Exercise 2.3 Assume that ψ(t) < for some t > 0. Then for any t 0, ψ(t) = sup[j(a) at]. a R Hint: A property of the Legendre transformation f (x) := sup a R (ax f(a)): a function f : R (, ] is convex and lower semi-continuous if and only if 4 (f ) = f. Exercise 2.4 Assume that ψ(t) < for some t > 0. Then γ = sup{a R : J(a) < 0} Furthermore, γ is the unique solution of J(γ) = 0. 3.Proof of the law of large numbers: lower bound Fix an a R such that J(a) < 0. Let L n = L n (a) := {V(x) na}. x =n 4 The if part is trivial, whereas the only if part is proved by the fact that f(x) = sup{g(x) : g affine, g f}.

21 4 Size-biased branching random walk and martingale convergence 7 By Chebyshev s inequality, P(L n > 0) = P(L n ) E(L n ). Let t 0. Since {V (x) na} e nat tv (x), we have ( P(L n > 0) e nat E x =n e tv(x) ). To compute the expectation expression on the right-hand side, let F k (for any k) be the sigma-algebra generated by the first k generations of the branching random walk; then E( x =n e tv (x) F n ) = y =n e tv (y) e ψ(t). Taking expectation on both sides gives E( x =n e tv (x) ) = e ψ(t) E( x =n e tv (x) ), which is e nψ(t) by induction. As a consequence, P(L n > 0) e nat+nψ(t), for any t 0. Taking infimum over all t 0, this leads to: ( ) P(L n > 0) exp ninf (at+ψ(t)) = e nj(a). t 0 By assumption, J(a) < 0, so that n P(L n > 0) <. By the Borel Cantelli lemma, with probability one, for all sufficiently large n, we have L n = 0, i.e., inf x =n V(x) > na. The lower bound in Theorem 2. follows. 4. Size-biased branching random walk and martingale convergence Let β R be such that ψ(β) := loge{ x = e βv(x) } R. Let W n (β) := e nψ(β) x =n e βv(x) = e βv (x) nψ(β), n. [When β = 0, W n (0) is the martingale we studied in Chapter.] Using the branching structure, we immediately see that (W n (β), n ) is a martingale with respect to (F n ), where F n is the sigma-field induced by the first n generations of the branching random walk. Therefore, W n (β) W(β) a.s., for some non-negative random variable W(β). Fatou s lemma says that E[W(β)]. x =n Here is Biggins s martingale convergence theorem, which is the analogue of the Kesten Stigum theorem for the branching random walk. Theorem 4. (Biggins [8]) If ψ(β) < and ψ (β) := e ψ(β) E{ x = V(x)e βv(x) } exists and is finite, then E[W(β)] = P{W(β) > 0 non-extinction} = E[W (β)log + W (β)] < and βψ (β) < ψ(β).

22 8 Chapter 2. Branching random walks and the law of large numbers A similar remark as in the Kesten Stigum theorem applies here: the condition P{W(β) > 0 non-extinction} =, which means P[W(β) = 0] = q, is easily seen to be equivalent to P[W(β) = 0] < via a similar argument as in Observation.2 of Chapter. The proof of the theorem relies the notion of size-biased branching random walks, which is an extension in the spatial sense of size-biased Galton Watson processes. We only describe the main idea; for more details, we refer to Lyons [40]. Some basic notation is in order (for more details, see Neveu [44]). Let U := { } k= (N ) k as in Section 2 of Chapter. Let U := {(u,v(u)) : u U, V : U R}. Let Ω be Neveu s space of marked trees, which consists of all the subsets ω of U such that the first component of ω is a tree. [Attention: Ω is different from the Ω in Section 2 of Chapter.] Let T : Ω Ω be the identity application. According to Neveu [44], there exists a probability P on Ω such that the law of T under P is the law of the branching random walk described in the previous section. According to Kolmogorov s extension theorem, there exists a unique probability P = P(β) on Ω such that for any n, P Fn = W n (β) P Fn. The law of T under P is called the law of a size-biased branching random walk. Here is a simple description of the size-biased branching random walk (i.e., the distribution of the branching random walk under P). The offspring of the root =: w 0 is generated according to the biased distribution 5 Θ = Θ(β). Pick one of these offspring w at random; the probability that a given vertex x is picked up as w is proportional to e βv(x). The children other than w give rise to independent ordinary branching random walks, whereas the offspring of w is generated according to the biased distribution Θ (and independently of others). Again, pick one of the children of w at random with probability which is inversely exponentially proportional to (β times) the displacement, call it w 2, with the others giving rise to independent ordinary branching random walks while w 2 produce offspring according to the biased distribution Θ, and so on. Proof of Theorem 4.. If β = 0 or if the point process Θ is deterministic, then Theorem 4. is reduced to the Kesten Stigum theorem (Theorem.5) proved in the previous chapter. So let us assume that β 0 and that Θ is not deterministic. (i) Assume the condition on the right-hand side fails. We claim that in this case, limsup n W n (β) = P-a.s.; thus by (3.3) of Chapter, E[W(β)] = 0; a fortiori, 5 That is, a point process whose distribution (under P) is the distribution of Θ under P.

23 4 Size-biased branching random walk and martingale convergence 9 P{W(β) > 0 non-extinction} = 0. To see why limsup n W n (β) = P-a.s., we distinguish two possibilities. First possibility: βψ (β) ψ(β). Then lim sup[ βv(w n ) nψ(β)] =, n P-a.s. [Thisisobviousifβψ (β) > ψ(β), becausebythelawoflargenumbers, V(wn) n E P [V(w )] = E[ x = V(x)e βv (x) ψ(β) ] = ψ (β), P-a.s. Ifβψ (β) = ψ(β), thenthelawoflargenumbers ψ(β), P-a.s., and we still have 6 liminf β n [βv(w n )+nψ(β)] =, P-a.s.] Since W n (β) e βv(wn) nψ(β), we immediately get limsup n W n (β) = P-a.s., as says V(wn) n desired. Second possibility: E[W (β)log + W (β)] =. In this case, we argue that W n+ (β) = e βv(x) (n+)ψ(β) x =n e βv(wn) (n+)ψ(β) y =n+, y>x β[v(y) V (x)] e y =n+, y>w n e β[v(y) V(wn)] =: I n II n. Since II n are i.i.d. (under P) with E P (log+ II 0 ) = E[W (β)log + W (β)] =, it follows from Exercise 3. (ii) of Chapter that lim sup n n log+ II n =, P-a.s. Ontheotherhand, V(wn) n ψ (β), P-a.s.(lawoflargenumbers), thisyieldslimsup n I n II n =, P-a.s., which, again, leads to limsup n W n (β) = P-a.s. (ii) We now assume that the condition on the right-hand side of the Biggins martingale convergence theorem is satisfied, i.e., βψ (β) < ψ(β) and E[W (β)log + W (β)] <. Let G be the sigma-algebra generated by w n and V(w n ) as well as offspring of w n, for all n 0. Then E P[W n (β) G] = = n e βv(w k) (n+)ψ(β) k=0 +e βv(wn) (n+)ψ(β) n e βv(w k) (k+)ψ(β) k=0 x =k+: x>w k, x w k+ e β[v(x) V(wk)] e [n (k+)]ψ(β) n e β[v(x) V(wk)] x =k+: x>w k i= e βv(w i) iψ(β) 6 It is an easy consequence of the central limit theorem that if X, X 2, are i.i.d. random variables with E(X ) = 0 and E(X 2 ) <, then limsup n n i= X i = and liminf n n i= X i =, a.s., as long as P{X = 0} <.

24 20 Chapter 2. Branching random walks and the law of large numbers = n n I k II k e ψ(β) I i. k=0 i= Since II n are i.i.d. (under P) with E P(log + II 0 ) = E[W (β)log + W (β)] <, it follows from Exercise 3. (i) of Chapter that lim n n log+ II n = 0, P-a.s. On the other hand, I n decays exponentially fast (because βψ (β) < ψ(β)). It follows that n k=0 I k II k e ψ(β) n i= I k converges P-a.s. By (theconditional version of) Fatou slemma, E P{liminf n W n (β) G} <, P-a.s. Recall that P Fn = P W n(β) Fn. We claim that is a P-supermartingale 7 W n(β) : indeed, for any n j and A F j, we have E P[ W n(β) A] = P{W n (β) > 0, A} P{W j (β) > 0, A} = E P[ W j (β) A], thus E P[ F W n(β) j] W j as claimed. Since (β) W n(β) is a positive P-supermartingale, it converges P-a.s., to a limit which, according to what we have proved in the last paragraph, is P-almost surely (strictly) positive. We can write as follow: limsup n W n (β) <, P-a.s. According to (3.2) of Chapter, this yields E[W(β)] =, which obviously implies P[W(β) = 0] <, and which is equivalent to P{W(β) > 0 non-extinction} = as pointed out in the remark after Theorem 4.. This completes the proof of Theorem 4.. We end this section with the following re-statement of the size-biased branching random walk: for any n, P Fn := W n (β) P Fn, P(w n = x F n ) = e βv (x) = e βv (x) nψ(β) y =n e βv (y) W n(β) for any x = n; under P, we have an (infinite) spine and i.i.d. copies of the usual branching randomwalk. Alongthespine, V(w i ) V(w i ), i,arei.i.d.under P,andthedistribution of V(w ) under P is given by E P[F(V(w ))] = E{ x = F(V(x))e βv(x) ψ(β) }, for any measurable function F : R R +. Here is a consequence of the decomposition of the size-biased branching random walk, which will be of frequent use. Corollary 4.2 Assume ψ(β) <. Then for any n and any measurable function g : R R +, (4.) { E x =n } e βv(x) nψ(β) g(v(x)) = E[g(S n )], 7 In general, W n(β) is not a P-martingale; see Harris and Roberts [29]. I am grateful to an anonymous referee for having pointed out an erroneous claim in a first draft.

25 5 Proof of the law of large numbers: upper bound 2 w 0 = w BRW w 2 BRW BRW BRW BRW w 3 BRW BRW Figure 6: A size-biased branching random walk where S n := n i= X i, and (X i ) is a sequence of i.i.d. random variables such that E[F(X )] = E{ x = F(V(x))e βv(x) ψ(β) }, for any measurable function F : R R +. Proof of Corollary 4.2. Let LHS (4.) denote the expression on the left hand-side of (4.). By definition, LHS (4.) = E P { x =n W n(β) g(v(x))}. Recall that P(w n = x F n ) =, this yields LHS (4.) = E P{ x =n {w n=x}g(v(x))}, which is E P[g(V(w n ))]. It e βv (x) nψ(β) W n(β) e βv(x) nψ(β) remains to recall that V(w i ) V(w i ), i, are i.i.d. under P having the distribution of X. 5.Proof of the law of large numbers: upper bound We prove the upper bound in the law of large numbers under the additional assumptions that ψ(t) <, t 0 and that E[W (β)log + W (β)] <, β > 0. Exercise 5. Assume that ψ(t) <, t 0, and that ψ(0) > 0. Let a R be such that 0 < J(a) < ψ(0). Then there exists β > 0 such that ψ (β) = a. Let a R be such that 0 < J(a) < ψ(0). According to Exercise 5., ψ (β) = a for some β > 0. In particular, 0 < J(a) = aβ +ψ(β) = βψ (β)+ψ(β). Let W n (β) := x =n e βv (x) nψ(β) as before. Let ε > 0 and let n := e βv(x) nψ(β). x =n: V(x) na >εn

26 22 Chapter 2. Branching random walks and the law of large numbers By Corollary 4.2 and in its notation, E( n ) = P( S n na > εn). Recall that S n = n i= X i, with (X i ) i.i.d. such that E[F(X )] = E{ x = F(V(x))e βv (x) ψ(β) }, for any measurable function F : R R +. In particular, E(X ) = a. By the Cramér Chernoff large deviation theorem, n P( S n na > εn) <. Therefore, n n < a.s. In particular, n 0, a.s. By definition, W n (β) = We know n 0 a.s., so that x =n: V(x) na εn e βv(x) nψ(β) + n e βn(a ε) nψ(β) #{ x = n : V(x) (a+ε)n}+ n. lim inf n e βn(a ε) nψ(β) #{ x = n : V(x) (a+ε)n} W(β), a.s. Since βψ (β) < ψ(β) and E[W (β)log + W (β)] <, it follows from the Biggins martingale convergence theorem (Theorem 4.) that W n (β) W(β) > 0 almost surely on nonextinction. As a consequence, on the set of non-extinction, lim sup n n inf V(x) a+ε, a.s. x =n This completes the proof of the upper bound in Theorem Notes The law of large numbers (Theorem 2.) was first proved by Hammersley [26] for the Bellman Harris process, by Kingman [36] for the (strictly) positive branching random walk, and by Biggins [7] for the branching random walk. The proof of the upper bound in the law of large numbers, presented in Section 5 based on martingale convergence, is not the original proof given by Biggins [7]. Biggins proof relies on constructing an auxiliary branching process, as suggested by Kingman [36]. The short proof of the Biggins martingale convergence theorem in Section 4, via sizebiased branching random walks, is borrowed from Lyons [40].

27 Chapter 3 Branching random walks and the central limit theorem This chapter is a continuation of Chapter 2. In Section, we state the main result, a central limit theorem for the minimal position in the branching random walk. We do not directly study the minimal position, but rather investigate some martingales involving all the living particles in the same generation, but to which only the minimal positions make significant contributions. The study of these martingale relies, again, on the idea of sizebiased branching random walks, and is also connected to problems for directed polymers on a tree. Unfortunately, our central limit theorem does not apply to all branching random walks; a particular pathological case is analyzed in Section 9. At the end of the chapter, we mention a few related models of branching random walks.. Central limit theorem Let (V(x), x = n) denote the positions of the branching random walk. The law of large numbers proved in the previous chapter states that conditional on non-extinction, lim n n inf V(x) = γ, a.s., x =n whereγ istheconstantin(2.2)ofchapter2. Itisnaturaltoaskabouttherateofconvergence in this theorem. Let us look at the special case of i.i.d. random variables assigned to the edges of a rooted regular tree. It turns out that inf x =n V(x) has few fluctuations with respect to, say, its median m V (n). In fact, the law of inf x =n V(x) m V (n) is tight! This was first proved by Bachmann [5] for the branching random walk under the technical condition that the common 23

28 24 Chapter 3. Branching random walks and the central limit theorem distribution of the i.i.d. random variables assigned on the edges of the regular tree admits a density function which is log-concave (a most important example being the Gaussian law). This technical condition was recently removed by Bramson and Zeitouni[5]. See also Section 5 of the survey paper by Aldous and Bandyopadhyay [2] for other discussions. Finally, let us mention the recent paper of Lifshits [38], where an example of branching random walk is constructed such that the law of inf x =n V(x) m V (n) is tight but does not converge weakly. Throughout the chapter, we assume that for some δ > 0, δ + > 0 and δ > 0, {( ) +δ } (.) E <, (.2) { E x = } e (+δ +)V(x) +E x = { x = e δ V(x) } <, We recall the log-laplace transform { } ψ(t) := loge e tv(x) (, ], t 0. x = By (.2), ψ(t) < for t [ δ, +δ + ]. Following Biggins and Kyprianou [], we assume (.3) ψ(0) > 0, ψ() = ψ () = 0. For comments on this assumption, see Remark (ii) below after Theorem.. Under (.3), the value of the constant γ defined in (2.2) of the previous chapter is γ = 0, so that Theorem 2. of the last chapter reads: almost surely on the set of non-extinction, (.4) lim n n inf V(x) = 0. x =n On the other hand, under (.3), Theorem 4. of the last chapter tells us that x =n e V(x) 0 a.s., which yields that, almost surely, inf V(x) +. x =n In other words, on the set of non-extinction, the system is transient to the right. A refinement of (.4) is obtained by McDiarmid [43]. Under the additional assumption E{( x = )2 } <, it is proved in [43] that for some constant c < and conditionally on the system s non-extinction, lim sup n We now state a central limit theorem. logn inf x =n V(x) c, We recall that the assumption ψ(0) > 0 in (.3) means that the associated Galton Watson tree is supercritical. a.s.

29 2 Directed polymers on a tree 25 Theorem. Assume (.), (.2) and (.3). On the set of non-extinction, we have (.5) (.6) (.7) lim sup n lim inf n lim n logn inf V(x) = 3 x =n 2, logn inf V(x) = x =n 2, logn inf V(x) = 3 x =n 2, a.s. a.s. in probability. Remark. (i) The most interesting part of Theorem. is possibly (.5) (.6). It reveals the presence of fluctuations of inf x =n V(x) on the logarithmic level, which is in contrast with a known result of Bramson [4] stating that for a class of branching random walks, inf loglogn x =nv(x) converges almost surely to a finite and positive constant. (ii) Some brief comments on (.3) are in order. In general (i.e., without assuming ψ() = ψ () = 0), if (.8) t ψ (t ) = ψ(t ) for some t (0, ), then the branching random walk associated with the point process V(x) := t V(x) + ψ(t ) x satisfies (.3). That is, as long as (.8) has a solution (which is the case for example if ψ() = 0 and ψ () > 0), the study will boil down to the case (.3). It is, however, possible that (.8) has no solution. In such a situation, Theorem. does not apply. For example, we have already mentioned a class of branching random walks exhibited in Bramson [4], for which inf x =n V(x) has an exotic loglogn behaviour. See Section 9 for more details. (iii) Under (.3) and suitable integrability assumptions, Addario-Berry and Reed [] obtain a very precise asymptotic estimate of E[inf x =n V(x)], as well as an exponential upper bound for the deviation probability for inf x =n V(x) E[inf x =n V(x)], which, in particular, implies (.7). (iv) In the case of (continuous-time) branching Brownian motion, a refined version of the analogue of (.7) was proved by Bramson [3], by means of some powerful explicit analysis. 2.Directed polymers on a tree The following model is borrowed from the well-known paper of Derrida and Spohn [22]: Let T be a rooted Cayley tree; we study all self-avoiding walks (= directed polymers) of n steps on T starting from the root. To each edge of the tree, is attached a random variable (= potential). We assume that these random variables are independent and identically

30 26 Chapter 3. Branching random walks and the central limit theorem distributed. For each walk ω, its energy E(ω) is the sum of the potentials of the edges visited by the walk. So the partition function is Z n (β) := ω e βe(ω), where the sum is over all self-avoiding walks of n steps on T, and β > 0 is the inverse temperature. 2 More generally, we take T to be a Galton Watson tree, and observe that the energy E(ω) corresponds to (the partial sum of) the branching random walk described in the previous sections. The associated partition function becomes (2.) Z n,β := e βv(x), β > 0. x =n If 0 < β <, the study of Z n,β boils down to the case ψ () < 0 which is covered by the Biggins martingale convergence theorem (Theorem 4. in Chapter 2). In particular, on the Z set of non-extinction, n,β E{Z n,β converges almost surely toa(strictly) positive random variable. } We now study the case β. If β =, we write simply Z n instead of Z n,. Theorem 2. Assume (.), (.2) and (.3). On the set of non-extinction, we have (2.2) Z n = n /2+o(), a.s. Theorem 2.2 Assume (.), (.2) and (.3), and let β >. On the set of non-extinction, we have (2.3) lim sup n logz n,β logn = β 2, a.s. (2.4) (2.5) lim inf n logz n,β logn = 3β 2, a.s. Z n,β = n 3β/2+o(), in probability. Again, the most interesting part in Theorem 2.2 is probably (2.3) (2.4), which describes a new fluctuation phenomenon. Also, there is no phase transition any more for Z n,β at β = from the point of view of upper almost sure limits. An important step in the proof of Theorems 2. and 2.2 is to estimate all small moments of Z n and Z n,β, respectively. This is done in the next theorems. 2 There is hopefully no confusion possible between Z n here and the number of individuals in a Galton Watson process studied in Chapter.

31 3 Small moments of partition function: upper bound in Theorem Theorem 2.3 Assume (.), (.2) and (.3). For any a [0, ), we have 0 < liminf E{ (n /2 Z n ) a} limsupe { (n /2 Z n ) a} <. n Theorem 2.4 Assume (.), (.2) and (.3), and let β >. For any 0 < r <, we have β n (2.6) E { Z r n,β} = n 3rβ/2+o(), n. We prove the theorems of this chapter under the additional assumption that for some constant C, sup x = V(x) +#{x : x = } C a.s. This assumption is not necessary, but allows us to avoid some technical discussions. 3. Small moments of partition function: upper bound in Theorem 2.4 This section is devoted to (a sketch of) the proof of the upper bound in Theorem 2.4; the upper bound in Theorem 2.3 can be proved in a similar spirit, but needs more care. We assume (.), (.2) and (.3), and fix β >. Let P be such that P Fn = Z n P Fn, n. For any Y 0 which is F n -measurable, we have E{Z n,β Y} = E P { x =n e βv (x) Z n Y} = E P { x =n {w n=x}e (β )V (x) Y}, and thus (3.) E{Z n,β Y} = E P{e (β )V (wn) Y}. Let s ( β, ), and λ > 0. (We will choose λ = 3.) Then β 2 E { } Z s n,β n ( s)βλ +E { Z s n,β } {Z n,β >n βλ } { e = n ( s)βλ (β )V (w n) (3.2) +E P Z s n,β {Zn,β >n βλ } We now estimate the expectation expression E P{ } on the right-hand side. Let a > 0 and > b > 0 be constants such that (β )a > sβλ+ 3 and [βs (β )]b > 3. (The choice 2 2 of willbemadepreciselateron.) Letw n [, w n ]besuchthatv(w n ) = min x [[,wn]]v(x), and consider the following events: E,n := {V(w n ) > alogn} {V(w n ) blogn}, E 2,n := {V(w n ) < logn, V(w n ) > blogn}, E 3,n := {V(w n ) logn, blogn < V(w n ) alogn}. }.

32 28 Chapter 3. Branching random walks and the central limit theorem Clearly, P( 3 i= E i,n) =. On the event E,n {Z n,β > n βλ }, we have either V(w n ) > alogn, in which case e (β )V (wn) Zn,β s n sβλ (β )a, or V(w n ) blogn, in which case we use the trivial inequality Z n,β e βv(wn) to see that e (β )V (wn) Z s n,β e [βs (β )]V(wn) n [βs (β )]b (recalling that βs > β ). Since sβλ (β )a < 3 and [βs (β )]b > 3, we obtain: 2 2 (3.3) E P { e (β )V (w n) Z s n,β E,n {Z n,β >n βλ } } n 3/2. WenowstudytheintegralonE 2,n {Z n,β > n βλ }. Since s > 0, wecanchooses > 0and s 2 > 0 (with s 2 sufficiently small) such that s = s +s 2. We have, on E 2,n {Z n,β > n βλ }, (β )V (wn) e Zn,β s = eβs 2V(w n ) (β )V (w n) Z s n,β e βs 2V(w n ) Z s 2 n,β n βs 2 +(β )b+βλs e βs 2V(w n ). Z s 2 n,β We admit that for small s 2 > 0, E P[ e βs 2 V (w n ) Z s 2 ] n K, for some K > 0. [This actually is true n,β foranys 2 > 0.] Wechoose(andfix)theconstant solargethat βs 2 +(β )b+βλs +K < 3. Therefore, for all large n, 2 (3.4) E P { e (β )V (w n) Z s n,β E2,n {Z n,β >n βλ } } n 3/2. WemakeapartitionofE 3,n : letm 2beaninteger, andleta i := b+ i(a+b) M, 0 i M. By definition, E 3,n = M i=0 {V(w n ) logn, a i logn < V(w n ) a i+ logn} =: M i=0 E 3,n,i. Let 0 i M. There are two possible situations. First situation: a i λ. In this case, we argue that on the event E 3,n,i, we have Z n,β e βv(wn) n βa i+ and e (β )V (wn) n (β )a i, thus e (β )V (wn) Z s n,β Accordingly, in this situation, E P n βsa i+ (β )a i = n βsa i (β )a i +βs(a+b)/m n [βs (β )]λ+βs(a+b)/m. { e (β )V (w n) } E3,n,i n [βs (β )]λ+βs(a+b)/m P(E3,n,i ). Z s n,β Second (and last) situation: a i > λ. We have, on E 3,n,i {Z n,β > n βλ e (β )V (wn) }, Zn,β s n βλs (β )a i n [βs (β )]λ ; thus, in this situation, E P { e (β )V (w n) Z s n,β E3,n,i {Z n,β >n βλ } } n [βs (β )]λ P(E3,n,i ).

33 4 Small moments of partition function: lower bound in Theorem We have therefore proved that { e (β )V (w n) E P Zn,β s E3,n {Z n,β >n βλ } } = M i=0 E P { e (β )V (w n) Z s n,β n [βs (β )]λ+βs(a+b)/m P(E3,n ). E3,n,i {Z n,β >n βλ } } We need to estimate P(E 3,n ). Recall from Section 4 of Chapter 2 that V(w i ) V(w i ), i, are i.i.d. under P, and the distribution of V(w ) under P is that of X given in Corollary 4.2 of Chapter 2. Therefore, P(E 3,n ) = P{ min 0 k n S k logn, blogn S n alogn}, with S k := k i= X i as before. Since E(X ) = 0 (which is a consequence of the assumption ψ () = 0), the random walk (S k ) is centered; so the probability above is n (3/2)+o(). Combining this with (3.2), (3.3) and (3.4) yields E { } Z s n,β 2n ( s)βλ +2n 3/2 +n [βs (β )]λ+βs(a+b)/m (3/2)+o(). We choose λ := 3. Since M can be as large as possible, this yields the upper bound in 2 Theorem 2.4 by posing r := s. 4. Small moments of partition function: lower bound in Theorem 2.4 We now turn to (a sketch of) the proof of the lower bound in Theorem 2.4. [Again, the lower bound in Theorem 2.3 can be proved in a similar spirit, but with more care.] Assume (.), (.2) and (.3). Let β > and s ( β, ). By means of the elementary inequality (a+b) s a s +b s (for a 0 and b 0), we have, for some constant K, Z s n n,β K e ( ( s)βv (w j ) u I j j= where I j is the set of brothers 3 of w j. x =n, x>u e β[v(x) V (u)] ) s +e ( s)βv (w n), Let G n be the sigma-field generated by everything on the spine in the first n generations. Sincewearedealingwitharegulartree, #I j isboundedbyaconstant. Sothedecomposition 3 That is, the set of children of w j but excluding w j.

Modern Discrete Probability Branching processes

Modern Discrete Probability Branching processes Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson

More information

Chapter 2 Galton Watson Trees

Chapter 2 Galton Watson Trees Chapter 2 Galton Watson Trees We recall a few eleentary properties of supercritical Galton Watson trees, and introduce the notion of size-biased trees. As an application, we give in Sect. 2.3 the beautiful

More information

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a

More information

THE x log x CONDITION FOR GENERAL BRANCHING PROCESSES

THE x log x CONDITION FOR GENERAL BRANCHING PROCESSES J. Appl. Prob. 35, 537 544 (1998) Printed in Israel Applied Probability Trust 1998 THE x log x CONDITION FOR GENERAL BRANCHING PROCESSES PETER OLOFSSON, Rice University Abstract The x log x condition is

More information

Branching processes. Chapter Background Basic definitions

Branching processes. Chapter Background Basic definitions Chapter 5 Branching processes Branching processes arise naturally in the study of stochastic processes on trees and locally tree-like graphs. After a review of the basic extinction theory of branching

More information

Random trees and branching processes

Random trees and branching processes Random trees and branching processes Svante Janson IMS Medallion Lecture 12 th Vilnius Conference and 2018 IMS Annual Meeting Vilnius, 5 July, 2018 Part I. Galton Watson trees Let ξ be a random variable

More information

Mandelbrot s cascade in a Random Environment

Mandelbrot s cascade in a Random Environment Mandelbrot s cascade in a Random Environment A joint work with Chunmao Huang (Ecole Polytechnique) and Xingang Liang (Beijing Business and Technology Univ) Université de Bretagne-Sud (Univ South Brittany)

More information

Connection to Branching Random Walk

Connection to Branching Random Walk Lecture 7 Connection to Branching Random Walk The aim of this lecture is to prepare the grounds for the proof of tightness of the maximum of the DGFF. We will begin with a recount of the so called Dekking-Host

More information

Hard-Core Model on Random Graphs

Hard-Core Model on Random Graphs Hard-Core Model on Random Graphs Antar Bandyopadhyay Theoretical Statistics and Mathematics Unit Seminar Theoretical Statistics and Mathematics Unit Indian Statistical Institute, New Delhi Centre New Delhi,

More information

The range of tree-indexed random walk
