Variable Speed Branching Brownian Motion 1. Extremal Processes in the Weak Correlation Regime

ALEA, Lat. Am. J. Probab. Math. Stat. 12 (1), 261–291 (2015)

Variable Speed Branching Brownian Motion 1. Extremal Processes in the Weak Correlation Regime

Anton Bovier and Lisa Hartung

Institut für Angewandte Mathematik, Rheinische Friedrich-Wilhelms-Universität, Endenicher Allee 60, Bonn, Germany. E-mail addresses: lhartung@uni-bonn.de, bovier@uni-bonn.de

Abstract. We prove the convergence of the extremal processes for variable speed branching Brownian motions where the speed functions, which describe the time-inhomogeneous variance, lie strictly below their concave hull and satisfy a certain weak regularity condition. These limiting objects are universal in the sense that they depend only on the slope of the speed function at $0$ and at the final time $t$. The proof is based on previous results for two-speed BBM obtained in Bovier and Hartung (2014) and uses Gaussian comparison arguments to extend these to the general case.

1. Introduction

Gaussian processes indexed by trees are a topic that has received a lot of attention, in particular in the context of spin glass theory (see, e.g., Bovier (2006); Talagrand (2011a,b); Panchenko (2013)) through the so-called Generalised Random Energy Models (GREM), introduced and studied by Derrida (1985) and Gardner and Derrida (1986a,b). Other contexts where such processes appear are branching random walks (see, e.g., Bramson (1978b); Shi (2011); Zeitouni (2013)) and branching Brownian motion (see, e.g., Moyal (1957); McKean (1975); Bramson (1978a, 1983); Derrida and Spohn (1988)).

Received by the editors March 5, 2014; accepted March 11, 2015.
Mathematics Subject Classification: 60J80, 60G70, 82B44.
Key words and phrases: Gaussian processes, branching Brownian motion, variable speed, extremal processes, Gaussian comparison, cluster processes.
A.B. is partially supported through the German Research Foundation in the Collaborative Research Center 1060 "The Mathematics of Emergent Effects", the Priority Programme 1590 "Probabilistic Structures in Evolution", the Hausdorff Center for Mathematics (HCM), and the Cluster of Excellence "ImmunoSensation" at Bonn University. L.H. is supported by the German Research Foundation in the Bonn International Graduate School in Mathematics (BIGS).

One of the issues of interest in this context is to understand the structure of the extremal processes that arise in these models in the limit when the size of the tree tends to infinity. A Gaussian process on a tree is fully characterised by the tree and by its covariance, which in the models we are interested in is a function of the genealogical distance on the tree. In the classical models of branching random walk and branching Brownian motion, the covariance is a linear function of the tree-distance. In the context of the GREM, the tree is a binary tree with $N$ levels; another popular tree is a supercritical Galton-Watson tree (see, e.g., Athreya and Ney (1972)). These models generalise branching Brownian motion and were first introduced, to our knowledge, in Derrida and Spohn (1988).

In this paper we focus on this latter class of models. They can be constructed as follows. On some abstract probability space $(\Omega, \mathcal F, \mathbb P)$, define a supercritical Galton-Watson (GW) tree. The offspring distribution, $\{p_k\}_{k\in\mathbb N}$, is normalised for convenience such that $\sum_{k=1}^\infty p_k = 1$ and $\sum_{k=1}^\infty k p_k = 2$, and the second moment, $K = \sum_{k=1}^\infty k(k-1)p_k$, is assumed finite. We fix a time horizon $t > 0$. We denote the number of individuals (leaves of the tree) at time $t$ by $n(t)$ and label the leaves at time $t$ by $i_1(t), i_2(t), \dots, i_{n(t)}(t)$. For given $t$ and for $s \le t$, it is convenient to let $i_k(s)$ denote the ancestor of particle $i_k(t)$ at time $s$. Of course, in general there will be several indices $k, l$ such that $i_k(s) = i_l(s)$. The time of the most recent common ancestor of $i_k(t)$ and $i_l(s)$ is given, for $s, r \le t$, by

$d(i_k(r), i_l(s)) \equiv \sup\{u \le s \wedge r : i_k(u) = i_l(u)\}.$    (1.1)

We denote by $(\mathcal F^{\mathrm{tree}}_s, s \in \mathbb R_+)$ the σ-algebra generated by the Galton-Watson process up to time $s$.

On the same probability space we will now construct, for given $t$, and for any realisation of the GW tree, a Gaussian process as follows. Let $A : [0,1] \to [0,1]$ be a right-continuous non-decreasing function. We define a Gaussian process, $x$, labelled by the tree up to time $t$, i.e. by $\{i_k(s)\}_{s\le t,\,1\le k\le n(t)}$, with covariance, for $s, r \le t$ and $k, l \le n(t)$,

$\mathbb E[x_k(s)\, x_l(r)] = t\, A\bigl(t^{-1} d(i_k(r), i_l(s))\bigr).$    (1.2)

The existence of such a process is shown easily through a construction as time changed branching Brownian motion. Note first that, in the case when $A(x) = x$, this process is standard branching Brownian motion (Moyal (1957); Skorohod (1964)). For general $A$, the models can be constructed from time changed Brownian motion as follows. Let

$\Sigma^2(s) = t A(s/t).$    (1.3)

Note that $\Sigma^2$ is almost everywhere differentiable, and denote by $\sigma^2(s)$ its derivative wherever it exists. Define the process $\{B^\Sigma_s\}_{s\le t}$ on $[0,t]$ as a time change of ordinary Brownian motion, $B$, via

$B^\Sigma_s = B_{\Sigma^2(s)}.$    (1.4)

Branching Brownian motion with speed function $\Sigma^2$ is constructed like ordinary branching Brownian motion, except that if a particle splits at some time $s < t$, then the offspring particles perform variable speed Brownian motions with speed function $\Sigma^2$, i.e. they are independent copies of $\{B^\Sigma_r - B^\Sigma_s\}_{t\ge r\ge s}$, all starting at the position of the parent particle at time $s$. We refer to these processes as variable speed branching Brownian motions. This class of processes, labelled by the different choices of functions $A$, provides an interesting set of examples to study the possible limiting extremal processes for correlated random variables.
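The time-change construction above is easy to simulate. The following minimal sketch is not from the paper; it assumes binary branching at rate $1$ (so that $\sum_k k p_k = 2$) and a crude Euler discretisation of the branching times, and all function and variable names are illustrative only.

```python
import numpy as np

def simulate_vsbbm(A, t, dt=0.01, branch_rate=1.0, rng=None):
    """Particle positions at time t of a variable speed BBM with profile A.

    Between branchings a particle moves as a time-changed Brownian motion with
    Sigma^2(s) = t*A(s/t): its increment over [s, s+h] has variance
    Sigma^2(s+h) - Sigma^2(s). Binary branching at rate `branch_rate`.
    """
    rng = np.random.default_rng() if rng is None else rng
    Sigma2 = lambda s: t * A(s / t)              # time change Sigma^2(s) = t A(s/t)
    positions = np.array([0.0])                  # one particle at the origin
    s = 0.0
    while s < t:
        h = min(dt, t - s)
        var = max(Sigma2(s + h) - Sigma2(s), 0.0)    # variance of the increment
        positions = positions + rng.normal(0.0, np.sqrt(var), size=positions.size)
        # each particle splits into two with probability ~ branch_rate * h
        splits = rng.random(positions.size) < branch_rate * h
        positions = np.concatenate([positions, positions[splits]])
        s += h
    return positions

if __name__ == "__main__":
    # example profile A(x) = x^2, which satisfies A(x) < x on (0, 1)
    pos = simulate_vsbbm(lambda x: x ** 2, t=10.0)
    print(pos.size, pos.max())
```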

The ultimate goal will be to describe the extremal processes in dependence on the function $A$.

Remark 1.1. Strictly speaking, we are not talking about a single stochastic process, but about a family, $\{x^{(t)}_k(s), k \le n(s)\}^{t\in\mathbb R_+}_{s\le t}$, of processes with finite time horizon, indexed by that horizon, $t$. That dependence on $t$ is usually not made explicit in order not to overburden the notation.

Branching Brownian motion has received a lot of attention over the last decades, with a strong focus on the properties of extremal particles. We mention the seminal contributions of McKean (1975), Bramson (1978a, 1983), Lalley and Sellke (1987), and Chauvin and Rouault (1988, 1990) on the connection to the Fisher-Kolmogorov-Petrovsky-Piscounov (F-KPP) equation (Fisher (1937); Kolmogorov et al. (1937)) and on the distribution of the rescaled maximum. In recent years, there has been a revival of interest in BBM with numerous contributions, including the construction of the full extremal process by Aïdékon et al. (2013) and Arguin et al. (2013). For a review of these developments see, e.g., the recent survey by Gouéré (2014) or the lecture notes Bovier (2015). Variable speed branching Brownian motion (as well as random walk) has recently been investigated by Fang and Zeitouni (2012a,b), Maillard and Zeitouni (2013), Mallein (2013), and the present authors, Bovier and Hartung (2014).

Naturally, the same construction can be done for any other family of trees. It is widely believed (see Zeitouni (2013)) that the resulting structures are very similar, with only details depending on the underlying tree model. More importantly, it is believed that the extremal structures of more general Gaussian processes, such as mean field spin glasses (Bolthausen and Kistler (2006, 2009)) or the Gaussian free field (Zeitouni (2013)), are of the same type; considerable progress in this direction has been made recently by Bramson et al. (2013) and by Biskup and Louidor (2013).

We are interested in understanding the nature of the extremes of our processes in dependence on the properties of the covariance function $A$. The case when $A$ is a step function with finitely many steps corresponds to Derrida's GREMs (Gardner and Derrida (1986b); Bovier and Kurkova (2004a)), the only difference being that the deterministic binary tree of the GREM is replaced by a Galton-Watson tree. It is very easy to treat this case. The case when $A$ is arbitrary has been dubbed CREM in Bovier and Kurkova (2004b) and was treated there for binary regular trees. In that case the leading order of the maximum was obtained, as well as the genealogical description of the Gibbs measures; this analysis carries over mutatis mutandis to the analogous BBM situation. The finer analysis of the extremes is, however, much more subtle and in general still open. Fang and Zeitouni (2012b) have obtained the order of the corrections (namely $t^{1/3}$) in the case when $A$ is strictly concave and continuous. These corrections come naturally from the probability of a Brownian bridge to stay away from a curved line, which was analysed earlier in Ferrari and Spohn (2005). There are, however, no results on the extremal process or the law of the maximum.

Another rather tractable situation occurs when $A$ is a piecewise linear function. The simplest case here corresponds to choosing a speed that takes just two values, i.e.

$\sigma^2(s) = \begin{cases} \sigma_1^2, & \text{for } 0 \le s < bt,\\ \sigma_2^2, & \text{for } bt \le s \le t,\end{cases}$    (1.5)

with $\sigma_1^2 b + \sigma_2^2 (1-b) = 1$. In this case, Fang and Zeitouni (2012b) have obtained the correct order of the logarithmic corrections. This case was fully analysed in a recent paper of ours, Bovier and Hartung (2014), where we provide the construction of the extremal processes.

In the present paper, we present the full picture in the case where $A(x) < x$ for all $x \in (0,1)$, and the slopes of $A$ at $0$ and at $1$ are different from $1$. We show that there is a large degree of universality, in that the limiting extremal processes are those that emerged in the two-speed case, and that they depend only on the slopes of $A$ at $0$ and at $1$. The critical cases, where $A(x) = x$ for some $x \in (0,1)$, involve, besides the well-understood standard BBM, a number of different situations that can be quite tricky, and we postpone this analysis to a forthcoming publication.

1.1. Results. We need some mild technical assumptions on the covariance function. Let $A : [0,1] \to [0,1]$ be a right-continuous, non-decreasing function that satisfies the following three conditions:

(A1) For all $x \in (0,1)$: $A(x) < x$, $A(0) = 0$ and $A(1) = 1$.

(A2) There exist $\delta_b > 0$ and functions $\underline B(x), \overline B(x) : [0,1] \to [0,1]$ that are twice differentiable in $[0, \delta_b]$ with bounded second derivatives, such that

$\underline B(x) \le A(x) \le \overline B(x)$, for $x \in [0, \delta_b]$,    (1.6)

with $\underline B'(0) = \overline B'(0) = A'(0)$.

(A3) There exist $\delta_e > 0$ and functions $\underline C(x), \overline C(x) : [0,1] \to [0,1]$ that are twice differentiable in $[1-\delta_e, 1]$ with bounded second derivatives, such that

$\underline C(x) \le A(x) \le \overline C(x)$, for $x \in [1-\delta_e, 1]$,    (1.7)

with $\underline C'(1) = \overline C'(1) = A'(1)$. The case $A'(1) = +\infty$ is allowed. This is to be understood in the sense that, for all $\rho < \infty$, there exists $\varepsilon > 0$ such that, for all $x \in [1-\varepsilon, 1]$, $A(x) \le 1 - \rho(1-x)$.

For standard BBM, $x(t)$, recall that Bramson (1978a) and Lalley and Sellke (1987) have shown that

$\lim_{t\uparrow\infty} \mathbb P\Bigl(\max_{k\le n(t)} x_k(t) - m(t) \le y\Bigr) = \mathbb E\bigl[e^{-C Z e^{-\sqrt 2 y}}\bigr] \equiv \omega(y)$,    (1.8)

where $m(t) = \sqrt 2\, t - \frac{3}{2\sqrt 2}\log t$, $Z$ is a random variable, the limit of the so-called derivative martingale, and $C$ is a constant. In Arguin et al. (2013) (see also Aïdékon et al. (2013) for a different proof) it was shown that the extremal process,

$\lim_{t\uparrow\infty} \sum_{k=1}^{n(t)} \delta_{x_k(t)-m(t)} \equiv \lim_{t\uparrow\infty} \widetilde{\mathcal E}_t = \widetilde{\mathcal E}$,    (1.9)

exists in law, and $\widetilde{\mathcal E}$ is of the form

$\widetilde{\mathcal E} = \sum_{k,j} \delta_{\eta_k + \Lambda^{(k)}_j}$,    (1.10)

where $\eta_k$ is the $k$-th atom of a Cox process (Cox (1955)) directed by the random measure $C Z e^{-\sqrt 2 y}\,dy$, with $C$ and $Z$ as before, and the $\Lambda^{(k)}_j$ are the atoms of independent

and identically distributed point processes $\Lambda^{(k)}$, which are the limits in law of

$\sum_{j\le n(t)} \delta_{x_j(t) - \max_{i\le n(t)} x_i(t)}$,    (1.11)

where $x(t)$ is BBM conditioned on the event $\max_{j\le n(t)} x_j(t) \ge \sqrt 2\, t$.

The main result of the present paper is the following theorem.

Theorem 1.2. Assume that $A : [0,1] \to [0,1]$ satisfies (A1)-(A3). Let $A'(0) = \sigma_b^2 < 1$ and $A'(1) = \sigma_e^2 > 1$. Let $\widetilde m(t) = \sqrt 2\, t - \frac{1}{2\sqrt 2}\log t$. Then there is a constant $C(\sigma_e)$, depending only on $\sigma_e$, and a random variable $Y_{\sigma_b}$, depending only on $\sigma_b$, such that

(i) $\lim_{t\uparrow\infty} \mathbb P\Bigl(\max_{1\le i\le n(t)} x_i(t) - \widetilde m(t) \le x\Bigr) = \mathbb E\bigl[e^{-C(\sigma_e)\,Y_{\sigma_b}\, e^{-\sqrt 2 x}}\bigr]$.    (1.12)

(ii) The point process

$\sum_{k\le n(t)} \delta_{x_k(t)-\widetilde m(t)} \to \mathcal E_{\sigma_b,\sigma_e} \equiv \sum_{i,j} \delta_{p_i + \sigma_e \Lambda^{(i)}_j}$,    (1.13)

as $t\uparrow\infty$, in law, where the $p_i$ are the atoms of a Cox process on $\mathbb R$ directed by the random measure $C(\sigma_e)\, Y_{\sigma_b}\, e^{-\sqrt 2 x}\,dx$, and the $\Lambda^{(i)}$ are the limits of the processes as in (1.11), but conditioned on the event $\{\max_k x_k(t) \ge \sqrt 2\,\sigma_e t\}$.

(iii) If $A'(1) = \infty$, then $C = 1/\sqrt{4\pi}$, and $\Lambda^{(i)} = \delta_0$, i.e. the limiting process is a Cox process.

The random variable $Y_{\sigma_b}$ is the limit of the uniformly integrable martingale

$Y_{\sigma_b}(s) = \sum_{i=1}^{n(s)} e^{-s(1+\sigma_b^2) + \sqrt 2\,\sigma_b\, x_i(s)}$,    (1.14)

where $x_i(s)$ is standard branching Brownian motion.

Remark 1.3. In Theorem 7.7 of Bovier and Hartung (2014) the constant $C(\sigma_e)$ is characterised by the tail behaviour of solutions to the F-KPP equation, namely

$C(\sigma_e) \equiv \sigma_e \lim_{t\uparrow\infty} e^{\sqrt 2 x}\, e^{x^2/2t}\, t^{1/2}\, u(t, x + \sqrt 2\, t)$,    (1.15)

where $x = \sqrt 2(\sigma_e - 1)t$, and $u$ solves the F-KPP equation

$\partial_t u(t,x) = \tfrac12 \partial_x^2 u(t,x) + (1 - u(t,x)) - \sum_{k=1}^\infty p_k (1 - u(t,x))^k$,    (1.16)

with initial condition $u(0,x) = \mathbf 1_{x<0}$.

Remark 1.4. The special case of Theorem 1.2 when $A$ consists of two linear segments was obtained in Bovier and Hartung (2014). Theorem 1.2 shows that the limiting objects under conditions (A1)-(A3) are universal and depend only on the slopes of the covariance function $A$ at $0$ and at $1$. This could have been guessed, but the rigorous proof turns out to be quite involved. Note that $\sigma_e^2 = \infty$ is allowed. In that case the extremal process is just a mixture of Poisson point processes. If $\sigma_b^2 = 0$, then $Y_{\sigma_b}$ is just an exponential random variable of mean $1$. We call $(Y_{\sigma_b}(s))_{s\in\mathbb R_+}$ the McKean martingale.
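The normalisation in (1.14) can be checked by a first-moment computation; the following one-line verification is added here only for orientation (it is not part of the original text) and uses nothing beyond $\mathbb E[n(s)] = e^{s}$, the many-to-one lemma, and the Gaussian moment generating function:

$\mathbb E\bigl[Y_{\sigma_b}(s)\bigr] = \mathbb E\Bigl[\sum_{i=1}^{n(s)} e^{-s(1+\sigma_b^2)+\sqrt 2\,\sigma_b\, x_i(s)}\Bigr] = e^{s}\, e^{-s(1+\sigma_b^2)}\, \mathbb E\bigl[e^{\sqrt 2\,\sigma_b B_s}\bigr] = e^{s}\, e^{-s(1+\sigma_b^2)}\, e^{\sigma_b^2 s} = 1,$

where $B_s$ denotes a standard Brownian motion at time $s$.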

1.2. Outline of the proof. The proof of Theorem 1.2 is based on the corresponding result obtained in Bovier and Hartung (2014) for the case of two speeds, and on a Gaussian comparison method. We start by showing the localisation of paths, namely that the path of any particle that reaches a height of order $\widetilde m(t)$ at time $t$ has to lie within a certain tube. Next, we show tightness of the extremal process. The remainder of the paper is then concerned with proving the convergence of the finite dimensional distributions through Laplace transforms. We introduce auxiliary two-speed BBMs whose covariance functions approximate $A$ well around $0$ and $1$. Moreover, we choose them in such a way that their covariance functions lie above, respectively below, $A$ in a neighbourhood of $0$ and $1$ (see Figure 1.1). We then use Gaussian comparison methods to compare the Laplace transforms. The Gaussian comparison comes in three main steps. In a first step we introduce the usual interpolating process and introduce a localisation condition on its paths. In a second step we justify a certain integration by parts formula that is adapted to our setting. Finally, the resulting quantities are decomposed into a part with controlled sign and a part that converges to zero.

[Figure 1.1. Gaussian comparison: the extremal process of BBM with covariance $A$ (black curve) is compared to the processes with covariance functions $\overline A$ (red curve), respectively $\underline A$ (blue curve).]

2. Localization of paths

In this section we show where the paths of particles that are extreme at time $t$ are localised. This is essentially inherited from properties of the standard Brownian bridge.

For a given speed function $\Sigma^2$, and a subinterval $I \subset [0,t]$, define the following events on the space of paths, $X : \mathbb R_+ \to \mathbb R$,

$\mathcal T^{\gamma}_{t,I,\Sigma} = \bigl\{X \;\big|\; \forall s \text{ with } \Sigma^2(s) \in I :\ \bigl|X(s) - \tfrac{\Sigma^2(s)}{t} X(t)\bigr| < \bigl(\Sigma^2(s) \wedge (t - \Sigma^2(s))\bigr)^{\gamma}\bigr\}$.    (2.1)

Proposition 2.1. Let $x$ denote variable speed BBM with covariance function $A$. For $0 < r < t$, set $I_r \equiv \{s : \Sigma^2(s) \in [r, t-r]\}$. For any $\gamma > 1/2$ and for all $d \in \mathbb R$, for all $\epsilon > 0$, there exists $r_0 < \infty$ such that, for $r > r_0$ and for all $t > 3r$,

$\mathbb P\bigl(\exists\, k \le n(t) : \{x_k(t) > \widetilde m(t) + d\} \wedge x_k \notin \mathcal T^{\gamma}_{t,I_r,\Sigma}\bigr) < \epsilon$.    (2.2)

To prove Proposition 2.1 we need the following lemma on Brownian bridges (see Bramson (1983)).

Lemma 2.2. Let $\gamma > 1/2$. Let $\xi$ be a Brownian bridge from $0$ to $0$ in time $t$. Then, for all $\epsilon > 0$, there exists $r_0 < \infty$ such that, for $r > r_0$ and for all $t > 3r$,

$\mathbb P\bigl(\exists\, s \in [r, t-r] : |\xi(s)| > (s \wedge (t-s))^{\gamma}\bigr) < \epsilon$.    (2.3)

More precisely,

$\mathbb P\bigl(\exists\, s \in [r, t-r] : |\xi(s)| > (s \wedge (t-s))^{\gamma}\bigr) \le 8 \sum_{k=\lfloor r\rfloor}^{\infty} (k-1)^{1-2\gamma}\, e^{-(k-1)^{2\gamma-1}/2}$.    (2.4)

Proof: The probability in (2.3) is bounded from above by

$\sum_{k=\lfloor r\rfloor}^{\lceil t-r\rceil} \mathbb P\bigl(\exists\, s \in [k-1, k] : |\xi(s)| > (s \wedge (t-s))^{\gamma}\bigr) \le 2 \sum_{k=\lfloor r\rfloor}^{\lceil t/2\rceil} \mathbb P\bigl(\exists\, s \in [k-1, k] : |\xi(s)| > (s \wedge (t-s))^{\gamma}\bigr)$,    (2.5)

by the reflection principle for the Brownian bridge. This is bounded from above by

$2 \sum_{k=\lfloor r\rfloor}^{\lceil t/2\rceil} \mathbb P\bigl(\exists\, s \in [0, k] : |\xi(s)| > (k-1)^{\gamma}\bigr)$.    (2.6)

Using the bound of Lemma 2.2(b) of Bramson (1983) we have

$\mathbb P\bigl(\exists\, s \in [0, k] : |\xi(s)| > (k-1)^{\gamma}\bigr) \le 4 (k-1)^{1-2\gamma}\, e^{-(k-1)^{2\gamma-1}/2}$.    (2.7)

Using this bound for each summand in (2.6) we obtain (2.4). Since the sum on the right-hand side of (2.4) is finite, (2.3) follows.
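Lemma 2.2 can also be illustrated numerically. The sketch below is not from the paper and is illustrative only: the grid discretisation merely approximates the supremum over $s$, and all names and parameter values are made up for this example. It estimates the probability that a Brownian bridge leaves the tube $(s \wedge (t-s))^{\gamma}$ on $[r, t-r]$ and shows how this probability decreases in $r$.

```python
import numpy as np

def bridge_escape_probability(t=100.0, r=5.0, gamma=0.75, n_steps=2000,
                              n_paths=2000, seed=0):
    """Monte-Carlo estimate of P(exists s in [r, t-r]: |xi(s)| > (s ^ (t-s))^gamma)
    for a Brownian bridge xi from 0 to 0 in time t (cf. Lemma 2.2)."""
    rng = np.random.default_rng(seed)
    s = np.linspace(0.0, t, n_steps + 1)
    dt = t / n_steps
    # Brownian motion paths, then pin them down: xi(s) = B(s) - (s/t) B(t)
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
    xi = B - (s / t) * B[:, -1:]
    tube = np.minimum(s, t - s) ** gamma          # allowed envelope
    inside = (s >= r) & (s <= t - r)              # only test s in [r, t-r]
    escaped = (np.abs(xi[:, inside]) > tube[inside]).any(axis=1)
    return escaped.mean()

if __name__ == "__main__":
    for r in (2.0, 5.0, 10.0):
        print(r, bridge_escape_probability(r=r))
```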

Proof of Proposition 2.1: Using a first moment method, the probability in (2.2) is bounded from above by

$e^{t}\, \mathbb P\bigl(B^{\Sigma}_t > \widetilde m(t) + d,\ B^{\Sigma} \notin \mathcal T^{\gamma}_{t,I_r,\Sigma}\bigr)$.    (2.8)

Since $\Sigma^2(s)$ is a non-decreasing function on $[0,t]$ with $\Sigma^2(t) = t$, the expression in (2.8) is bounded from above by

$e^{t}\, \mathbb P\Bigl(\{B_t > \widetilde m(t) + d\} \wedge \exists\, s \in [r, t-r] : \bigl|B_s - \tfrac{s}{t} B_t\bigr| > (s \wedge (t-s))^{\gamma}\Bigr)$.    (2.9)

Now, $\xi(s) \equiv B_s - \frac{s}{t} B_t$ is the Brownian bridge from $0$ to $0$ in time $t$, and it is well known (see, e.g., Lemma 2.1 in Bramson (1983)) that $\xi(s)$ is independent of $B_t$, for all $s \in [0,t]$. Therefore, (2.9) is equal to

$e^{t}\, \mathbb P\bigl(B_t > \widetilde m(t) + d\bigr)\, \mathbb P\bigl(\exists\, s \in [r, t-r] : |\xi(s)| > (s \wedge (t-s))^{\gamma}\bigr)$.    (2.10)

Using the standard Gaussian tail bound,

$\int_u^\infty e^{-x^2/2}\,dx \le u^{-1} e^{-u^2/2}$, for $u > 0$,    (2.11)

we get

$e^{t}\, \mathbb P\bigl(B_t > \widetilde m(t) + d\bigr) \le e^{t}\, \frac{\sqrt t}{\sqrt{2\pi}\,(\widetilde m(t)+d)}\, e^{-(\widetilde m(t)+d)^2/2t} \le e^{-\sqrt 2 d}\, M$,    (2.12)

for some constant $M$ depending on $d$, if $t$ is large enough. By Lemma 2.2 we can find $r_0$ large enough such that, for all $r \ge r_0$ and $t > 3r$,

$\mathbb P\bigl(\exists\, s \in [r, t-r] : |\xi(s)| > (s \wedge (t-s))^{\gamma}\bigr) < \epsilon/M$.    (2.13)

The bounds (2.12) and (2.13) imply that (2.10) is smaller than $\epsilon$.

3. Proof of Theorem 1.2

In this section we prove Theorem 1.2 assuming Proposition 3.2 below, whose proof will be postponed to the following two sections.

Proof of Theorem 1.2: We show the convergence of the extremal process

$\mathcal E_t = \sum_{k\le n(t)} \delta_{x_k(t) - \widetilde m(t)}$    (3.1)

by showing the convergence of the finite dimensional distributions and tightness. Tightness of $(\mathcal E_t)_{t}$ is implied by the following bound on the number of particles above a level $d$ (see Resnick (1987), Lemma 3.2).

Proposition 3.1. For any $d \in \mathbb R$ and $\epsilon > 0$, there exists $N = N(\epsilon, d)$ such that, for all $t$ large enough,

$\mathbb P\bigl(\mathcal E_t([d,\infty)) > N\bigr) < \epsilon$.    (3.2)

Proof: By a first order Chebyshev inequality, for all $t$ large enough,

$\mathbb P\bigl(\mathcal E_t([d,\infty)) > N\bigr) \le \frac1N\, e^{t}\, \mathbb P\bigl(B_t > \widetilde m(t) + d\bigr) \le \frac MN$,    (3.3)

by (2.12), where $M > 0$ is a constant that depends on $d$. Choosing $N > M/\epsilon$ yields Proposition 3.1.

To show the convergence of the finite dimensional distributions define, for $u \in \mathbb R$,

$N_u(t) = \sum_{i=1}^{n(t)} \mathbf 1_{x_i(t) - \widetilde m(t) > u}$,    (3.4)

which counts the number of points that lie above $u$. Moreover, we define the corresponding quantity for the process $\mathcal E_{\sigma_b,\sigma_e}$ defined in (1.13),

$N_u = \sum_{i,j} \mathbf 1_{p_i + \sigma_e \Lambda^{(i)}_j > u}$.    (3.5)

Observe that, in particular,

$\mathbb P\Bigl(\max_{i\le n(t)} x_i(t) - \widetilde m(t) \le u\Bigr) = \mathbb P\bigl(N_u(t) = 0\bigr)$.    (3.6)

The key step in the proof of Theorem 1.2 is the following proposition, which asserts the convergence of the finite dimensional distributions of the process $\mathcal E_t$.

Proposition 3.2. For all $k \in \mathbb N$ and $u_1, \dots, u_k \in \mathbb R$,

$\{N_{u_1}(t), \dots, N_{u_k}(t)\} \xrightarrow{d} \{N_{u_1}, \dots, N_{u_k}\}$    (3.7)

as $t \uparrow \infty$.

The proof of this proposition will be postponed to the following sections. Assuming the proposition, we can now conclude the proof of the theorem. The distributions of $\{N_{u_1}(t), \dots, N_{u_k}(t)\}$, for all $k \in \mathbb N$ and $u_1, \dots, u_k \in \mathbb R$, characterise the finite dimensional distributions of the point process $\mathcal E_t$, since the class of sets $\{(u,\infty),\ u \in \mathbb R\}$ forms a $\Pi$-system that generates $\mathcal B(\mathbb R)$. Hence (3.7) implies the convergence of the finite dimensional distributions of $\mathcal E_t$ (see, e.g., Proposition 3.4 in Resnick (1987)). Combining this observation with Proposition 3.1, we obtain Assertion (ii) of Theorem 1.2. Assertion (i) follows immediately from Eq. (3.6).

To prove Assertion (iii), we need to show that, as $\sigma_e \uparrow \infty$, it holds that $C(\sigma_e) \to 1/\sqrt{4\pi}$ and the processes $\Lambda^{(i)}$ converge to the trivial process $\delta_0$. Then,

$\mathcal E_{\sigma_b,\infty} = \sum_i \delta_{p_i}$,    (3.8)

where $p_i$, $i \in \mathbb N$, are the points of a Cox process directed by the random measure $\frac{1}{\sqrt{4\pi}}\, Y_{\sigma_b}\, e^{-\sqrt 2 x}\,dx$.

Lemma 3.3. The point process $\mathcal E_{\sigma_b,\sigma_e}$ converges in law, as $\sigma_e \uparrow \infty$, to the point process $\mathcal E_{\sigma_b,\infty}$.

Proof: The proof of Lemma 3.3 is based on a result concerning the cluster processes $\Lambda^{(i)}$. We write $\Lambda_{\sigma_e}$ for a single copy of these processes and add the subscript to make the dependence on the parameter $\sigma_e$ explicit. We recall from Bovier and Hartung (2014) that the process $\Lambda_{\sigma_e}$ is constructed as follows. Define the processes $\mathcal E_{\sigma_e}$ as the limits of the point processes

$\mathcal E^{\sigma_e}_t \equiv \sum_{k=1}^{n(t)} \delta_{x_k(t) - \sqrt 2\,\sigma_e t}$,    (3.9)

where $x$ is standard BBM at time $t$ conditioned on the event $\{\max_{k\le n(t)} x_k(t) > \sqrt 2\,\sigma_e t\}$. We show here that, as $\sigma_e$ tends to infinity, the processes $\mathcal E_{\sigma_e}$ converge to a point process consisting of a single atom at $0$. More precisely, we show that

$\lim_{\sigma_e\uparrow\infty} \lim_{t\uparrow\infty} \mathbb P\Bigl(\mathcal E^{\sigma_e}_t\bigl(\bigl[\max_{k\le n(t)} x_k(t) - \sqrt 2\,\sigma_e t - R,\ \infty\bigr)\bigr) > 1 \;\Big|\; \max_{k\le n(t)} x_k(t) > \sqrt 2\,\sigma_e t\Bigr) = 0$,    (3.10)

i.e., asymptotically there is no second particle within distance $R$ of the maximum.

[Figure 3.2. The cluster process seen from infinity, for $\sigma_e$ small (left) and $\sigma_e$ very large (right).]

Now,

$\mathbb P\Bigl(\mathcal E^{\sigma_e}_t\bigl(\bigl[\max_k x_k(t) - \sqrt 2\,\sigma_e t - R, \infty\bigr)\bigr) > 1 \;\Big|\; \max_{k\le n(t)} x_k(t) > \sqrt 2\,\sigma_e t\Bigr)$
$\le \int_0^\infty \mathbb P\Bigl(\mathrm{supp}\,\mathcal E^{\sigma_e}_t \ni dy,\ \mathcal E^{\sigma_e}_t([y-R,\infty)) > 1 \;\Big|\; \max_{k\le n(t)} x_k(t) > \sqrt 2\,\sigma_e t\Bigr)$
$= \int_0^\infty \mathbb P\Bigl(\mathrm{supp}\,\mathcal E^{\sigma_e}_t \ni dy \;\Big|\; \max_{k\le n(t)} x_k(t) > \sqrt 2\,\sigma_e t\Bigr)\, \mathbb P\Bigl(\mathcal E^{\sigma_e}_t([y-R,\infty)) > 1 \;\Big|\; \mathrm{supp}\,\mathcal E^{\sigma_e}_t \ni dy\Bigr)$.    (3.11)

But $\mathbb P\bigl(\cdot \mid \mathrm{supp}\,\mathcal E^{\sigma_e}_t \ni dy\bigr) \equiv P_{t, y+\sqrt 2\sigma_e t}$ is the Palm measure of BBM, i.e. the conditional law of BBM given that there is a particle at time $t$ in $dy$ (see Kallenberg (1986), Theorem 1.8). Chauvin et al. (1991, Theorem 2) describe the tree under the Palm measure $P_{t,z}$ as follows. Pick one particle at time $t$ at the location $z$. Then pick a spine, $Y$, which is a Brownian bridge from $0$ to $z$ in time $t$. Next pick a Poisson point process $\pi$ on $[0,t]$ with intensity $2$. For each point $p \in \pi$, start a random number $\nu_p$ of independent branching Brownian motions $B^{Y_p,i}$, $i \le \nu_p$, starting at $Y_p$. The law of $\nu$ is given by the size-biased distribution,

$\mathbb P(\nu_p = k-1) = \frac{k\, p_k}{2}$.

See Figure 3.2. Now let $z = \sqrt 2\,\sigma_e t + y$ for $y \ge 0$. Under the Palm measure, the point process $\mathcal E^{\sigma_e}_t$ then takes the form

$\mathcal E^{\sigma_e}_t \overset{\mathrm{law}}{=} \delta_y + \sum_{p\in\pi,\, i\le\nu_p}\ \sum_{j=1}^{n^{Y_p,i}(t-p)} \delta_{B^{Y_p,i}_j(t-p) - \sqrt 2\,\sigma_e t}$.    (3.12)
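The description of the Palm measure above is essentially an algorithm. The following sketch is not from the paper and samples from it only in the simplest case of binary branching, $p_2 = 1$, in which $\nu_p \equiv 1$ and the spine branch points form a rate-2 Poisson process; the helper bbm_positions, the time discretisation, and all parameter names are illustrative only.

```python
import numpy as np

def bbm_positions(t, x0=0.0, rng=None, dt=0.01):
    """Particle positions at time t of a standard binary BBM (rate-1 branching)
    started from one particle at x0; crude Euler discretisation."""
    rng = np.random.default_rng() if rng is None else rng
    pos, s = np.array([x0]), 0.0
    while s < t:
        h = min(dt, t - s)
        pos = pos + rng.normal(0.0, np.sqrt(h), size=pos.size)
        pos = np.concatenate([pos, pos[rng.random(pos.size) < h]])
        s += h
    return pos

def palm_sample(t, z, rng=None):
    """One sample of the particle positions at time t under the Palm description:
    a distinguished particle at z, a spine Y given by a Brownian bridge 0 -> z in
    time t, branch points along the spine forming a rate-2 Poisson process on
    [0, t], and an independent BBM of length t - p started at Y(p) from each
    branch point p (binary case, nu_p = 1)."""
    rng = np.random.default_rng() if rng is None else rng
    points = [z]
    branch_times = np.sort(rng.uniform(0.0, t, size=rng.poisson(2.0 * t)))
    prev_s, prev_y = 0.0, 0.0
    for p in branch_times:
        # sample the bridge sequentially: given Y(prev_s), Y(p) is Gaussian
        m = prev_y + (p - prev_s) / (t - prev_s) * (z - prev_y)
        v = (p - prev_s) * (t - p) / (t - prev_s)
        yp = rng.normal(m, np.sqrt(v))
        points.extend(bbm_positions(t - p, x0=yp, rng=rng))
        prev_s, prev_y = p, yp
    return np.array(points)

if __name__ == "__main__":
    print(palm_sample(5.0, 8.0).size)
```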

Since, for $1 > \gamma > 1/2$,

$\lim_{\sigma_e\uparrow\infty} \lim_{t\uparrow\infty} \mathbb P\Bigl(\forall\, t \ge s \ge \sigma_e^{-1/2} :\ Y(t-s) - y - \sqrt 2\,\sigma_e (t-s) \in \bigl[-(\sqrt 2\,\sigma_e s)^{\gamma}, (\sqrt 2\,\sigma_e s)^{\gamma}\bigr]\Bigr) = 1$,    (3.13)

if we define the set

$\mathcal G^{\sigma_e}_t \equiv \Bigl\{Y :\ \forall\, t \ge s \ge \sigma_e^{-1/2} :\ Y(t-s) - y - \sqrt 2\,\sigma_e (t-s) \in \bigl[-(\sqrt 2\,\sigma_e s)^{\gamma}, (\sqrt 2\,\sigma_e s)^{\gamma}\bigr]\Bigr\}$,    (3.14)

it will suffice to show that, for all $R \in \mathbb R_+$,

$\lim_{\sigma_e\uparrow\infty} \lim_{t\uparrow\infty} \mathbb P\Bigl(\exists\, p \in \pi,\ i \le \nu_p,\ j :\ B^{Y_p,i}_j(t-p) \ge \sqrt 2\,\sigma_e t + y - R \;\Big|\; Y \in \mathcal G^{\sigma_e}_t\Bigr) = 0$.    (3.15)

The probability in (3.15) is bounded by

$\mathbb E\Bigl[\int_0^t \sum_{i\le\nu_p} \mathbf 1_{\exists j :\, B^{Y_p,i}_j(t-p) \ge \sqrt 2\sigma_e t + y - R}\, \mathbf 1_{Y\in\mathcal G^{\sigma_e}_t}\,\pi(dp)\Bigr]
\le \mathbb E\Bigl[\int_0^t \mathbb E\bigl[\nu_p\, \mathbf 1_{\max_j B^{Y_p,i}_j(t-p) \ge \sqrt 2\sigma_e t + y - R}\,\big|\,\mathcal F_\pi\bigr]\, \mathbf 1_{Y\in\mathcal G^{\sigma_e}_t}\,\pi(dp)\Bigr]
\le K \int_0^t \mathbb P\Bigl(\max_j B^{Y_s}_j(t-s) \ge \sqrt 2\,\sigma_e t + y - R \;\Big|\; Y\in\mathcal G^{\sigma_e}_t\Bigr)\,ds$.    (3.16)

Here we used the independence of the offspring BBMs and the fact that the conditional probability, given the σ-algebra $\mathcal F_\pi$ generated by the Poisson process $\pi$, appearing in the integral over $\pi$ depends only on $p$. For the integral over $s$ up to $\sigma_e^{-1/2}$, we just bound the integrand by $K$. For larger values, we use the localisation provided by the condition that $Y \in \mathcal G^{\sigma_e}_t$, to get that the right-hand side of (3.16) is not larger than

$K\int_0^{\sigma_e^{-1/2}} ds + K \int_{\sigma_e^{-1/2}}^{t} e^{s}\, \mathbb P\bigl(B_s > -R + \sqrt 2\,\sigma_e s - (\sqrt 2\,\sigma_e s)^{\gamma}\bigr)\,ds$,    (3.17)

which is, by (2.11), bounded from above by

$K\sigma_e^{-1/2} + K\int_{\sigma_e^{-1/2}}^{\infty} e^{-(\sigma_e^2-1)s + \sqrt 2\,\sigma_e(R + (\sqrt 2\sigma_e s)^{\gamma})}\,ds$.    (3.18)

From this it follows that (3.18), which no longer depends on $t$, converges to zero, as $\sigma_e\uparrow\infty$, for any $R \in \mathbb R_+$. Hence we see that

$\mathbb P\bigl(\mathcal E^{\sigma_e}_t([y-R,\infty)) > 1 \,\big|\, \mathrm{supp}\,\mathcal E^{\sigma_e}_t \ni dy\bigr) \to 0$,    (3.19)

uniformly in $y$, as $t$ and then $\sigma_e$ tend to infinity. Next,

$\mathbb P\bigl(\mathrm{supp}\,\mathcal E^{\sigma_e}_t \ni dy \,\big|\, \max_{k\le n(t)} x_k(t) > \sqrt 2\,\sigma_e t\bigr) \le \mathbb P\bigl(\max_{k\le n(t)} x_k(t) \ge \sqrt 2\,\sigma_e t + y \,\big|\, \max_{k\le n(t)} x_k(t) > \sqrt 2\,\sigma_e t\bigr)$.    (3.20)

But by Proposition 7.5 in Bovier and Hartung (2014) the probability on the right-hand side converges to $\exp(-\sqrt 2\,\sigma_e y)$, as $t\uparrow\infty$. It follows from the proof that this convergence is uniform in $y$, and hence, by dominated convergence, the right-hand side of (3.11) is finite. Therefore, (3.10) holds. As a consequence, $\Lambda_{\sigma_e}$ converges to $\delta_0$.

It remains to show that the intensity of the Poisson process converges as claimed. Theorems 1 and 2 of Chauvin and Rouault (1988) relate the constant $C(\sigma_e)$ defined by (1.15) to the intensity of the shifted BBM conditioned to exceed the level $\sqrt 2\,\sigma_e t$ as follows:

$\bigl(\sqrt{4\pi}\, C(\sigma_e)\bigr)^{-1} = \lim_{s\uparrow\infty} \frac{\mathbb E\bigl[\sum_k \mathbf 1_{x_k(s) > \sqrt 2\,\sigma_e s}\bigr]}{\mathbb P\bigl(\max_k x_k(s) > \sqrt 2\,\sigma_e s\bigr)} = \lim_{s\uparrow\infty} \mathbb E\Bigl[\sum_k \mathbf 1_{x_k(s) - \max_i x_i(s) > \sqrt 2\,\sigma_e s - \max_i x_i(s)} \,\Big|\, \max_k x_k(s) > \sqrt 2\,\sigma_e s\Bigr] = \mathbb E\bigl[\Lambda_{\sigma_e}([-E, 0])\bigr]$,    (3.21)

where, by Theorem 7.5 in Bovier and Hartung (2014), $E$ is an exponentially distributed random variable with parameter $\sqrt 2\,\sigma_e$, independent of $\Lambda_{\sigma_e}$. As we have just shown that $\Lambda_{\sigma_e} \to \delta_0$, it follows that the right-hand side tends to one, as $\sigma_e\uparrow\infty$, and hence $C(\sigma_e) \to 1/\sqrt{4\pi}$. Hence the intensity measure of the PPP appearing in $\mathcal E_{\sigma_b,\sigma_e}$ converges to the desired intensity measure $\frac{1}{\sqrt{4\pi}}\, Y_{\sigma_b}\, e^{-\sqrt 2 x}\,dx$. This proves Assertion (iii) of Theorem 1.2.

4. Proof of Proposition 3.2

We prove Proposition 3.2 via convergence of Laplace transforms. For $u_1,\dots,u_k \in \mathbb R$, $k \in \mathbb N$, define the Laplace transform of $\{N_{u_1}(t),\dots,N_{u_k}(t)\}$,

$L_{u_1,\dots,u_k}(t, c) = \mathbb E\Bigl[\exp\Bigl(-\sum_{l=1}^k c_l N_{u_l}(t)\Bigr)\Bigr]$, with $c = (c_1,\dots,c_k) \in \mathbb R^k_+$,    (4.1)

and analogously the Laplace transform $L_{u_1,\dots,u_k}(c)$ of $\{N_{u_1},\dots,N_{u_k}\}$. Proposition 3.2 is then a consequence of the next proposition.

Proposition 4.1. For any $k \in \mathbb N$, $u_1,\dots,u_k \in \mathbb R$ and $c_1,\dots,c_k \in \mathbb R_+$,

$\lim_{t\uparrow\infty} L_{u_1,\dots,u_k}(t, c) = L_{u_1,\dots,u_k}(c)$.    (4.2)

The proof of Proposition 4.1 comes in two main steps. First, we prove the result for the case of two-speed BBM. This was done in our previous paper, Bovier and Hartung (2014). In fact, we will need a slight extension of that result where we allow a slight dependence of the speeds on $t$. This will be given in the next subsection. The second step is to show that, under the hypotheses of Theorem 1.2, the Laplace transforms can be well approximated by those of two-speed BBM. This uses the classical Gaussian comparison argument in a slightly subtle way.

4.1. Approximating two-speed BBM. The case $A'(1) < \infty$. It turns out that it is enough to compare the process with covariance function $A$ with processes whose covariance functions are piecewise linear with a single change of slope. We derive asymptotic upper and lower bounds by choosing these in such a way that the covariances near zero and near one are below, respectively above, that of the original

process. We define

$\delta_<(t) = \sup\{x \in [0,1] : A(x) \le t^{-2/3}\}$, $\quad \delta_>(t) = 1 - \inf\{x \in [0,1] : A(x) \ge 1 - t^{-2/3}\}$.    (4.3)

[Figure 4.3. Different cases for $\delta_<(t)$ and $\delta_>(t)$: (a) Case 1: $\sigma_b^2 = 0$ but $\lim_{t\uparrow\infty}\delta_<(t) = 0$; (b) Case 2: $\sigma_b^2 = 0$ but $\lim_{t\uparrow\infty}\delta_<(t) = \delta_< > 0$.]

By Assumption (A1) it follows that $\lim_{t\uparrow\infty} \delta_>(t) = 0$.

Remark 4.2. If $\lim_{t\uparrow\infty} \delta_<(t) = \delta_< > 0$, then it follows from the definition of $\delta_<(t)$ that $A(x) = 0$ on $[0, \delta_<)$.

In the following formulas, we choose a parameter $\bar n \in \mathbb N$ as follows. If in Assumption (A2) the functions $\underline B, \overline B$ can be chosen such that there exists $m \ge 2$ such that $\underline B^{(k)}(0) = \overline B^{(k)}(0) = 0$, for all $1 \le k < m$, and, in some finite interval $[0, \delta_b]$, both $\underline B^{(m)}(x)$ and $\overline B^{(m)}(x)$ are bounded by some constants $\underline K_1$, respectively $\overline K_1$, then we choose $\bar n$ as the largest of these integers. Otherwise, we choose $\bar n = 2$. Moreover, let $|\overline C''(x)| \le K_2$ and $|\underline C''(x)| \le K_2$ for all $x \in [1-\delta_e, 1]$.

We define

$\overline\Sigma^2(s) = t\,\overline A(s/t)$    (4.4)

and

$\underline\Sigma^2(s) = t\,\underline A(s/t)$.    (4.5)

Here

$\overline A(x) = \begin{cases} \bigl(\sigma_b^2 + \tfrac{\overline K_1}{\bar n!}\,\delta_<(t)^{\bar n-1}\bigr)x, & x \le \bar b,\\ 1 + \bigl(\sigma_e^2 - K_2\,\delta_>(t)\bigr)(x-1), & \bar b < x \le 1,\end{cases}$    (4.6)

with

$\bar b = \frac{\sigma_e^2 - K_2\,\delta_>(t) - 1}{\sigma_e^2 - K_2\,\delta_>(t) - \sigma_b^2 - \tfrac{\overline K_1}{\bar n!}\,\delta_<(t)^{\bar n-1}}$.    (4.7)

If $\sigma_e^2 < \infty$,

$\underline A(x) = \begin{cases} \bigl(\sigma_b^2 - \tfrac{\underline K_1}{\bar n!}\,\delta_<(t)^{\bar n-1}\bigr)x, & x \le \underline b,\\ 1 + \bigl(\sigma_e^2 + K_2\,\delta_>(t)\bigr)(x-1), & \underline b < x \le 1,\end{cases}$    (4.8)

with

$\underline b = \frac{\sigma_e^2 + K_2\,\delta_>(t) - 1}{\sigma_e^2 + K_2\,\delta_>(t) - \sigma_b^2 + \tfrac{\underline K_1}{\bar n!}\,\delta_<(t)^{\bar n-1}}$.    (4.9)
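For orientation, the comparison profiles (4.6)–(4.9) can be written down explicitly in code. The sketch below is illustrative only (not from the paper): the bar/underline decorations on the constants are suppressed, all names are hypothetical, and the kink point is obtained from continuity together with the value $1$ at $x = 1$, which is what (4.7) and (4.9) express.

```python
from math import factorial

def make_comparison_profiles(sigma_b2, sigma_e2, K1, K2, delta_lt, delta_gt, n=2):
    """Return the piecewise linear profiles (A_bar, A_under) of (4.6)-(4.9).

    Each profile has slope ~ sigma_b2 near 0 and ~ sigma_e2 near 1; the
    perturbations +/- K1/n! * delta_lt**(n-1) and -/+ K2*delta_gt push the
    profile above, respectively below, A near both endpoints.
    """
    def piecewise(s0, s1):
        # continuity at the kink b together with the value 1 at x = 1 forces b
        b = (s1 - 1.0) / (s1 - s0)
        def A(x):
            return s0 * x if x <= b else 1.0 + s1 * (x - 1.0)
        return A

    eps = K1 / factorial(n) * delta_lt ** (n - 1)
    A_bar = piecewise(sigma_b2 + eps, sigma_e2 - K2 * delta_gt)    # upper profile
    A_under = piecewise(sigma_b2 - eps, sigma_e2 + K2 * delta_gt)  # lower profile
    return A_bar, A_under

# example: slopes sigma_b^2 = 1/2 and sigma_e^2 = 2 with small t-dependent errors
A_bar, A_under = make_comparison_profiles(0.5, 2.0, 1.0, 1.0, 0.01, 0.01)
print(A_bar(0.1), A_under(0.1), A_bar(0.99), A_under(0.99))
```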

Remark 4.3. If $\sigma_b^2 = 0$, then $\underline A(x) = 0$ for $x \le \underline b$. If $\lim_{t\uparrow\infty}\delta_<(t) = \delta_< > 0$ (which implies that all derivatives of $A$ at zero are $0$), we take

$\underline A(x) = \begin{cases} 0, & x \le \underline b,\\ 1 + \bigl(\sigma_e^2 + K_2\,\delta_>(t)\bigr)(x-1), & \underline b < x \le 1,\end{cases}$    (4.10)

and

$\underline b = \frac{\sigma_e^2 + K_2\,\delta_>(t) - 1}{\sigma_e^2 + K_2\,\delta_>(t)}$.    (4.11)

If $A'(1) = \sigma_e^2 = +\infty$, then $\underline b = 1$, and $\overline A \equiv \overline A_\rho$ is defined by

$\overline A_\rho(x) = \begin{cases} \bigl(\sigma_b^2 + \tfrac{\overline K_1}{\bar n!}\,\delta_<(t)^{\bar n-1}\bigr)x, & x \le \bar b,\\ 1 + \rho(x-1), & \bar b < x \le 1,\end{cases}$    (4.12)

with $\bar b \equiv \bar b_\rho = \dfrac{\rho - 1}{\rho - \sigma_b^2 - \tfrac{\overline K_1}{\bar n!}\,\delta_<(t)^{\bar n-1}}$.

The choice of $\overline\Sigma$ and $\underline\Sigma$ is motivated by the following properties.

Lemma 4.4. $\overline A$ and $\underline A$ are piecewise linear, continuous functions with $\overline A(0) = \underline A(0) = 0$ and $\overline A(1) = \underline A(1) = 1$. Moreover,

(i) If $\lim_{t\uparrow\infty}\delta_<(t) = 0$, then, for all $s$ with $\Sigma^2(s) \in [0, t^{1/3}]$ and all $s$ with $\Sigma^2(s) \in [t - t^{1/3}, t]$,

$\underline\Sigma^2(s) \le \Sigma^2(s) \le \overline\Sigma^2(s)$.    (4.13)

(ii) If $\lim_{t\uparrow\infty}\delta_<(t) = \delta_< > 0$, then (4.13) only holds for all $s$ with $\Sigma^2(s) \in [t - t^{1/3}, t]$, while, for $s \in [0, \delta_b t)$,

$\underline\Sigma^2(s) = \Sigma^2(s) = \overline\Sigma^2(s) = 0$.    (4.14)

Proof: $\overline A$ and $\underline A$ are obviously piecewise linear. The fact that they are continuous is easily verified. By definition, $\overline A(0) = \underline A(0) = 0$ and $\overline A(1) = \underline A(1) = 1$. For all $s$ such that $\Sigma^2(s) \in [0, t^{1/3}]$, an $\bar n$-th order Taylor expansion of $\overline B$ at $0$, together with the fact that, by assumption, $\overline B^{(k)}(0) = 0$ for $1 < k < \bar n$, shows that

$\Sigma^2(s) \le t\,\overline B(s/t) = t\Bigl[\overline B'(0)\,\frac st + \frac{\overline B^{(\bar n)}(\xi)}{\bar n!}\Bigl(\frac st\Bigr)^{\bar n}\Bigr]$, for some $\xi \in [0, s/t]$.    (4.15)

The reverse inequality holds when $\overline B$ is replaced by $\underline B$. Eq. (4.13) follows then from Assumption (A2). Using a second order Taylor expansion of $\overline C$ and $\underline C$ at $1$, we obtain Eq. (4.13) for $\Sigma^2(s) \in [t-t^{1/3}, t]$. Eq. (4.14) holds trivially in the specified interval. This concludes the proof of the lemma.

Let $\{\overline y_i(t), i \le n(t)\}$ be the particles of a BBM with speed function $\overline\Sigma^2$ and let $\{\underline y_i(t), i \le n(t)\}$ be the particles of a BBM with speed function $\underline\Sigma^2$. We want to show that

15 Variable Speed Branching Brownian Motion 75 the limiting extremal processes of these processes coincide. Set nt N u t 1 yi t mt>u, 4.16 nt N u t 1 yi t mt>u Lemma 4.5. For all u 1,..., u k and all c 1,..., c k R +, the limits k lim E exp c k N ul t t and lim E t exp l=1 k c k N ul t l= exist. If A 1 <, then two limits coincide with L u1,...,u k c. If A 1 = σ e =, then the two limits in 4.18 and 4.19 converges to the same limit, as ρ. Proof : We first consider the case when A 1 <. To prove Lemma 4.5, we show that the extremal processes nt E t = δ yi mt and E t = δ yi mt 4. both converge to E σb,σ e, that was defined in Note that this implies first convergence of Laplace functionals with functions φ with compact support, while the N u t have support that is unbounded from above. Convergence for these, however, carries over due to the tightness established in Proposition 3.1. To do so, observe that the slopes at of Σ and Σ are equal to σb up to an error of order δ < t. Moreover, the slope at t is in both cases, up to an error of order δ > t, equal to σe. The time of speed change bt, respectively bt, is equal to 1 σ e σ b σ b up to an error of order δ > t δ < t. For the two-speed BBM with speed σ σ b s =, σe, nt for < s 1 σ e, σb σ e < s < t, for 1 σ e σ b σ e 4.1 it was shown in Bovier and Hartung 14 that the maximal displacement is equal to mt and that the extremal process converges to E σb,σ e as t. The method used to show this is to show the convergence of the Laplace functionals, Eexp φxe t dx, φ C c R, R +. The other difference is that the function A we have to consider now depend weakly on t. We need to show that this has no effect. Inspecting the proof of the convergence of the Laplace functional, respectively convergence of the maximum in Bovier and Hartung 14, one sees that nothing changes since we keep t fixed until Eq. 5.8 in Bovier and Hartung 14.

16 76 A. Bovier and L. Hartung There, we then have to show that, for each y R, in the case of Σ 1/ E exp Ca e y Y B σ b,b t,γ 1 + o1, 4. σ e K δ > t 1 [σ b + K 1 δ < t]b/ t converges, as first t and then B, to E exp σ e CaY σb e y, 4.3 where Ca > is a constant depending on a = σ e 1 see 1.15, and Y B σ b,b t,γ 4.4 = nb t e 1+σ b + K δ< tb t+ y i b t 1 yi b t σ b + K δ< tb t [ Bt γ/,bt γ/ ]. The main task is to ensure the convergence of Y B σ b,b t,γ to the limit of the corresponding McKean martingale, Y σb. In the case where lim t δ < t >, this takes the simple form Y B,b t,γ = nb t e b t, 4.5 which converges to an exponential random variable of mean one, as desired. In the case when lim t δ < t =, a further slight modification is necessary. Observe that in the proof of Theorem 5.1 in Bovier and Hartung 14, b t can be replaced by any sequence t such that lim t t/t =. Here we adapt t to the function Σ and choose σ e K δ > t 1 [σ b + K 1 δ < t] t/t t = δ < t 1/. 4.6 Doing so, we have to show that, analogously to 4., the object 1/ E exp Ca e y Y B σ b, t,γ 1 + o1 4.7 converges to 4.3. By our choice of t, K e 1 δ < t t e K 1 δ < t t+b t γ 1 const. δ < t, 4.8 which tends to zero, as t. Thus where n t Ỹσ B b, t,γ Y B σ b, t,γ = Ỹ σ B b, t,γ1 + o1, 4.9 e 1+σ b t+ σ b x i t 1 σb x i t σ b t [ B tγ,b t γ ]. 4.3 By Lemma 4.3 in Bovier and Hartung 14, it follows that Ỹ B σ b, t,γ converges in probability and in L 1 to the random variable Y σb. Since Ỹ σ B b, t,γ and Ca >, and since lim t 1/ = σ e, 4.31 σ e K δ > t 1 [σ b + K 1 δ < t] t/t

it follows that

$\lim_{B\uparrow\infty}\liminf_{t\uparrow\infty} \mathbb E\Bigl[\exp\Bigl(-C(a)\,\sigma_e\, e^{-\sqrt 2 y}\,\widetilde Y^B_{\sigma_b,\bar t,\gamma}\,(1+o(1))\Bigr)\Bigr] = \lim_{B\uparrow\infty}\limsup_{t\uparrow\infty} \mathbb E\Bigl[\exp\Bigl(-C(a)\,\sigma_e\, e^{-\sqrt 2 y}\,\widetilde Y^B_{\sigma_b,\bar t,\gamma}\,(1+o(1))\Bigr)\Bigr] = \mathbb E\Bigl[\exp\bigl(-C(\sigma_e)\, Y_{\sigma_b}\, e^{-\sqrt 2 y}\bigr)\Bigr]$,    (4.32)

where $C(\sigma_e) = \sigma_e\, C(a)$. The same arguments work when $\overline\Sigma$ is replaced by $\underline\Sigma$. The limit in (4.32) coincides with the one obtained in Bovier and Hartung (2014) for the two-speed BBM with speed given in (4.21). The assertion in the case when $\sigma_e^2 = \infty$ follows directly from Lemma 3.3.

4.2. Gaussian comparison. We distinguish from now on the expectation with respect to the underlying tree structure and the one with respect to the Brownian motion of the particles:

$\mathbb E_n$: expectation w.r.t. the Galton-Watson process;
$\mathbb E_B$: expectation w.r.t. the Gaussian process, conditioned on the σ-algebra $\mathcal F^{\mathrm{tree}}_t$ generated by the Galton-Watson process.

For a given realisation of the Galton-Watson process, we now let $x$, $\overline y$, and $\underline y$ be three independent Gaussian processes whose covariances are given as in (1.2), with $A$ replaced by $\overline A$ in the case of $\overline y$ and by $\underline A$ in the case of $\underline y$. The proof of Proposition 4.1 is based on the following lemma, which compares the Laplace transform $L_{u_1,\dots,u_k}(t,c)$ with the corresponding Laplace transforms for the comparison processes.

Lemma 4.6. For any $k \in \mathbb N$, $u_1,\dots,u_k \in \mathbb R$ and $c_1,\dots,c_k \in \mathbb R_+$ we have

$L_{u_1,\dots,u_k}(t, c) \le \mathbb E\Bigl[\exp\Bigl(-\sum_{l=1}^k c_l \overline N_{u_l}(t)\Bigr)\Bigr] + o(1)$,    (4.33)

$L_{u_1,\dots,u_k}(t, c) \ge \mathbb E\Bigl[\exp\Bigl(-\sum_{l=1}^k c_l \underline N_{u_l}(t)\Bigr)\Bigr] + o(1)$.    (4.34)

Proof: The proofs of (4.33) and (4.34) are very similar. Hence we focus on proving (4.33). We will, however, indicate what has to be changed when proving the lower bound as we go along. For simplicity, all overlined quantities depend on $\overline\Sigma$; corresponding quantities where $\overline\Sigma$ is replaced by $\underline\Sigma$ are underlined. To use Gaussian comparison methods, we first replace the functions $N_u(t)$, $\overline N_u(t)$ by smooth approximants: let

$\chi_\kappa(x) \equiv \frac{1}{\sqrt{2\pi}\,\kappa}\int_{-\infty}^{x} e^{-z^2/2\kappa^2}\,dz$,    (4.35)

and

$N^\kappa_u(t) \equiv \sum_{i=1}^{n(t)} \chi_\kappa\bigl(x_i(t) - \widetilde m(t) - u\bigr)$,    (4.36)

$\overline N^\kappa_u(t) \equiv \sum_{i=1}^{n(t)} \chi_\kappa\bigl(\overline y_i(t) - \widetilde m(t) - u\bigr)$.    (4.37)
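Lemma 4.6 compares expectations of the same smooth functional under two Gaussian fields that share the branching structure but differ in covariance. The toy computation below is illustrative only and not from the paper: it uses two correlated Gaussian pairs instead of a BBM and an ad-hoc smooth functional standing in for $f_{t,\kappa}$, and it previews the interpolation $\sqrt h\, x + \sqrt{1-h}\, y$ that is introduced in (4.41) below.

```python
import numpy as np

rng = np.random.default_rng(1)

# two centred Gaussian pairs with the same variances but different correlations,
# standing in for the processes with covariances A-bar and A-under
cov_x = np.array([[1.0, 0.8], [0.8, 1.0]])
cov_y = np.array([[1.0, 0.3], [0.3, 1.0]])

def expect_f(cov, h=None, cov2=None, n=200_000):
    """Monte-Carlo E[f(z)] for f(z) = exp(-(z1-1)_+ - (z2-1)_+), with
    z = sqrt(h)*x + sqrt(1-h)*y when an interpolation parameter h is given."""
    if h is None:
        z = rng.multivariate_normal([0, 0], cov, size=n)
    else:
        x = rng.multivariate_normal([0, 0], cov, size=n)
        y = rng.multivariate_normal([0, 0], cov2, size=n)
        z = np.sqrt(h) * x + np.sqrt(1 - h) * y
    return np.exp(-np.clip(z - 1.0, 0.0, None).sum(axis=1)).mean()

print(expect_f(cov_x))                      # h = 1 endpoint
print(expect_f(cov_y))                      # h = 0 endpoint
print(expect_f(cov_x, h=0.5, cov2=cov_y))   # interpolated law, covariance (cov_x+cov_y)/2
```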

Note that, as $\kappa \downarrow 0$, $\chi_\kappa(x) \to \mathbf 1_{x>0}$, $N^\kappa_u(t) \to N_u(t)$, and $\overline N^\kappa_u(t) \to \overline N_u(t)$.    (4.38)

In order to prove (4.33), we show that, for all $\kappa > 0$,

$\mathbb E_B\Bigl[\exp\Bigl(-\sum_{l=1}^k c_l N^\kappa_{u_l}(t)\Bigr)\Bigr] \le \mathbb E_B\Bigl[\exp\Bigl(-\sum_{l=1}^k c_l \overline N^\kappa_{u_l}(t)\Bigr)\Bigr] + R(t)$,    (4.39)

where $R(t)$ is independent of $\kappa$ and $\lim_{t\uparrow\infty} \mathbb E_n[R(t)] = 0$.

From now on we work conditional on the σ-algebra generated by the Galton-Watson tree. We introduce the family of functions $f_{t,\kappa} : \mathbb R^{n(t)} \to \mathbb R$,

$f_{t,\kappa}(x) \equiv f_{t,\kappa}(x_1,\dots,x_{n(t)}) \equiv \exp\Bigl(-\sum_{l=1}^k c_l \sum_{i=1}^{n(t)} \chi_\kappa\bigl(x_i - \widetilde m(t) - u_l\bigr)\Bigr)$.

We want to control

$\mathbb E_B\Bigl[\exp\Bigl(-\sum_{l=1}^k c_l N^\kappa_{u_l}(t)\Bigr)\Bigr] - \mathbb E_B\Bigl[\exp\Bigl(-\sum_{l=1}^k c_l \overline N^\kappa_{u_l}(t)\Bigr)\Bigr] = \mathbb E_B\bigl[f_{t,\kappa}(x_1(t),\dots,x_{n(t)}(t))\bigr] - \mathbb E_B\bigl[f_{t,\kappa}(\overline y_1(t),\dots,\overline y_{n(t)}(t))\bigr]$.    (4.40)

Define, for $h \in [0,1]$, the interpolating process

$x^h_i = \sqrt h\, x_i + \sqrt{1-h}\, \overline y_i$, $h \in [0,1]$.    (4.41)

The interpolating process $\{x^h_i, i \le n(t)\}$ is a Gaussian process with the same underlying branching structure and speed function

$\Sigma^2_h(s) = h\,\Sigma^2(s) + (1-h)\,\overline\Sigma^2(s)$.    (4.42)

Then, (4.40) is equal to

$\int_0^1 \mathbb E_B\Bigl[\frac{d}{dh} f_{t,\kappa}(x^h(t))\Bigr]\,dh$,    (4.43)

where

$\frac{d}{dh} f_{t,\kappa}(x^h(t)) = \frac12 \sum_{i=1}^{n(t)} \frac{\partial f_{t,\kappa}}{\partial x_i}\bigl(x^h_1(t),\dots,x^h_{n(t)}(t)\bigr)\Bigl[\frac{x_i(t)}{\sqrt h} - \frac{\overline y_i(t)}{\sqrt{1-h}}\Bigr]$    (4.44)

and

$\frac{\partial f_{t,\kappa}}{\partial x_i}\bigl(x^h_1(t),\dots,x^h_{n(t)}(t)\bigr) = -\sum_{l=1}^k \frac{c_l}{\sqrt{2\pi}\,\kappa}\, e^{-\frac{(x^h_i(t) - \widetilde m(t) - u_l)^2}{2\kappa^2}}\, f_{t,\kappa}\bigl(x^h_1(t),\dots,x^h_{n(t)}(t)\bigr)$.    (4.45)

The key idea is to introduce a localisation condition on the path of $x^h_i$ into (4.44) at this stage. This is not surprising, since localising paths has been a crucial tool in almost all computations involving BBM, starting already with Bramson's paper, Bramson (1978a). To do so, we insert into the right-hand side of (4.44) a one in the form

$1 = \mathbf 1_{x^h_i \in \mathcal T^\gamma_{t,\overline I,\Sigma_h}} + \mathbf 1_{x^h_i \notin \mathcal T^\gamma_{t,\overline I,\Sigma_h}}$,    (4.46)

19 Variable Speed Branching Brownian Motion 79 with Ī [ tδ < t δ< 1 t, t1 δ> 1 t], 4.47 and T γ defined in.1. Here δ <,> t,i,σ 1 t δ <,> t, while δ <,> is defined in the h same way, but with respect to the speed function Σ instead of Σ. We call the two resulting summands S h < and S h >, respectively. Note that, when proving the lower bound, we choose instead of Ī, the interval I [ tδ < t δ< 1 t, t1 δ> t] The next lemma shows that S h > does not contribute to the expectation in 4.44, as t. Lemma 4.7. With the notation above, we have 1 lim E n E B S h > dh = t The proof of this lemma will be postponed. We continue with the proof of Lemma 4.6. We are left with controlling, for fixed h, 1, E B S h < = E B 1 nt By the definition of T γ t,ī,σ h, [ f t,κ x h xi t t1 x x h i T γ i t,ī,σ h h y it 1 h ] x h i T γ = 1 s Ī: ξ, 4.51 h t,ī,σ i s Σ h s t Σ h sγ h where ξi hs xh i s Σ h s t x h i t is a time changed Brownian bridge from to in time t, which, as we recall, is independent of the endpoint x h i t. We want to apply a Gaussian integration by parts formula to 4.5. However, we need to take care of the fact that each summand in 4.5 depends on the whole path of ξ i through the term in Therefore, we first approximate that indicator function in 4.51 by a discretised version. Let, for N N, t 1,..., t N be a sequence of equidistant points in [, t]. Define the following sequence of approximations, G h,n : CR + R, to the indicator function in 4.51, where G h,n x g h xt 1,..., xt N, 4.5 g h z 1,..., z N = N l=1 [1 tl Īχ N Σ ht l t Σ ht l γ z l χ N Σ ht l t Σ ht l γ + z l + 1tl Ī ] Clearly G h,n 1 x T γ t,ī,σ h, pointwise, while the derivatives z l g h z 1,..., z N are bounded. By the Gaussian integration by parts formula see, e.g., Talagrand 11a,

20 8 A. Bovier and L. Hartung Appendix A.5, we have, for any given N N, E B x i t f t,κ x h tg h,n ξ h x i = N l=1 nt + E B xi tξi h t l E B f t,κ x h t g h ξi h t 1,..., ξi h t z N l j=1 E B x i tx h j te B G h,n ξ h f t,κ x h t x j x i But for all l {1,..., N }, E B xi tξi h t l = he B x i tx i t l x i t Σ t l x i t t = h Σ t l Σ t l =, 4.55 and hence the second line in 4.54 is equal to zero. In exactly the same way we get E B ȳ i f t,κ x h x 1t,..., x h nt t i nt = E B ȳ i tx h j te B G h,n ξ h f t,κ x h t x j x i j=1 Computing the covariances, E B xi tx h j t = he x i tx j t and we obtain that E B = 1 nt i,j=1 i j nt E B ȳi tx h j t = 1 he ȳ i tȳ j t, [ f t,κ x h tg h,n ξ h xi t x i h y it 1 h ] 4.57 [ EB x i tx j t E B y i ty j t ] E B G h,n ξ h f t,κ x h t x i x j where crucially the terms with i = j have cancelled. This equation holds for any N, and since G h,n x 1, and the integral E f t,κx h t B is finite trivially, x i x j since the mixed second derivatives of f are bounded, by Lebesgue s dominated convergence theorem, the right hand side converges to the expression where G h,n is replaced by its limit. Similarly, in the left hand side we can apply Lebesgue s theorem, majorising the integrands by C x i t, etc, which are all integrable. Thus we obtain that E B = 1 nt i,j=1 i j nt [ f t,κ x h xi t t1 1x h T x γ i i t,ī,σ h h [ EB x i tx j t E B y i ty j t ] E B 1 x h i T γ t,ī,σ h, y it 1 h ] 4.58 f t,κ x h t, x i x j

21 Variable Speed Branching Brownian Motion 81 Introducing 1 = 1 dx h i t,x h j t Ī + 1 dx h i t,x h j t Ī, 4.59 into 4.58, we rewrite the right hand side of 4.58 as T 1 + T, where T nt =, i,j=1 i j i,j=1 i j E B [ xi tx j t y i ty j t ] E B 1 dx h i t,x h j t Ī1 x h i T γ t,ī,σ h E B [ xi tx j t y i ty j t ] E B 1 dx h i t,x h j t Ī1 x h i T γ t,ī,σ h f t,κ x h t x i x j T 4.61 nt =. The term T 1 is controlled by the following Lemma. f t,κ x h t x i x j Lemma 4.8. With the notation above, there exists a constant C <, independent of t and κ, such that for all t large and κ small enough, 1 E n T 1dh C e s+σ s+os γ e s+σ s+os γ ds. 4.6 Moreover, we have: Ī Lemma 4.9. If Σ satisfies A1-A3, and Σ is as defined in 4.4, then e s+σ s+os γ e s+σ s+os γ ds = lim t Ī We postpone the proofs of these lemmata to Section 5. Up to this point the proof of 4.34 works exactly as the proof of 4.33 when Σ is replaced by Σ. For T and T we have: Lemma 4.1. For almost all realisations of the Galton-Watson process, the following statements hold: i If lim t δ < t =, then T, 4.64 and T ii If lim t δ < t = δ < >, then and limt, 4.66 t limt t The proof of this lemma is again postponed to Section 5. From Lemma 4.8, Lemma 4.9, and Lemma 4.1 together with 4.5, the bound 4.39 follows. Since the left and right hand sides involve expectations over bounded functions, we may pass to the limit κ. This implies As pointed out, using Lemma 4.1, the bound 4.34 also follows. Thus, Lemma 4.6 is proved, once we provide the postponed proofs of the various lemmata above.

22 8 A. Bovier and L. Hartung We conclude the proof of Proposition 4.1. Proof of Proposition 4.1: Taking the limit as t in 4.33 and 4.34 and using Lemma 4.5 gives, in the case A 1 <, lim sup L u1,...,u k t, c L u1,...,u k c, 4.68 t lim inf u 1,...,u k t, c t L u1,...,u k c Hence lim t L u1,...,u k t, c exists and is equal to L u1,...,u k c. In the case A 1 =, the same result follows if in addition we take ρ after taking t. This concludes the proof of Proposition Proofs of the auxiliary lemmata We now provide the proofs of the lemmata from the last section whose proofs we had postponed. Proof of Lemma 4.7: We have E B S h > 1 nt k c l E B l=1 x h i t mt u l [ ] e κ 1 xit πκ x h i T γ h + y i t 1 h. t,ī,σ h 5.1 We use the fact that the condition in the indicator function involves only the time changed Brownian bridge, ξi hs = xh i s Σ h s t x h i t, which is independent of the endpoint x h i t, and of course also of x it. This implies that E B = E B x h i t mt u l e κ 1 πκ x h i T γ t,ī,σ h x h i t mt u l e κ πκ x i t h x it h P B x h i T γ t,ī,σ h, 5. and similarly for the terms involving ȳ. The computation of the first expectation is a straightforward Gaussian integration involving two independent Gaussians. In fact we can write E B x h i t mt u l e κ πκ x i t = where κ +th M tκ h1 h/κ h1 h/κ κ +t1 h tκ dz 1 dz π 3/ tκ e 1 z,mz+v,z mt+u l /κ z 1,, v mt + u l κ Note that det M = t + t 1 κ, and its eigenvalues are given by 5.3 h h λ ± = t 1 + κ ± κ 4 + t 1 κ. 5.5 Importantly, the smaller eigenvalue behaves, when κ /t tends to zero, as λ = 1/t 1 + Oκ /t. 5.6

23 Variable Speed Branching Brownian Motion 83 The remaining calculations amount to completing the square. With a mt + u l κ t we can rewrite the right hand side of 5.3 as e 1 mt+u l t+κ π 1/ κ + t Now it is plain that the last expectation is bounded by h, h dz 1 dz π detm 1 e 1 z a,mz a z a 1 + const.λ 1/ h mt + u l κ /t t1/ 1 + Oκ t const. ht + t, 5.9 with the constant uniform in, say, κ 1, t 1. This allows us to bound 5.8 by a uniform constant times ht + e t/1+κ /t+ln t/1+κ /t const.e t ht3 + t. 5.1 Next we bound the probability that the Brownian bridge does not stay in the tube. For this we use Lemma.. Note that by construction, if s Ī, then for all h [, 1], Σ h Dt1/3, and Σ h t Dt1/3, for some constant < D <, depending only on the function A. Thus, by Eq..4 of Lemma., P B x h i T γ 8 k 1/ γ e k γ 1 / t,ī,σ h k=dt 1/3 We are now ready to insert everything back into 5.1. This gives that, uniformly in κ small and t large as above E B S h > ntconst. k l=1 c l e t t 3 + t/ h + t/ 1 h e Dγ 1 t γ 1/ Integrating over h and taking the expectation with respect to the Galton-Watson process yields 1 k E n E B S h > dh const. c l t 3/ e Dγ 1 t γ 1/3, which tends to zero as t, uniformly in κ 1, as claimed, if γ > 1/. This proves the assertion of the lemma. Proof of Lemma 4.1: We first proof Observe that l=1 dx i t, x j t = dy i t, y j t = dx h i t, x h j t Moreover, for all 1 i, j nt, i j, 1 x h i T γ t,ī,σ h f t,κ x h x i x 1t,..., x h nt t j For dx i t, x j t [, tδ < 1 t δ< t, we distinguish the cases lim t δ < t > and lim t δ < t =, respectively.

24 84 A. Bovier and L. Hartung If lim t δ < t = δ < >, then Ax = Ax = Ax =, for all x [, tδ 1 < t δ< t. Thus all the terms in both T and T with i, j such that dx i t, x j t [, tδ 1 < t δ< t vanish. Next consider the case where lim t δ < t =. By Lemma 4.4 we have, for y i t, y j t with dy i t, y j t [, tδ 1 < t δ< t, that E B y i t, y j t = Σ dyi t, y j t Σ dy i t, y j t = Σ dx i t, x j t = E B x i t, x j t For 4.65 we proceed in the same way but instead of 5.15 we have, for dy i t, y j t [, tδ 1 < t δ< t, E B y i t, y j t = Σ dy i t, y j t Σ dy i t, y j t If dy i t, y j t [t1 δ 1 > t, t], resp. dy obtain in both cases from Lemma 4.4 that and = Σ dx i t, x j t = E B x i t, x j t t, y t [t1 i j δ> t, t], we E B y i t, y j t E B x i t, x j t, 5.17 E B y i t, y j t E B x i t, x j t, 5.18 respectively. This concludes the proof of Lemma 4.1. Proof of Lemma 4.8 : We have that 1 nt E n T 1dh E n E B x i tx j t E B y i ty j t i,j=1 i j E B 1 dx h i t,x h j t Ī1 x h i T γ t,ī,σ h f t,κ x h t dh. x i x j By definition of f t,κ we have for i j f t,κ x h t k c l c l = x i x j πκ e x h i t mt u l x h j t mt u l κ e κ f t,κ x h t l, l=1 k l, l=1 c l c l πκ e x h i t mt u l x h j t mt u l κ e κ, 5. where we used that f t,κ 1. Using this bound we get that 5.19 is bounded from above by nt E n E B x i tx j t E B y i ty j t i,j=1 i j k l, l=1 1 E B 1 dx h i t,x h j t Ī1 x h i T γ t,ī,σ h c l c l πκ e x h i t mt u l x h j t mt u l κ e κ dh. 5.1

25 Variable Speed Branching Brownian Motion 85 We introduce the shorthand notation A 1 = Σ hs/t, A = 1 Σ hs/t. 5. To compute the expectation in 5.1 we fix the time of the most recent common ancestor of x i and x j as s and integrate over it. Then the pair x h i t, xh j t has the same distribution as Y + X 1, Y + X, where Y, X 1, X are independent centred Gaussian random variables with variance ta 1, ta, and ta, respectively. We also relax the tube condition except at the splitting time of the two particles. From this we see that the expression in 5.1 is bounded from above by C k c l c l Ī l, l=1 1 A1 mt+js,γ Σ s Σ s e t s 5.3 A 1 mt Js,γ where > C > is a constant, and for 1 l k Qy, u l, tqy, u l, te y ta 1 dy πta1 dhds, Js, γ = Σ hs t Σ hs γ = A1 A t γ, 5.4 Qy, u l, t = e x+y mt u l /κ e x ta dx. 5.5 π κ ta We first compute Qy, u l, t. We change variables in 5.5 and obtain that 5.5 can be written as x = z + ta mt y u l κ + ta 5.6 Qy, u l, t = e mt y u l κ +ta = e Plugging this into 5.4 we get C k l, l=1 c l c l Ī mt y u l κ +ta e z κ +A t κ A t π κ A t dz πκ + ta. 5.7 Σ s Σ s e t s A1 mt+js,γ A 1 mt Js,γ e mt y u l + mt y u l κ +ta πκ + ta In the integral with respect to y we now change variables to e y ta 1 dy πta1 dhds, w = y mt u l u la 1 t κ, A 1 t

26 86 A. Bovier and L. Hartung and drop terms that are bounded uniformly in t and κ by constants to see that 5.8 is less than or equal to C k l, l=1 c l c l e Ī Σ s Σ s e t s 1 mt κ +1+A 1 t e w κ +1+A 1 t κ +ta A 1 t πκ + ta A 1 A t mt A 1 mtκ ta 1 u l +u l κ +Js,γ +1+A 1 t A 1 A t mt A 1 mtκ ta 1 u l +u l κ Js,γ +1+A 1 t dwdhds πta1, 5.3 with C a new constant independent of t and κ. Since, for each h, 1, 1 + A1 A1 A mt Js, γ ta1 A A A 1 A mt 1/ 4 A 1 A mt A 1 A γ t γ, 5.31 which tends to +, as t, we can use the Gaussian tail bound.11 in the integral over w to show that e t s 1 e t s 1 A 1 A t mt A 1 mtκ ta 1 u l +u l κ +Js,γ +1+A 1 t e mt κ +1+A 1 t e w κ +1+A 1 t κ +ta A 1 t πκ + ta πta 1 dwdh A 1 A t mt A 1 mtκ ta 1 u l +u l κ Js,γ +1+A 1 t e 1 κ +1+A 1 t A 1 A t mt A 1 mtκ mt ta 1 u l +u l κ +ta A 1 t κ Js,γ +1+A 1 t e κ +1+A 1 t πκ + ta [ ] A1 A t mt A 1 mtκ ta 1 u l +u l κ +1+A 1 t Js, γ κ 1 +1+A 1 t κ dh +ta A 1 t πta1. e 1 κ +1+A 1 t A 1 A t mt A 1 mtκ ta 1 u l +u l κ +ta A 1 t κ Js,γ +1+A 1 t e mt κ +1+A 1 t A1t dh A 1 A t mt A 1 mtκ ta 1 u l +u l Js,γκ +1+A 1 t π 3 = e t s 1 By the definition of Js, γ we can bound 5.3 from above by Ĉe t s A1 t 1+A mt A 1A t mt A 1 mtκ ta 1u l +u l Js,γκ +1+A 1t e t +Os γ dh π where Ĉ < is a constant that does not depend on t and κ. The denominator in the fraction appearing in the integrand equals A A 1 t 1 + o1, for t large, because, for all s in the integration range Ī, it holds that A t > t 1 3 and A 1 t > t 1 3. Using this and the fact that mt /t = t log t + Ologt /t, 5.34 we see that the expression in 5.33 is smaller than Ĉ 1 t 1 A 1 A t A 1 t e s+a 1t+Os γ dh π 3, 5.35 Since A 1 = 1 A, the fraction in 5.35 is bounded by a constant times t 1+A / A 1 A 1,

27 Variable Speed Branching Brownian Motion 87 We now distinguish three regimes. If A ɛ, 1 ɛ, for ɛ > independent of t, then the expression in 5.36 is of order t 1/, as t. If A tends to 1, then for s Ī, t 1+A/ t 1/+1/3, 5.37 A 1 A 1 which tends to zero, as t. Finally, when A, we get t 1+A/ A 1 A 1 t 1+/3+o1, 5.38 which tends to zero as t. Hence, for all s Ī, 5.35 is bounded from above by o1 1 e s+a 1t+Os γ dh Inserting this into 5.8, and writing out A 1 t = hσ s + 1 hσ s, we see that 1 E n T 1dh 1 o1 Σ s Σ s e s+hσ s+1 hσ s+os γ dhds Ī o1 e s+σ s+os γ e s+σ s+os γ ds, 5.4 Ī with o1 tending to, as t, uniformly for κ small enough. This proves Lemma 4.8. Proof of Lemma 4.9: We split the domain of integration into three parts. First, let δ 3 > be such that σ b + K δ 3 < 1 and δ 3 < δ b By a Taylor expansion at zero we have Σ s σ b + K δ 3s, for s [, δ 3 t]. 5.4 Moreover, if δ 1 >, then so is δ, and we then choose δ 3 < δ < δ< 1 with δ< i lim t δ i < ; hence, for t large enough it then also holds that δ 3 < δ < t δ< 1 t. If δ 1 < =, we set note that, by monotonicity, in this case δ< t δ< 1 t = δ< t δ3t S1 e s+σ s+os γ e s+σ s+os γ ds 5.43 tδ < t δ3 t tδ < t e s1 σ b K δ 3+Os γ + e s1 σ b K δ< t+os γ ds. By assumption on δ 3, 1 σb K δ 3 > and 1 σb K δ< t >, for all t sufficiently large. Hence lim S1 = t If δ 1 < >, we set S1 =. Next we choose δ 4 such that σ e δ 4 K > 1 and δ 4 < δ e Again due to a first order Taylor expansion we have Σ t s t σ e K δ 4 s, for s [tδ > 1 t, tδ 4 ]. 5.46

Hence

$S2 \equiv \int_{t-\delta_4 t}^{t(1-\delta^1_>(t))} \bigl(e^{-s+\Sigma^2(s)+O(s^\gamma)} + e^{-s+\overline\Sigma^2(s)+O(s^\gamma)}\bigr)\,ds \le \int_{t\delta^1_>(t)}^{\delta_4 t} \bigl(e^{-\bar s(\sigma_e^2 - K_2\delta_4 - 1)+O(\bar s^\gamma)} + e^{-\bar s(\sigma_e^2 - K_2\delta_>(t) - 1)+O(\bar s^\gamma)}\bigr)\,d\bar s$,    (5.47)

where we substituted $\bar s = t - s$. By assumption on $\delta_4$ we have $1 - \sigma_e^2 + K_2\delta_4 < 0$ and, for $t$ large, $1 - \sigma_e^2 + K_2\delta_>(t) < 0$. Hence

$\lim_{t\uparrow\infty} S2 = 0$.    (5.48)

We still have to control

$S3 \equiv \int_{\delta_3 t}^{t-\delta_4 t} \bigl(e^{-s+\Sigma^2(s)+O(s^\gamma)} + e^{-s+\overline\Sigma^2(s)+O(s^\gamma)}\bigr)\,ds$.    (5.49)

Consider the function $A(x)$ on the interval $[\delta_3, 1-\delta_4]$. Since $A(x)$ is right-continuous, increasing, and $A(x) < x$ on $(0,1)$, we know that

$M \equiv \inf_{x\in[\delta_3, 1-\delta_4]} \bigl(x - A(x)\bigr) > 0$.    (5.50)

Then

$s - \Sigma^2(s) = t\bigl(s/t - A(s/t)\bigr) \ge Mt$,    (5.51)

which implies

$\int_{\delta_3 t}^{t-\delta_4 t} e^{-s+\Sigma^2(s)+O(s^\gamma)}\,ds \le e^{-Mt}\int_{\delta_3 t}^{t-\delta_4 t} e^{O(s^\gamma)}\,ds$,    (5.52)

which tends to zero, as $t\uparrow\infty$. By the same argument it follows that

$\lim_{t\uparrow\infty} \int_{\delta_3 t}^{t-\delta_4 t} e^{-s+\overline\Sigma^2(s)+O(s^\gamma)}\,ds = 0$.    (5.53)

It follows that $\lim_{t\uparrow\infty} S3 = 0$, which concludes the proof of Lemma 4.9.

Acknowledgements

The authors would like to thank an anonymous referee for pointing out several mistakes in a preliminary version.

References

E. Aïdékon, J. Berestycki, É. Brunet and Z. Shi. Branching Brownian motion seen from its tip. Probab. Theory Related Fields (2013).
L.-P. Arguin, A. Bovier and N. Kistler. The extremal process of branching Brownian motion. Probab. Theory Related Fields (2013).
K. B. Athreya and P. E. Ney. Branching processes. Die Grundlehren der mathematischen Wissenschaften, Band 196. Springer-Verlag, New York-Heidelberg (1972).
M. Biskup and O. Louidor. Extreme local extrema of two-dimensional discrete Gaussian free field. ArXiv e-prints (2013).


More information

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

SPATIAL STRUCTURE IN LOW DIMENSIONS FOR DIFFUSION LIMITED TWO-PARTICLE REACTIONS

SPATIAL STRUCTURE IN LOW DIMENSIONS FOR DIFFUSION LIMITED TWO-PARTICLE REACTIONS The Annals of Applied Probability 2001, Vol. 11, No. 1, 121 181 SPATIAL STRUCTURE IN LOW DIMENSIONS FOR DIFFUSION LIMITED TWO-PARTICLE REACTIONS By Maury Bramson 1 and Joel L. Lebowitz 2 University of

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

Annealed Brownian motion in a heavy tailed Poissonian potential

Annealed Brownian motion in a heavy tailed Poissonian potential Annealed Brownian motion in a heavy tailed Poissonian potential Ryoki Fukushima Research Institute of Mathematical Sciences Stochastic Analysis and Applications, Okayama University, September 26, 2012

More information

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation:

The Dirichlet s P rinciple. In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: Oct. 1 The Dirichlet s P rinciple In this lecture we discuss an alternative formulation of the Dirichlet problem for the Laplace equation: 1. Dirichlet s Principle. u = in, u = g on. ( 1 ) If we multiply

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Connection to Branching Random Walk

Connection to Branching Random Walk Lecture 7 Connection to Branching Random Walk The aim of this lecture is to prepare the grounds for the proof of tightness of the maximum of the DGFF. We will begin with a recount of the so called Dekking-Host

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

Daniel M. Oberlin Department of Mathematics, Florida State University. January 2005

Daniel M. Oberlin Department of Mathematics, Florida State University. January 2005 PACKING SPHERES AND FRACTAL STRICHARTZ ESTIMATES IN R d FOR d 3 Daniel M. Oberlin Department of Mathematics, Florida State University January 005 Fix a dimension d and for x R d and r > 0, let Sx, r) stand

More information

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES

RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES RENORMALIZATION OF DYSON S VECTOR-VALUED HIERARCHICAL MODEL AT LOW TEMPERATURES P. M. Bleher (1) and P. Major (2) (1) Keldysh Institute of Applied Mathematics of the Soviet Academy of Sciences Moscow (2)

More information

Analysis II - few selective results

Analysis II - few selective results Analysis II - few selective results Michael Ruzhansky December 15, 2008 1 Analysis on the real line 1.1 Chapter: Functions continuous on a closed interval 1.1.1 Intermediate Value Theorem (IVT) Theorem

More information

1 Introduction. 2 Measure theoretic definitions

1 Introduction. 2 Measure theoretic definitions 1 Introduction These notes aim to recall some basic definitions needed for dealing with random variables. Sections to 5 follow mostly the presentation given in chapter two of [1]. Measure theoretic definitions

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

BIHARMONIC WAVE MAPS INTO SPHERES

BIHARMONIC WAVE MAPS INTO SPHERES BIHARMONIC WAVE MAPS INTO SPHERES SEBASTIAN HERR, TOBIAS LAMM, AND ROLAND SCHNAUBELT Abstract. A global weak solution of the biharmonic wave map equation in the energy space for spherical targets is constructed.

More information

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY GENERALIZED POSITIVE NOISE. Michael Oberguggenberger and Danijela Rajter-Ćirić

STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY GENERALIZED POSITIVE NOISE. Michael Oberguggenberger and Danijela Rajter-Ćirić PUBLICATIONS DE L INSTITUT MATHÉMATIQUE Nouvelle série, tome 7791 25, 7 19 STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY GENERALIZED POSITIVE NOISE Michael Oberguggenberger and Danijela Rajter-Ćirić Communicated

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

Regularity of the density for the stochastic heat equation

Regularity of the density for the stochastic heat equation Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department

More information

Bivariate Uniqueness in the Logistic Recursive Distributional Equation

Bivariate Uniqueness in the Logistic Recursive Distributional Equation Bivariate Uniqueness in the Logistic Recursive Distributional Equation Antar Bandyopadhyay Technical Report # 629 University of California Department of Statistics 367 Evans Hall # 3860 Berkeley CA 94720-3860

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Nonlinear stabilization via a linear observability

Nonlinear stabilization via a linear observability via a linear observability Kaïs Ammari Department of Mathematics University of Monastir Joint work with Fathia Alabau-Boussouira Collocated feedback stabilization Outline 1 Introduction and main result

More information

The Contour Process of Crump-Mode-Jagers Branching Processes

The Contour Process of Crump-Mode-Jagers Branching Processes The Contour Process of Crump-Mode-Jagers Branching Processes Emmanuel Schertzer (LPMA Paris 6), with Florian Simatos (ISAE Toulouse) June 24, 2015 Crump-Mode-Jagers trees Crump Mode Jagers (CMJ) branching

More information

µ X (A) = P ( X 1 (A) )

µ X (A) = P ( X 1 (A) ) 1 STOCHASTIC PROCESSES This appendix provides a very basic introduction to the language of probability theory and stochastic processes. We assume the reader is familiar with the general measure and integration

More information

56 4 Integration against rough paths

56 4 Integration against rough paths 56 4 Integration against rough paths comes to the definition of a rough integral we typically take W = LV, W ; although other choices can be useful see e.g. remark 4.11. In the context of rough differential

More information

Risk-Minimality and Orthogonality of Martingales

Risk-Minimality and Orthogonality of Martingales Risk-Minimality and Orthogonality of Martingales Martin Schweizer Universität Bonn Institut für Angewandte Mathematik Wegelerstraße 6 D 53 Bonn 1 (Stochastics and Stochastics Reports 3 (199, 123 131 2

More information

Mandelbrot s cascade in a Random Environment

Mandelbrot s cascade in a Random Environment Mandelbrot s cascade in a Random Environment A joint work with Chunmao Huang (Ecole Polytechnique) and Xingang Liang (Beijing Business and Technology Univ) Université de Bretagne-Sud (Univ South Brittany)

More information

LECTURE 15: COMPLETENESS AND CONVEXITY

LECTURE 15: COMPLETENESS AND CONVEXITY LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other

More information

5.1 Approximation of convex functions

5.1 Approximation of convex functions Chapter 5 Convexity Very rough draft 5.1 Approximation of convex functions Convexity::approximation bound Expand Convexity simplifies arguments from Chapter 3. Reasons: local minima are global minima and

More information

Asymptotic statistics using the Functional Delta Method

Asymptotic statistics using the Functional Delta Method Quantiles, Order Statistics and L-Statsitics TU Kaiserslautern 15. Februar 2015 Motivation Functional The delta method introduced in chapter 3 is an useful technique to turn the weak convergence of random

More information

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0

1. Nonlinear Equations. This lecture note excerpted parts from Michael Heath and Max Gunzburger. f(x) = 0 Numerical Analysis 1 1. Nonlinear Equations This lecture note excerpted parts from Michael Heath and Max Gunzburger. Given function f, we seek value x for which where f : D R n R n is nonlinear. f(x) =

More information

Statistical inference on Lévy processes

Statistical inference on Lévy processes Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline

More information

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012 Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

APPROXIMATING CONTINUOUS FUNCTIONS: WEIERSTRASS, BERNSTEIN, AND RUNGE

APPROXIMATING CONTINUOUS FUNCTIONS: WEIERSTRASS, BERNSTEIN, AND RUNGE APPROXIMATING CONTINUOUS FUNCTIONS: WEIERSTRASS, BERNSTEIN, AND RUNGE WILLIE WAI-YEUNG WONG. Introduction This set of notes is meant to describe some aspects of polynomial approximations to continuous

More information

Ordinary differential equations with measurable right-hand side and parameters in metric spaces

Ordinary differential equations with measurable right-hand side and parameters in metric spaces Ordinary differential equations with measurable right-hand side and parameters in metric spaces Friedemann Schuricht Max-Planck-Institut für Mathematik in den Naturwissenschaften, Inselstraße 22 26, D-04103

More information

ON SPACE-FILLING CURVES AND THE HAHN-MAZURKIEWICZ THEOREM

ON SPACE-FILLING CURVES AND THE HAHN-MAZURKIEWICZ THEOREM ON SPACE-FILLING CURVES AND THE HAHN-MAZURKIEWICZ THEOREM ALEXANDER KUPERS Abstract. These are notes on space-filling curves, looking at a few examples and proving the Hahn-Mazurkiewicz theorem. This theorem

More information

The effects of a weak selection pressure in a spatially structured population

The effects of a weak selection pressure in a spatially structured population The effects of a weak selection pressure in a spatially structured population A.M. Etheridge, A. Véber and F. Yu CNRS - École Polytechnique Motivations Aim : Model the evolution of the genetic composition

More information

STAT 331. Martingale Central Limit Theorem and Related Results

STAT 331. Martingale Central Limit Theorem and Related Results STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal

More information

Skorokhod Embeddings In Bounded Time

Skorokhod Embeddings In Bounded Time Skorokhod Embeddings In Bounded Time Stefan Ankirchner Institute for Applied Mathematics University of Bonn Endenicher Allee 6 D-535 Bonn ankirchner@hcm.uni-bonn.de Philipp Strack Bonn Graduate School

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA)

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA) The circular law Lewis Memorial Lecture / DIMACS minicourse March 19, 2008 Terence Tao (UCLA) 1 Eigenvalue distributions Let M = (a ij ) 1 i n;1 j n be a square matrix. Then one has n (generalised) eigenvalues

More information

Extrema of discrete 2D Gaussian Free Field and Liouville quantum gravity

Extrema of discrete 2D Gaussian Free Field and Liouville quantum gravity Extrema of discrete 2D Gaussian Free Field and Liouville quantum gravity Marek Biskup (UCLA) Joint work with Oren Louidor (Technion, Haifa) Discrete Gaussian Free Field (DGFF) D R d (or C in d = 2) bounded,

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Mouvement brownien branchant avec sélection

Mouvement brownien branchant avec sélection Mouvement brownien branchant avec sélection Soutenance de thèse de Pascal MAILLARD effectuée sous la direction de Zhan SHI Jury Brigitte CHAUVIN, Francis COMETS, Bernard DERRIDA, Yueyun HU, Andreas KYPRIANOU,

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

A simple branching process approach to the phase transition in G n,p

A simple branching process approach to the phase transition in G n,p A simple branching process approach to the phase transition in G n,p Béla Bollobás Department of Pure Mathematics and Mathematical Statistics Wilberforce Road, Cambridge CB3 0WB, UK b.bollobas@dpmms.cam.ac.uk

More information

Optimal global rates of convergence for interpolation problems with random design

Optimal global rates of convergence for interpolation problems with random design Optimal global rates of convergence for interpolation problems with random design Michael Kohler 1 and Adam Krzyżak 2, 1 Fachbereich Mathematik, Technische Universität Darmstadt, Schlossgartenstr. 7, 64289

More information

Ancestor Problem for Branching Trees

Ancestor Problem for Branching Trees Mathematics Newsletter: Special Issue Commemorating ICM in India Vol. 9, Sp. No., August, pp. Ancestor Problem for Branching Trees K. B. Athreya Abstract Let T be a branching tree generated by a probability

More information

The Hilbert transform

The Hilbert transform The Hilbert transform Definition and properties ecall the distribution pv(, defined by pv(/(ϕ := lim ɛ ɛ ϕ( d. The Hilbert transform is defined via the convolution with pv(/, namely (Hf( := π lim f( t

More information

1 The Observability Canonical Form

1 The Observability Canonical Form NONLINEAR OBSERVERS AND SEPARATION PRINCIPLE 1 The Observability Canonical Form In this Chapter we discuss the design of observers for nonlinear systems modelled by equations of the form ẋ = f(x, u) (1)

More information

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples

Homework # , Spring Due 14 May Convergence of the empirical CDF, uniform samples Homework #3 36-754, Spring 27 Due 14 May 27 1 Convergence of the empirical CDF, uniform samples In this problem and the next, X i are IID samples on the real line, with cumulative distribution function

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

Annealed Brownian motion in a heavy tailed Poissonian potential

Annealed Brownian motion in a heavy tailed Poissonian potential Annealed Brownian motion in a heavy tailed Poissonian potential Ryoki Fukushima Tokyo Institute of Technology Workshop on Random Polymer Models and Related Problems, National University of Singapore, May

More information

Hypothesis testing for Stochastic PDEs. Igor Cialenco

Hypothesis testing for Stochastic PDEs. Igor Cialenco Hypothesis testing for Stochastic PDEs Igor Cialenco Department of Applied Mathematics Illinois Institute of Technology igor@math.iit.edu Joint work with Liaosha Xu Research partially funded by NSF grants

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Asymptotic of the maximal displacement in a branching random walk

Asymptotic of the maximal displacement in a branching random walk Graduate J Math 1 2016, 92 104 Asymptotic of the maximal displacement in a branching random walk Bastien Mallein Abstract In this article, we study the maximal displacement in a branching random walk a

More information

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand

A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand Carl Mueller 1 and Zhixin Wu Abstract We give a new representation of fractional

More information

Taylor and Laurent Series

Taylor and Laurent Series Chapter 4 Taylor and Laurent Series 4.. Taylor Series 4... Taylor Series for Holomorphic Functions. In Real Analysis, the Taylor series of a given function f : R R is given by: f (x + f (x (x x + f (x

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Economics 204 Summer/Fall 2011 Lecture 5 Friday July 29, 2011

Economics 204 Summer/Fall 2011 Lecture 5 Friday July 29, 2011 Economics 204 Summer/Fall 2011 Lecture 5 Friday July 29, 2011 Section 2.6 (cont.) Properties of Real Functions Here we first study properties of functions from R to R, making use of the additional structure

More information

Cauchy s Theorem (rigorous) In this lecture, we will study a rigorous proof of Cauchy s Theorem. We start by considering the case of a triangle.

Cauchy s Theorem (rigorous) In this lecture, we will study a rigorous proof of Cauchy s Theorem. We start by considering the case of a triangle. Cauchy s Theorem (rigorous) In this lecture, we will study a rigorous proof of Cauchy s Theorem. We start by considering the case of a triangle. Given a certain complex-valued analytic function f(z), for

More information

The tail does not determine the size of the giant

The tail does not determine the size of the giant The tail does not determine the size of the giant arxiv:1710.01208v2 [math.pr] 20 Jun 2018 Maria Deijfen Sebastian Rosengren Pieter Trapman June 2018 Abstract The size of the giant component in the configuration

More information

Bessel Functions Michael Taylor. Lecture Notes for Math 524

Bessel Functions Michael Taylor. Lecture Notes for Math 524 Bessel Functions Michael Taylor Lecture Notes for Math 54 Contents 1. Introduction. Conversion to first order systems 3. The Bessel functions J ν 4. The Bessel functions Y ν 5. Relations between J ν and

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information