RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT

PIERRE ANDREOLETTI, GRÉGOIRE VÉCHAMBRE, AND ALEXIS DEVULDER

Abstract. We study a one-dimensional diffusion $X$ in a drifted Brownian potential $W_\kappa$, with $0 < \kappa < 1$, and focus on the behavior of the local times $(L(t,x),\, x \in \mathbb{R})$ of $X$ before time $t > 0$. In particular we characterize the limit law of the supremum of the local time, as well as the position of the favorite sites. These limits can be written explicitly in terms of a two-dimensional stable Lévy process. Our analysis is based on the study of an extension of the renewal structure which is deeply involved in the asymptotic behavior of $X$.

Date: 28/5/2015.
Mathematics Subject Classification. 60K37, 60J55.
Key words and phrases. Diffusion in a random potential, local time supremum, favorite sites.

1. Introduction

Let $(X(t),\, t \geq 0)$ be a diffusion in a random càdlàg potential $(V(x),\, x \in \mathbb{R})$, defined informally by $X(0) = 0$ and
$$dX(t) = d\beta(t) - \tfrac{1}{2} V'(X(t))\, dt,$$
where $\beta$ is a Brownian motion independent of $V$. Rigorously, $X$ is defined by its conditional generator given $V$,
$$\frac{1}{2} e^{V(x)} \frac{d}{dx}\Big( e^{-V(x)} \frac{d}{dx} \Big).$$
We place ourselves in the case where $V$ is a negatively drifted Brownian motion: $V(x) = W_\kappa(x) := W(x) - \frac{\kappa}{2} x$, $x \in \mathbb{R}$, with $0 < \kappa < 1$ and $W$ a two-sided Brownian motion. We explain at the end of Section 1.1 what should be done to extend our results to a more general Lévy process. In our case, the diffusion $X$ is a.s. transient and its asymptotic behavior was first studied by K. Kawazu and H. Tanaka: if $H(r)$ is the hitting time of $r \in \mathbb{R}$ by $X$,
$$H(r) := \inf\{s > 0,\ X(s) = r\}, \qquad (1.1)$$
Kawazu et al. [25] proved that, for $0 < \kappa < 1$ and under the so-called annealed probability $\mathbb{P}$, $H(r)/r^{1/\kappa}$ converges in law to a $\kappa$-stable distribution (see also Y. Hu et al. [24] and H. Tanaka [33]). Here we are interested in the local time of $X$, denoted $(L(t,x),\, x \in \mathbb{R},\, t > 0)$, up to an asymptotically large time $t$. For Brox's diffusion, that is when $\kappa = 0$, it is proved in [4] that the local time process at time $t$, re-centered at the localization coordinate $b_t$ (see [1]), converges in law; this allows the derivation of the law of the supremum of the local time before time $t \in \mathbb{R}_+$,
$$L^*(t) := \sup_{x \in \mathbb{R}} L(t,x).$$
We recall their result below in order to compare it with what we obtain here.
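Before doing so, here is a minimal numerical illustration of the model just defined (a sketch of our own, not from the paper): we sample the potential $W_\kappa$ on a grid, replace it by its linear interpolation so that $V'$ is piecewise constant, and run an Euler–Maruyama scheme for $dX(t) = d\beta(t) - \frac{1}{2}V'(X(t))\,dt$. The grid sizes, time horizon and smoothing are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_potential(kappa, x_max, dx):
    """Two-sided drifted Brownian potential W_kappa(x) = W(x) - (kappa/2) x on a grid."""
    n = int(x_max / dx)
    xs = np.arange(-n, n + 1) * dx
    # independent Brownian increments on the positive and negative half-lines
    right = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(dx), n))))
    left = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(dx), n))))[::-1]
    w = np.concatenate((left[:-1], right))
    return xs, w - (kappa / 2.0) * xs

def euler_diffusion(xs, pot, t_max, dt):
    """Euler-Maruyama for dX = dbeta - 0.5 * V'(X) dt, with V the linear interpolation of pot."""
    slope = np.diff(pot) / np.diff(xs)          # piecewise-constant V' on each grid cell
    path = np.empty(int(t_max / dt))
    x = 0.0
    for k in range(len(path)):
        cell = np.clip(np.searchsorted(xs, x) - 1, 0, len(slope) - 1)
        x += -0.5 * slope[cell] * dt + np.sqrt(dt) * rng.normal()
        path[k] = x
    return path

xs, pot = sample_potential(kappa=0.5, x_max=200.0, dx=0.05)
path = euler_diffusion(xs, pot, t_max=500.0, dt=0.01)
print("X(t_max) ~", path[-1], "  sup_s X(s) ~", path.max())
```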

Theorem 1.1. [4] If $\kappa = 0$, then
$$\frac{L^*(t)}{t} \xrightarrow[t \to +\infty]{\mathcal{L}} \frac{1}{R_\kappa}, \qquad \text{with } R_\kappa := \int_0^{+\infty} e^{-W_\kappa^{(1)}(x)}\, dx + \int_0^{+\infty} e^{-W_\kappa^{(2)}(x)}\, dx, \qquad (1.2)$$
where $(W_\kappa^{(1)}(x),\, x \geq 0)$ and $(W_\kappa^{(2)}(x),\, x \geq 0)$ are two independent copies of the process $(W_\kappa(x),\, x \geq 0)$ Doob-conditioned to remain positive.

Extending their approach, and following the results of Shi [3], Diel [15] obtains the non-trivial normalisations for the almost sure behavior of the lim sup and lim inf of $L^*(t)$. Notice that these results had previously been established for the discrete analogue of $X$, the so-called Sinai random walk, in [13] and [22]. One of our aims in this paper is to extend the study of the local time to the case $0 < \kappa < 1$, and to deduce from it the weak asymptotic behavior of $L^*(t)$.

Before going any further, let us recall what is known for the slow transient cases. When time and space are discrete (see [26] for the seminal paper), a result of Gantert and Shi [23] gives the almost sure behavior of the lim sup of the supremum of the local time $L^*_S(n)$ of these random walks, denoted $S$, before time $n$: a.s. $\limsup_{n \to +\infty} L^*_S(n)/n = c > 0$. Contrary to the recurrent case [22], their method, based on a link between the local time of $S$ and a branching process in random environment, cannot determine the law of the limit of $L^*_S(n)/n$. For the continuous time and space case we are treating here, the only paper dealing with $L^*(t)$ is Devulder's work [14], which proves that $\limsup_{t \to +\infty} L^*(t)/t = +\infty$ almost surely. But once again this method cannot be used to characterize the limit law of $L^*(t)/t$.

Our motivation here is twofold: first, we prove that our approach makes it possible to characterize the limit law of $L^*(t)/t$ and opens a way to determine the correct almost sure behavior of $L^*(t)$, as for Brox's diffusion. Second, we make a first step towards a specific way of studying the local time which could be used in estimation problems with random environment [1], [2], [5], [12], [11], [2], [6].

The method we develop here is an improvement of the one used in [3] concerning the localization of $X_t$ for large $t$. Let us recall the main result of that paper; for this we introduce some new objects. First, the notion of $h$-extrema, with $h > 0$, introduced by Neveu et al. [27] and studied more specifically in our case of drifted Brownian motion by Faggionato [19]. For $h > 0$, we say that $x \in \mathbb{R}$ is an $h$-minimum for a given continuous process $V$ if there exist $u < x < v$ such that $V(y) \geq V(x)$ for all $y \in [u,v]$, $V(u) \geq V(x) + h$ and $V(v) \geq V(x) + h$. Moreover, $x$ is an $h$-maximum for $V$ if $x$ is an $h$-minimum for $-V$, and $x$ is an $h$-extremum for $V$ iff it is an $h$-maximum or an $h$-minimum. As we are interested in the process $X$ until time $t$, we only focus on $h_t$-extrema of $W_\kappa$, where $h_t := \log t - \phi(t)$, with $0 < \phi(t) = o(\log t)$ and $\log\log t = o(\phi(t))$. It is known (see [19]) that almost surely the $h_t$-extrema of $W_\kappa$ form a sequence indexed by $\mathbb{Z}$, unbounded from below and above, and that the $h_t$-minima and $h_t$-maxima alternate. We denote respectively by $(m_j,\, j \in \mathbb{Z})$ and $(M_j,\, j \in \mathbb{Z})$ the increasing sequences of $h_t$-minima and of $h_t$-maxima of $W_\kappa$, such that $m_0 \leq 0 < m_1$ and $m_j < M_j < m_{j+1}$ for every $j \in \mathbb{Z}$. Define
$$N_t := \max\big\{ k \in \mathbb{N},\ \sup_{s \leq t} X(s) \geq m_k \big\},$$
the number of $h_t$-minima visited by $X$ until the instant $t$.
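As a concrete companion to the definition of $h$-extrema above, the following sketch (illustrative only, using a naive quadratic scan) marks the points of a discretized path that satisfy the $h$-minimum condition: on each side of the point, the path reaches level $V(x) + h$ before going strictly below $V(x)$. The $h$-maxima are obtained by applying the same routine to $-V$; the toy path is arbitrary.

```python
import numpy as np

def h_minima(values, h):
    """Indices i such that, on both sides of i, the discretized path reaches values[i] + h
    before going strictly below values[i] (the h-minimum condition above)."""
    v = np.asarray(values, dtype=float)
    out = []
    for i in range(len(v)):
        ok = True
        for step in (-1, 1):                      # scan to the left, then to the right
            j, reached = i + step, False
            while 0 <= j < len(v) and v[j] >= v[i]:
                if v[j] >= v[i] + h:
                    reached = True
                    break
                j += step
            if not reached:
                ok = False
                break
        if ok:
            out.append(i)
    return out

def h_maxima(values, h):
    """h-maxima of V are the h-minima of -V."""
    return h_minima(-np.asarray(values, dtype=float), h)

# toy zig-zag path: every trough is a 2-minimum, none is a 6-minimum
path = np.array([4.0, 0.0, 3.0, -1.0, 2.5, -0.5, 4.0])
print("2-minima:", h_minima(path, 2.0), "  2-maxima:", h_maxima(path, 2.0))
print("6-minima:", h_minima(path, 6.0))
```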

We then have the following result:

Theorem 1.2. [3] Assume $0 < \kappa < 1$. There exists a constant $C_1 > 0$ such that
$$\lim_{t \to +\infty} \mathbb{P}\big( |X(t) - m_{N_t}| \leq C_1 \phi(t) \big) = 1.$$

This result shows that the process $X$, before the instant $t$, visits a sequence of $h_t$-minima and then gets stuck in an ultimate one. Notice that this result was proved for the discrete model by Enriquez–Sabot–Zindy in [17]. This phenomenon is due to two facts: the first one is the appearance of a renewal structure, composed of the times it takes the process to move from one $h_t$-minimum to the next; the second is the fact that, as in Brox's case, the process is trapped for a significant amount of time in the neighborhood of the local minimum $m_{N_t}$. It is the extension of this renewal structure to the sequence of local times at the $h_t$-minima that we study here. We now detail our results.

1.1. Results. Let us introduce a few notations involved in the statement of our results. Denote by $(\mathcal{D}([0,+\infty), \mathbb{R}^2), J_1)$ the space of càdlàg functions equipped with the $J_1$-Skorokhod topology, and denote by $\xrightarrow{\mathcal{L}_S}$ the convergence in law for this topology. On this space, define a $2$-dimensional Lévy process $(Y_1, Y_2)$ with values in $\mathbb{R}_+ \times \mathbb{R}_+$, which is a pure positive jump process with $\kappa$-stable Lévy measure $\nu$ given, for all $x > 0$ and $y > 0$, by
$$\nu\big( [x,+\infty[\, \times\, [y,+\infty[ \big) = \frac{1}{y^{\kappa}}\, E\big[ R_\kappa^{\kappa} \mathbf{1}_{R_\kappa \leq y/x} \big] + \frac{1}{x^{\kappa}}\, P\big( R_\kappa > y/x \big),$$
where $R_\kappa$ is defined in (1.2). For a given function $f$ in $\mathcal{D}([0,+\infty), \mathbb{R})$, define for any $s > 0$, $a > 0$:
$$f^{\sharp}(s) := \sup_{r \leq s} \big( f(r) - f(r^-) \big), \qquad f^{-1}(a) := \inf\{x > 0,\ f(x) > a\};$$
$f^{\sharp}(s)$ is the largest jump of $f$ before instant $s$, and $f^{-1}(a)$ is the first time $f$ is larger than $a$. Also define the couple of random variables $(I_1, I_2)$:
$$I_1 := Y_1^{\sharp}\big( Y_2^{-1}(1)^- \big), \qquad I_2 := \big( 1 - Y_2\big( Y_2^{-1}(1)^- \big) \big)\, \frac{Y_1\big( Y_2^{-1}(1) \big) - Y_1\big( Y_2^{-1}(1)^- \big)}{Y_2\big( Y_2^{-1}(1) \big) - Y_2\big( Y_2^{-1}(1)^- \big)}.$$
We are now ready to state the result; the convergence in law, denoted $\xrightarrow{\mathcal{L}}$, takes place when $t$ goes to infinity.

Theorem 1.3.
$$\frac{L^*(t)}{t} \xrightarrow{\mathcal{L}} I := \max(I_1, I_2).$$
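To fix ideas about these functionals, here is a small illustrative sketch (not from the paper) that works with a pure-jump path stored as arrays of jump times and jump sizes: it evaluates $f^{\sharp}(s)$ and $f^{-1}(a)$, and then $I_1$, $I_2$ and $\max(I_1, I_2)$ for a coupled pair of such paths, following the formulas above. The toy data at the end are arbitrary.

```python
import numpy as np

def path_value(times, jumps, s):
    """Value at time s of the pure-jump path sum_{t_i <= s} jump_i (started at 0)."""
    return jumps[times <= s].sum()

def largest_jump(times, jumps, s):
    """f^sharp(s): largest single jump occurring at a time <= s (0 if none)."""
    sel = jumps[times <= s]
    return sel.max() if sel.size else 0.0

def first_passage(times, jumps, a):
    """f^{-1}(a): first jump time at which the running sum exceeds a (inf if never)."""
    idx = np.flatnonzero(np.cumsum(jumps) > a)
    return times[idx[0]] if idx.size else np.inf

def I1_I2(times, jumps1, jumps2):
    """(I_1, I_2) built from a coupled pair of pure-jump paths, following the formulas above."""
    t_star = first_passage(times, jumps2, 1.0)            # Y_2^{-1}(1)
    i_star = np.flatnonzero(times == t_star)[0]           # index of the overshooting jump
    before = times < t_star
    I1 = jumps1[before].max() if before.any() else 0.0    # Y_1^sharp at the left limit of t_star
    y2_left = path_value(times, jumps2, t_star) - jumps2[i_star]   # Y_2(t_star^-)
    I2 = (1.0 - y2_left) * jumps1[i_star] / jumps2[i_star]
    return I1, I2

# toy coupled jump data: Y_2's jumps are Y_1's jumps times positive factors
times = np.array([0.2, 0.5, 0.9, 1.4])
jumps1 = np.array([0.10, 0.30, 0.25, 0.50])
jumps2 = jumps1 * np.array([2.0, 1.5, 4.0, 1.2])
I1, I2 = I1_I2(times, jumps1, jumps2)
print("Y1^sharp(1.0) =", largest_jump(times, jumps1, 1.0))
print("I1 =", I1, " I2 =", I2, " max(I1, I2) =", max(I1, I2))
```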

There is an intuitive interpretation of this theorem which explains the appearance of the Lévy process $(Y_1, Y_2)$; we focus on this interpretation now. First, for any $s > 0$, $Y_1(s)$ is the limit of the sum, normalised by $t$, of the local times taken specifically at the first $\lfloor s e^{\kappa \phi(t)} \rfloor$ $h_t$-minima. $Y_2$ plays a similar role, but for the exit times of the first $\lfloor s e^{\kappa \phi(t)} \rfloor$ $h_t$-valleys, where an $h_t$-valley is defined as a large neighborhood of an $h_t$-minimum (see Section 2.2 for a rigorous definition, as well as Figure 1). So, by definition, $I_1$ is the largest jump of the process $Y_1$ before $Y_2$ becomes larger than $1$; it can be interpreted as the largest renormalized local time among the local times at the $h_t$-minima visited by $X$ until time $t$ and from which $X$ escapes, that is to say, $I_1$ is the limit of the random variable $\sup_{k \leq N_t - 1} L(t, m_k)/t$. $I_2$ is a product of two terms: the first, $1 - Y_2(Y_2^{-1}(1)^-)$, corresponds to the renormalized amount of time left to the process $X$ before instant $t$ after it has reached the ultimate visited $h_t$-minimum $m_{N_t}$; the second term corresponds to the local time of $X$ at this ultimate $h_t$-minimum. Intuitively, $Y_2$ is constructed from $Y_1$ by multiplying each of its jumps by an independent copy of the variable $R_\kappa$; therefore this second term can be seen as an independent copy of $1/R_\kappa$ taken at the instant of the overshoot of $Y_2$ which makes it larger than $1$. Notice that this variable $R_\kappa$ plays a role similar to that of $R_\kappa$ in Theorem 1.1: indeed, as in the case $\kappa = 0$, the process $X$ is a prisoner of the neighborhood of the last $h_t$-minimum visited before time $t$.

We prove this result by first showing that the portions of the trajectory of $X$, re-centered at the local $h_t$-minima, up to the instant $t$, are, in probability, made of independent parts. This has been partially proved in [3], but we have to improve those results and add, simultaneously, the study of the local time. Second, we prove that what we seek for the supremum of the local time is, mainly, a function of the sum of these independent parts, which converges to a Lévy process. Let us give some details about this.

Let $(Q(s), s \geq 0)$ be a canonical process, taking values in $\mathbb{R}_+$, with infinitesimal generator given for every $x > 0$ by
$$\frac{1}{2} \frac{d^2}{dx^2} + \frac{\kappa}{2} \coth\Big( \frac{\kappa}{2} x \Big) \frac{d}{dx}.$$
This process $Q$ can be thought of as a $\kappa/2$-drifted Brownian motion $W_\kappa$ Doob-conditioned to stay positive, with the terminology of [7], which is called Doob-conditioned to reach $+\infty$ before $0$ in [19] (see Section 2.1 in [3] for more details). We call $\mathrm{BES}(3, \kappa/2)$ the law of $(Q(s), s \geq 0)$. For $a < b$, $(W_\kappa^b(s),\ s \leq \tau^{W_\kappa^b}(a))$ is a $\kappa/2$-drifted Brownian motion starting from $b$ and killed when it first hits $a$. For any process $(U(t), t \in \mathbb{R}_+)$ we denote by
$$\tau^U(a) := \inf\{t > 0,\ U(t) = a\}$$
the first time this process hits $a$, with the convention $\inf \emptyset = +\infty$. We now introduce the following functionals of $W_\kappa$ and $Q$:
$$F^{\pm}(x) := \int_0^{\tau^Q(x)} \exp\big( \pm Q(s) \big)\, ds, \quad x > 0, \qquad G^{\pm}(a,b) := \int_0^{\tau^{W_\kappa^b}(a)} \exp\big( \pm W_\kappa^b(s) \big)\, ds, \quad a < b. \qquad (1.3)$$
Also, for any $\delta > 0$ and $t > 0$, define $n_t := e^{\kappa(1+\delta)\phi(t)}$, which is an upper bound for $N_t$, as stated in Lemma 3.1. Then let $(S_j(t), R_j(t), e_j(t)),\ j \leq n_t$, be a sequence of i.i.d. random vectors with $S_j$, $R_j$ and $e_j$ independent, such that $S_1 \stackrel{\mathcal{L}}{=} F^+(h_t) + G^+(h_t/2, h_t)$, $R_1 \stackrel{\mathcal{L}}{=} F^-(h_t/2) + \tilde{F}^-(h_t/2)$ and $e_1 \stackrel{\mathcal{L}}{=} \mathcal{E}(1/2)$ (an exponential random variable with parameter $1/2$), where $\tilde{F}^-$ is an independent copy of $F^-$ and $F^+$ is independent of $G^+$. Define $\ell_j := e_j S_j$ and $H_j := \ell_j R_j$; note that for notational simplicity we do not make the dependence on $t$ appear in the sequel. Typically, $\ell_j$ plays the role of the local time at the $j$-th positive $h_t$-minimum if the diffusion escapes from it before the instant $t$, and $H_j$ plays the role of the time it takes for the diffusion to escape from the corresponding valley. Define the family of processes $(Y_1, Y_2)^t$, indexed by $t$, by
$$\forall s \geq 0, \qquad (Y_1, Y_2)^t(s) := \frac{1}{t} \sum_{j=1}^{\lfloor s e^{\kappa \phi(t)} \rfloor} (\ell_j, H_j).$$
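The next sketch shows, purely for illustration, how the rescaled pair $(Y_1, Y_2)^t$ is assembled from i.i.d. blocks $(\ell_j, H_j)$ and how the quantities appearing in Theorem 1.3 are read off it. Since sampling the true laws of $S_1$ and $R_1$ requires simulating the functionals $F^\pm$ and $G^+$ of (1.3) (a rough sketch of that is given after Lemma 3.6 below), the sampler `placeholder_block` used here is a hypothetical stand-in with the right tail index only; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def placeholder_block(kappa, h_t):
    """HYPOTHETICAL stand-in for the law of (S_1, R_1, e_1): a heavy-tailed S_1 with tail
    index kappa, a generic positive factor R_1, and e_1 ~ Exp(1/2). Not the true laws."""
    S = np.exp(h_t) * rng.pareto(kappa)
    R = 1.0 + rng.exponential(1.0)
    e = rng.exponential(2.0)                          # Exp(1/2) has mean 2
    return S, R, e

def build_Y(kappa, phi_t, t, n_blocks):
    """Jump times and rescaled jumps of (Y_1, Y_2)^t(s) = (1/t) sum_{j <= s e^{kappa phi(t)}} (l_j, H_j)."""
    h_t = np.log(t) - phi_t
    S, R, e = np.array([placeholder_block(kappa, h_t) for _ in range(n_blocks)]).T
    ell, H = e * S, e * S * R                         # l_j = e_j S_j,  H_j = l_j R_j
    times = np.arange(1, n_blocks + 1) * np.exp(-kappa * phi_t)   # j-th jump at s = j e^{-kappa phi(t)}
    return times, ell / t, H / t

kappa, t = 0.5, 1e8
phi_t = 2.0 * np.log(np.log(t))                       # one admissible choice of phi(t)
times, j1, j2 = build_Y(kappa, phi_t, t, n_blocks=2000)
cum2 = np.cumsum(j2)
k = int(np.argmax(cum2 > 1.0))                        # index of the block whose exit time overshoots t
I1_hat = j1[:k].max() if k > 0 else 0.0               # largest renormalized local time before the overshoot
I2_hat = (1.0 - (cum2[k] - j2[k])) * j1[k] / j2[k]
print("Y_2^{-1}(1) ~", times[k], "  sample of max(I_1, I_2) ~", max(I1_hat, I2_hat))
```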

We then have:

Proposition 1.4. We have $(Y_1, Y_2)^t \xrightarrow{\mathcal{L}_S} (Y_1, Y_2)$.

Once this is proved, we check that what we need for the supremum of the local time can be written as a function of $(Y_1, Y_2)^t$; we obtain an expression of this kind in Proposition 5.1. Then, to obtain the limit, we prove the continuity in the $J_1$-topology of the mappings involved and apply a continuous mapping theorem (see Section 4.3). It turns out that with this method we can obtain other asymptotics, such as for the supremum of the local time before the last valley is reached before the instant $t$, and once $X$ has left it for good, as well as for the position of the favorite site:

Theorem 1.5. We have
$$\frac{L^*\big(H(m_{N_t+1})\big)}{t} \xrightarrow{\mathcal{L}} Y_1^{\sharp}\big( Y_2^{-1}(1) \big), \qquad \frac{L^*\big(H(m_{N_t})\big)}{t} \xrightarrow{\mathcal{L}} Y_1^{\sharp}\big( Y_2^{-1}(1)^- \big).$$
Let us call $F_t$ the position of the first favorite site,
$$F_t := \inf\{s > 0,\ L(t,s) = L^*(t)\};$$
then
$$\frac{F_t}{X(t)} \xrightarrow{\mathcal{L}} B\, U_{[0,1]} + 1 - B,$$
where $B$ is a Bernoulli random variable with parameter $P(I_1 < I_2)$, independent of $U_{[0,1]}$, a uniform random variable on $[0,1]$.

One question we may ask here is: what happens in the discrete case, or with a more general Lévy process? For the discrete case, we should have a very similar behavior, as the renewal structure which appears in both the continuous and the discrete case is very similar (see the works of Enriquez–Sabot–Zindy [17]). The main difference comes essentially from the functional $R_\kappa$, which should be replaced by a sum of exponentials of a simple random walk conditioned to remain positive (see [18], [17]). For a more general Lévy process, we think, for example, of a spectrally negative Lévy process, studied in the case of diffusions in random environment by Singh [32]; more work needs to be done, especially for the environment: first, to obtain a specific decomposition of the Lévy path similar to what is done for the drifted Brownian motion in Faggionato [19], and also to study the more complicated functional $R_\kappa$, which is less well understood than in the Brownian case. This is work in preparation by Véchambre [34].

The rest of the paper is organized as follows. In Section 2 we recall the results of Faggionato on the path decomposition of the trajectories of $W_\kappa$; we also recall from [3] the construction of specific $h_t$-minima which play an important role in the appearance of independence, under $\mathbb{P}$, along the path of $X$ before time $t$. In Section 3 we study the joint process of the first $n_t$ hitting times and local times. We show that certain parts of the trajectory of $X$ are not important for what we seek (this part is technical, makes use of some technical results of the paper [3], and can be omitted on first reading). We then prove the main result of this section, Proposition 3.5: it shows that the joint process (exit time, local time) can be approximated in probability by i.i.d. random variables; again the proofs make use of some technical parts of [3], though the main ideas are discussed in the present paper. In Section 4 we prove Proposition 1.4 and study the continuity of certain functionals of $(Y_1, Y_2)$ which appear in the expression of the law detailed above. This section is independent of the others; we essentially prove a basic functional limit theorem and prepare for the application of the continuous mapping theorem.

Section 5 is where the renewal structure appears in the problem we want to solve: in particular, we prove how the distribution of the supremum of the local time can be approximated by the distribution of a certain function of the couple $(Y_1, Y_2)^t$, the main step being Proposition 5.1. The appendix is a reminder of some estimates on Brownian motion, Bessel processes, and functionals of both of these processes.

1.2. Notations. In this section we introduce typical notations for the study of diffusions in random media, as well as elementary tools for the continuous one-dimensional case. We denote by $P$ the probability measure associated with $W_\kappa$. The probability conditionally on the potential $W_\kappa$ is denoted by $P^{W_\kappa}$ and is called the quenched probability. We also define the annealed probability as
$$\mathbb{P}(\cdot) := \int P^{W_\kappa}(\cdot)\, P(W_\kappa \in d\omega).$$
We denote respectively by $E^{W_\kappa}$, $E$, and $\mathbb{E}$ the expectations with respect to $P^{W_\kappa}$, $P$ and $\mathbb{P}$. For any process $(U(t), t \in \mathbb{R}_+)$ we denote by $L_U$ a bicontinuous version of the local time of $U$, when it exists; notice that for our main process $X$ we simply write $L$. We also denote by $U^a$ the process $U$ starting from $a$, and by $P_a$ the law of $U^a$, with the notation $U = U^0$.

Now let us introduce the following functional of $W_\kappa$,
$$A(r) := \int_0^r e^{W_\kappa(x)}\, dx, \quad r \in \mathbb{R},$$
and recall that whenever $\kappa > 0$, $A_\infty := \lim_{r \to +\infty} A(r) < +\infty$ a.s. As in Brox [1], there exists a Brownian motion $B$ independent of $W_\kappa$ such that $X(t) = A^{-1}[B(T^{-1}(t))]$, where
$$T(r) := \int_0^r \exp\big\{ -2 W_\kappa\big[ A^{-1}(B(s)) \big] \big\}\, ds, \quad r \leq \tau^B(A_\infty). \qquad (1.4)$$
The local time of the process $X$ at $x$ until instant $t$, simply denoted $L(t,x)$, can be written as (see [3])
$$L(t,x) = e^{-W_\kappa(x)} L_B\big( T^{-1}(t), A(x) \big). \qquad (1.5)$$
With these notations, we recall the following expression for $H(r)$, for all $r \geq 0$:
$$H(r) = T\big[ \tau^B(A(r)) \big] = \int_{-\infty}^{r} e^{-W_\kappa(u)} L_B\big[ \tau^B(A(r)), A(u) \big]\, du. \qquad (1.6)$$
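The representation (1.4)–(1.6) can be used directly for simulation. The sketch below is a rough numerical illustration (not the paper's method, and restricted to $x \in [0, r]$ for simplicity): by the first Ray–Knight theorem, which the paper invokes in Section 3, the profile $L_B(\tau^B(\alpha), \cdot)$ on $[0,\alpha]$ is a squared two-dimensional Bessel process read off at distance $\alpha - \cdot$ from the hitting level; combining a discretized $A$ with this profile and (1.5)–(1.6) gives samples of the local times $L(H(r), x)$ for $x \in [0, r]$ and of the corresponding contribution to $H(r)$. Grid sizes and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def potential(kappa, r, dx):
    """W_kappa on the grid [0, r] (we only treat x in [0, r] here)."""
    xs = np.arange(0.0, r + dx, dx)
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dx), len(xs) - 1))))
    return xs, W - 0.5 * kappa * xs

def squared_bessel2(alpha, du):
    """BESQ(2) on [0, alpha]: dZ = 2 du + 2 sqrt(Z) dB, Z(0) = 0 (first Ray-Knight profile)."""
    n = int(alpha / du)
    Z = np.zeros(n + 1)
    for k in range(n):
        Z[k + 1] = max(Z[k] + 2.0 * du + 2.0 * np.sqrt(Z[k] * du) * rng.normal(), 0.0)
    return Z

def local_times_at_H(kappa, r, dx):
    """Sample (L(H(r), x), x in [0, r]) and the [0, r] contribution to H(r), via (1.5)-(1.6)."""
    xs, Wk = potential(kappa, r, dx)
    A = np.concatenate(([0.0], np.cumsum(np.exp(Wk[:-1]) * dx)))   # A(x) = int_0^x e^{W_kappa}
    alpha = A[-1]                                                  # A(r)
    du = alpha / 5000.0
    Z = squared_bessel2(alpha, du)
    # L_B(tau^B(A(r)), A(x)) is the Bessel profile at distance A(r) - A(x) from the top
    LB = np.interp(alpha - A, np.arange(len(Z)) * du, Z)
    L = np.exp(-Wk) * LB                                           # formula (1.5) at time H(r)
    H_part = np.sum(L * dx)                                        # contribution of [0, r] to (1.6)
    return xs, L, H_part

xs, L, H_part = local_times_at_H(kappa=0.5, r=30.0, dx=0.01)
print("sup_{x in [0,r]} L(H(r), x) ~", L.max(), "   H(r) contribution from [0, r] ~", H_part)
```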

2. Path decomposition and valleys

2.1. Path decomposition in the neighborhood of the $h_t$-minima $m_i$. First we recall some results on the $h$-extrema of $W_\kappa$. Let $V^{(i)}(x) := W_\kappa(x) - W_\kappa(m_i)$, $x \in \mathbb{R}$, $i \in \mathbb{N}$, which is the potential $W_\kappa$ translated so that it vanishes at the local minimum $m_i$. We also define
$$\tau_i^-(h) := \sup\{s < m_i,\ V^{(i)}(s) = h\}, \qquad \tau_i(h) := \inf\{s > m_i,\ V^{(i)}(s) = h\}, \quad h > 0.$$
The following result has been proved by Faggionato [19] [for (i) and (ii)], and the last fact comes from the strong Markov property.

Fact 2.1 (path decomposition of $W_\kappa$ around the $h_t$-minima $m_i$).
(i) The truncated trajectories $(V^{(i)}(m_i - s),\ 0 \leq s \leq m_i - \tau_i^-(h_t))$ and $(V^{(i)}(m_i + s),\ 0 \leq s \leq \tau_i(h_t) - m_i)$, $i \geq 1$, are independent.
(ii) Let $(Q(s), s \geq 0)$ be a process with law $\mathrm{BES}(3, \kappa/2)$. All the truncated trajectories $(V^{(i)}(m_i - s),\ 0 \leq s \leq m_i - \tau_i^-(h_t))$ for $i \geq 2$ and $(V^{(j)}(m_j + s),\ 0 \leq s \leq \tau_j(h_t) - m_j)$ for $j \geq 1$ are equal in law to $(Q(s),\ 0 \leq s \leq \tau^Q(h_t))$.
(iii) For $i \geq 1$, the truncated trajectory $(V^{(i)}(s + \tau_i(h_t)),\ s \geq 0)$ is independent of $(W_\kappa(s),\ s \leq \tau_i(h_t))$ and is equal in law to $(W_\kappa^{h_t}(s),\ s \geq 0)$, that is, to a $\kappa/2$-drifted Brownian motion starting from $h_t$.

2.2. Definition of valleys and standard $h_t$-minima $\tilde{m}_j$, $j \in \mathbb{N}$. We are interested in the potential around the $h_t$-minima $m_i$, $i \in \mathbb{N}$, in fact in intervals containing at least $[\tau_i^-((1+\kappa)h_t), M_i]$; however, these valleys could intersect. In order to define valleys which are well separated and i.i.d., we introduce the following notations. These notations are used to define valleys of the potential around some points $\tilde{m}_i$, which are shown in Lemma 2.2 to be equal to the $m_i$, for $1 \leq i \leq n_t$, with large probability. Let $h_t^+ := (1 + \kappa + 2\delta) h_t$, and define $\tilde{L}_0^+ := 0$, $\tilde{m}_0 := 0$, and recursively for $i \geq 1$ (see Figure 1),
$$\tilde{L}_i := \inf\{x > \tilde{L}_{i-1}^+,\ W_\kappa(x) \leq W_\kappa(\tilde{L}_{i-1}^+) - h_t^+\},$$
$$\tilde{\tau}_i(h_t) := \inf\big\{ x \geq \tilde{L}_i,\ W_\kappa(x) - \inf_{[\tilde{L}_i, x]} W_\kappa \geq h_t \big\}, \qquad (2.7)$$
$$\tilde{m}_i := \inf\big\{ x \geq \tilde{L}_i,\ W_\kappa(x) = \inf_{[\tilde{L}_i, \tilde{\tau}_i(h_t)]} W_\kappa \big\},$$
$$\tilde{L}_i^+ := \inf\{x > \tilde{\tau}_i(h_t),\ W_\kappa(x) \leq W_\kappa(\tilde{\tau}_i(h_t)) - h_t - h_t^+\}.$$
We also introduce the following random variables for $i \in \mathbb{N}$:
$$\tilde{M}_i := \inf\big\{ s > \tilde{m}_i,\ W_\kappa(s) = \max_{\tilde{m}_i \leq u \leq \tilde{L}_i^+} W_\kappa(u) \big\},$$
$$\tilde{L}_i^* := \inf\{x > \tilde{\tau}_i(h_t),\ W_\kappa(x) - W_\kappa(\tilde{m}_i) = 3h_t/4\},$$
$$\hat{L}_i := \inf\{x > \tilde{\tau}_i(h_t),\ W_\kappa(x) - W_\kappa(\tilde{m}_i) = h_t/2\},$$
$$\tilde{\tau}_i(h) := \inf\{s > \tilde{m}_i,\ W_\kappa(s) - W_\kappa(\tilde{m}_i) = h\}, \quad h > 0, \qquad (2.8)$$
$$\tilde{\tau}_i^-(h) := \sup\{s < \tilde{m}_i,\ W_\kappa(s) - W_\kappa(\tilde{m}_i) = h\}, \quad h > 0,$$
$$\tilde{L}_i^- := \tilde{\tau}_i^-(h_t^+).$$
We stress that these random variables depend on $t$, which we do not write as a subscript in order to simplify the notations. Notice also that $\tilde{\tau}_i(h_t)$ is the same in definitions (2.7) and (2.8) with $h = h_t$. Moreover, by continuity of $W_\kappa$, $W_\kappa(\tilde{\tau}_i(h_t)) = W_\kappa(\tilde{m}_i) + h_t$. So the $\tilde{m}_i$, $i \in \mathbb{N}$, are $h_t$-minima, since $W_\kappa(\tilde{m}_i) = \inf_{[\tilde{L}_{i-1}^+,\, \tilde{\tau}_i(h_t)]} W_\kappa$, $W_\kappa(\tilde{\tau}_i(h_t)) = W_\kappa(\tilde{m}_i) + h_t$ and $W_\kappa(\tilde{L}_{i-1}^+) \geq W_\kappa(\tilde{m}_i) + h_t$. Moreover,
$$\tilde{L}_{i-1}^+ < \tilde{L}_i \leq \tilde{m}_i < \tilde{\tau}_i(h_t) < \hat{L}_i < \tilde{L}_i^+, \quad i \geq 1, \qquad (2.9)$$
$$\tilde{L}_{i-1}^+ \leq \tilde{L}_i^- < \tilde{m}_i < \tilde{\tau}_i(h_t) < \tilde{M}_i < \tilde{L}_i^+, \quad i \geq 1. \qquad (2.10)$$
Furthermore, by induction the random variables $\tilde{L}_i$, $\tilde{\tau}_i(h_t)$ and $\tilde{L}_i^+$, $i \in \mathbb{N}$, are stopping times for the natural filtration of $(W_\kappa(x),\ x \geq 0)$, and so the $\hat{L}_i$, $i \in \mathbb{N}$, are also stopping times. Also by induction,
$$W_\kappa(\tilde{L}_i) = \inf_{[0, \tilde{L}_i]} W_\kappa, \qquad W_\kappa(\tilde{m}_i) = \inf_{[0, \tilde{\tau}_i(h_t)]} W_\kappa, \qquad W_\kappa(\tilde{L}_i^+) = \inf_{[0, \tilde{L}_i^+]} W_\kappa = W_\kappa(\tilde{m}_i) - h_t^+. \qquad (2.11)$$
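The recursive construction (2.7) translates directly into a forward scan of a discretized potential. The following sketch is only an illustration (the grid, $h_t$, $\delta$ and the spatial range are arbitrary choices, and boundary effects are ignored); it returns, for each valley, the grid indices corresponding to $(\tilde{L}_i, \tilde{\tau}_i(h_t), \tilde{m}_i, \tilde{L}_i^+)$.

```python
import numpy as np

def valleys(W, h_t, h_t_plus):
    """Scan a discretized potential W (index 0 playing the role of x = 0) and return,
    for each i, the grid indices (L_i, tau_i, m_i, L_i_plus) of the construction (2.7)."""
    out, L_plus_prev, n = [], 0, len(W)
    while True:
        # L_i: first point after L_{i-1}^+ where W drops h_t^+ below W(L_{i-1}^+)
        L = next((j for j in range(L_plus_prev + 1, n) if W[j] <= W[L_plus_prev] - h_t_plus), None)
        if L is None:
            return out
        # tau_i(h_t): first point >= L_i rising h_t above the running minimum on [L_i, x]
        run_min, tau = W[L], None
        for j in range(L, n):
            run_min = min(run_min, W[j])
            if W[j] - run_min >= h_t:
                tau = j
                break
        if tau is None:
            return out
        m = L + int(np.argmin(W[L:tau + 1]))       # m_i: first minimizer over [L_i, tau_i(h_t)]
        # L_i^+: first point after tau_i(h_t) where W drops h_t + h_t^+ below W(tau_i(h_t))
        L_plus = next((j for j in range(tau + 1, n) if W[j] <= W[tau] - h_t - h_t_plus), None)
        if L_plus is None:
            return out
        out.append((L, tau, m, L_plus))
        L_plus_prev = L_plus

# toy run on a discretized drifted Brownian potential (all parameters arbitrary)
rng = np.random.default_rng(3)
dx, kappa, delta = 0.01, 0.3, 0.1
xs = np.arange(0.0, 1000.0, dx)
W = np.concatenate(([0.0], np.cumsum(rng.normal(0, np.sqrt(dx), len(xs) - 1)))) - 0.5 * kappa * xs
h_t = 6.0
found = valleys(W, h_t, h_t_plus=(1 + kappa + 2 * delta) * h_t)
print("valleys found:", len(found), " first one (L, tau, m, L+):", found[0] if found else None)
```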

[Figure 1. Schema of the potential between $\tilde{L}_{i-1}^+$ and $\tilde{L}_i^+$, $i \in \mathbb{N}$, showing the points $\tilde{L}_i$, $\tilde{L}_i^-$, $\tilde{m}_i$, $\tilde{\tau}_i(h_t)$, $\tilde{M}_i$, $\tilde{L}_i^*$, $\hat{L}_i$ and the levels $h_t/2$, $3h_t/4$, $h_t$, $h_t^+$.]

We also introduce the analogue of $V^{(i)}$ for $\tilde{m}_j$ as follows: $\tilde{V}^{(i)}(x) := W_\kappa(x) - W_\kappa(\tilde{m}_i)$, $x \in \mathbb{R}$, $i \in \mathbb{N}$. We call $i$-th $h_t$-valley the translated truncated potential $(\tilde{V}^{(i)}(x),\ \tilde{L}_i^- \leq x \leq \tilde{L}_i^+)$, for $i \geq 1$. The following lemma states that, with overwhelmingly high probability, the first $n_t + 1$ positive $h_t$-minima $m_i$, $1 \leq i \leq n_t + 1$, coincide with the random variables $\tilde{m}_i$, $1 \leq i \leq n_t + 1$. We introduce the corresponding event
$$\mathcal{V}_t := \bigcap_{i=1}^{n_t+1} \{ \tilde{m}_i = m_i \}.$$

Lemma 2.2. Assume $0 < \delta < 1$. For $t$ large enough,
$$P\big( \overline{\mathcal{V}_t} \big) \leq C_1 n_t e^{-\kappa h_t / 2} = e^{-[\kappa/2 + o(1)] h_t}, \quad C_1 > 0.$$
Moreover, the sequence $\big( \tilde{V}^{(i)}(x + \tilde{L}_{i-1}^+),\ 0 \leq x \leq \tilde{L}_i^+ - \tilde{L}_{i-1}^+ \big)$, $i \geq 1$, is i.i.d.

Proof: This lemma is proved in [3], Lemma 2.3.

The following remark is used several times in the rest of the paper.

Remark 2.3. On $\mathcal{V}_t$, we have $\tilde{m}_i = m_i$ for every $1 \leq i \leq n_t$, and as a consequence, $\tilde{V}^{(i)}(x) = V^{(i)}(x)$, $x \in \mathbb{R}$, $\tilde{\tau}_i(h) = \tau_i(h)$ and $\tilde{\tau}_i^-(h) = \tau_i^-(h)$ for $h > 0$. Moreover, $\tilde{M}_i = M_i$. Indeed, $\tilde{M}_i$ is an $h_t$-maximum for $W_\kappa$ which belongs to $[\tilde{m}_i, \tilde{m}_{i+1}] = [m_i, m_{i+1}]$ on $\mathcal{V}_t$, and there is exactly one $h_t$-maximum in this interval, since the $h_t$-maxima and minima alternate, which we defined as $M_i$; so $\tilde{M}_i = M_i$. Hence in the following, on $\mathcal{V}_t$, we may write these random variables with or without a tilde.

3. Contributions for hitting and local times

3.1. Negligible parts for hitting times. In the following lemma we recall results of [3] which state that the time spent between valleys is negligible compared to the amount of time spent escaping from the valleys. It also gives an upper bound for the number of visited valleys, and the fact that the process never backtracks into a previously visited valley. For any $i \leq n_t$, define $U_i := H(\tilde{L}_i) - H(\tilde{m}_i)$, $U_0 := 0$,

9 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 9 and for any m 1 the events B 1 m := { } m k 1 H m k U i < ṽ t, k=1 where ṽ t := 2t/ log h t. Finally m B 2 m := {H L j H m j < H + L j H m j, H m j+1 H L j < H + L j H L j }, j=1 with H + x j := inf{k > m j, X k = x j } for any x. Lemma 3.1. For any δ small enough and t large enough i=1 P H m 1 < ṽ t P B 1 n t 1 C 2 v t, 3.12 with v t := n t log h t e φt = o1, C 2 >. Also we have : with C 3 > and C 4 >. P B 2 n t 1 C 3 n t e δκht, 3.13 PN t < n t 1 C 4 e δκφt, 3.14 Proof: The first statement is Lemma 3.7 in [3], the second one is proved in Lemmata 3.2 and 3.3 in [3]. Finally 3.14 is proved in Lemma 5.2 of the same paper Negligible parts for local times. We now provide estimations on the local time, more especially we prove that in the complementary of a small interval in the neighborhood of the first n t h t -minima, the local time at each site is negligible compared to t. We split this section into two, the first one deals with coordinate away from the valleys, the second with coordinates in the valleys excluding the points near the bottom Supremum of the local time outside the valleys. The aim of this subsection is to prove that at time t, the maximum of the local time outside the valleys is negligible compared to t. More precisely, define the following events { B3m 1 := B 2 3m := sup LH m 1, x ft x [, m 1 ] m 1 j=1 } m 1 j=1 {sup x Lj LH m j+1, x L H Lj, x ft B 3 m := B 1 3m B 2 3m, ft = te [κ1+3δ 1]φt. In this section we prove { sup LH m j+1, x ft x [ L j, m j+1 ] Lemma 3.2. Assume δ small enough such that κ1 + 3δ < 1. There exists C 5 > such that for any large t with w t := e κδφt. P B 3 n t 1 C 5 w t, }, },

10 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 1 The proof is based on the lemma below, let us introduce a few notation with respect to the environment. τ 1 h := inf{u, W κ u inf [,u] W κ h}, h >, m 1h := inf{y, W κ y = inf [,τ 1 h] W κ }. All along this work C + is a positive constant that may grow from line to line. Lemma 3.3. For large t, P sup x [,m 1 h t] L[Hτ1 h t, x] > te [κ1+3δ 1]φt C n t eκδφt Proof of Lemma 3.3: Thanks to 1.5 and 1.6 there exists a Brownian motion Bs, s, independent of W κ, such that L[Hτ 1 h t, x] = e Wκx L B [τ B Aτ 1 h t, Ax], x R By the first Ray Knight theorem see e.g. Revuz and Yor [29], chap. XI, for every α >, there exists a Bessel processes Q 2 of dimension 2 starting from, such that L B τ B α, x is equal to Q 2 2 α x for every x [, α]. Consequently, using 3.16 and the independence of B and W κ, there exists a 2-dimensional Bessel process Q 2 such that L[Hτ 1 h t, x] = e Wκx Q 2 2[ Aτ 1 h t Ax ] x τ 1 h t In order to evaluate this quantity, the idea is to say that loosely speaking, Q 2 2 grows almost linearly. More formally, we consider the functions kt := e 2κ 1φt, at := 4φt and bt := 6κ 1 φte κht and define the following events { + } A := A := e Wκx dx kt, A 1 := { u, kt], Q 2 2u 2eu [ at + 4 log log[ekt/u] ]}, A 2 := { inf [,τ 1 h t] W κ bt }. We know that P A y C + y κ for y > since 2/A is a gamma variable of parameter κ, 1 see Dufresne [16], so P A C+ kt κ = C + e 2φt. Moreover, P A 1 C+ exp[ at/2] = C + e 2φt by Lemma 6.5. Also we know that inf [,τ 1 h] W κ, denoted by β in Faggionato [19], eq. 2.2 is exponentially distributed with mean 2κ 1 sinhκh/2e κh/2 [19], eq So, P A 2 = P [ inf[,τ 1 h t] W κ > bt] = exp [ btκ/2 sinhκh t /2e κht/2 ] e 2φt for large t. Now assume we are on A A 1 A 2. Due to 3.17, we have for every x < τ 1 h t, since < Aτ 1 h t Ax A kt, L[Hτ 1 h t, x] e Wκx 2e[Aτ 1 h t Ax] { at + 4 log log [ ekt/[aτ 1 h t Ax] ]} We now introduce f i := inf{x, W κ x i} = τ Wκ i, i N, and let x < τ1 h t. There exists i N such that f i x < f i+1. Moreover, we are on A 2, so i bt. Furthermore, x < f i+1, so W κ x i + 1 and then e Wκx e i+1 = e Wκfi+1. All this leads to e Wκx [Aτ1 h t Ax ] τ = e Wκx 1 h t τ e Wκu 1 h t du e e Wκu Wκfi du x f i

11 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 11 To bound this, we introduce the event A 3 := bt i= { τ 1 h t f i } e Wκu Wκfi du e 1 κht btn t e κδφt. We now consider τ1 u, h t := inf{y u, W κ y inf [u,y] W κ h t } τ1 h t for u. We have τ 1 h t τ E e Wκu Wκfi 1 f i,h t du E e Wκu Wκfi du = β h t, f i τ by the strong Markov property applied at stopping time f i, where β h := E 1 h e Wκu du. We know see [3] eq. 3.38, in the proof of Lemma 3.6 that β h C + e 1 κh for large h. Hence for large t by Markov inequality, P bt A 3 i= f i τ 1 h t P e Wκu Wκfi du > e 1 κht btn t e κδφt f i [bt + 1]β h t e 1 κht btn t e κδφt C + n t e κδφt. Now, on 3 j= A j, 3.18 and 3.19 lead to L[Hτ 1 h t, x] 2e 2+1 κht btn t e κδφt{ at + 4 log log [ ekt/[aτ 1 h t Ax] ]}. 3.2 For any x m 1 h t, inf [,τ 1 h t] W κ bt so Aτ 1 h t Ax = τ 1 h t x e Wκu du τ 1 h t m 1 ht e Wκu du e bt [τ 1 h t m 1h t ] e bt on the event 4 i= A i with A 4 := {τ 1 h t m 1 h t 1}. Since m 1 = m 1 h t and τ 1 h t = τ 1 h t on {M } by definition of h t -extrema, we have P A 4 P < M < m 1 + P [τ 1 h t m 1 < 1] 2κh t e κht + P τ Q h t τ Q h t /2 < 1 2κh t e κht + C + exp[ c h 2 t ] due to [3], eq. 2.8, coming from Faggionato [19], Fact 2.1 ii and c is a positive constant that may decrease from line to line in the sequel of the paper. Now, we have ekt/[aτ 1 h t Ax] ekte bt on 4 i= A i, and then, on this event, 3.2 leads to L[Hτ 1 h t, x] 2e 2+1 κht btn t e κδφt{ at + 4 log log [ ekte bt]}. C + tφte [κ1+δ 1]φt e κδφt h t, since φt = olog t, h t = log t φt and n t = e κ1+δφt. We notice that for large t, C + φth t e κδφt since log log t = oφt. Hence, for large t, L[Hτ 1 h t, x] te [κ1+3δ 1]φt, on 4 i= A i for every x m 1 h t. This gives for large t, P sup x [,m 1 h t] L[Hτ1 h t, x] te [κ1+3δ 1]φt P 4 C + i=a i 1 n t e κδφt, due to the previous bounds for P A i, i 4. This proves the lemma. With the help of the previous lemma, we can now prove Lemma 3.2:

12 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 12 Proof of Lemma 3.2: The method is similar to the proof of Lemma 3.7 of [3]. Recall the definition of L i < L i just above 2.8, also let τ i+1h t := inf { u L i, W κ u inf [ L i,u] W κ h t } τi+1 h t, i 1, m i+1h t := inf { u L i, W κ u = inf [ L i, τ i+1 ht] W κ}, i 1, A 5 := nt 1 i=1 { τ i+1 h t = τ i+1 h t }, X i u := X u + H Li, X i u := X u + H L i, u By the strong Markov property, X i and Xi are diffusions in the environment W κ, starting respectively from L i and L i. We denote respectively by L X i, L X i, H Xi and H X i the local times and hitting times of X i and Xi. We have for every x L i, LH m i+1, x LH L i, x LH m i+1, x LH L i, x = L X HX i i m i+1, x. Consequently, on A 5 A 6 with A 6 := nt 1 { } j=1 HXj m j+1 < H Xj L j, for 1 i nt 1, sup LH m i+1, x L H Li, x = sup LH m i+1, x L H Li, x x R L i x m i+1 sup L X HX i i m i+1, x L i x m i+1 sup L X HX i i τ i+1h t, x, 3.22 L i x m i+1 since m i+1 = m i+1 τ i+1 h t = τ i+1 h t on A 5. Now, notice that the right hand side of 3.22 is the supremum of the local times of Xi L i, up to its first hitting time of τ i+1 h t L i, over all locations in [, m i+1 L i ]. Since X i L i is a diffusion in the environment W κ L i + x W κ L i, x R, which has on [, + the same law as W κ x, x because L i is a stopping time for W κ, the right hand side of 3.22 has the same law, under the annealed probability P, as sup x [,m 1 h t] L[Hτ1 h t, x]. Consequently, P nt 1 i=1 { } sup LH m i+1, x LH L i, x > te [κ1+3δ 1]φt x R n t [P sup x [,m 1 h t] L [ Hτ1 h t, x ] > te [κ1+3δ 1]φt + P ] A 5 + P A6 C + e κδφt 3.23 by Lemma 3.3, since P A 5 C+ n t h t e κht by [3], eq. 3.41, P A 6 C+ n t e κht/16 by [3], Lemma 3.3, and since φt = olog t. Notice that, as before, m 1 = m 1 = m 1 h t on V t {M }. Finally, P sup x [, m 1 ] LH m 1, x > te [κ1+3δ 1]φt C + e κδφt + P V t + P < M < m 1 C + e κδφt also by Lemma 3.3, Lemma 2.2, and since P < M < m 1 2κh t e κht due to [3], eq This and 3.23 prove the lemma Local time in the valley [ L j, L j ] but far from m j. Let D j := [ m j r t, m j + r t ], with r t := φt

13 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 13 B 4 m := m j=1 sup x D j [τ j h+ t, L j ] with D j the complementary of D j. LH L j, x LH m j, x < te 2φt, m 1. Lemma 3.4. Assume < δ < 1/8. There exists C 6 > such that P [ B 4 n t ] 1 C 6 n t e 2φt, Proof: We make the proof replacing rt = φt 2 by rt = C φt with C a constant large enough, this is a little more precise than what we need here and may be used for other purposes. Let j [1, n t ]. First under P Wκ m j, there exists a Brownian motion Bs, s, independent of Ṽ j, such that L [ H L j, x ] = e Ṽ j x L B [τ B A j L j, A j x], x R, where A j x := x m j eṽ j s ds. By scaling, there exists another Brownian motion B that we still denote B for simplicity, independent of Ṽ j, such that L [ H L j, x ] = e Ṽ j x A j L j L B [τ B 1, A j x/a j L j ] x R In order to bound the terms L B [ τ B 1,. ] and A j L j in 3.25, we first introduce A 1 := { sup u R L B [τ B 1, u] e 2φt}, A 2 := { A j L j 2e ht+2φt/κ} We have P A 2 2e 2φt for large t by Lemma 6.4 eq and Moreover on V t, we have by Remark 2.3 and Fact 2.1 ii and iii, A j Lj [ τj h t m j ] e h t + L j τ j h t ev j s ds L = e ht τ Q h t + G + h t /2, h t, where recall that Q has law BES3, κ/2 and is independent of G + h t /2, h t, which is defined in 1.3, and with L j := inf{s > τ j h t, V j s = h t /2}. Consequently, P A 2 P τ Q h t > e 2φt/κ + P G + h t /2, h t > e ht+2φt/κ + P V t C+ e 2φt for large t by eq. 6.86, Lemma 6.3 eq and Lemma 2.2, and since φt = olog t and log log t = oφt. Now, we would like to bound the term e Ṽ jx that appears in To this aim, we define A 3 := { τ j [κc φt/8] m j + C φt } { }, A 4 := inf V j κc φt/16. [τ j [κc φt/8],τ j h t] We can prove using Fact 2.1 see [3] Lemma 2.5 for details that P A 3 C+ e [κ2 C φt]/16 2 e 2φt if we choose C large enough. Moreover with Fact 2.1 again see [3], eq applied with h = C φt, α = κ/8, γ = κ/16 and ω = h t /C φt we get P A 4 e κ 2 C φt/16 e 2φt for large t. We notice that inf [ mj +C φt, τ j h t] Ṽ j κc φt/16 on A 3 A 4 V t, thanks to Remark 2.3. We prove similarly that P A 5 C+ e κ2 C φt/ P { } V t 2e 2φt, where A 5 := inf [ τ j ht, m j C φt] Ṽ j κc φt/16. Also by [3], Lemma 2.7, P A 6 { } e κht/8, with A 6 := inf [ τ j h+ t, τ j ht] Ṽ j h t /2. We also know that Ṽ j x h t /2 for all τ j h t x L j by definition of L j. Consequently on 6 i=3 A i V t, for all x D j [ τ j h+ t, L j ], we have e Ṽ j x e κc φt/16.

14 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 14 Hence on 6 i=1 A i V t, we have under P Wκ m j, by 3.25 and 3.26, sup L [ H L j, x ] 2te 1+2/κφt e κcφt/16 te 2φt, x D j [ τ j h+ t, L j ] if we choose C large enough. So, conditioning by W κ and applying the strong Markov property at time H m j, we get [ [ P sup L H Lj, x ] L [ H m j, x ] ] te 2φt x D j [ τ j h+ t, L j ] E P Wκ m j 6 i=1 A i V t 1 C+ e 2φt uniformly for large t due to the previous estimates and thanks to Lemma 2.2. This proves the lemma Convergence of the main contributions. In this section we make a link between the families {[U j := H L j H m j, LH L j, m j ], j n t }, and the i.i.d. sequence {H j, l j, j n t } described in the introduction. Let F ± 1 a, F ± 2 a and F ± 3 a independent copies of F ± a also independent of G ± a, b. Proposition 3.5. Let t > large, for δ > small enough recall that δ appears in the definitions of n t and h + t there exists d 1 = d 1 δ, κ >, D 1 d 1 > and a sequence {S j t, R j t, e j t, j n t } of i.i.d. random variables with S j, R j and e j independent and S 1 L = F + 1 h t + G + h t, h t /2, L R 1 = F 2 h t /2 + F3 h L t/2 and e 1 = E1/2 such that P nt j=1 { U j H j t H j, LH L j, m j l j t l j } 1 e D 1h t, with l j := S j e j, H j := R j l j, t := e d 1h t. The proof of the above Proposition, which is in the spirit of the proofs of Propositions 3.4 and 4.4 in [3] makes use of the following Lemma: Lemma 3.6. Let t > large, i There exists a sequence e j t, j n t of i.i.d. random variables with exponential law of mean 2 and independent of W κ such that there exist constants d >, D = D d > possibly depending on κ and δ such that n t P { U j H j e d h t H j, LH L j, m j = L j } 1 e D h t, 3.27 j=1 Lj where L j := e j m eṽ jx j dx, H j := L j R j, R j := τ + j ht/2 τ j ht/2 e Ṽ jx. Moreover the random variables {L j, H j, j 1} are i.i.d. ii Also there exists a sequence of independent and identically distributed random variables S j, j n t, independent of R j, j n t and e j, j n t such that { n t } Lj P eṽ jx dx S j e d h t S j 1 e D h t L, and S 1 = F + h t j=1 m j
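Proposition 3.5 and Lemma 3.6 express the building blocks $(S_1, R_1, e_1)$ through the functionals $F^\pm$ and $G^+$ of (1.3). The following rough Monte Carlo sketch (an illustration under simplifying assumptions, not the paper's construction) simulates $Q$ by a crude Euler scheme for its generator, started at a small $\varepsilon > 0$ to avoid the singular drift at $0$, together with the drifted Brownian motion $W_\kappa^{h_t}$, and from these evaluates approximate samples of $S_1 = F^+(h_t) + G^+(h_t/2, h_t)$, $R_1 = F^-(h_t/2) + \tilde{F}^-(h_t/2)$, $e_1$, and hence of $\ell_1 = e_1 S_1$ and $H_1 = \ell_1 R_1$. The step size, $\varepsilon$ and the value of $h_t$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def Q_exp_integrals(kappa, level, dt=1e-3, eps=1e-3):
    """Crude Euler scheme for Q, generator (1/2) d^2/dx^2 + (kappa/2) coth(kappa x/2) d/dx,
    started at eps and run until it first hits `level`; returns approximations of
    F^+(level) = int_0^{tau_Q(level)} e^{+Q} ds  and  F^-(level) = int_0^{tau_Q(level)} e^{-Q} ds."""
    q, Fp, Fm = eps, 0.0, 0.0
    while q < level:
        Fp += np.exp(q) * dt
        Fm += np.exp(-q) * dt
        drift = 0.5 * kappa / np.tanh(0.5 * kappa * q)
        q = max(q + drift * dt + np.sqrt(dt) * rng.normal(), eps)   # numerical guard at eps
    return Fp, Fm

def G_plus(kappa, a, b, dt=1e-3):
    """G^+(a, b) = int_0^{tau(a)} e^{+W} ds, for W the potential's drifted BM (drift -kappa/2)
    started at b and run until it first hits a < b."""
    w, G = b, 0.0
    while w > a:
        G += np.exp(w) * dt
        w += -0.5 * kappa * dt + np.sqrt(dt) * rng.normal()
    return G

def sample_block(kappa, h_t):
    """One approximate sample of (S_1, R_1, e_1, l_1, H_1) as in Proposition 3.5."""
    F_plus, _ = Q_exp_integrals(kappa, h_t)
    _, F_minus_1 = Q_exp_integrals(kappa, h_t / 2.0)
    _, F_minus_2 = Q_exp_integrals(kappa, h_t / 2.0)
    S = F_plus + G_plus(kappa, h_t / 2.0, h_t)      # S_1 = F^+(h_t) + G^+(h_t/2, h_t)
    R = F_minus_1 + F_minus_2                        # R_1 = F^-(h_t/2) + independent copy
    e = rng.exponential(2.0)                         # e_1 ~ Exp(1/2), i.e. mean 2
    return S, R, e, e * S, e * S * R                 # (S_1, R_1, e_1, l_1 = e_1 S_1, H_1 = l_1 R_1)

print(sample_block(kappa=0.5, h_t=6.0))
```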

15 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 15 Proof of Lemma 3.6 For 3.27 : by the strong Markov property, formula 1.5 and 1.6 the sequence {U j, LH L j, m j, j n t } is equal to the sequence {H j L j, L j H j L j, m j, j n t }, where H j L j := Lj e Ṽ j u L B j[τ Bj A j L j, A j u]du, L j H j L j, m j := L B j[τ Bj A j L j, ], A j u := u m j eṽ j x dx, 3.29 with B j, j n t a sequence of independent standard Brownian motions independent of W κ, such that B j starts at A j m j = and is killed when it first hits A j L j. Also L B j is the local time associated with B j. Define A j := {max u< L L j B j[τ Bj A j L j, A j u] = }, we can prove with exactly the same method than in the proof of Lemma 3.2 in [3] see the estimation of P Wκ E j that there exists c > possibly depending on κ and δ such that P nt j=1 A j 1 e c h t. So n t } P {H j L j = h j 1 e c κh t, with h j := L j L j are strictly function of B j and Ṽ j x + L + j 1, x < L + j L + i 1 < L j j=1 e Ṽ j x L B j[τ Bj A j L j, A j u]du. We also notice that h j, L j H j L j, m j L + j 1, where by construction so the random variables h j, L j H j L j, m j, j n t are i.i.d by the second fact of Lemma 2.2. By scale invariance of brownian motion B j we have that L B j[τ Bj A j L j, A j u], L j u L j is equal in law to A j L j L Bj[τ B j 1, A j u/a j L j ], L j u L j, where B j is a standard Brownian motion independent of W κ which we still denote by B j in the sequel. In particular h j, L j H j L j, m j L = h j, L j, h j := A j L j Lj L j e Ṽ j x L B j[τ Bj 1, A j u/a j L j ]du, Lj := A j L j L B jτ Bj 1,. 3.3 Then let e j := L B jτ Bj 1, and recall that by the first Ray Knight theorem, e j has for law an exponential variable with parameter 1/2. Note then that L j = L j, so to finish the proof of 3.27 we only have to approximate h j in probability. This is what is done in the proof of Lemma 4.7 of [3] : for any 1 j n t and > τ + P j A h j j ht/2 τ + L j e Ṽ jx e j > e 1 3ht/6 A j j ht/2 L j e Ṽ jx e j C + e c h t. τ j ht/2 τ j ht/2 Recall that C + resp. c is a positive constant that may grow resp. decrease from line to line along the paper. For 3.28, the proof can be found in [3] at the end of the proof of Proposition 4.4. Proof of Proposition 3.5 By the first part i of Lemma 3.6 the sequence e j, H j, L j, j n t is i.i.d, moreover e j is independent of H j, L j. This together with part ii yields P nt j=1 { U j e j S j R j t e j S j R j, LH L j, m j e j S j t e j S j } 1 e D 1h t,

16 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 16 with S j independent of R j. Then as m j, j 1 is a subsequence of m j, j 1, using Fact 2.1 for any j we have R j ; S j = L F 1 + h t + G + h t, h t /2; F2 h t/2 + F3 h t/2. 4. Convergence toward the Lévy process Y 1, Y 2 and Continuity 4.1. Preliminaries. We begin this section by the convergence of certain repartition functions, a key results that are essentially improvement of the second part of Lemma 5.1 in [3]. Lemma 4.1. lim sup t + x [e 1 2φt,+ [ lim sup t + y [e 1 3φt,+ [ x κ e κφt P e 1 S 1 /t > x C 2 =, 4.31 y κ e κφt P e 1 S 1 R 1 /t > y C 2 E [R κ κ ] =, 4.32 with C 2 > a known constant [see below 4.39]. For any α >, e κφt Pl 1 /t x, H 1 /t y converges uniformly when t goes to infinity on [α, + [ [α, + [ to ν [x, + [ [y, + [ [see Section 1.1]. Proof: Proof of 4.31 We first prove that x κ e κφt P S 1 /t > x converges uniformly in x [ e 1 φt, + [ to a constant c 3, that is, we prove that lim t + lim t + sup x [e 1 φt,+ [ For that, let y = e 1 φt x, we have to prove that y κ e κφt P sup y [1,+ [ x κ e κφt P S 1 /t > x c 3 = S 1 /e ht+φt > y c 3 =, 4.34 but this is equivalent to prove that for any function f : ], + [ [1, + [, lim t + ftκ e κφt P S 1 /e ht+φt > ft = c First by definition see Proposition 3.5, S 1 can be writen as a sum of two independent functional, that we denote, for simplicity, as the sum of two generic functionals S 1 /t = F 1 + h t + G + h t, h t /2 /t = e φt e ht F 1 + h t + e ht G + h t, h t / Since we know an asymptotic for the Laplace transform of F + h t /e ht and G + h t /2, h t, the proof of 4.35 is similar to the proof of a Tauberian theorem. First by 6.82 and 6.83 we have [ ] γ >, ω f,t γ := 1 E e γs 1/fte h t +φt γ 1 c 4γ κ 1 ft κ e κφt, 4.37 t + with c 4 a positive constant. Now as ω f,t is the Laplace transform of the measure U f,t := 1 R+ zp S 1 /fte ht+φt > z dz, from 4.37 we have γ >, ω f,t γ ω f,t 1 t + γκ 1. From this we can follow the same line as in the proof of a classical Tauberian theorem see for example [21] volume 2, section XIII.5, page 442. So as for the proof of Theorem 1 in [21] we

17 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 17 can deduce that z >, U f,t [, z] ω f,t 1 t + z 1 κ Γ2 κ. Then as in the proof of Theorem 4 of the same reference page 446, we deduce from the monotony of the densities of measures U f,t that P S 1 /fte ht+φt > z z >, 1 κ ω f,t 1 t + z κ Γ2 κ. Considering this convergence with z = 1 we get exactly 4.35 for c 3 = 1 κ/c 4 Γ2 κ, so 4.33 follows. Let a t := e φt, for any x > at x κ e κφt P e 1 S 1 /t > x, e 1 < a t = 2 1 x/u κ e κφt P S 1 /t > x/u u κ e u/2 du, because e 1 has law E1/2. Taking x arbitrary in [e 1 2φt, + [, we have x/u [e 1 φt, + [, u ], a t ], so thanks to 4.33 we get + xκ e κφt P e 1 S 1 /t > x, e 1 < a t 2 1 c 3 u κ e u/2 du = lim t + sup x [e 1 2φt,+ [ Now for any x > and t large enough such that y 1, y κ e κφt P S 1 /t > y < 2c 3, we have x κ e κφt P e 1 S 1 /t > x, e 1 < a t x κ e κφt P e 1 S 1 /t > x = x κ e κφt P e 1 S 1 /t > x, e 1 a t + = 2 1 x κ e κφt P S 1 /t > x/u e u/2 du a t + = 2 1 u κ x/u κ e κφt P S 1 /t > x/u 1 x u e u/2 du a t u κ x/u κ e κφt P S 1 /t > x/u 1 x>u e u/2 du a t 2 1 e κφt + a t u κ e u/2 du + c 3 + a t u κ e u/2 du. For the second term in the third equality we used the fact that x/u κ e κφt P S 1 /t > x/u < 2c 3 when x u. Since a t = e φt, the last quantities converges to when t goes to infinity. Combining this with 4.38 we get + xκ e κφt P e 1 S 1 /t > x 2 1 c 3 u κ e u/2 du =, 4.39 lim t + sup x [e 1 2φt,+ [ and this is exactly 4.31 with C 2 := 2 1 c 3 + u κ e u/2 du. Proof of 4.32 Let µ R1 the distribution of R 1, for any y, a > and t > by independence of e 1 S 1 and R 1 y κ e κφt P e 1 S 1 R 1 /t > y, R 1 < a = a y/u κ e κφt P e 1 S 1 /t > y/u u κ µ R1 du.

18 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 18 Taking a = a t = e φt and y arbitrary in [e 1 3φt, + [, we have y/u [e 1 2φt, + [, u ], a t ], so, thanks to 4.39 we get + yκ e κφt P e 1 S 1 R 1 /t > y, R 1 < a t C 2 u κ µ R1 du =. lim t + sup y [e 1 3φt,+ [ Also, as + u κ µ R1 du converges to E [R κ κ ], when t goes to infinity y κ e κφt P e 1 S 1 R 1 /t > y, R 1 < a t C 2 E [R κ κ ] =. lim t + sup y [e 1 3φt,+ [ Finally, as the family R 1 t> is bounded in all L p spaces, we can proceed as before to remove the event R 1 < a t and we thus get y κ e κφt P e 1 S 1 R 1 /t > y C 2 E [R κ κ ] =, 4.4 which is lim t + sup y [e 1 3φt,+ [ We now prove the last assertion. For any x, y, a and t >, we have = = e κφt P e 1 S 1 /t > x, e 1 S 1 R 1 /t > y, R 1 < a a e κφt P e 1 S 1 /t > x, e 1 S 1 /t > y/u µ R1 du, a y/x = 1 y κ a y/x + 1 x κ a a y/x a e κφt P e 1 S 1 /t > y/u µ R1 du + a y/x e κφt y/u κ P e 1 S 1 /t > y/u u κ µ R1 du e κφt x κ P e 1 S 1 /t > x µ R1 du. e κφt P e 1 S 1 /t > x µ R1 du, Taking a = a t = e φt and x, y arbitrary in [α, + [ for some α >, we have y/u [e 1 2φt, + [, u ], a t ] whenever t is large enough, so, thanks to 4.39 we get that e κφt P e 1 S 1 /t > x, e 1 S 1 R 1 /t > y, R 1 < a t converges uniformly in x, y [α, + [ [α, + [ toward x κ PR κ > y/x + y κ ER κ κ 1 R y/x. Then as before we can remove the event {R 1 < a t }, we get the last assertion Proof of Proposition 1.4. We start with the finite dimensional convergence Lemma 4.2. For any k N and s i >, i k, Y 1, Y 2 t s i, i k converge in law as t goes to infinity to Y 1, Y 2 si, i k. Proof: Proof is basic here, however we give some details as we deal with a two dimensional walk which increments depend on t itself. As Y1 t s and Y t 2 s are sums of i.i.d sequence we only have to prove the convergence in law for the couple Y 1, Y 2 t s for any s >. Define Y 1 >b, Y 2 >b obtained from Y 1, Y 2 keeping the increments larger than b, Y 1 >b s := 1 se κφt t j=1 l j 1 lj /t>b, and Y 2 >b s := 1 se κφt t j=1 H j 1 Hj /t>b. Also let Y b i := Y i Y i >b. We first prove that for any s >, lim lim t + P Y 1, Y 2 t s > 1 κ2 κ =, 4.41

19 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 19 where for any a R 2, a := max i 2 a i, and 1 κ2 κ > as κ < 1. We compute the first moment of Y 1 s and Y 2 s. Let η > such that κ 1 3η <, applying Fubini we have [ ] e κφt l1 E t 1 l 1 /t = e κφt e1 S 1 E 1 t e1 S 1 /t = e κφt P e 1 S 1 /t > x dx e 1 2ηφt e κφt P e 1 S 1 /t > x dx + e 1 2ηφt e κφt P e 1 S 1 /t > x dx e κ 1 2ηφt + x κ x κ e κφt P e 1 S 1 /t > x dx. e 1 2ηφt The first term converges to when t goes to infinity because κ 1 2η < and, according to 4.31, for t large enough, we have For such t, the second term is less than so we get that x κ e κφt P e 1 S 1 /t > x 2C 2, x e 1 2ηφt. 2C 2 x κ dx = 2C 2 1 κ 1 κ, t 1, ], 1], e κφt l1 E t 1 l 1 /t e κ 1 2ηφt + C + 1 κ Using the same method and applying this time 4.31, we get that t 1, ], 1], e κφt H1 E t 1 H 1 /t e κ 1 3ηφt + C + 1 κ We thus obtain E Y 1 s E Y 2 s se κ 1 2ηφt + C + s 1 κ, 4.44 se κ 1 3ηφt + C + s 1 κ, 4.45 then a Markov inequality yields The next step is to prove that Y 1 >, Y 2 > t s can be written as the integral of a point process which converge to the desired limit. We have s s Y 1 >, Y 2 > t s = xpt 1 dx, dv, xpt 2 dx, dv x> where the measures P 1 t and P 2 t are defined by P 1 t := + i=1 δ t 1 l i,e κφt i, and the same for P2 t replacing l by H. Recall that P 1 t and P 2 t are dependent and now prove that P 1 t, P 2 t converge to a Poisson point measure. For that just use Lemma 4.1 together with Proposition 3.1 in [28] after discretization, it implies that P 1 t, P 2 t converge weakly to the Poisson random measure denoted P 1, P 2 with intensity measure given by ds ν. Then using that for any >, and T < +, on [, T, +, + dsν is finite, we have that Y 1 >, Y 2 > t s converge weakly to s s Y 1 >, Y> 2 s := xp 1 dx, dv, xp 2 dx, dv. x> x> x>

20 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 2 We are left to prove that Y 1 >, Y> 2 converge to Y 1, Y 2 when. This is a straightforward computation, that we detail for completeness. Let ν 1 [x, + [ := + ν [x, + [ [y, + [ dy, we have E x s xp 1 dx, dv = s xν 1 x = C 2 κ, x Then a Markov inequality, proves that for any s >, the process x xp1 dx, dv converge to zero when goes to zero in probability. The same is true for s x xp2 dx, dv, so we obtain that in probability Y 1 >, Y> 2 s converge to Y 1, Y 2 s when. We now prove tightness of the family measures of the processes Y 1, Y 2 t denoted DY 1, Y 2 t t. Lemma 4.3. The family of laws DY 1, Y 2 t t is tight on D[, +, R 2, J 1. Proof: We only have to prove that the family law of the restriction of the process to the interval [, T ], Y 1, Y 2 t [,T ] t is tight. To prove this we use the following restatement of Theorem 1.8 in [8] using Aldous s tightness criterion see Condition 1, and equation page 176 in [8] also used in [9] page 1. We have to check the two following statements: 1 for any >, there exists a such that for any t large enough Psup s [,T ] Y 1, Y 2 t s a. 2 for any >, and η > there exists δ, < δ < T and t > such that for t > t, PωY 1, Y 2 t, δ, T η, with ωy 1, Y 2 t, δ, T := sup r T ωy 1, Y 2 t, δ, T, r, and ωy 1, Y 2 t, δ, T, r is defined by sup r δ u1 <u<u 2 r+δ T {min Y 1, Y 2 t u 2 Y 1, Y 2 t u, Y 1, Y 2 t u Y 1, Y 2 t u 1 }. Also PvY 1, Y 2 t,, δ, T η, and PvY 1, Y 2 t, T, δ, T η, where vy 1, Y 2 t, u, δ, T := sup u δ u1 u 2 u+δ T { Y 1, Y 2 t u 1 Y 1, Y 2 t u 2 }. s We first check 1 since the process is monotone increasing, P sup Y 1, Y 2 t s a = P Y 1, Y 2 t T a PY 1 T a + PY 2 T a s [,T ] Define Y 1 >b obtained from Y 1 where we remove the increments l j /t smaller than b. That is to say Y 1 >b s := 1 se κφt t j=1 l j 1 lj /t>b. Also let Y b 1 := Y 1 Y 1 >b and N u >b := ue κφt i=1 1 lj /t>b. Let < δ 1 < 1, Markov inequality yields PY1 t T a P Y 1 1 T a + P Y 1 >1 T a [ ] a E Y 1 1 T + 1 a δ E N >1 1 T + P Y >1 On {N T >1 a δ 1 } there is at most a δ 1 terms in the sum Y 1 >1 T so P Y 1 >1 T > a/2, N T >1 a δ 1 P 1 i a δ 1 a δ 1 P 1 T a 2, N >1 T a δ 1 l i /t a 1 δ 1 /2 l i /t 1 l 1 /t a 1 δ 1 /2 l 1 /t a δ 1 2 C 2e κφt a κ1 δ 1 2 κ C 2 e κφt = 2 1+κ a δ 1 κ1 δ 1, 4.48 for all t large enough thanks to 4.31 and δ 1 such that δ 1 κ1 δ 1 <.

21 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 21 Also, as for any positive b, N T >b follows a binomial law with parameter T e κφt, Pl 1 /t > b using 4.31 again, and 4.44 we obtain for t is large enough EN >b T 2C 2T b κ, E [ Y b 1 T Collecting 4.48, 4.49 and 4.47 we get the existence of t 1 > such that ] 2C 2 T b 1 κ lim sup PY 1 T a =. 4.5 a + t t 1 The same arguments holds for Y 2 using 4.32 instead of 4.31 and 4.45 instead of 4.44 so 4.5 also holds for Y 2 instead of Y 1. We conclude the proof of 1 by putting 4.5 and its analogous for Y 2 in We now check 2 First as usual we write {ωy 1, Y 2 t, δ, T η} {ωy b 1, Y b 2 t, δ, T η/2} {ωy >b 1, Y >b 2 t, δ, T η/2}. For Y b. we have PωY b 1, Y b 2 t, δ, T η/2 PωY b 1, δ, T η/2 + PωY b 2, δ, T η/2 moreover by positivity of the increments P ωy b 1, δ, T η/2 P k T/δ {Y b 1 k + 1δ Y b 1 kδ η/2} P Y b 1 k + 1δ Y b 1 kδ η/ k T/δ For any k, Y b 1 k + 1δ Y b 1 kδ is the sum of at most δe κφt + 1 i.i.d. random variables having the same law as l 1 /t. We get that for any integer k P Y b 1 k + 1δ Y b 1 kδ η/2 P Y b 1 2δ η/2 8C 2 δb 1 κ /η, where the first inequality holds for t large enough so that δe κφt 1 and the second from the second expression in 4.49 replacing T by 2δ. Combining with 4.51 we get for large t P ωy b 1, δ, T η/2 8C 2 T 1 + δb 1 κ /η, 4.52 [note that δ will be chosen later and will be less than 1]. T and η are fixed so we choose b small enough so that the right hand side of 4.51 is less than /4. A similar estimate can be proved for PωY b 2, δ, T η/2. For Y. >b, again we have PωY >b 1, Y >b 2 t, δ, T η/2 PωY >b 1, δ, T η/2 + PωY >b 2, δ, T η/2. Let us decrease b in order to get b < η/2 so that {ωy 1 >b, δ, T > η/2} implies that two jumps larger than b occur in an intervall smaller than 2δ. That is {ωy 1 >b, δ, T > η/2} T eκφt j=1 T eκφt i>j,i j/e κφt 2δ {l j l i /t > b}. Applying 4.31 for t large enough P T eκφt j=1 T eκφt i>j,i j/e κφt 2δ {l j l i /t > b} 8C 2 δt b 2κ,

22 RENEWAL STRUCTURE AND LOCAL TIME FOR DIFFUSIONS IN RANDOM ENVIRONMENT 22 which can be small choosing this time δ = δb properly. Again the same argument can be used for ωy 2 >b, δ, T. To finish the proof, we have to deal with v, as again our processes are increasing, PvY 1, Y 2 t,, δ, T η P Y 1, Y 2 t δ η we can then proceed as for 1 decreasing δ if needed, this also applies to PvY 1, Y 2 t, T, δ, T η Continuity of certain functionals of Y 1, Y 2, in J 1. In this section, we study the continuity of functionals of the Lévy processes Y 1, Y 2. For our purpose we are interested in the following mappings, first the two we have already mention in the introduction which are the basics J : D R +, R D R +, R I : D R +, R, J 1 D R +, R, U f f f f 1 Then we also need the compositions of these two : let J I and J I J I : D R +, R 2 R f = f 1, f 2 f J I : D R +, R 2 R 1 f f = f 1, f 2 f 1 f J I respectively J I produces the largest jump of f 1, just after respectively before f 2 reach 1. Finally let F define by F : D R +, R 2 R f = f 1, f 2 inf { s [, f2 1 1, f 1s = f 1 f }, we need this variable for the characterization of the favorite sites. Lemma 4.4. J is continuous in the J 1 topology. Proof: This fact is basic, but as we have not found a proof in the literature, we give some details. To prove the continuity on D R +, R, we only have to prove it for every compact subset of R +, see [35] Theorem So let f D R +, R and T > for which f is continuous, let us prove that J T defined by J T : D [, T ], R D [, T ], R g g is continuous at the restriction f [,T ]. Let > and g D [, T ], R such that d T f [,T ], g 2. d T is the usual metric d of the J 1 -topology restricted to the interval [, T ]. By definition of d T there exists a strictly increasing continuous mapping of [, 1] onto itself, e : [, T ] [, T ] such that sup es s s [,T ] 2, and sup g es f [,T ] s s [,T ] 2. So for every s [, T ] we have g es f [,T ] s = g es g es f [,T ] s f [,T ] s g es f [,T ] s + g es f [,T ] s 2 2 =, where hs = hs hs. This implies d T JT f [,T ], JT g. Lemma 4.5. The mapping J I DR +, R 2 such that and J I are continuous for J 1 -topology for every couple f 1, f 2

A slow transient diusion in a drifted stable potential

A slow transient diusion in a drifted stable potential A slow transient diusion in a drifted stable potential Arvind Singh Université Paris VI Abstract We consider a diusion process X in a random potential V of the form V x = S x δx, where δ is a positive

More information

Lecture 12. F o s, (1.1) F t := s>t

Lecture 12. F o s, (1.1) F t := s>t Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let

More information

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3

Brownian Motion. 1 Definition Brownian Motion Wiener measure... 3 Brownian Motion Contents 1 Definition 2 1.1 Brownian Motion................................. 2 1.2 Wiener measure.................................. 3 2 Construction 4 2.1 Gaussian process.................................

More information

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals

Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Reflected Brownian Motion

Reflected Brownian Motion Chapter 6 Reflected Brownian Motion Often we encounter Diffusions in regions with boundary. If the process can reach the boundary from the interior in finite time with positive probability we need to decide

More information

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES

SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES SUMMARY OF RESULTS ON PATH SPACES AND CONVERGENCE IN DISTRIBUTION FOR STOCHASTIC PROCESSES RUTH J. WILLIAMS October 2, 2017 Department of Mathematics, University of California, San Diego, 9500 Gilman Drive,

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION

LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION LECTURE 2: LOCAL TIME FOR BROWNIAN MOTION We will define local time for one-dimensional Brownian motion, and deduce some of its properties. We will then use the generalized Ray-Knight theorem proved in

More information

Lecture 17 Brownian motion as a Markov process

Lecture 17 Brownian motion as a Markov process Lecture 17: Brownian motion as a Markov process 1 of 14 Course: Theory of Probability II Term: Spring 2015 Instructor: Gordan Zitkovic Lecture 17 Brownian motion as a Markov process Brownian motion is

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1

A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Chapter 3 A Change of Variable Formula with Local Time-Space for Bounded Variation Lévy Processes with Application to Solving the American Put Option Problem 1 Abstract We establish a change of variable

More information

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )

6. Brownian Motion. Q(A) = P [ ω : x(, ω) A ) 6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined

More information

The Skorokhod problem in a time-dependent interval

The Skorokhod problem in a time-dependent interval The Skorokhod problem in a time-dependent interval Krzysztof Burdzy, Weining Kang and Kavita Ramanan University of Washington and Carnegie Mellon University Abstract: We consider the Skorokhod problem

More information

The strictly 1/2-stable example

The strictly 1/2-stable example The strictly 1/2-stable example 1 Direct approach: building a Lévy pure jump process on R Bert Fristedt provided key mathematical facts for this example. A pure jump Lévy process X is a Lévy process such

More information

Lecture 21 Representations of Martingales

Lecture 21 Representations of Martingales Lecture 21: Representations of Martingales 1 of 11 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 21 Representations of Martingales Right-continuous inverses Let

More information

Path Decomposition of Markov Processes. Götz Kersting. University of Frankfurt/Main

Path Decomposition of Markov Processes. Götz Kersting. University of Frankfurt/Main Path Decomposition of Markov Processes Götz Kersting University of Frankfurt/Main joint work with Kaya Memisoglu, Jim Pitman 1 A Brownian path with positive drift 50 40 30 20 10 0 0 200 400 600 800 1000-10

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 22 12/09/2013. Skorokhod Mapping Theorem. Reflected Brownian Motion

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.070J Fall 2013 Lecture 22 12/09/2013. Skorokhod Mapping Theorem. Reflected Brownian Motion MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.265/15.7J Fall 213 Lecture 22 12/9/213 Skorokhod Mapping Theorem. Reflected Brownian Motion Content. 1. G/G/1 queueing system 2. One dimensional reflection mapping

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

Branching Processes II: Convergence of critical branching to Feller s CSB

Branching Processes II: Convergence of critical branching to Feller s CSB Chapter 4 Branching Processes II: Convergence of critical branching to Feller s CSB Figure 4.1: Feller 4.1 Birth and Death Processes 4.1.1 Linear birth and death processes Branching processes can be studied

More information

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij

Weak convergence and Brownian Motion. (telegram style notes) P.J.C. Spreij Weak convergence and Brownian Motion (telegram style notes) P.J.C. Spreij this version: December 8, 2006 1 The space C[0, ) In this section we summarize some facts concerning the space C[0, ) of real

More information

Observer design for a general class of triangular systems

Observer design for a general class of triangular systems 1st International Symposium on Mathematical Theory of Networks and Systems July 7-11, 014. Observer design for a general class of triangular systems Dimitris Boskos 1 John Tsinias Abstract The paper deals

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

GENERALIZED RAY KNIGHT THEORY AND LIMIT THEOREMS FOR SELF-INTERACTING RANDOM WALKS ON Z 1. Hungarian Academy of Sciences

GENERALIZED RAY KNIGHT THEORY AND LIMIT THEOREMS FOR SELF-INTERACTING RANDOM WALKS ON Z 1. Hungarian Academy of Sciences The Annals of Probability 1996, Vol. 24, No. 3, 1324 1367 GENERALIZED RAY KNIGHT THEORY AND LIMIT THEOREMS FOR SELF-INTERACTING RANDOM WALKS ON Z 1 By Bálint Tóth Hungarian Academy of Sciences We consider

More information

THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON

THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON GEORGIAN MATHEMATICAL JOURNAL: Vol. 3, No. 2, 1996, 153-176 THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON M. SHASHIASHVILI Abstract. The Skorokhod oblique reflection problem is studied

More information

Stochastic Process (ENPC) Monday, 22nd of January 2018 (2h30)

Stochastic Process (ENPC) Monday, 22nd of January 2018 (2h30) Stochastic Process (NPC) Monday, 22nd of January 208 (2h30) Vocabulary (english/français) : distribution distribution, loi ; positive strictement positif ; 0,) 0,. We write N Z,+ and N N {0}. We use the

More information

Stable Lévy motion with values in the Skorokhod space: construction and approximation

Stable Lévy motion with values in the Skorokhod space: construction and approximation Stable Lévy motion with values in the Skorokhod space: construction and approximation arxiv:1809.02103v1 [math.pr] 6 Sep 2018 Raluca M. Balan Becem Saidani September 5, 2018 Abstract In this article, we

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

arxiv:math/ v4 [math.pr] 12 Apr 2007

arxiv:math/ v4 [math.pr] 12 Apr 2007 arxiv:math/612224v4 [math.pr] 12 Apr 27 LARGE CLOSED QUEUEING NETWORKS IN SEMI-MARKOV ENVIRONMENT AND ITS APPLICATION VYACHESLAV M. ABRAMOV Abstract. The paper studies closed queueing networks containing

More information

4.5 The critical BGW tree

4.5 The critical BGW tree 4.5. THE CRITICAL BGW TREE 61 4.5 The critical BGW tree 4.5.1 The rooted BGW tree as a metric space We begin by recalling that a BGW tree T T with root is a graph in which the vertices are a subset of

More information

Metric Spaces and Topology

Metric Spaces and Topology Chapter 2 Metric Spaces and Topology From an engineering perspective, the most important way to construct a topology on a set is to define the topology in terms of a metric on the set. This approach underlies

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

Poisson random measure: motivation

Poisson random measure: motivation : motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps

More information

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction

DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES. 1. Introduction DISTRIBUTION OF THE SUPREMUM LOCATION OF STATIONARY PROCESSES GENNADY SAMORODNITSKY AND YI SHEN Abstract. The location of the unique supremum of a stationary process on an interval does not need to be

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

Universal examples. Chapter The Bernoulli process

Universal examples. Chapter The Bernoulli process Chapter 1 Universal examples 1.1 The Bernoulli process First description: Bernoulli random variables Y i for i = 1, 2, 3,... independent with P [Y i = 1] = p and P [Y i = ] = 1 p. Second description: Binomial

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem

On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem On the martingales obtained by an extension due to Saisho, Tanemura and Yor of Pitman s theorem Koichiro TAKAOKA Dept of Applied Physics, Tokyo Institute of Technology Abstract M Yor constructed a family

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

Joint Probability Distributions and Random Samples (Devore Chapter Five)

Joint Probability Distributions and Random Samples (Devore Chapter Five) Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete

More information

Stochastic Processes

Stochastic Processes Stochastic Processes A very simple introduction Péter Medvegyev 2009, January Medvegyev (CEU) Stochastic Processes 2009, January 1 / 54 Summary from measure theory De nition (X, A) is a measurable space

More information

Modern Discrete Probability Branching processes

Modern Discrete Probability Branching processes Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson

More information

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a

More information

Lecture 19 L 2 -Stochastic integration

Lecture 19 L 2 -Stochastic integration Lecture 19: L 2 -Stochastic integration 1 of 12 Course: Theory of Probability II Term: Spring 215 Instructor: Gordan Zitkovic Lecture 19 L 2 -Stochastic integration The stochastic integral for processes

More information

Itô s excursion theory and random trees

Itô s excursion theory and random trees Itô s excursion theory and random trees Jean-François Le Gall January 3, 200 Abstract We explain how Itô s excursion theory can be used to understand the asymptotic behavior of large random trees. We provide

More information

INTRODUCTION TO FURSTENBERG S 2 3 CONJECTURE

INTRODUCTION TO FURSTENBERG S 2 3 CONJECTURE INTRODUCTION TO FURSTENBERG S 2 3 CONJECTURE BEN CALL Abstract. In this paper, we introduce the rudiments of ergodic theory and entropy necessary to study Rudolph s partial solution to the 2 3 problem

More information

Selected Exercises on Expectations and Some Probability Inequalities

Selected Exercises on Expectations and Some Probability Inequalities Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ

More information

The Skorokhod reflection problem for functions with discontinuities (contractive case)

The Skorokhod reflection problem for functions with discontinuities (contractive case) The Skorokhod reflection problem for functions with discontinuities (contractive case) TAKIS KONSTANTOPOULOS Univ. of Texas at Austin Revised March 1999 Abstract Basic properties of the Skorokhod reflection

More information

Part III. 10 Topological Space Basics. Topological Spaces

Part III. 10 Topological Space Basics. Topological Spaces Part III 10 Topological Space Basics Topological Spaces Using the metric space results above as motivation we will axiomatize the notion of being an open set to more general settings. Definition 10.1.

More information

Course 212: Academic Year Section 1: Metric Spaces

Course 212: Academic Year Section 1: Metric Spaces Course 212: Academic Year 1991-2 Section 1: Metric Spaces D. R. Wilkins Contents 1 Metric Spaces 3 1.1 Distance Functions and Metric Spaces............. 3 1.2 Convergence and Continuity in Metric Spaces.........

More information

Krzysztof Burdzy Robert Ho lyst Peter March

Krzysztof Burdzy Robert Ho lyst Peter March A FLEMING-VIOT PARTICLE REPRESENTATION OF THE DIRICHLET LAPLACIAN Krzysztof Burdzy Robert Ho lyst Peter March Abstract: We consider a model with a large number N of particles which move according to independent

More information

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE

UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Surveys in Mathematics and its Applications ISSN 1842-6298 (electronic), 1843-7265 (print) Volume 5 (2010), 275 284 UNCERTAINTY FUNCTIONAL DIFFERENTIAL EQUATIONS FOR FINANCE Iuliana Carmen Bărbăcioru Abstract.

More information

Stochastic Differential Equations.

Stochastic Differential Equations. Chapter 3 Stochastic Differential Equations. 3.1 Existence and Uniqueness. One of the ways of constructing a Diffusion process is to solve the stochastic differential equation dx(t) = σ(t, x(t)) dβ(t)

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

Introduction to Random Diffusions

Introduction to Random Diffusions Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales

More information

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen

Some SDEs with distributional drift Part I : General calculus. Flandoli, Franco; Russo, Francesco; Wolf, Jochen Title Author(s) Some SDEs with distributional drift Part I : General calculus Flandoli, Franco; Russo, Francesco; Wolf, Jochen Citation Osaka Journal of Mathematics. 4() P.493-P.54 Issue Date 3-6 Text

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Exercises in stochastic analysis

Exercises in stochastic analysis Exercises in stochastic analysis Franco Flandoli, Mario Maurelli, Dario Trevisan The exercises with a P are those which have been done totally or partially) in the previous lectures; the exercises with

More information

Scaling limits for random trees and graphs

Scaling limits for random trees and graphs YEP VII Probability, random trees and algorithms 8th-12th March 2010 Scaling limits for random trees and graphs Christina Goldschmidt INTRODUCTION A taste of what s to come We start with perhaps the simplest

More information

Two viewpoints on measure valued processes

Two viewpoints on measure valued processes Two viewpoints on measure valued processes Olivier Hénard Université Paris-Est, Cermics Contents 1 The classical framework : from no particle to one particle 2 The lookdown framework : many particles.

More information

Maximum Process Problems in Optimal Control Theory

Maximum Process Problems in Optimal Control Theory J. Appl. Math. Stochastic Anal. Vol. 25, No., 25, (77-88) Research Report No. 423, 2, Dept. Theoret. Statist. Aarhus (2 pp) Maximum Process Problems in Optimal Control Theory GORAN PESKIR 3 Given a standard

More information

Logarithmic scaling of planar random walk s local times

Logarithmic scaling of planar random walk s local times Logarithmic scaling of planar random walk s local times Péter Nándori * and Zeyu Shen ** * Department of Mathematics, University of Maryland ** Courant Institute, New York University October 9, 2015 Abstract

More information

Asymptotics for posterior hazards

Asymptotics for posterior hazards Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and

More information

STOCHASTIC PROCESSES Basic notions

STOCHASTIC PROCESSES Basic notions J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving

More information

More Empirical Process Theory

More Empirical Process Theory More Empirical Process heory 4.384 ime Series Analysis, Fall 2008 Recitation by Paul Schrimpf Supplementary to lectures given by Anna Mikusheva October 24, 2008 Recitation 8 More Empirical Process heory

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

Introduction to self-similar growth-fragmentations

Introduction to self-similar growth-fragmentations Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation

More information

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS

OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS APPLICATIONES MATHEMATICAE 29,4 (22), pp. 387 398 Mariusz Michta (Zielona Góra) OPTIMAL SOLUTIONS TO STOCHASTIC DIFFERENTIAL INCLUSIONS Abstract. A martingale problem approach is used first to analyze

More information

An essay on the general theory of stochastic processes

An essay on the general theory of stochastic processes Probability Surveys Vol. 3 (26) 345 412 ISSN: 1549-5787 DOI: 1.1214/1549578614 An essay on the general theory of stochastic processes Ashkan Nikeghbali ETHZ Departement Mathematik, Rämistrasse 11, HG G16

More information

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains.

Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. Institute for Applied Mathematics WS17/18 Massimiliano Gubinelli Markov processes Course note 2. Martingale problems, recurrence properties of discrete time chains. [version 1, 2017.11.1] We introduce

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

Branching Brownian motion seen from the tip

Branching Brownian motion seen from the tip Branching Brownian motion seen from the tip J. Berestycki 1 1 Laboratoire de Probabilité et Modèles Aléatoires, UPMC, Paris 09/02/2011 Joint work with Elie Aidekon, Eric Brunet and Zhan Shi J Berestycki

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t

Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t 2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Applied Stochastic Processes

Applied Stochastic Processes Applied Stochastic Processes Jochen Geiger last update: July 18, 2007) Contents 1 Discrete Markov chains........................................ 1 1.1 Basic properties and examples................................

More information

Exponential functionals of Lévy processes

Exponential functionals of Lévy processes Exponential functionals of Lévy processes Víctor Rivero Centro de Investigación en Matemáticas, México. 1/ 28 Outline of the talk Introduction Exponential functionals of spectrally positive Lévy processes

More information

Point Process Control

Point Process Control Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued

More information

Some Aspects of Universal Portfolio

Some Aspects of Universal Portfolio 1 Some Aspects of Universal Portfolio Tomoyuki Ichiba (UC Santa Barbara) joint work with Marcel Brod (ETH Zurich) Conference on Stochastic Asymptotics & Applications Sixth Western Conference on Mathematical

More information

Definition: Lévy Process. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 2: Lévy Processes. Theorem

Definition: Lévy Process. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 2: Lévy Processes. Theorem Definition: Lévy Process Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 2: Lévy Processes David Applebaum Probability and Statistics Department, University of Sheffield, UK July

More information

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012

Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 2012 Preliminary Exam: Probability 9:00am 2:00pm, Friday, January 6, 202 The exam lasts from 9:00am until 2:00pm, with a walking break every hour. Your goal on this exam should be to demonstrate mastery of

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 7

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 7 MS&E 321 Spring 12-13 Stochastic Systems June 1, 213 Prof. Peter W. Glynn Page 1 of 7 Section 9: Renewal Theory Contents 9.1 Renewal Equations..................................... 1 9.2 Solving the Renewal

More information

Solving the Poisson Disorder Problem

Solving the Poisson Disorder Problem Advances in Finance and Stochastics: Essays in Honour of Dieter Sondermann, Springer-Verlag, 22, (295-32) Research Report No. 49, 2, Dept. Theoret. Statist. Aarhus Solving the Poisson Disorder Problem

More information

Doléans measures. Appendix C. C.1 Introduction

Doléans measures. Appendix C. C.1 Introduction Appendix C Doléans measures C.1 Introduction Once again all random processes will live on a fixed probability space (Ω, F, P equipped with a filtration {F t : 0 t 1}. We should probably assume the filtration

More information

The Structure of a Brownian Bubble

The Structure of a Brownian Bubble The Structure of a Brownian Bubble Robert C. Dalang 1 and John B. Walsh 2 Abstract In this paper, we examine local geometric properties of level sets of the Brownian sheet, and in particular, we identify

More information

On the convergence of sequences of random variables: A primer

On the convergence of sequences of random variables: A primer BCAM May 2012 1 On the convergence of sequences of random variables: A primer Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM May 2012 2 A sequence a :

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

Applications of Ito s Formula

Applications of Ito s Formula CHAPTER 4 Applications of Ito s Formula In this chapter, we discuss several basic theorems in stochastic analysis. Their proofs are good examples of applications of Itô s formula. 1. Lévy s martingale

More information

The Contour Process of Crump-Mode-Jagers Branching Processes

The Contour Process of Crump-Mode-Jagers Branching Processes The Contour Process of Crump-Mode-Jagers Branching Processes Emmanuel Schertzer (LPMA Paris 6), with Florian Simatos (ISAE Toulouse) June 24, 2015 Crump-Mode-Jagers trees Crump Mode Jagers (CMJ) branching

More information

HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS. Josef Teichmann

HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS. Josef Teichmann HOPF S DECOMPOSITION AND RECURRENT SEMIGROUPS Josef Teichmann Abstract. Some results of ergodic theory are generalized in the setting of Banach lattices, namely Hopf s maximal ergodic inequality and the

More information

Optimal Stopping and Maximal Inequalities for Poisson Processes

Optimal Stopping and Maximal Inequalities for Poisson Processes Optimal Stopping and Maximal Inequalities for Poisson Processes D.O. Kramkov 1 E. Mordecki 2 September 10, 2002 1 Steklov Mathematical Institute, Moscow, Russia 2 Universidad de la República, Montevideo,

More information