
The Annals of Probability 1996, Vol. 24, No. 3

GENERALIZED RAY-KNIGHT THEORY AND LIMIT THEOREMS FOR SELF-INTERACTING RANDOM WALKS ON $\mathbb{Z}^1$

By Bálint Tóth

Hungarian Academy of Sciences

We consider non-Markovian, self-interacting random walks (SIRW) on the one-dimensional integer lattice. The walk starts from the origin and at each step jumps to a neighboring site. The probability of jumping along a bond is proportional to $w(\text{number of previous jumps along that lattice bond})$, where $w\colon \mathbb{N}\to\mathbb{R}_+$ is a monotone weight function. Exponential and subexponential weight functions were considered in earlier papers. In the present paper we consider weight functions $w$ with polynomial asymptotics. These weight functions define variants of the reinforced random walk. We prove functional limit theorems for the local time processes of these random walks and local limit theorems for the position of the random walker at late times. A generalization of the Ray-Knight theory of local time arises.

1. Introduction. In the present paper we consider self-interacting random walks (SIRW) $X_i$ on the one-dimensional integer lattice, defined as follows: the walk starts from the origin of the lattice and at time $i+1$ it jumps to one of the two neighboring sites of $X_i$, so that the probability of jumping along a bond of the lattice is proportional to
$w(\text{number of previous jumps along that bond})$,
where $w\colon \mathbb{N}\to\mathbb{R}_+$ is a weight function to be specified later. More formally, for a nearest neighbor walk $x_{[0,i]} = (x_0, x_1, \dots, x_i)$ we define

(1.1)  $r(x_{[0,i]}) = \#\{\, j < i\colon (x_j, x_{j+1}) = (x_i, x_i+1) \text{ or } (x_i+1, x_i) \,\},$

(1.2)  $l(x_{[0,i]}) = \#\{\, j < i\colon (x_j, x_{j+1}) = (x_i, x_i-1) \text{ or } (x_i-1, x_i) \,\},$

that is, the number $r(x_{[0,i]})$ [respectively, $l(x_{[0,i]})$] shows how many times the walk $x_{[0,i]}$ has visited the edge adjacent from the right (respectively, from the left) to the terminal site $x_i$. The random walk $X_i$ is governed by the law

(1.3)  $P\bigl( X_{i+1} = X_i + 1 \mid X_{[0,i]} = x_{[0,i]} \bigr) = \dfrac{w(r(x_{[0,i]}))}{w(r(x_{[0,i]})) + w(l(x_{[0,i]}))} = 1 - P\bigl( X_{i+1} = X_i - 1 \mid X_{[0,i]} = x_{[0,i]} \bigr).$

Received April 1995; revised February 1996.
1 Work partially supported by Hungarian National Foundation for Scientific Research grants 7275 and T.
AMS 1991 subject classifications. 60F05, 60J15, 60J55, 60E99, 82C41.
Key words and phrases. Self-interacting random walks, local time, limit theorems, conjugate diffusions.

Our aim is to prove limit theorems for these random walks and their local time processes, with proper scaling.

The true self-avoiding random walks defined by the weight functions $w(n) = \exp(-gn)$, $g > 0$ [respectively, generalizations with weight functions $w(n) = \exp(-g n^{\kappa})$, $g > 0$, $\kappa < 1$], have been considered in Tóth (1995) [respectively, Tóth (1994)]. In the present paper we shall consider weight functions satisfying the following two conditions:

1. Monotonicity. Nonincreasing $w$ defines a self-repelling random walk, while nondecreasing $w$ defines a self-attracting one.

2. Regular polynomial asymptotic behavior. The most convenient way to formulate this condition is

(1.4)  $w(n)^{-1} = 2^{-\alpha}(\alpha+1)\, n^{\alpha} + \alpha B\, n^{\alpha-1} + \mathcal{O}\bigl( n^{\alpha-2} \bigr),$

where $\alpha \in \mathbb{R}$ and $B \in \mathbb{R}$ are two constant parameters. Since in the definition (1.3) of the jump probabilities only ratios of $w$'s play any role, the constant factor in front of the leading term is chosen for convenience only. Note that the next-to-leading term is assumed asymptotically smooth.

The monotonicity condition (1) implies that $\alpha < 0$ defines self-attracting walks and $\alpha > 0$ defines self-repelling walks, while for $\alpha = 0$ lower order terms determine the character of the self-interaction.

Due to Davis (1990) the recurrence properties of these walks, with $w(n) \sim n^{-\alpha}$, $\alpha \in \mathbb{R}$, are well understood: for $\alpha \ge -1$, with probability 1, the walk visits every site of the lattice infinitely often, whereas for $\alpha < -1$ the walk eventually sticks to one edge of the lattice, jumping back and forth on it indefinitely. The case $\alpha = -1$ is somewhat special: $w(n) = w(0) + n$, $w(0) > 0$, defines the so-called (linearly) reinforced random walk. From the results of Pemantle (1988) it follows that in this case the random walk $X_i$ has an asymptotic (random) distribution on $\mathbb{Z}$, without any scaling. These remarks suggest that only the cases $\alpha > -1$ will show interesting, nontrivial scaling behavior.

There are three essentially different regimes according to the value of the parameter $\alpha$:

Case A. $\alpha = 0$. We shall call this the asymptotically free case.

Case B. $\alpha > 0$. We shall refer to this as the polynomially self-repelling case.

Case C. $-1 < \alpha < 0$. This will be called the weakly reinforced case.

Cases A and B show similar scaling behavior and will be treated in parallel in the main body of the paper. Case C differs essentially from the first two; the results referring to this case will be presented in a separate note [Tóth (1996)].
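As a purely illustrative aid (not part of the original paper), the jump law (1.3) is easy to simulate directly by tracking the number of previous jumps along each bond. In the Python sketch below the helper name `simulate_sirw`, the example weight function and the step count are arbitrary choices of this note.

```python
import random
from collections import defaultdict

def simulate_sirw(w, n_steps, seed=0):
    """Simulate the self-interacting walk of (1.1)-(1.3).

    w : weight function on {0, 1, 2, ...}, strictly positive.
    The bond {l, l+1} is indexed by its left endpoint l; edge_count[l]
    is the number of previous jumps along that bond, in either direction.
    """
    rng = random.Random(seed)
    edge_count = defaultdict(int)
    x, path = 0, [0]
    for _ in range(n_steps):
        w_right = w(edge_count[x])       # weight of the bond {x, x+1}
        w_left = w(edge_count[x - 1])    # weight of the bond {x-1, x}
        if rng.random() < w_right / (w_right + w_left):
            edge_count[x] += 1
            x += 1
        else:
            edge_count[x - 1] += 1
            x -= 1
        path.append(x)
    return path

# Example: a polynomially self-repelling choice, w(n) = (n + 1)^(-alpha) with alpha > 0.
path = simulate_sirw(lambda n: (n + 1.0) ** -0.5, 10_000)
```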

A very special case of A, the random walk with partial reflection at extrema, or "once reinforced random walk," has recently been considered in Davis (1994, 1995) and Nester (1994). These walks are defined by the weight function

(1.5)  $w(n) = \begin{cases} 2/\delta & \text{for } n = 0, \\ 1 & \text{for } n \ge 1, \end{cases}$

with the parameter $\delta > 2$. Davis and Nester prove pathwise convergence for these very special cases. See also the remark at the end of Section 3.

The outline of the paper is as follows: In Section 2 we define the random processes and variables which will appear as weak limits in later sections. In Section 3 we formulate the limit theorems referred to in the title. Section 4 contains the convenient representation of the local time processes of the self-interacting random walks considered. In Section 5 we prove the limit theorems referring to the local times of the SIRW; this section is the core of the paper. In Section 6 the limit theorem for the position of the SIRW is proved. In the Appendix we analyse the generalized Ray-Knight processes defined in Section 2. The results presented in the Appendix are technically needed in the proof of Theorems 2A and 2B, but they are self-contained and might be interesting from a purely diffusion-theoretic point of view.

2. Generalized Ray-Knight processes I. The random processes and variables defined in this section will appear as weak limits in the limit theorems stated in the next section.

The squared Bessel process ($\mathrm{BESQ}^{\delta}$) of generalized dimension $\delta \in \mathbb{R}_+$ is well understood. For an exhaustive description of these processes and their properties we refer the reader to Revuz and Yor (1991). For our present purposes it is more convenient to consider $\tfrac12$ times the conventional $\mathrm{BESQ}^{\delta}$: the stochastic process $\mathbb{R}_+ \ni t \mapsto Z^{\delta}(t) \in \mathbb{R}_+$ which solves the SDE

(2.1)  $dZ^{\delta}(t) = \dfrac{\delta}{2}\,dt + \sqrt{2 Z^{\delta}(t)}\, dW(t), \qquad Z^{\delta}(0) \in \mathbb{R}_+,$

where $W(t)$ is a standard Brownian motion. These processes are well defined for any $\delta \ge 0$.
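For orientation only, the SDE (2.1) can be explored numerically. The following is a minimal Euler-Maruyama sketch; the step size, the horizon and the clipping of the state at 0 are numerical conveniences assumed here, not part of the definition above.

```python
import numpy as np

def simulate_Z(delta, z0, T=1.0, dt=1e-4, seed=0):
    """Euler-Maruyama path of dZ = (delta/2) dt + sqrt(2 Z) dW, Z(0) = z0 >= 0."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    z = np.empty(n + 1)
    z[0] = z0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = 0.5 * delta * dt
        diffusion = np.sqrt(2.0 * max(z[i], 0.0)) * dw
        z[i + 1] = max(z[i] + drift + diffusion, 0.0)   # keep the scheme nonnegative
    return z

z_path = simulate_Z(delta=1.5, z0=1.0)
```

Note that with a negative drift parameter the clipped scheme stays at 0 once it reaches it, which loosely mimics the absorption used for the stopped processes introduced next.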

The infinitesimal generator of the Feller semigroup corresponding to the process $Z^{\delta}$, acting on the Banach space

(2.2)  $C_0 = \bigl\{\, f \in C([0,\infty))\colon \lim_{x\to\infty} f(x) = 0 \,\bigr\},$

is

(2.3)  $G^{\delta} = x\,\partial_x^2 + \dfrac{\delta}{2}\,\partial_x,$

defined on the domain

(2.4)  $\mathcal{D}^{\delta} = \Bigl\{\, f \in C_0 \cap C^2\colon\ G^{\delta} f \in C_0,\ \lim_{x\searrow 0} x f''(x) = 0 \,\Bigr\}.$

Let $\sigma$ be the first hitting of zero by $Z^{\delta}$:

(2.5)  $\sigma = \sigma(Z^{\delta}) = \inf\{\, t > 0\colon Z^{\delta}(t) = 0 \,\}.$

It is well known that for $\delta \ge 2$, $\sigma = \infty$ a.s., and for $\delta < 2$, $\sigma < \infty$ a.s. Furthermore, if $\delta = 0$, then zero is an absorbing point, that is, for $t \ge \sigma$, $Z^{0}(t) = 0$. For $\delta < 2$, we denote by $\bar Z^{\delta}$ the process stopped at $\sigma$:

(2.6)  $\bar Z^{\delta}(t) = \begin{cases} Z^{\delta}(t) & \text{for } t \le \sigma, \\ 0 & \text{for } t \ge \sigma. \end{cases}$

The process $\bar Z^{\delta}$ is naturally defined for $\delta < 0$, too, as the solution of the SDE (2.1) until the first hitting of 0, and identically zero afterward. For $\delta < 2$, the generator of the process $\bar Z^{\delta}$ is the operator $G^{\delta}$ formally given in (2.3), but now defined on the domain

(2.7)  $\bar{\mathcal{D}}^{\delta} = \Bigl\{\, f \in C_0 \cap C^2\colon\ G^{\delta} f \in C_0,\ \lim_{x\searrow 0} G^{\delta} f(x) = 0 \,\Bigr\}.$

Let $\delta > 0$ and two other parameters, $a$ and $h$, be fixed. We define the process $S^{\delta,a,h}$ by patching together three different, independent BESQ's: $\bar Z^{2-\delta}_l$, $Z^{\delta}$ and $\bar Z^{2-\delta}_r$, in the following way:

(2.8)  $S^{\delta,a,h}(y) = \begin{cases} \bar Z^{2-\delta}_{l}(-y), & \bar Z^{2-\delta}_{l}(0) = h, & \text{for } y \le 0, \\ Z^{\delta}(y), & Z^{\delta}(0) = h, & \text{for } 0 \le y \le a, \\ \bar Z^{2-\delta}_{r}(y-a), & \bar Z^{2-\delta}_{r}(0) = Z^{\delta}(a), & \text{for } y \ge a; \end{cases}$

$\delta$, $a$ and $h$ are fixed parameters of the process and $y \in \mathbb{R}$ is the time variable. The process $S^{\delta,a,h}$ is graphically represented in Figure 1. For reasons which will soon become obvious we call the process $S^{\delta,a,h}$ a generalized Ray-Knight process. We denote

(2.9)  $\omega_-^{\delta,a,h} = \omega_-(S^{\delta,a,h}) = \inf\{\, y\colon S^{\delta,a,h}(y) > 0 \,\},$

(2.10)  $\omega_+^{\delta,a,h} = \omega_+(S^{\delta,a,h}) = \sup\{\, y \ge a\colon S^{\delta,a,h}(y) > 0 \,\}.$

For any $\delta > 0$, that is, $2 - \delta < 2$, $\omega_{\pm}$ are finite a.s.
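To make the patching in (2.8) concrete, here is a rough simulation sketch built on `simulate_Z` from the previous code block. The discretization, the branch horizons and the way absorption at 0 is imposed are all illustrative assumptions of this note, not constructions from the paper.

```python
import numpy as np

def stopped_branch(delta, z0, T, dt, seed):
    """A path of Z^delta frozen at its first (discrete) hitting of 0."""
    z = simulate_Z(delta, z0, T, dt, seed)
    hits = np.nonzero(z <= 0.0)[0]
    if hits.size:
        z[hits[0]:] = 0.0
    return z

def ray_knight_process(delta, a, h, T=5.0, dt=1e-3, seed=0):
    """Grid approximation of (y, S^{delta,a,h}(y)) as patched together in (2.8)."""
    middle = simulate_Z(delta, h, T=a, dt=dt, seed=seed)              # Z^delta on [0, a]
    left = stopped_branch(2.0 - delta, h, T, dt, seed + 1)            # stopped branch for y <= 0
    right = stopped_branch(2.0 - delta, middle[-1], T, dt, seed + 2)  # stopped branch for y >= a
    y = np.concatenate([
        -dt * np.arange(len(left))[::-1],        # y = -(len-1)*dt, ..., 0
        dt * np.arange(1, len(middle)),          # y = dt, ..., a
        a + dt * np.arange(1, len(right)),       # y = a + dt, ...
    ])
    s = np.concatenate([left[::-1], middle[1:], right[1:]])
    return y, s

y, s = ray_knight_process(delta=1.0, a=1.0, h=0.5)
```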

[Fig. 1. The generalized Ray-Knight process $S^{\delta,x,h}(y)$: (a) $\delta \le 2$; (b) $\delta \ge 2$.]

Since the process $\bar Z^{2-\delta}$ almost surely hits 0 in finite time and is stopped at this hitting time, the process $S^{\delta,a,h}$ almost surely has compact support and the total area under $S^{\delta,a,h}$,

(2.11)  $T^{\delta,a,h} = \int_{\omega_-^{\delta,a,h}}^{\omega_+^{\delta,a,h}} S^{\delta,a,h}(y)\,dy = \int_{-\infty}^{\infty} S^{\delta,a,h}(y)\,dy,$

is almost surely finite. For any $\delta$, $a$ and $h$, the random variable $T^{\delta,a,h}$ defined in (2.11) has an absolutely continuous distribution. Let

(2.12)  $\varrho^{\delta}(t; a, h) = \dfrac{\partial}{\partial t}\, P\bigl( T^{\delta,a,h} < t \bigr)$

be the density of the distribution of $T^{\delta,a,h}$. From scaling of the BESQ processes we easily get

(2.13)  $\lambda^2\, \varrho^{\delta}\bigl( \lambda^2 t; \lambda a, \lambda h \bigr) = \varrho^{\delta}(t; a, h)$

for any $\lambda > 0$.
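Continuing the illustrative sketches above, the law of the total area $T^{\delta,a,h}$ in (2.11) and the scaling (2.13) can be probed by crude Monte Carlo. Everything here, the sample size, the grid and the trapezoid rule, is an assumption of the sketch; rare excursions that outlive the simulation horizon are simply truncated.

```python
import numpy as np

def sample_T(delta, a, h, n_samples=100, seed=0):
    """Monte Carlo samples of the total area T^{delta,a,h} of (2.11)."""
    areas = np.empty(n_samples)
    for k in range(n_samples):
        y, s = ray_knight_process(delta, a, h, seed=seed + 1000 * k)
        # trapezoid-rule approximation of the area under the patched path
        areas[k] = float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(y)))
    return areas

# Crude check of the scaling (2.13): T^{delta,2a,2h} should look like 4 * T^{delta,a,h}.
t1 = sample_T(1.0, 1.0, 1.0)
t2 = sample_T(1.0, 2.0, 2.0)
print(t1.mean(), t2.mean() / 4.0)
```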

Define $\mathbb{R}_+ \times \mathbb{R} \ni (t, x) \mapsto \pi^{\delta}(t, x) \in \mathbb{R}_+$ as

(2.14)  $\pi^{\delta}(t, x) = \int_0^{\infty} \varrho^{\delta}\bigl( \tfrac{t}{2}; |x|, h \bigr)\, dh.$

The scaling property (2.13) of $\varrho^{\delta}$ implies

(2.15)  $\lambda\, \pi^{\delta}\bigl( \lambda^2 t, \lambda x \bigr) = \pi^{\delta}(t, x).$

We denote by $\hat\varrho^{\delta}$ and $\hat\pi^{\delta}$ the Laplace transforms of $\varrho^{\delta}$ (respectively, $\pi^{\delta}$):

(2.16)  $\hat\varrho^{\delta}(s; a, h) = s \int_0^{\infty} e^{-st}\, \varrho^{\delta}(t; a, h)\, dt = s\, E\bigl( \exp( -s\, T^{\delta,a,h} ) \bigr),$

(2.17)  $\hat\pi^{\delta}(s, x) = s \int_0^{\infty} e^{-st}\, \pi^{\delta}(t, x)\, dt = \int_0^{\infty} \hat\varrho^{\delta}\bigl( 2s; |x|, h \bigr)\, dh.$

These functions scale as

(2.18)  $\lambda^2\, \hat\varrho^{\delta}\bigl( \lambda^{-2} s; \lambda a, \lambda h \bigr) = \hat\varrho^{\delta}(s; a, h),$

(2.19)  $\lambda\, \hat\pi^{\delta}\bigl( \lambda^{-2} s, \lambda x \bigr) = \hat\pi^{\delta}(s, x).$

In the particular case $\delta = 2$, $S^{2,a,h}$ is well known: according to the by now classical Ray-Knight theorems it is identical to the local time process of standard one-dimensional Brownian motion stopped at appropriately chosen sampling times. More precisely, let $B_t$ be a standard Brownian motion on $\mathbb{R}$ and $\ell_x(t)$ be its local time process. The Ray-Knight theorems [see Chapter XI of Revuz and Yor (1991)] state that, given $x \in \mathbb{R}$, $h \ge 0$ fixed, if we stop the Brownian motion at the stopping time

(2.20)  $\tau_{x,h} = \inf\{\, t \ge 0\colon \ell_x(t) > h \,\}$

and consider the (shifted) local time process

(2.21)  $\Lambda_{x,h}(y) = \ell_{x - \operatorname{sgn}(x)\, y}\bigl( \tau_{x,h} \bigr),$

then

(2.22)  $\Lambda_{x,h} \;{\buildrel d \over =}\; S^{2, |x|, h},$

where ${\buildrel d \over =}$ stands for equality in distribution. Since

(2.23)  $\tau_{x,h} = \int_{-\infty}^{\infty} \Lambda_{x,h}(y)\, dy$

clearly holds, we actually have

(2.24)  $\bigl( \Lambda_{x,h},\ \tau_{x,h} \bigr) \;{\buildrel d \over =}\; \bigl( S^{2,|x|,h},\ T^{2,|x|,h} \bigr).$

This is exactly the content of the Ray-Knight theorems on Brownian local time. [See XI.2.2 and XI.2.3 of Revuz and Yor (1991).] Now, using the straightforward identity

(2.25)  $\int_0^{\infty} \mathbf{1}\bigl( \tau_{x,h} < t \bigr)\, dh = \ell_x(t),$

we readily get

(2.26)  $\pi^2(t, x) = \dfrac{\partial}{\partial t} \int_0^{\infty} P\bigl( \tau_{x,h} < t \bigr)\, dh = \dfrac{\partial}{\partial t}\, E\bigl( \ell_x(t) \bigr) = \dfrac{1}{\sqrt{2\pi t}} \exp\Bigl( -\dfrac{x^2}{2t} \Bigr),$

or, equivalently,

(2.27)  $\hat\pi^2(s, x) = \sqrt{\dfrac{s}{2}}\, \exp\bigl( -\sqrt{2s}\, |x| \bigr);$

that is, $\mathbb{R} \ni x \mapsto \pi^2(t,x)$ [respectively, $\mathbb{R} \ni x \mapsto \hat\pi^2(s,x)$] is the density of the distribution of the Brownian motion stopped at time $t$ (respectively, stopped at an independent random time of exponential distribution with expectation $s^{-1}$).

For the special values $\delta = 2$ and $\delta = 0$ of the dimension parameter, formulas relating to functionals of the squared Bessel process become more explicit and, consequently, (2.26) and (2.27) could have been established by direct computation, without any reference to Ray-Knight theorems. However, this is not the case for other values of the parameter $\delta$. Nevertheless, Theorem 4, formulated and proved in the Appendix, generalizes the statement made above: from Theorem 4 it follows that indeed, given $t$ [respectively, $s$] fixed, the function $x \mapsto \pi^{\delta}(t,x)$ [respectively, $x \mapsto \hat\pi^{\delta}(s,x)$] is a probability density; that is, for any $t$ [respectively, $s$],

(2.28)  $\int_{-\infty}^{\infty} \pi^{\delta}(t, x)\, dx = 1 = \int_{-\infty}^{\infty} \hat\pi^{\delta}(s, x)\, dx.$

The two assertions of (2.28) are, of course, equivalent: $\hat\pi^{\delta}(s, \cdot)$ is the distribution $\pi^{\delta}(t, \cdot)$ observed at a random time of exponential distribution with mean value $s^{-1}$. Furthermore, using the scaling relations (2.15) [respectively, (2.19)], one can eliminate the $t$ (respectively, $s$) parameters from these integrals; that is, one has to prove, say, the right-hand side equality with $s = 1$. However, for $\delta \ne 2$ the integrals in (2.28) seem to evade any attempt at explicit computation. The integrals could not be computed even for $\delta = 1$, which is another very special case. We shall prove a further generalization of (2.28) in the Appendix.

There are no explicit formulas for the distributions $\pi^{\delta}$; however, according to recent results of Carmona, Petit and Yor (1995) and Davis (1995), they coincide with the one-dimensional marginal distributions of the Brownian motion perturbed at extrema. See also the remark at the end of Section 3.

In Cases A and B (that is, for the asymptotically free and polynomially self-repelling walks) the generalized Ray-Knight processes $S^{\delta,a,h}$ will arise as scaling limits of properly defined local time processes, and later the distributions $\hat\pi^{\delta}(s,x)\,dx$ will arise as weak limits for the properly scaled position of the SIRW's at late times.

Remark. $\mathrm{BESQ}^{\delta}$ and $\mathrm{BESQ}^{2-\delta}$ processes with $\delta > 0$ have also been found recently as local time processes of reflecting Brownian motion perturbed by its local time at zero, that is, of $X_t = |B_t| - \mu L_t$ with $\mu = 2/\delta$, where $L_t$ denotes the local time of $|B|$ at zero. For details of different approaches to this problem, see, for example, Le Gall and Yor (1986), Carmona, Petit and Yor (1994), Perman (1995) or Werner (1995); the most general approach is that of Carmona, Petit and Yor (1995).

3. Limit theorems. The present section is divided into two subsections: in Section 3.1 we formulate limit theorems for the local time processes and hitting times of the SIRW's. In Section 3.2 we formulate the limit theorems for the position of the SIRW at late times.

3.1. The local time process and hitting times. Our first results are limit theorems for the local time processes of the random walks $X_i$, stopped at appropriately defined stopping times. We define the (bond) local time process

(3.1.1)  $L(l, i) = \#\{\, j < i\colon X_j = l,\ X_{j+1} = l - 1 \,\}, \qquad l \in \mathbb{Z},\ i \in \mathbb{N},$

and stopping times

(3.1.2)  $T^{>}_{k,-1} = 0, \qquad T^{>}_{k,m} = \inf\{\, i > T^{>}_{k,m-1}\colon X_{i-1} = k - 1,\ X_i = k \,\}, \qquad m \ge 0,$

(3.1.3)  $T^{<}_{k,0} = 0, \qquad T^{<}_{k,m} = \inf\{\, i > T^{<}_{k,m-1}\colon X_{i-1} = k + 1,\ X_i = k \,\}, \qquad m \ge 1.$

In plain words, $L(l, i)$ is the number of leftward jumps across the bond $(l-1, l)$ (that is, jumps from $l$ to $l-1$) performed by the random walk up to time $i$, $T^{>}_{k,m}$ is the time of the $(m+1)$th arrival at the lattice site $k$ coming from the left, and $T^{<}_{k,m}$ is the time of the $m$th arrival at the lattice site $k$ coming from the right. In formula (3.1.4) and thereafter the superscript asterisk ($*$) stands for either $<$ or $>$. We consider the following shifted (bond) local time processes of the walk stopped at $T^{*}_{k,m}$:

(3.1.4)  $S^{*}_{k,m}(l) = L\bigl( k - l,\ T^{*}_{k,m} \bigr).$

$S^{*}_{k,m}(l)$ is roughly half of the total number of jumps across the bond $(k-l-1, k-l)$:

(3.1.5)  $\#\bigl\{\, j < T^{*}_{k,m}\colon \{X_j, X_{j+1}\} = \{k-l-1,\ k-l\} \,\bigr\} = 2\, S^{*}_{k,m}(l) + \mathbf{1}\{ 0 \le l < k \}.$

Denote

(3.1.6)  $\omega^{-}_{k,m} = \omega_-\bigl( S^{*}_{k,m} \bigr) = \inf\{\, l\colon S^{*}_{k,m}(l) > 0 \,\},$

(3.1.7)  $\omega^{+}_{k,m} = \omega_+\bigl( S^{*}_{k,m} \bigr) = \sup\{\, l \ge k\colon S^{*}_{k,m}(l) > 0 \,\}.$

In plain words, $k - \omega^{+}_{k,m} - 1$ (respectively, $k - \omega^{-}_{k,m}$) is the leftmost (respectively, rightmost) site visited by the stopped walk $X_{[0, T^{*}_{k,m}]}$.
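For concreteness (again an illustrative sketch, reusing `simulate_sirw` from Section 1, and not part of the paper), the bond local time (3.1.1) just counts leftward jumps, so it can be read off a simulated trajectory directly.

```python
from collections import defaultdict

def bond_local_times(path):
    """L(l, i) of (3.1.1) with i = len(path) - 1: counts of jumps l -> l - 1."""
    L = defaultdict(int)
    for x_now, x_next in zip(path, path[1:]):
        if x_next == x_now - 1:       # a leftward jump across the bond (l - 1, l), l = x_now
            L[x_now] += 1
    return dict(L)

walk = simulate_sirw(lambda n: (n + 1.0) ** -0.5, 10_000)
L = bond_local_times(walk)
```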

From (3.1.5) it clearly follows that

(3.1.8)  $T^{*}_{k,m} = 2 \sum_{l=\omega^{-}_{k,m}}^{\omega^{+}_{k,m}} S^{*}_{k,m}(l) + k = 2 \sum_{l=-\infty}^{\infty} S^{*}_{k,m}(l) + k.$

Looking at the formal definitions only, in principle these local times or hitting times might be infinite; that is, it could happen that the site $k \in \mathbb{Z}$ is never hit. From the results of Davis (1990) it follows that in the cases considered in the present paper this is, with probability 1, not the case: all the random variables defined above are finite almost surely.

The following two theorems and their corollaries describe the precise asymptotics of the local time processes $S^{*}_{k,m}$ and hitting times $T^{*}_{k,m}$ in the asymptotically free and polynomially self-repelling cases.

Theorem 1A (Asymptotically free case: $\alpha = 0$). The limit

(3.1.9)  $\delta = 2\Bigl( w(0)^{-1} + \sum_{j=1}^{\infty} \bigl[ w(2j)^{-1} - w(2j-1)^{-1} \bigr] \Bigr)$

exists; $\delta \le 2$ for self-repelling walks (i.e., $w$ nonincreasing) and $\delta \ge 2$ for self-attracting walks (i.e., $w$ nondecreasing). Let $x \ge 0$, $h \ge 0$ and $* = <$ or $>$ be fixed. In the $A \to \infty$ limit the following weak convergence holds in the space $\mathbb{R} \times \mathbb{R}_+ \times D$:

(3.1.10)  $\Bigl( \dfrac{\omega^{-}_{\lfloor Ax \rfloor, \lfloor Ah \rfloor}}{A},\ \dfrac{\omega^{+}_{\lfloor Ax \rfloor, \lfloor Ah \rfloor}}{A},\ \dfrac{S^{*}_{\lfloor Ax \rfloor, \lfloor Ah \rfloor}(\lfloor Ay \rfloor)}{A} \Bigr) \Longrightarrow \bigl( \omega_{-}^{\delta,x,h},\ \omega_{+}^{\delta,x,h},\ S^{\delta,x,h}(y) \bigr).$

Theorem 1B (Polynomially self-repelling case: $\alpha > 0$). Denote

(3.1.11)  $\beta = \dfrac{1}{2\alpha + 1}.$

Let $x \ge 0$, $h \ge 0$ and $* = <$ or $>$ be fixed. In the $A \to \infty$ limit the following weak convergence holds in the space $\mathbb{R} \times \mathbb{R}_+ \times D$:

(3.1.12)  $\Bigl( \dfrac{\omega^{-}_{\lfloor Ax \rfloor, \lfloor \beta A h \rfloor}}{A},\ \dfrac{\omega^{+}_{\lfloor Ax \rfloor, \lfloor \beta A h \rfloor}}{A},\ \dfrac{S^{*}_{\lfloor Ax \rfloor, \lfloor \beta A h \rfloor}(\lfloor Ay \rfloor)}{\beta A} \Bigr) \Longrightarrow \bigl( \omega_{-}^{1,x,h},\ \omega_{+}^{1,x,h},\ S^{1,x,h}(y) \bigr).$

The generalized Ray-Knight process $S^{\delta,x,h}$ and the random variables $\omega_{\pm}^{\delta,x,h}$ appearing on the right-hand side of (3.1.10) and (3.1.12) are defined in (2.8)-(2.10).

Remark. It seems rather surprising (at least to the author) that in the polynomially self-repelling case (Case B) the exponent $\alpha$ is reflected only in the constant scaling factor $\beta$, while the limit process is unaffected: it is always a squared Wiener process.

Immediate corollaries of the previous theorems are the following limit laws for the hitting times defined in (3.1.2) and (3.1.3):

Corollary 1A (Asymptotically free case: $\alpha = 0$). Let $x$, $h$, $*$ and $\delta$ be as in Theorem 1A. Then

(3.1.13)  $\dfrac{T^{*}_{\lfloor Ax \rfloor, \lfloor Ah \rfloor}}{2A^2} \Longrightarrow T^{\delta,x,h}$

as $A \to \infty$.

Corollary 1B (Polynomially self-repelling case: $\alpha > 0$). Let $x$, $h$, $*$ and $\beta$ be as in Theorem 1B. Then

(3.1.14)  $\dfrac{T^{*}_{\lfloor Ax \rfloor, \lfloor \beta A h \rfloor}}{2 \beta A^2} \Longrightarrow T^{1,x,h}$

as $A \to \infty$.

The random variables $T^{\delta,x,h}$ appearing on the right-hand side of (3.1.13) and (3.1.14) are defined in (2.11). These corollaries will be used in the proof of Theorems 2A and 2B.

3.2. Local limit theorem for the position of the random walk. The second group of results concerns the limiting distribution of the SIRW $X_n$ at late times. We denote by $P_n(k)$, $n \in \mathbb{N}$, $k \in \mathbb{Z}$, the distribution of our self-interacting random walk at time $n$,

(3.2.1)  $P_n(k) = P( X_n = k ),$

and by $R_s(k)$, $s \in \mathbb{R}_+$, $k \in \mathbb{Z}$, the distribution of the walk observed at an independent random time $\theta_s$ of geometric distribution:

(3.2.2)  $P( \theta_s = n ) = \bigl( 1 - e^{-s} \bigr) e^{-sn},$

(3.2.3)  $R_s(k) = P\bigl( X_{\theta_s} = k \bigr) = \bigl( 1 - e^{-s} \bigr) \sum_{n=0}^{\infty} e^{-sn}\, P_n(k).$

We define the following rescaled densities of the above distributions:

(3.2.4)  $\pi_A(t, x) = A^{1/2}\, P_{\lfloor At \rfloor}\bigl( \lfloor A^{1/2} x \rfloor \bigr),$

(3.2.5)  $\hat\pi_A(s, x) = A^{1/2}\, R_{A^{-1} s}\bigl( \lfloor A^{1/2} x \rfloor \bigr),$

$t, s \in \mathbb{R}_+$, $x \in \mathbb{R}$. It is straightforward that $\hat\pi_A$ is exactly the Laplace transform of $\pi_A$.
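The rescaled densities (3.2.4)-(3.2.5) can also be explored empirically. The sketch below (with illustrative parameter choices throughout, and reusing `simulate_sirw` from Section 1) histograms $A^{-1/2} X_{\lfloor At \rfloor}$ over independent runs; it is meant only as a rough empirical analogue of $\pi_A(t,\cdot)$.

```python
import numpy as np

def rescaled_endpoints(w, A, t, n_paths=300, seed=0):
    """Samples of A^{-1/2} X_{floor(A*t)} over independent SIRW paths."""
    ends = np.empty(n_paths)
    for k in range(n_paths):
        path = simulate_sirw(w, int(A * t), seed=seed + k)
        ends[k] = path[-1] / np.sqrt(A)
    return ends

ends = rescaled_endpoints(lambda n: (n + 1.0) ** -0.5, A=400, t=1.0)
hist, edges = np.histogram(ends, bins=25, density=True)   # empirical analogue of pi_A(t, .)
```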

Theorem 2A (Asymptotically free case: $\alpha = 0$). For any $s \in \mathbb{R}_+$ and almost all $x \in \mathbb{R}$,

(3.2.6)  $\hat\pi_A(s, x) \to \hat\pi^{\delta}(s, x)$

as $A \to \infty$, where the probability density $\mathbb{R} \ni x \mapsto \hat\pi^{\delta}(s,x)$ is that defined in (2.17), with $\delta$ given in (3.1.9).

Theorem 2B (Polynomially self-repelling case: $\alpha > 0$). For any $s \in \mathbb{R}_+$ and almost all $x \in \mathbb{R}$,

(3.2.7)  $\hat\pi_A(s, x) \to \beta^{1/2}\, \hat\pi^{1}\bigl( s, \beta^{1/2} x \bigr)$

as $A \to \infty$, where the probability density $\mathbb{R} \ni x \mapsto \hat\pi^{1}(s,x)$ is that defined in (2.17) with $\delta = 1$, and $\beta$ is given in (3.1.11).

These are of course local limit theorems for the self-interacting random walks, observed at an independent random time $\theta_{s/A}$ of geometric distribution with mean $\bigl( e^{s/A} - 1 \bigr)^{-1} \approx A/s$. In particular, the (integral) limit laws follow:

Case A, for $\alpha = 0$,

(3.2.8)  $P\bigl( A^{-1/2} X_{\theta_{s/A}} < x \bigr) \to \int_{-\infty}^{x} \hat\pi^{\delta}(s, y)\, dy;$

Case B, for $\alpha > 0$,

(3.2.9)  $P\bigl( A^{-1/2} X_{\theta_{s/A}} < x \bigr) \to \int_{-\infty}^{\beta^{1/2} x} \hat\pi^{1}(s, y)\, dy.$

These are a little bit short of stating the limit theorems for deterministic time:

Case A, for $\alpha = 0$,

(3.2.10)  $P\bigl( A^{-1/2} X_{\lfloor At \rfloor} < x \bigr) \to \int_{-\infty}^{x} \pi^{\delta}(t, y)\, dy;$

Case B, for $\alpha > 0$,

(3.2.11)  $P\bigl( A^{-1/2} X_{\lfloor At \rfloor} < x \bigr) \to \int_{-\infty}^{\beta^{1/2} x} \pi^{1}(t, y)\, dy.$

Of course, we can conclude that the sequence $A^{-1/2} X_{\lfloor At \rfloor}$, with $t \in \mathbb{R}_+$ fixed and $A \to \infty$, is tight, and if it converges in distribution, then (3.2.10)/(3.2.11) also hold.

Remark. Carmona, Petit and Yor (1995) and Davis (1995) considered the Brownian motion perturbed at extrema, that is, the stochastic process formally defined by

(3.2.12)  $Y_t = B_t + \alpha \sup_{s \le t} Y_s + \beta \inf_{s \le t} Y_s.$

It turns out that for $\alpha = \beta = 2/\delta$ these processes have the same family of local time processes as those appearing in our Theorems 1A and 1B. Furthermore, Davis (1995) proves that the properly scaled once reinforced random walk, defined by (1.5), converges to $Y$ as a process. In Theorems 2A and 2B we prove only convergence of one-dimensional distributions, but for a much wider family of self-interacting random walks. Actually, our limiting one-dimensional

12 SELF-INTERACTING RANDOM WALKS ON Z 1335 distributions coincide with the one-dimensional marginal distributions of the process Y t. The problem of higher-dimensional distributions of our general SIRW s remains open. 4. Representation of the local time process in terms of Pólya urns Generalized Pólya urn schemes. Given two weight functions (4.1.1) (4.1.2) r: N R + b: N R + a generalized Pólya urn scheme is a Markov chain ρ i β i on N N with transition probabilities (4.1.3) (4.1.4) P ρ i+1 β i+1 = k + 1 l ρ i β i = k l = P ρ i+1 β i+1 = k l + 1 ρ i β i = k l = r k r k + b l b l r k + b l and no other transitions allowed. Usually the initial values ρ β = are assumed and β i and ρ i are interpreted as the number of blue and red marbles, respectively, drawn from the urn up to time i. Denote by τ m the time when the mth red marble is drawn and by µ m the number of blue marbles drawn before the mth red one: (4.1.5) (4.1.6) τ m = min i ρ i = m µ m = β τm The following functions are essential in the study of the Pólya urn scheme defined above: (4.1.7) n 1 R p n = r j p j= p N (4.1.8) B p n = n 1 j= ( b j ) p p N We shall be particularly interested in p = Lemma 1. For any m N and λ < min r j : j m 1 the following identity holds: (4.1.9) E ( µ m 1 j= ( 1 + λ b j )) m 1( = j= 1 λ r j ) 1

13 1336 B. TÓTH In particular, (4.1.1) (4.1.11) E B 1 µ m = R 1 m E B 1 µ m E B 1 µ m 2 = R 2 m + E B 2 µ m E B 1 µ m E B 1 µ m 4 (4.1.12) = 6R 4 m + 9R 2 2 m + 6R2 1 m R 2 m 12R 1 m E B 1 µ m B 2 µ m + 8R 1 m E B 3 µ m + 12 R 2 1 m + R 2 m E B 2 µ m + 6 E B 2 1 µ m B 2 µ m 8 E B 1 µ m B 3 µ m + 6 E B 4 µ m + 6 E B 2 µ m 2 Proof. The proof of (4.1.9) follows from standard martingale considerations. One possibility is using Rubin s representation of the generalized Pólya urn scheme, given in the Appendix of Davis (199). Expanding (4.1.9) to fourth order in λ yields (4.1.1) (4.1.12). We leave the standard details of this proof as an exercise for the reader. Remark. The explicit form of the expressions on the right-hand sides of (4.1.1) and (4.1.11) will be used later. The right-hand side of (4.1.12) looks rather discouraging, but in our concrete application, we shall need only estimates on its order of magnitude The local time process. For the sake of definiteness we consider the case of superscript >, that is, we stop the SIRW at the hitting time T > k m. The case of superscript < is done in a very similar way, with straightforward slight changes. Let ρ l i β l i, l Z, be independent Pólya urn schemes with weight functions (4.2.1) r l j = w 2j + 1 b l j = w 2j for l k + 1 (4.2.2) r l j = w 2j b l j = w 2j + 1 for l 1 k 1 (4.2.3) r l j = w 2j + 1 b l j = w 2j for l = k
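As a small illustrative aside (an assumption-laden sketch of this note, not a construction from the paper), the generalized Pólya urn scheme of (4.1.3)-(4.1.4) and the variables $\mu_m$ of (4.1.5)-(4.1.6) can be simulated directly; the example weights below follow the pattern of the assignments (4.2.1)-(4.2.3), with an arbitrary choice of $w$.

```python
import random

def polya_urn(r, b, n_draws, seed=0):
    """Trajectory (rho_i, beta_i) of the generalized Polya urn with red/blue weights r, b."""
    rng = random.Random(seed)
    k = l = 0                     # red (rho) and blue (beta) marbles drawn so far
    history = [(0, 0)]
    for _ in range(n_draws):
        if rng.random() < r(k) / (r(k) + b(l)):
            k += 1                # a red marble is drawn
        else:
            l += 1                # a blue marble is drawn
        history.append((k, l))
    return history

def mu(history, m):
    """mu_m of (4.1.6): blue marbles drawn before the m-th red one (None if not reached)."""
    for k, l in history:
        if k == m:
            return l
    return None

w = lambda n: (n + 1.0) ** -0.5
hist = polya_urn(lambda j: w(2 * j + 1), lambda j: w(2 * j), 1000)
print(mu(hist, 100))
```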

14 SELF-INTERACTING RANDOM WALKS ON Z 1337 Denote by µ l m the random variables defined in (4.1.6), the superscript l shows to which of the urn schemes it belongs. The extension to self-interacting walks of Knight s (1963) description of the local time process S > k m l, l, as a Markov chain is formally exhaustively explained in several papers; see, for example, Davis (199) or Tóth (1995). According to these arguments S > k m l, l, is obtained by patching together three homogeneous Markov chains in the following way: 1. In the interval l k 2, that is, steps k 2 k 1, (4.2.4) S > k m = m S> k m l + 1 = µ l+1 S > k m l + 1 l = 1 k 2 This process will be the object of a first Ray Knight theorem. 2. The single step k 1 k is exceptional: (4.2.5) S > k m k 1 = given by (4.2.4) S > k m k = µ k S > k m k In the intervals l [respectively, l k + 1 ], that is, steps 1, 1 2, 2 3 [respectively, k k + 1, k + 1 k + 2 k + 2 k + 3 k + 3 k + 4 ], (4.2.6) S > k m = m S> k m l 1 = µ l S > k m l l = 1 2 respectively (4.2.7) S > k m k = given by (4.2.5) S > k m l + 1 = µ l+1 S > k m l l = k k + 1 k + 2 Due to (4.2.1) these last two Markov chains have the same transition laws. These will be the object of a second Ray Knight theorem. The proof of Theorems 1A and 1B will consist of proving limit theorems for the Markov chains given in (4.2.4) [respectively, (4.2.6) (4.2.7)] and proving that the single exceptional step given in (4.2.5) does not have any effect on the limit (i.e., it does not spoil continuity of the limit process). 5. Proof of Theorems 1A and 1B Preparations. As suggested by the representation of the local times given in the previous section, we consider two homogeneous Markov chains l and l, l = 1 2, on the state space N, defined as follows: (5.1.1) l + 1 = µ l+1 l + 1 l + 1 = µ l+1 l

15 1338 B. TÓTH where the processes µ l l N are those defined in (4.1.5) and (4.1.6), belonging to i.i.d. Pólya urn schemes ρ l i β l l N, with weight functions (5.1.2) r j = w 2j i b j = w 2j + 1 Similarly, the processes µ l l N ρ l i β l l N, with weight functions i belong to i.i.d. Pólya urn schemes (5.1.3) r j = w 2j + 1 b j = w 2j We shall also need the hitting time (5.1.4) σ = σ = inf l: l = From (5.1.1), (4.1.5) and (4.1.6) we see that σ is actually the extinction time of : (5.1.5) l for l σ Lemma 1 suggests the introduction of the functions (5.1.6) n 1 U p n = w 2j p j= p = (5.1.7) V p n = n 1 j= ( w 2j + 1 ) p p = Using formulas (4.1.1) and (4.1.11) of Lemma 1 and the functions introduced above, we get the identities (5.1.8) E V 1 l + 1 l = n = U 1 n + 1 (5.1.9) D 2 V 1 l + 1 l = n = U 2 n+1 + E V 2 l+1 l =n (5.1.1) E U 1 l + 1 l = n = V 1 n (5.1.11) D 2 U 1 l + 1 l = n = V 2 n + E U 2 l + 1 l = n As both functions n U 1 n and n V 1 n are bijections between N and their ranges it is more convenient to consider the Markov chains (5.1.12) l = V 1 l l = U1 l l = 1 2 instead of l [respectively l ]. With this change of variable, formulas (5.1.8) (5.1.11) transform as (5.1.13) E l + 1 l = x = U 1 V1 1 x + 1

16 SELF-INTERACTING RANDOM WALKS ON Z 1339 (5.1.14) (5.1.15) (5.1.16) D 2 l + 1 l = x = U 2 V 1 1 x + 1 E l + 1 l = x = V 1 U 1 1 x D 2 l + 1 l = x = V 2 U 1 1 x + E V 2 V 1 1 l + 1 l = x + E U 2 U 1 1 l + 1 l = x We introduce the functions F, G: Ran V 1 R and F, G: Ran U 1 R: (5.1.17) (5.1.18) (5.1.19) (5.1.2) F x = E l + 1 l = x x = U 1 V1 1 x + 1 x G x = E l + 1 E l + 1 l = x 2 l = x = U 2 V1 1 x E V 2 V1 1 1 = x F x = E l + 1 l = x x = V 1 U1 1 x x G x = E l + 1 E l + 1 l = x 2 l = x = V 2 U 1 1 x + E U 2 U = x Since and are Markov chains, from (5.1.13) (5.1.2) it follows that the processes (5.1.21) l = l l 1 j= F j l = l l 1 j= F j are martingales with quadratic variation processes (5.1.22) l = l 1 j= l 1 l = j= G j G j Later, when proving tightness, we shall also need the functions H: Ran V 1 and H: Ran U 1 R: (5.1.23) H x = E l + 1 E l + 1 l = x 4 l = x (5.1.24) H x = E l + 1 E l + 1 l = x 4 l = x

17 134 B. TÓTH 5.2. Asymptotics of the relevant functions. In the present subsection we give the asymptotics of the relevant functions F, G, H, F, G and H to be used in the proof of Theorems 1A and 1B. All formulas are valid for large values of the variable and are obtained from (1.4) in a rather straightforward way. The three cases listed at the end of the Introduction show essentially different asymptotic behavior. These essentially different asymptotics explain why exactly these are the different regimes. Asymptotically free case: α = (Case A). From (1.4) we get (5.2.1) U 1 n = n B log n + u + n 1 (5.2.2) V 1 n = n B log n + v + n 1 (5.2.3) V 2 n = n + log n = U 2 n In (5.2.1) [respectively, (5.2.2)], u and v are two real constants. We define (5.2.4) δ = 2 lim n U n + 1 V n = u v In the self-repelling case, that is, w k + 1 w k, we write (5.2.5) δ 2 = w 1 + w 2j 1 w 2j 1 1 j=1 = 1 w 2j w 2j 1 j= and hence we conclude (5.2.6) < 2w 1 δ 2 On the other hand, in the case of self-attraction, that is, w k + 1 w k, we write (5.2.7) δ 2 = w 1 w 2j 1 1 w 2j 1 j=1 = 1 + w 2j 1 w 2j j= which implies (5.2.8) 2 δ 2w 1 < The asymptotics of the functions F, F, G, G, H and H is given in the next lemma.

18 SELF-INTERACTING RANDOM WALKS ON Z 1341 Lemma 2A (Asymptotically free case: α = ). hold for x 1: The following asymptotics (5.2.9) (5.2.1) (5.2.11) F x = δ 2 + x 1 F x = 2 δ 2 + x 1 G x = 2x + log x = G x (5.2.12) H x = x 2 = H x Proof. (5.2.13) Clearly F x = U 1 n + 1 V 1 n with n = V 1 1 x (5.2.14) From (5.2.1) it follows that (5.2.15) F x = V 1 n U 1 n with n = U 1 1 x U 1 1 x = x + log x = V 1 1 x Using (5.2.1), (5.2.2), (5.2.15) and (5.2.13) [respectively, (5.2.14)], we easily get (5.2.9) [respectively, (5.2.1)]. To prove (5.2.11) note first that (5.2.16) U 2 U 1 1 x = x + log x = V 2 V 1 1 x Hence, using (5.1.17) (5.1.19), (5.2.9)/(5.2.1) and Jensen s inequality, (5.2.17) E U 2 U1 1 1 = x = x + log x = E V 2 V = x Using this in the expressions (5.1.18) (5.1.2), we get (5.2.11). The details of the proof of (5.2.12) are lengthy and not very illuminating. The first three terms on the right-hand side of (4.1.12) are estimated directly. For the remaining seven terms, one applies repeatedly the method of the proof of (5.2.23). We omit these details. Polynomially self-repelling case: α (Case B). In this case, (1.4) implies ( B U 1 n = n α+1 + α + α + 1 ) (5.2.18) n α + n α (5.2.19) ( B V 1 n = n α+1 + α α ) n α + n α 1 1 (5.2.2) U 2 n = α α + 1 n2α+1 + n 2α = V 2 n

19 1342 B. TÓTH We shall denote (5.2.21) β = 1 2α + 1 Using (5.2.18) (5.2.2), we get the following asymptotic expressions for the functions F, F, G, G, H and H. Lemma 2B [Polynomially self-repelling case: α ]. asymptotics hold for x 1: The following (5.2.22) (5.2.23) (5.2.24) F x = α xα/ α+1 + x α 1 / α+1 1 = F x α G x = 2 2α + 1 x 2α+1 / α+1 + x 2α/ α+1 = G x H x = x 4α+2 / α+1 = H x Proof. (5.2.25) From (5.2.18) and (5.2.19) we get U 1 1 x = x1/ α = V 1 1 x Now, from (5.2.18), (5.2.19), (5.2.25) and (5.2.13) [respectively, (5.2.14)], the asymptotic formulas (5.2.22) follow directly. The derivation of (5.2.23) is slightly more complicated: first note that from (5.2.19) and (5.2.25) it follows that there are two finite constants, say C 1 and C 2, so that for any x > and z, (5.2.26) C 1 x α/ α+1 z x V 2 V 1 1 z V 2 V 1 1 x C 1 x α/ α+1 z x + C 2 x 1/ α+1 z x 2 We insert this in the definition (5.1.18) of the function G and get (5.2.27) C 1 x α/ α+1 F x G x U 2 V 1 1 x V 2 V 1 1 x C 1 x α/ α+1 F x + C 2 x 1/ α+1 G x From these bounds and the explicitly known asymptotics of the functions involved, the asymptotics (5.2.23) of the function G now follows in a straightforward way. An identical derivation holds also for the function G. The proof of (5.2.24) goes through very similar steps, but it is considerably longer. Again, the first three terms on the right-hand side of (4.1.12) are estimated directly and the remaining seven terms are estimated by considerations similar to (5.2.26) and (5.2.27). As the details are lengthy and of no particular interest, we do not present them here.

20 SELF-INTERACTING RANDOM WALKS ON Z Scaling. The proper scaling of the processes and is determined by the dominant terms in the asymptotics of the functions F, G (respectively F, G). The scaling of the processes and will be later determined by the functional relations (5.1.12). Asymptotically free case: α = (Case A). Equations (5.2.9) (5.2.11) suggest the scaling (5.3.1) Y A t = A 1 At Ỹ A t = A 1 At M A and their quadratic variation pro- The rescaled martingales M A and cesses will be (5.3.2) (5.3.3) (5.3.4) (5.3.5) M A t = A 1 At = Y A t Y A A 1 At F AY A s ds M A t = A 1 At A 1 At = Ỹ A t Ỹ A F AỸ A s ds M A M A t = A 2 At = A 1 At A 1 G AY A s ds M A M A t = A 2 At = A 1 At A 1 G AỸ A s ds Polynomially self-repelling case: α (Case B). (5.2.23) suggest Now, (5.2.22) and (5.3.6) Y A t = βa α+1 At Ỹ A t = βa α+1 At The rescaled martingales M A and M A and their quadratic variation processes will be now M A t = βa α+1 At (5.3.7) (5.3.8) (5.3.9) = Y A t Y A M A t = βa α+1 At = Ỹ A t Ỹ A A 1 At A 1 At M A M A t = βa 2 α+1 At = A 1 At β 1 βa α F βa α+1 Y A s ds β 1 βa α F βa α+1 Ỹ A s ds β 1 βa 2α+1 G βa α+1 Y A s ds

21 1344 B. TÓTH (5.3.1) M A M A t = βa 2 α+1 At = A 1 At β 1 βa 2α+1 G βa α+1 Ỹ A s ds The functional relations (5.1.12), the asymptotics (5.2.15) [respectively, (5.2.25)] and the scaling (5.3.1) [respectively, (5.3.6)] determine the proper scaling of the processes and : Asymptotically free case: α = (Case A): (5.3.11) Z A t = A 1 At Z A t = A 1 At Polynomially self-repelling case: α (Case B): (5.3.12) Z A t = βa 1 At Z A t = βa 1 At 5.4. Tightness. Given the asymptotic estimates (5.2.9) (5.2.12) [respectively, (5.2.22) (5.2.24)], the proof of tightness is rather standard: we have to check Kolmogorov s criterion; that is, the conditions of Theorem 12.3 from Billingsley (1968). We give the details of the proof for the processes Y A in the asymptotically free case (α = ); the proof for the other cases is completely identical. Let M l, l = 1 2 be an arbitrary discrete parameter martringale and write (5.4.1) ξ l = M l M l 1 l = The following identity holds: (5.4.2) l E M l M k 4 = 6 E ξj 2 M j 1 M k 2 j=k+1 l l + 4 E ξj 3 M j 1 M k + E ξj 4 j=k+1 j=k+1 which, via Jensen s inequality, yields (5.4.3) E M l M k l j=k+1 l j=k+1 l j=k+1 E E ξ 2 j j 1 M j 1 M k 2 E E ξ 4 j j 1 3/2 M j 1 M k 2 E E ξ 4 j j 1

22 SELF-INTERACTING RANDOM WALKS ON Z 1345 Applied to the martingale M A defined in (5.3.2) this gives E M A t M A s 4 (5.4.4) 6 At /A As /A + 4A 1/2 At /A E A 1 G AY A r M A r M A s 2 dr As /A E A 2 H AY A r 3/2 M A r M A s 2 dr At /A + A 1 E A 2 H AY A r dr As /A From the asymptotics (5.2.11) and (5.2.12) it follows that there exists a finite constant C, such that for any y >, (5.4.5) A 1 G Ay < C y + 1 A 2 H Ay < C y Consider the stopping times (5.4.6) τ y A = inf t : Y A t y From (5.4.4) we easily get for t, s τ y A, (5.4.7) E M A t τ y A M A s τ y A 4 K y t s A 1 2 where K y is a constant depending on y only. Hence, applying Theorem 12.3 of Billingsley (1968), we get the tightness of the martingales M A t τ y A for any y <. This also implies tightness of the martingales M A t. From (5.3.2) it is straightforward to see that tightness of the processes M A and Y A is equivalent Identification of the limiting processes. Assume, with some abuse of notation, that Y A and Ỹ A are weakly convergent subsequences: (5.5.1) Y A Y Ỹ A Ỹ From this it follows that the martingales M A and too: M A converge weakly, (5.5.2) M A M M A M Asymptotically free case: α = (Case A). We use (5.3.2) (5.3.5) and the asymptotics (5.2.9) (5.2.11) and get (5.5.3) M t = Y t Y δ 2 t t M M t = 2 Y s ds (5.5.4) M t = Ỹ t Ỹ 2 δ 2 t M M t = 2 t Ỹ s ds

23 1346 B. TÓTH These relations yield the following SDE s for Y [respectively, Ỹ ]: (5.5.5) dy t = δ 2 dt + 2Y t dw t dỹ t = 2 δ 2 dt + 2Ỹ t d W t which are valid as long as Y t > [respectively, Ỹ t > ]. These are precisely the SDE s of the BESQ δ 2 δ (respectively, BESQ ) processes described in Section 2. Polynomially self-repelling case: α (Case B). Now, (5.3.7) (5.3.1), (5.2.22), (5.2.23) and the explicit value of β given in (5.2.21) lead to (5.5.6) (5.5.7) (5.5.8) (5.5.9) M t = Y t Y α + 1 2α t M M t = 2 α Y s 2α+1 / α+1 ds M t = Ỹ t Ỹ α + 1 2α t M M t = 2 α Ỹ s 2α+1 / α+1 ds We write these relations again as SDE s: (5.5.1) dy t = dỹ t = t t α + 1 2α + 1 Y t α/ α+1 dt α + 1 Y t 2α+1 / 2α+2 dw t α + 1 2α + 1 Ỹ t α/ α+1 dt α + 1 Ỹ t 2α+1 / 2α+2 d W t Y s α/ α+1 ds Ỹ s α/ α+1 ds These SDE s are again valid as long as Y t > [respectively, Ỹ t > ]. The SDE s in (5.5.1) are easily identified as the SDE s of the α + 1 th power of BESQ 1. The SDE s (5.5.5) and (5.5.1) determine uniquely the limit processes Y and Ỹ as long as these processes do not hit the boundary = R +. In order to identify completely the limit processes we have to describe precisely their behavior at the boundary Reflection at the boundary. In this subsection we prove that the limit processes Y are reflected instantaneously at = R +. For y R + let us denote (5.6.1) τ y = inf l : l y (5.6.2) τ y A = inf t : Y A t y

24 SELF-INTERACTING RANDOM WALKS ON Z 1347 We shall prove that for any η >, (5.6.3) lim lim sup P τ y A > η Y A = = y from which the assertion follows. Asymptotically free case: α = (Case A). In this case, (5.6.4) τ y A = A 1 τ Ay We apply directly the optional sampling theorem [see Breiman (1968)] to the martingale l defined in (5.1.21) with the stopping time τ y : (5.6.5) = E τ y l = ( τy l 1 ) = E τ y l = E F j = j= However, from (5.2.5) and (5.2.7) we see that { } 2 (5.6.6) inf F x = min x w 2 > From (5.6.5) and (5.6.6) we conclude { w E τ y l = max 2 1 } (5.6.7) E τ 2 y l = Using this inequality it follows that for any t <, (5.6.8) (5.6.9) lim sup E τ y A t Y A = = lim sup E A 1 τ Ay At = { w max 2 1 } lim sup E A 1 τ 2 Ay At = In order to estimate the right-hand side of (5.6.8) and (5.6.9), note first that the following simple bound holds for the biggest jump of before τ y l: ( ) ( ) 1/4 (5.6.1) E max k k 1 l max H z 1 k τ y l z y Hence, using the asymptotics (5.2.12) of the function H, we get the overshoot bound ( ) (5.6.11) lim sup E A 1 τ Ay At = y which combined with (5.6.9) leads us to { w (5.6.12) lim sup E τ y A t Y A = max } y

25 1348 B. TÓTH which is valid for any t <. Hence, via Markov s inequality, (5.6.3) follows for the asymptotically free case. Polynomially self-repelling case: α (Case B). This is slightly more complicated. In this case, (5.6.13) τ y A = A 1 τ βa α+1 y For any x >, z, the following inequality holds: (5.6.14) z 1/ α+1 x 1/ α+1 1 α + 1 x α/ α+1 z x 1 α 2 α x 2α+1 / α+1 z x 2 From this it follows that (5.6.15) E l + 1 1/ α+1 l = x x 1/ α+1 1 α + 1 x α/ α+1 F x 1 α 2 α x 2α+1 / α+1 G x = 1 2 2α x 1/ α+1 x 1/ α+1 Fix x so that for x x, (5.6.16) E l + 1 1/ α+1 l = x x 1/ α+1 > 1 4 2α + 1 and denote x 1 = x 1/ α α+1. We consider the sequence of sampling times (5.6.17) k = k l = min i > k l 1 : i > x and the time-changed process (5.6.18) Due to (5.6.16) the process l = k l l 1 (5.6.19) l = l 1/ α α + 1 l is submartingale. We define the following sequence of stopping times: (5.6.2) s = (5.6.21) t i = min l > s i 1 : l x 1 (5.6.22) s i = min l > t i : l x

26 For y x 1 fixed, let SELF-INTERACTING RANDOM WALKS ON Z 1349 (5.6.23) ˆτ y = inf l > : l y (5.6.24) r y = max i: s i τ y (5.6.25) ˆτ y = r y +1 t i s i 1 i=1 In plain words, ˆτ y denotes the time spent in the interval x, r y denotes the number of downcrossings of the interval x x 1 and ˆτ y denotes the time spent in the interval x 1 by the process l, before τ y. Clearly (5.6.26) and (5.6.27) τ y ˆτ y + ˆτ y r y + 1 ˆτ y Applying the optional sampling theorem for the submartingale in (5.6.19), we get l defined (5.6.28) Hence (5.6.29) E ˆτ y l = 4 2α + 1 E ˆτ y l 1/ α+1 = 4 2α + 1 E ˆτ y l = 1/ α+1 lim sup E A 1 ˆτ βa α+1 y At = 4 2α + 1 lim sup E A 1 ˆτ βa α+1 y At = 1/ α+1 An estimate identical to (5.6.1) on the biggest jump of the process and the asymptotics (5.2.24) of the function H yields now the overshoot bound (5.6.3) lim sup E βa α+1 ˆτ βa α+1 y At = y which combined with (5.6.29) leads to (5.6.31) lim sup E A 1 ˆτ βa α+1 y At = 4 2α + 1 y 1/ α+1 Hence, by Markov s inequality, (5.6.32) lim y lim sup P A 1 ˆτ βa α+1 y > η = = To estimate ˆτ y, note first that the random variables (5.6.33) t i s i 1 i = 1 2 3

27 135 B. TÓTH are uniformly stochastically bounded: (5.6.34) m = max x x E τ x1 = x is finite since the Markov chain is not trapped in any finite interval. Using in turn this fact, Markov s inequality and (5.6.32), we find (5.6.35) lim y lim sup P A 1 ˆτ βa α+1 y > η = lim lim sup y ( P A 1 ˆτ βa α+1 y > η A 1 r βa α+1 y + 1 ε ) = + lim lim sup P A 1 r βa α+1 y + 1 > ε = y mε η + lim lim sup P A 1 ˆτ βa y α+1 y > ε = = mε η Letting ε on the right-hand side of (5.6.35), we get (5.6.36) lim y lim sup P ( A 1 ˆτ βa α+1 y > η = ) = Finally, (5.6.26), (5.6.32) and (5.6.36) imply (5.6.3) for the polynomially selfrepelling case, α Absorption at the boundary. We prove that the limit processes Ỹ are absorbed at = R +. When proving this latter assertion we also get the weak convergence of the extinction times σ A. For x R + we denote (5.7.1) σ x = inf l : l x (5.7.2) We prove now that for any η >, (5.7.3) lim y σ x A = inf t : Ỹ A t x lim sup P σ A > η Ỹ A = y = Asymptotically free case: α = (Case A). to (5.7.4) lim y lim sup P σ > Aη = Ay = In this case (5.7.3) is equivalent δ > 2 and < δ 2 are treated separately. Case A1. First we consider the self-attracting cases with δ > 2. From (5.2.1) it follows that there exists an x < such that for x x, (5.7.5) F x 2 δ 4 <

28 and thus (5.7.6) SELF-INTERACTING RANDOM WALKS ON Z 1351 l = l + δ 2 4 l is supermartingale as long as l x. Applying the optional sampling theorem to the supermartingale l we get for y > x, (5.7.7) Now we prove (5.7.4): (5.7.8) lim y E σ x = y 4 δ 2 y lim sup P σ > Aη = Ay lim lim sup P σ x > Aη/2 = Ay y + lim sup sup P σ > Aη/2 = x x x Applying Markov s inequality and (5.7.7) we get ( lim lim sup P σ x > Aη ) (5.7.9) y 2 = Ay lim y 8y δ 2 η = On the other hand, since x is constant independent of A, the second limit on the right hand side of (5.7.8) clearly vanishes. Hence (5.7.4) for this case. Case A2. Next we deal with the cases when δ 2. Choose (5.7.1) γ < δ 2 1 For any x >, z, the following inequality holds: z γ x γ γx γ 1 γ 1 γ z x x γ 2 z x 2 2 (5.7.11) γ γ 1 γ 2 + x γ 3 z x 3 6 From this it follows that E l + 1 γ l = x x γ γx γ 1 γ 1 γ F x x γ 2 G x 2 (5.7.12) γ γ 1 γ 2 + x γ 3 H x 3/4 6 γ δ 2γ = x γ 1 + x γ 3/2 2 where we have used the asymptotics (5.2.1) (5.2.12). Fix x < such that for x x, (5.7.13) E l + 1 γ l = x x γ γ δ 2γ x γ 1 4

29 1352 B. TÓTH Thus (5.7.14) l = l γ + γ δ 2γ 4 l 1 k= k γ 1 is supermartingale as long as l > x, Hence, for y x, ( [ ] γ 1 ) E σ x sup k = y (5.7.15) k σ x 1 ( σ x 1 E k= 4 γ δ 2γ yγ k γ 1 = y ) Since k γ, k σ x, itself is a supermartingale, we also have ( ) (5.7.16) P sup k > λ = y yγ k σ x 1 λ γ Finally, using (5.7.15) and (5.7.16) we derive ( P σ x > Aη ) 2 = Ay ([ P σ x > Aη ] [ sup 2 k σ x 1 (5.7.17) Hence (5.7.18) ( + P ( P sup k σ x 1 σ x [ ( + P sup k σ x 1 sup k σ x 1 8 γ δ 2γ lim y y γ ) k > Aλ = Ay ] ) k Aλ = Ay ] γ 1 k > Aγ λ γ 1 ) η 2 = Ay ) k > Aλ = Ay yγ + ηλγ 1 λ γ lim sup P σ x > Aη/2 = Ay = and from an argument identical to (5.7.8) we get (5.7.4). Polynomially self-repelling case: α (Case B). In this case (5.7.3) is equivalent to (5.7.19) lim y lim sup P σ > Aη = A α+1 y =

30 SELF-INTERACTING RANDOM WALKS ON Z 1353 The proof is completely identical to the previous one, with the choice of (5.7.2) γ < We omit the repetition of the details. 1 2 α + 1 < End of proof. Collecting the results of Sections 5.4, 5.5, 5.6 and 5.7 we conclude the following: In the asymptotically free case, α = (Case A), (5.8.1) in the space D and Z A Z δ (5.8.2) Z A σ A Z 2 δ σ in the space D, where Z δ is the BESQ δ process and Z 2 δ 2 δ is the BESQ process defined in Section 2. The parameter δ is given in (5.2.4). In the polynomially self-repelling case, α (Case B), (5.8.3) in the space D and (5.8.4) Z A Z 1 Z A σ A Z 1 σ in the space D. Given the representation of the local time process described in Section 4.2, Theorem 1A (respectively, Theorem 1B), follows directly from (5.8.1) and (5.8.2) [respectively (5.8.3) and (5.8.4)] after noting that due to (4.1.11) it is easily seen that the single exceptional step (4.2.5) does not spoil the continuity of the limit process S x h y at y = x. Corollaries 1A and 1B follow directly from Theorems 1A and 1B, respectively. Note that the joint convergence of the processes S Ax Ah A /A and extinction times ω ± Ax Ah /A is needed in this proof. 6. Proof of Theorems 2A and 2B. We prove a general abstract version of Theorems 2A and 2B: Let R y S a h y R + be a generalized Ray Knight process, as defined in Section A.5 of the Appendix and let T a h be the total area under S a h. Define the functions ϱ t a h, π t x, ˆϱ s a h and ˆπ t x via formulas (2.12), (2.14), (2.16), and (2.17), respectively, with the superscript δ erased. According to Theorem 4 (proved in the Appendix) the functions R x π t x R + and R x ˆπ s x R + are probability densities. Consider a nearest neighbor, self-interacting random walk X i on with law given by (1.1) (1.3), with arbitrary weight function w and denote by T k m the hitting times defined in (3.1.2) and (3.1.3). Further, let P n k and R s k

31 1354 B. TÓTH be the distributions of the random walk X i at time n and at the geometrically distibuted time θ s, respectively, as given in (3.2.1) (3.2.3). Theorem 2. Assume there are two constants β > and γ > so that for any x > h > and = < or > (6.1) T Ax A γ βh 2βA 1+γ T x h as A. Then for any s > and almost all x R (6.2) ˆπ A s x β 1/ 1+γ ˆπ s β 1/ 1+γ x as A where ˆπ A s x is the properly rescaled distribution of the random walk: (6.3) ˆπ A s x = A 1/ 1+γ R A 1 s A 1/ 1+γ x Proof. We note first that (6.4) P n k = P X n = k = P T > k m = n + P T< k m = n m= On the other hand, from the definition (6.3) of ˆπ A, (6.5) ˆπ A s x = 1 e s/a s/a sa γ/ 1+γ e ns/a P n A 1/ 1+γ x n= Combining (6.4) and (6.5) we are led to (6.6) ˆπ A s x = 1 exp s/a s/a sa γ/ 1+γ m= [ ( { E exp s }) A T> A 1/ 1+γ x m ( { + E exp s })] A T< A 1/ 1+γ x m Defining (6.7) ( { ˆϱ A s x h = s E exp s 2βA T A 1/ 1+γ x A γ/ 1+γ βh }) (6.6) reads (6.8) ˆπ A s x = 1 e s/a s/a 1 2 ( ˆϱ > A 2βs x h + ˆϱ < A 2βs x h ) dh From (6.1) it follows that for any s >, x R and h >, (6.9) ˆϱ A s x h ˆϱ s x h

32 SELF-INTERACTING RANDOM WALKS ON Z 1355 as A and the functions ˆϱ and ˆπ scale as follows: λ >, (6.1) λ ˆϱ λ 1 s λ 1/ 1+γ x λ γ/ 1+γ h = ˆϱ s x h (6.11) λ 1/ 1+γ ˆπ λ 1 s λ 1/ 1+γ x = ˆπ s x by Faton s lemma, equations (6.8), (6.9) and (6.11) imply for any x R, (6.12) lim inf ˆπ A s x ˆϱ 2βs x h dh = ˆπ βs x = β 1/ 1+γ ˆπ s β 1/ 1+γ x On the other hand, by Theorem 4, (6.13) ˆπ A s x dx = 1 = b 1/ 1+γ ˆπ s b 1/ 1+γ x dx From (6.12) and (6.13) follows the statement of Theorem 2. Clearly, the statements of Theorems 2A and 2B are just particular cases of Theorem 2. APPENDIX Generalized Ray Knight processes II. This appendix is devoted to the proof of (2.28). Actually, we define a more general notion of Ray Knight process and we prove (2.28) in a much more general context. This Appendix is completely self-contained and we think that it might be interesting on its own, from a purely diffusion-theoretic point of view. A.1. Conjugate diffusions. Let (A.1.1) a: R + b: R + be twice continuously differentiable functions and define the second order differential operators (A.1.2) Gf x = 1 2 a x f x a x + b x f x (A.1.3) Hf x = 1 2 a x f x a x b x f x We call the operators G and H a conjugate pair of diffusion generators on R +. The analytic content of this conjugacy is the (equivalent) pair of commutation relations (A.1.4) d dx G = H d dx d dx H = G d dx

33 1356 B. TÓTH where G and H are the formal Lebesgue adjoints of G and H, respectively: (A.1.5) (A.1.6) G f x = 1 2 a x f x a x b x f x a x b x f x H f x = 1 2 a x f x a x + b x f x a x + b x f x Integration by parts yields the identities (A.1.7) (A.1.8) x1 x G f y g y dy = x1 x H f y g y dy = x1 x f y [ Gg ] y dy + H f y g y 1 2 a y f y g y x 1 x x1 x f y Hg y dy + G f y g y 1 2 a y f y g y x 1 x where by f we denoted the function f y = y f z dz and we adopted the notation h y x 1 x = h x 1 h x. Define the functions { 2 x } u x = a x exp 2b y (A.1.9) a y dy (A.1.1) { 2 x v x = a x exp 1 1 } 2b y a y dy With the help of these functions we can express the differential operators G, H, G and H as (A.1.11) (A.1.12) G = 1 d 1 d v dx u dx H = 1 d 1 d u dx v dx G = d 1 d 1 dx u dx v H = d 1 d 1 dx v dx u We consider two diffusion processes on R + : X t and Y t with generators G (respectively, H). More precisely, the generators of X t (respectively, Y t ) restricted to smooth functions with compact support in R + act as G (respectively, H). The diffusions X t and Y t are uniquely determined by these generators as long as they do not hit the boundary = R +. The ambiguity in the behavior of the processes X t and Y t at is eliminated in the following way: X t is reflected instantaneously at [see Definition VII in Revuz and Yor (1991)] and Y t is stopped at (A.1.13) τ = inf t: Y t =

34 SELF-INTERACTING RANDOM WALKS ON Z 1357 We call the processes X t and Y t conjugate diffusions. The scale functions and speed measures of the processes X t (respectively, Y t ) are r x and n dx [respectively, s x and m dx ]. According to standard results about onedimensional diffusions [see Exercise VII.3.2. in Revuz and Yor (1991)], on R + we have (A.1.14) x r x = u y dy n dx = v x dx (A.1.15) x s x = v y dy m dx = u x dx The lower limits in the integrals defining r and s are chosen in an arbitrary way. Formulas (A.1.14) and (A.1.15) give the probabilistic content of conjugacy of the diffusions X t and Y t : the derivative of the scale function of one is the Radon Nikodym derivative of the speed measure of the other, and vice versa. In accordance with the behavior at the boundary described in the previous paragraph, we define (A.1.16) n = m = The conjugacy of a pair of diffusions is invariant under diffeomorphisms of R + : let (A.1.17) : R + R + be a C 2 bijection which has a C 2 inverse 1, and preserves the orientation of the half-line R + : (A.1.18) Consider the diffusions (A.1.19) lim x x = X t = X t lim x x = Ỹ t = Y t It is easy to check that if X t and Y t are a conjugate pair of diffusions on R +, then so are X t and Ỹ t, with (A.1.2) ã = 2 a 1 (A.1.21) b = b 1 Our notion of conjugacy of the pair of diffusions X t, Y t is closely related to the conjugacy notion introduced in Biane (1985). By time-reversing the process Y t, (A.1.22) Ŷ t = Y τ t t τ we get a transient diffusion Ŷ t stopped at its last hitting of y R +. The diffusions Ŷ t and X t are conjugate in Biane s sense. Biane introduced his notion of conjugacy in order to generalize the Cieselski Taylor identities. This

35 1358 B. TÓTH fact suggests that the identity proved in Section A.4 [and, consequently, (2.28) too] must have some connection to the Cieselski Taylor identities that we have not been able to elucidate yet. A.2. Boundary conditions. Throughout this Appendix we shall use the following stopping times: for y, (A.2.1) σ y = inf t: X t = y (A.2.2) τ y = inf t: Y t = y with the usual convention inf =. We impose some conditions on the behavior of the diffusions X t and Y t near and : Condition at. We give two equivalent formulations one referring to the diffusion X t and the other to Y t of the same single condition 1 ( 1 ) 1 (A.2.3) u z dz v y dy = r 1 r y n dy < (A.2.4) 1 y ( z ) v y dy u z dz = 1 [ s z s ] m dz < [We could have chosen any positive number instead of 1 as upper limit of integration in (A.2.3)/(A.2.4).] The left-hand sides in the two formulas (A.2.3) and (A.2.4) are clearly the same, so we emphasize again that these are just two different formulations of the same condition. In probabilistic terms, these conditions are equivalent to (A.2.5) σ x X = P as x (A.2.6) τ Y = x P as x where P stands for convergence in probability. In plain words, (A.2.5) [respectively, (A.2.6)] means that X t does not stick to (respectively, Y t can hit ) in finite time. In particular, from (A.2.4) it also follows that (A.2.7) and we can choose (A.2.8) s x = s x s = x x v y dy < v y dy that is, s = Conditions at. Again, we give two equivalent formulations of the boundary condition at infinity. The first formulation refers to the diffusion X t, the


Stochastic Volatility and Correction to the Heat Equation Stochastic Volatility and Correction to the Heat Equation Jean-Pierre Fouque, George Papanicolaou and Ronnie Sircar Abstract. From a probabilist s point of view the Twentieth Century has been a century

More information

Stochastic integration. P.J.C. Spreij

Stochastic integration. P.J.C. Spreij Stochastic integration P.J.C. Spreij this version: April 22, 29 Contents 1 Stochastic processes 1 1.1 General theory............................... 1 1.2 Stopping times...............................

More information

Bayesian quickest detection problems for some diffusion processes

Bayesian quickest detection problems for some diffusion processes Bayesian quickest detection problems for some diffusion processes Pavel V. Gapeev Albert N. Shiryaev We study the Bayesian problems of detecting a change in the drift rate of an observable diffusion process

More information

1. Stochastic Processes and filtrations

1. Stochastic Processes and filtrations 1. Stochastic Processes and 1. Stoch. pr., A stochastic process (X t ) t T is a collection of random variables on (Ω, F) with values in a measurable space (S, S), i.e., for all t, In our case X t : Ω S

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

An essay on the general theory of stochastic processes

An essay on the general theory of stochastic processes Probability Surveys Vol. 3 (26) 345 412 ISSN: 1549-5787 DOI: 1.1214/1549578614 An essay on the general theory of stochastic processes Ashkan Nikeghbali ETHZ Departement Mathematik, Rämistrasse 11, HG G16

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

1 Differentiable manifolds and smooth maps

1 Differentiable manifolds and smooth maps 1 Differentiable manifolds and smooth maps Last updated: April 14, 2011. 1.1 Examples and definitions Roughly, manifolds are sets where one can introduce coordinates. An n-dimensional manifold is a set

More information

Exercises. T 2T. e ita φ(t)dt.

Exercises. T 2T. e ita φ(t)dt. Exercises. Set #. Construct an example of a sequence of probability measures P n on R which converge weakly to a probability measure P but so that the first moments m,n = xdp n do not converge to m = xdp.

More information

Alternative Characterizations of Markov Processes

Alternative Characterizations of Markov Processes Chapter 10 Alternative Characterizations of Markov Processes This lecture introduces two ways of characterizing Markov processes other than through their transition probabilities. Section 10.1 describes

More information

SMSTC (2007/08) Probability.

SMSTC (2007/08) Probability. SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................

More information

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion

Brownian Motion. An Undergraduate Introduction to Financial Mathematics. J. Robert Buchanan. J. Robert Buchanan Brownian Motion Brownian Motion An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Background We have already seen that the limiting behavior of a discrete random walk yields a derivation of

More information

Introduction to self-similar growth-fragmentations

Introduction to self-similar growth-fragmentations Introduction to self-similar growth-fragmentations Quan Shi CIMAT, 11-15 December, 2017 Quan Shi Growth-Fragmentations CIMAT, 11-15 December, 2017 1 / 34 Literature Jean Bertoin, Compensated fragmentation

More information

Exponential Mixing Properties of Stochastic PDEs Through Asymptotic Coupling

Exponential Mixing Properties of Stochastic PDEs Through Asymptotic Coupling Exponential Mixing Properties of Stochastic PDEs Through Asymptotic Coupling September 14 2001 M. Hairer Département de Physique Théorique Université de Genève 1211 Genève 4 Switzerland E-mail: Martin.Hairer@physics.unige.ch

More information

1/2 1/2 1/4 1/4 8 1/2 1/2 1/2 1/2 8 1/2 6 P =

1/2 1/2 1/4 1/4 8 1/2 1/2 1/2 1/2 8 1/2 6 P = / 7 8 / / / /4 4 5 / /4 / 8 / 6 P = 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Andrei Andreevich Markov (856 9) In Example. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 P (n) = 0

More information

One dimensional Maps

One dimensional Maps Chapter 4 One dimensional Maps The ordinary differential equation studied in chapters 1-3 provide a close link to actual physical systems it is easy to believe these equations provide at least an approximate

More information

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974

LIMITS FOR QUEUES AS THE WAITING ROOM GROWS. Bell Communications Research AT&T Bell Laboratories Red Bank, NJ Murray Hill, NJ 07974 LIMITS FOR QUEUES AS THE WAITING ROOM GROWS by Daniel P. Heyman Ward Whitt Bell Communications Research AT&T Bell Laboratories Red Bank, NJ 07701 Murray Hill, NJ 07974 May 11, 1988 ABSTRACT We study the

More information

Doléans measures. Appendix C. C.1 Introduction

Doléans measures. Appendix C. C.1 Introduction Appendix C Doléans measures C.1 Introduction Once again all random processes will live on a fixed probability space (Ω, F, P equipped with a filtration {F t : 0 t 1}. We should probably assume the filtration

More information

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10

MS&E 321 Spring Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 MS&E 321 Spring 12-13 Stochastic Systems June 1, 2013 Prof. Peter W. Glynn Page 1 of 10 Section 3: Regenerative Processes Contents 3.1 Regeneration: The Basic Idea............................... 1 3.2

More information

An introduction to Mathematical Theory of Control

An introduction to Mathematical Theory of Control An introduction to Mathematical Theory of Control Vasile Staicu University of Aveiro UNICA, May 2018 Vasile Staicu (University of Aveiro) An introduction to Mathematical Theory of Control UNICA, May 2018

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Interest Rate Models:

Interest Rate Models: 1/17 Interest Rate Models: from Parametric Statistics to Infinite Dimensional Stochastic Analysis René Carmona Bendheim Center for Finance ORFE & PACM, Princeton University email: rcarmna@princeton.edu

More information

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3.

In terms of measures: Exercise 1. Existence of a Gaussian process: Theorem 2. Remark 3. 1. GAUSSIAN PROCESSES A Gaussian process on a set T is a collection of random variables X =(X t ) t T on a common probability space such that for any n 1 and any t 1,...,t n T, the vector (X(t 1 ),...,X(t

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

RANDOM WALKS IN Z d AND THE DIRICHLET PROBLEM

RANDOM WALKS IN Z d AND THE DIRICHLET PROBLEM RNDOM WLKS IN Z d ND THE DIRICHLET PROBLEM ERIC GUN bstract. Random walks can be used to solve the Dirichlet problem the boundary value problem for harmonic functions. We begin by constructing the random

More information

LogFeller et Ray Knight

LogFeller et Ray Knight LogFeller et Ray Knight Etienne Pardoux joint work with V. Le and A. Wakolbinger Etienne Pardoux (Marseille) MANEGE, 18/1/1 1 / 16 Feller s branching diffusion with logistic growth We consider the diffusion

More information

Walsh Diffusions. Andrey Sarantsev. March 27, University of California, Santa Barbara. Andrey Sarantsev University of Washington, Seattle 1 / 1

Walsh Diffusions. Andrey Sarantsev. March 27, University of California, Santa Barbara. Andrey Sarantsev University of Washington, Seattle 1 / 1 Walsh Diffusions Andrey Sarantsev University of California, Santa Barbara March 27, 2017 Andrey Sarantsev University of Washington, Seattle 1 / 1 Walsh Brownian Motion on R d Spinning measure µ: probability

More information

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1.

S chauder Theory. x 2. = log( x 1 + x 2 ) + 1 ( x 1 + x 2 ) 2. ( 5) x 1 + x 2 x 1 + x 2. 2 = 2 x 1. x 1 x 2. 1 x 1. Sep. 1 9 Intuitively, the solution u to the Poisson equation S chauder Theory u = f 1 should have better regularity than the right hand side f. In particular one expects u to be twice more differentiable

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

Some Tools From Stochastic Analysis

Some Tools From Stochastic Analysis W H I T E Some Tools From Stochastic Analysis J. Potthoff Lehrstuhl für Mathematik V Universität Mannheim email: potthoff@math.uni-mannheim.de url: http://ls5.math.uni-mannheim.de To close the file, click

More information

2. Transience and Recurrence

2. Transience and Recurrence Virtual Laboratories > 15. Markov Chains > 1 2 3 4 5 6 7 8 9 10 11 12 2. Transience and Recurrence The study of Markov chains, particularly the limiting behavior, depends critically on the random times

More information

Squared Bessel Process with Delay

Squared Bessel Process with Delay Southern Illinois University Carbondale OpenSIUC Articles and Preprints Department of Mathematics 216 Squared Bessel Process with Delay Harry Randolph Hughes Southern Illinois University Carbondale, hrhughes@siu.edu

More information

STAT 7032 Probability Spring Wlodek Bryc

STAT 7032 Probability Spring Wlodek Bryc STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,

More information

Sums of exponentials of random walks

Sums of exponentials of random walks Sums of exponentials of random walks Robert de Jong Ohio State University August 27, 2009 Abstract This paper shows that the sum of the exponential of an oscillating random walk converges in distribution,

More information

Prime numbers and Gaussian random walks

Prime numbers and Gaussian random walks Prime numbers and Gaussian random walks K. Bruce Erickson Department of Mathematics University of Washington Seattle, WA 9895-4350 March 24, 205 Introduction Consider a symmetric aperiodic random walk

More information

The Uses of Differential Geometry in Finance

The Uses of Differential Geometry in Finance The Uses of Differential Geometry in Finance Andrew Lesniewski Bloomberg, November 21 2005 The Uses of Differential Geometry in Finance p. 1 Overview Joint with P. Hagan and D. Woodward Motivation: Varadhan

More information

Stability of Stochastic Differential Equations

Stability of Stochastic Differential Equations Lyapunov stability theory for ODEs s Stability of Stochastic Differential Equations Part 1: Introduction Department of Mathematics and Statistics University of Strathclyde Glasgow, G1 1XH December 2010

More information

Latent voter model on random regular graphs

Latent voter model on random regular graphs Latent voter model on random regular graphs Shirshendu Chatterjee Cornell University (visiting Duke U.) Work in progress with Rick Durrett April 25, 2011 Outline Definition of voter model and duality with

More information

Convergence of generalized entropy minimizers in sequences of convex problems

Convergence of generalized entropy minimizers in sequences of convex problems Proceedings IEEE ISIT 206, Barcelona, Spain, 2609 263 Convergence of generalized entropy minimizers in sequences of convex problems Imre Csiszár A Rényi Institute of Mathematics Hungarian Academy of Sciences

More information

arxiv: v1 [math.pr] 6 Jan 2014

arxiv: v1 [math.pr] 6 Jan 2014 Recurrence for vertex-reinforced random walks on Z with weak reinforcements. Arvind Singh arxiv:40.034v [math.pr] 6 Jan 04 Abstract We prove that any vertex-reinforced random walk on the integer lattice

More information

Kolmogorov Equations and Markov Processes

Kolmogorov Equations and Markov Processes Kolmogorov Equations and Markov Processes May 3, 013 1 Transition measures and functions Consider a stochastic process {X(t)} t 0 whose state space is a product of intervals contained in R n. We define

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

Positive Harris Recurrence and Diffusion Scale Analysis of a Push Pull Queueing Network. Haifa Statistics Seminar May 5, 2008

Positive Harris Recurrence and Diffusion Scale Analysis of a Push Pull Queueing Network. Haifa Statistics Seminar May 5, 2008 Positive Harris Recurrence and Diffusion Scale Analysis of a Push Pull Queueing Network Yoni Nazarathy Gideon Weiss Haifa Statistics Seminar May 5, 2008 1 Outline 1 Preview of Results 2 Introduction Queueing

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

1 Brownian Local Time

1 Brownian Local Time 1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =

More information

Backward Stochastic Differential Equations with Infinite Time Horizon

Backward Stochastic Differential Equations with Infinite Time Horizon Backward Stochastic Differential Equations with Infinite Time Horizon Holger Metzler PhD advisor: Prof. G. Tessitore Università di Milano-Bicocca Spring School Stochastic Control in Finance Roscoff, March

More information

The Pedestrian s Guide to Local Time

The Pedestrian s Guide to Local Time The Pedestrian s Guide to Local Time Tomas Björk, Department of Finance, Stockholm School of Economics, Box 651, SE-113 83 Stockholm, SWEDEN tomas.bjork@hhs.se November 19, 213 Preliminary version Comments

More information

arxiv: v1 [math.pr] 1 Jan 2013

arxiv: v1 [math.pr] 1 Jan 2013 The role of dispersal in interacting patches subject to an Allee effect arxiv:1301.0125v1 [math.pr] 1 Jan 2013 1. Introduction N. Lanchier Abstract This article is concerned with a stochastic multi-patch

More information

Regularization by noise in infinite dimensions

Regularization by noise in infinite dimensions Regularization by noise in infinite dimensions Franco Flandoli, University of Pisa King s College 2017 Franco Flandoli, University of Pisa () Regularization by noise King s College 2017 1 / 33 Plan of

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information

Math 6810 (Probability) Fall Lecture notes

Math 6810 (Probability) Fall Lecture notes Math 6810 (Probability) Fall 2012 Lecture notes Pieter Allaart University of North Texas September 23, 2012 2 Text: Introduction to Stochastic Calculus with Applications, by Fima C. Klebaner (3rd edition),

More information

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS

ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS Bendikov, A. and Saloff-Coste, L. Osaka J. Math. 4 (5), 677 7 ON THE REGULARITY OF SAMPLE PATHS OF SUB-ELLIPTIC DIFFUSIONS ON MANIFOLDS ALEXANDER BENDIKOV and LAURENT SALOFF-COSTE (Received March 4, 4)

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

On countably skewed Brownian motion

On countably skewed Brownian motion On countably skewed Brownian motion Gerald Trutnau (Seoul National University) Joint work with Y. Ouknine (Cadi Ayyad) and F. Russo (ENSTA ParisTech) Electron. J. Probab. 20 (2015), no. 82, 1-27 [ORT 2015]

More information

Maximum Process Problems in Optimal Control Theory

Maximum Process Problems in Optimal Control Theory J. Appl. Math. Stochastic Anal. Vol. 25, No., 25, (77-88) Research Report No. 423, 2, Dept. Theoret. Statist. Aarhus (2 pp) Maximum Process Problems in Optimal Control Theory GORAN PESKIR 3 Given a standard

More information

Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting Switching

Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting Switching Discrete Dynamics in Nature and Society Volume 211, Article ID 549651, 12 pages doi:1.1155/211/549651 Research Article Existence and Uniqueness Theorem for Stochastic Differential Equations with Self-Exciting

More information

Point Process Control

Point Process Control Point Process Control The following note is based on Chapters I, II and VII in Brémaud s book Point Processes and Queues (1981). 1 Basic Definitions Consider some probability space (Ω, F, P). A real-valued

More information

Stochastic Partial Differential Equations with Levy Noise

Stochastic Partial Differential Equations with Levy Noise Stochastic Partial Differential Equations with Levy Noise An Evolution Equation Approach S..PESZAT and J. ZABCZYK Institute of Mathematics, Polish Academy of Sciences' CAMBRIDGE UNIVERSITY PRESS Contents

More information

{σ x >t}p x. (σ x >t)=e at.

{σ x >t}p x. (σ x >t)=e at. 3.11. EXERCISES 121 3.11 Exercises Exercise 3.1 Consider the Ornstein Uhlenbeck process in example 3.1.7(B). Show that the defined process is a Markov process which converges in distribution to an N(0,σ

More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

On semilinear elliptic equations with measure data

On semilinear elliptic equations with measure data On semilinear elliptic equations with measure data Andrzej Rozkosz (joint work with T. Klimsiak) Nicolaus Copernicus University (Toruń, Poland) Controlled Deterministic and Stochastic Systems Iasi, July

More information

On Stopping Times and Impulse Control with Constraint

On Stopping Times and Impulse Control with Constraint On Stopping Times and Impulse Control with Constraint Jose Luis Menaldi Based on joint papers with M. Robin (216, 217) Department of Mathematics Wayne State University Detroit, Michigan 4822, USA (e-mail:

More information

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES

A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES A NOTE ON STOCHASTIC INTEGRALS AS L 2 -CURVES STEFAN TAPPE Abstract. In a work of van Gaans (25a) stochastic integrals are regarded as L 2 -curves. In Filipović and Tappe (28) we have shown the connection

More information

Notes for Expansions/Series and Differential Equations

Notes for Expansions/Series and Differential Equations Notes for Expansions/Series and Differential Equations In the last discussion, we considered perturbation methods for constructing solutions/roots of algebraic equations. Three types of problems were illustrated

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

Lecture 5. 1 Chung-Fuchs Theorem. Tel Aviv University Spring 2011

Lecture 5. 1 Chung-Fuchs Theorem. Tel Aviv University Spring 2011 Random Walks and Brownian Motion Tel Aviv University Spring 20 Instructor: Ron Peled Lecture 5 Lecture date: Feb 28, 20 Scribe: Yishai Kohn In today's lecture we return to the Chung-Fuchs theorem regarding

More information

The Codimension of the Zeros of a Stable Process in Random Scenery

The Codimension of the Zeros of a Stable Process in Random Scenery The Codimension of the Zeros of a Stable Process in Random Scenery Davar Khoshnevisan The University of Utah, Department of Mathematics Salt Lake City, UT 84105 0090, U.S.A. davar@math.utah.edu http://www.math.utah.edu/~davar

More information

4.5 The critical BGW tree

4.5 The critical BGW tree 4.5. THE CRITICAL BGW TREE 61 4.5 The critical BGW tree 4.5.1 The rooted BGW tree as a metric space We begin by recalling that a BGW tree T T with root is a graph in which the vertices are a subset of

More information

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER GERARDO HERNANDEZ-DEL-VALLE arxiv:1209.2411v1 [math.pr] 10 Sep 2012 Abstract. This work deals with first hitting time densities of Ito processes whose

More information

Itô s excursion theory and random trees

Itô s excursion theory and random trees Itô s excursion theory and random trees Jean-François Le Gall January 3, 200 Abstract We explain how Itô s excursion theory can be used to understand the asymptotic behavior of large random trees. We provide

More information